Test Report: QEMU_macOS 20045

70ee1ceb4b2f7849aa4717a6092bbfa282d9029b:2024-12-04:37344

Tests failed (156/258)

Order  Failed test  Duration (s)
3 TestDownloadOnly/v1.20.0/json-events 24.58
7 TestDownloadOnly/v1.20.0/kubectl 0
22 TestOffline 10.05
27 TestAddons/Setup 10.11
28 TestCertOptions 10.24
29 TestCertExpiration 195.27
30 TestDockerFlags 10.19
31 TestForceSystemdFlag 10.23
32 TestForceSystemdEnv 10.99
38 TestErrorSpam/setup 9.81
47 TestFunctional/serial/StartWithProxy 10.03
49 TestFunctional/serial/SoftStart 5.27
50 TestFunctional/serial/KubeContext 0.06
51 TestFunctional/serial/KubectlGetPods 0.06
58 TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node 0.05
59 TestFunctional/serial/CacheCmd/cache/cache_reload 0.18
61 TestFunctional/serial/MinikubeKubectlCmd 0.75
62 TestFunctional/serial/MinikubeKubectlCmdDirectly 1.19
63 TestFunctional/serial/ExtraConfig 5.27
64 TestFunctional/serial/ComponentHealth 0.06
65 TestFunctional/serial/LogsCmd 0.08
66 TestFunctional/serial/LogsFileCmd 0.08
67 TestFunctional/serial/InvalidService 0.03
70 TestFunctional/parallel/DashboardCmd 0.21
73 TestFunctional/parallel/StatusCmd 0.14
77 TestFunctional/parallel/ServiceCmdConnect 0.14
79 TestFunctional/parallel/PersistentVolumeClaim 0.04
81 TestFunctional/parallel/SSHCmd 0.13
82 TestFunctional/parallel/CpCmd 0.29
84 TestFunctional/parallel/FileSync 0.08
85 TestFunctional/parallel/CertSync 0.31
89 TestFunctional/parallel/NodeLabels 0.06
91 TestFunctional/parallel/NonActiveRuntimeDisabled 0.04
95 TestFunctional/parallel/Version/components 0.05
96 TestFunctional/parallel/ImageCommands/ImageListShort 0.04
97 TestFunctional/parallel/ImageCommands/ImageListTable 0.04
98 TestFunctional/parallel/ImageCommands/ImageListJson 0.04
99 TestFunctional/parallel/ImageCommands/ImageListYaml 0.04
100 TestFunctional/parallel/ImageCommands/ImageBuild 0.13
102 TestFunctional/parallel/DockerEnv/bash 0.05
103 TestFunctional/parallel/UpdateContextCmd/no_changes 0.05
104 TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster 0.05
105 TestFunctional/parallel/UpdateContextCmd/no_clusters 0.05
106 TestFunctional/parallel/ServiceCmd/DeployApp 0.03
107 TestFunctional/parallel/ServiceCmd/List 0.05
108 TestFunctional/parallel/ServiceCmd/JSONOutput 0.05
109 TestFunctional/parallel/ServiceCmd/HTTPS 0.05
110 TestFunctional/parallel/ServiceCmd/Format 0.05
111 TestFunctional/parallel/ServiceCmd/URL 0.05
113 TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel 0.08
116 TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup 0
117 TestFunctional/parallel/TunnelCmd/serial/AccessDirect 107.1
118 TestFunctional/parallel/ImageCommands/ImageLoadDaemon 0.32
119 TestFunctional/parallel/ImageCommands/ImageReloadDaemon 0.29
120 TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon 1.17
121 TestFunctional/parallel/ImageCommands/ImageSaveToFile 0.04
123 TestFunctional/parallel/ImageCommands/ImageLoadFromFile 0.08
131 TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig 15.07
133 TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS 30.05
141 TestMultiControlPlane/serial/StartCluster 10.02
142 TestMultiControlPlane/serial/DeployApp 104.17
143 TestMultiControlPlane/serial/PingHostFromPods 0.1
144 TestMultiControlPlane/serial/AddWorkerNode 0.08
145 TestMultiControlPlane/serial/NodeLabels 0.06
146 TestMultiControlPlane/serial/HAppyAfterClusterStart 0.09
147 TestMultiControlPlane/serial/CopyFile 0.07
148 TestMultiControlPlane/serial/StopSecondaryNode 0.13
149 TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop 0.09
150 TestMultiControlPlane/serial/RestartSecondaryNode 52.75
151 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart 0.09
152 TestMultiControlPlane/serial/RestartClusterKeepsNodes 9.19
153 TestMultiControlPlane/serial/DeleteSecondaryNode 0.11
154 TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete 0.09
155 TestMultiControlPlane/serial/StopCluster 3.57
156 TestMultiControlPlane/serial/RestartCluster 5.26
157 TestMultiControlPlane/serial/DegradedAfterClusterRestart 0.09
158 TestMultiControlPlane/serial/AddSecondaryNode 0.08
159 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd 0.09
162 TestImageBuild/serial/Setup 9.91
165 TestJSONOutput/start/Command 9.89
171 TestJSONOutput/pause/Command 0.09
177 TestJSONOutput/unpause/Command 0.05
194 TestMinikubeProfile 10.28
197 TestMountStart/serial/StartWithMountFirst 9.99
200 TestMultiNode/serial/FreshStart2Nodes 9.99
201 TestMultiNode/serial/DeployApp2Nodes 102.09
202 TestMultiNode/serial/PingHostFrom2Pods 0.1
203 TestMultiNode/serial/AddNode 0.08
204 TestMultiNode/serial/MultiNodeLabels 0.06
205 TestMultiNode/serial/ProfileList 0.09
206 TestMultiNode/serial/CopyFile 0.07
207 TestMultiNode/serial/StopNode 0.15
208 TestMultiNode/serial/StartAfterStop 42.9
209 TestMultiNode/serial/RestartKeepsNodes 8.91
210 TestMultiNode/serial/DeleteNode 0.12
211 TestMultiNode/serial/StopMultiNode 3.31
212 TestMultiNode/serial/RestartMultiNode 5.27
213 TestMultiNode/serial/ValidateNameConflict 20.31
217 TestPreload 10.06
219 TestScheduledStopUnix 10.09
220 TestSkaffold 12.55
223 TestRunningBinaryUpgrade 590.27
225 TestKubernetesUpgrade 17.18
238 TestHyperkitDriverSkipUpgrade/upgrade-v1.11.0-to-current 0.98
239 TestHyperkitDriverSkipUpgrade/upgrade-v1.2.0-to-current 1.14
241 TestStoppedBinaryUpgrade/Upgrade 574.74
243 TestPause/serial/Start 9.98
253 TestNoKubernetes/serial/StartWithK8s 9.9
254 TestNoKubernetes/serial/StartWithStopK8s 5.29
255 TestNoKubernetes/serial/Start 5.32
259 TestNoKubernetes/serial/StartNoArgs 5.36
261 TestNetworkPlugins/group/auto/Start 9.9
262 TestNetworkPlugins/group/kindnet/Start 9.95
263 TestNetworkPlugins/group/calico/Start 9.9
264 TestNetworkPlugins/group/custom-flannel/Start 9.99
265 TestNetworkPlugins/group/false/Start 9.84
266 TestNetworkPlugins/group/enable-default-cni/Start 10.05
267 TestNetworkPlugins/group/flannel/Start 10.11
268 TestNetworkPlugins/group/bridge/Start 9.93
269 TestNetworkPlugins/group/kubenet/Start 9.75
271 TestStartStop/group/old-k8s-version/serial/FirstStart 10.1
273 TestStartStop/group/old-k8s-version/serial/DeployApp 0.1
274 TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive 0.13
277 TestStartStop/group/old-k8s-version/serial/SecondStart 5.24
278 TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop 0.04
279 TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop 0.06
280 TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages 0.08
281 TestStartStop/group/old-k8s-version/serial/Pause 0.11
283 TestStartStop/group/no-preload/serial/FirstStart 9.88
284 TestStartStop/group/no-preload/serial/DeployApp 0.1
285 TestStartStop/group/no-preload/serial/EnableAddonWhileActive 0.12
288 TestStartStop/group/no-preload/serial/SecondStart 5.82
290 TestStartStop/group/embed-certs/serial/FirstStart 9.94
291 TestStartStop/group/no-preload/serial/UserAppExistsAfterStop 0.04
292 TestStartStop/group/no-preload/serial/AddonExistsAfterStop 0.06
293 TestStartStop/group/no-preload/serial/VerifyKubernetesImages 0.08
294 TestStartStop/group/no-preload/serial/Pause 0.12
296 TestStartStop/group/default-k8s-diff-port/serial/FirstStart 10.09
297 TestStartStop/group/embed-certs/serial/DeployApp 0.1
298 TestStartStop/group/embed-certs/serial/EnableAddonWhileActive 0.12
300 TestStartStop/group/default-k8s-diff-port/serial/DeployApp 0.1
301 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive 0.12
304 TestStartStop/group/embed-certs/serial/SecondStart 5.27
306 TestStartStop/group/default-k8s-diff-port/serial/SecondStart 5.41
307 TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop 0.04
308 TestStartStop/group/embed-certs/serial/AddonExistsAfterStop 0.06
309 TestStartStop/group/embed-certs/serial/VerifyKubernetesImages 0.08
310 TestStartStop/group/embed-certs/serial/Pause 0.11
312 TestStartStop/group/newest-cni/serial/FirstStart 10.35
313 TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop 0.03
314 TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop 0.06
315 TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages 0.08
316 TestStartStop/group/default-k8s-diff-port/serial/Pause 0.11
321 TestStartStop/group/newest-cni/serial/SecondStart 5.27
324 TestStartStop/group/newest-cni/serial/VerifyKubernetesImages 0.08
325 TestStartStop/group/newest-cni/serial/Pause 0.12
TestDownloadOnly/v1.20.0/json-events (24.58s)

=== RUN   TestDownloadOnly/v1.20.0/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-darwin-arm64 start -o=json --download-only -p download-only-447000 --force --alsologtostderr --kubernetes-version=v1.20.0 --container-runtime=docker --driver=qemu2 
aaa_download_only_test.go:81: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -o=json --download-only -p download-only-447000 --force --alsologtostderr --kubernetes-version=v1.20.0 --container-runtime=docker --driver=qemu2 : exit status 40 (24.581460125s)

-- stdout --
	{"specversion":"1.0","id":"49bf0c21-165e-404e-9198-3d6026ab345e","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[download-only-447000] minikube v1.34.0 on Darwin 15.0.1 (arm64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"9a4ef30a-cca4-4634-9d62-9291641d16fc","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=20045"}}
	{"specversion":"1.0","id":"e5f84883-a8c0-4904-9db4-8955b53f158b","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/Users/jenkins/minikube-integration/20045-6982/kubeconfig"}}
	{"specversion":"1.0","id":"e69fb7e1-3f0c-4078-a498-0790e76dfdd8","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-darwin-arm64"}}
	{"specversion":"1.0","id":"ef97b3f9-312a-4fe1-85bf-56a98e7c4e3e","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"a6722659-21b4-44b2-8882-db3acbb044ca","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/Users/jenkins/minikube-integration/20045-6982/.minikube"}}
	{"specversion":"1.0","id":"44651e9c-1aca-4d6b-9138-433c727a68a9","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.warning","datacontenttype":"application/json","data":{"message":"minikube skips various validations when --force is supplied; this may lead to unexpected behavior"}}
	{"specversion":"1.0","id":"6accfe00-fd1b-4367-8a67-08aa3c5b2490","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"1","message":"Using the qemu2 driver based on user configuration","name":"Selecting Driver","totalsteps":"19"}}
	{"specversion":"1.0","id":"0786de76-2a1c-4fc7-8694-3d14a0681995","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"Automatically selected the socket_vmnet network"}}
	{"specversion":"1.0","id":"75db282b-7772-40d6-a620-eb0fa5c7b029","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"1","message":"Downloading VM boot image ...","name":"Selecting Driver","totalsteps":"19"}}
	{"specversion":"1.0","id":"8523044f-ea65-485a-b72b-247105173429","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"3","message":"Starting \"download-only-447000\" primary control-plane node in \"download-only-447000\" cluster","name":"Starting Node","totalsteps":"19"}}
	{"specversion":"1.0","id":"b9456b02-7007-4200-b686-4698d19b60e0","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"3","message":"Downloading Kubernetes v1.20.0 preload ...","name":"Starting Node","totalsteps":"19"}}
	{"specversion":"1.0","id":"513e9c83-9a9f-43de-b6a5-74e8df58cb94","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"40","issues":"","message":"Failed to cache kubectl: download failed: https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl.sha256: getter: \u0026{Ctx:context.Background Src:https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl.sha256 Dst:/Users/jenkins/minikube-integration/20045-6982/.minikube/cache/darwin/arm64/v1.20.0/kubectl.download Pwd: Mode:2 Umask:---------- Detectors:[0x109434320 0x109434320 0x109434320 0x109434320 0x109434320 0x109434320 0x109434320] Decompressors:map[bz2:0x14000797ec0 gz:0x14000797ec8 tar:0x14000797e30 tar.bz2:0x14000797e50 tar.gz:0x14000797e60 tar.xz:0x14000797e70 tar.zst:0x14000797eb0 tbz2:0x14000797e50 tgz:0x14
000797e60 txz:0x14000797e70 tzst:0x14000797eb0 xz:0x14000797f10 zip:0x14000797f20 zst:0x14000797f18] Getters:map[file:0x14000bca800 http:0x140009080a0 https:0x140009080f0] Dir:false ProgressListener:\u003cnil\u003e Insecure:false DisableSymlinks:false Options:[]}: invalid checksum: Error downloading checksum file: bad response code: 404","name":"INET_CACHE_KUBECTL","url":""}}
	{"specversion":"1.0","id":"89028678-b5ca-43f4-93d4-f0a1de652296","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"message":"╭───────────────────────────────────────────────────────────────────────────────────────────╮\n│                                                                                           │\n│    If the above advice does not help, please let us know:                                 │\n│    https://github.com/kubernetes/minikube/issues/new/choose                               │\n│                                                                                           │\n│    Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │\n│
│\n╰───────────────────────────────────────────────────────────────────────────────────────────╯"}}

-- /stdout --
** stderr ** 
	I1204 15:20:55.041173    7496 out.go:345] Setting OutFile to fd 1 ...
	I1204 15:20:55.041346    7496 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1204 15:20:55.041349    7496 out.go:358] Setting ErrFile to fd 2...
	I1204 15:20:55.041352    7496 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1204 15:20:55.041486    7496 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/20045-6982/.minikube/bin
	W1204 15:20:55.041581    7496 root.go:314] Error reading config file at /Users/jenkins/minikube-integration/20045-6982/.minikube/config/config.json: open /Users/jenkins/minikube-integration/20045-6982/.minikube/config/config.json: no such file or directory
	I1204 15:20:55.042951    7496 out.go:352] Setting JSON to true
	I1204 15:20:55.061638    7496 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":4825,"bootTime":1733349630,"procs":550,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"15.0.1","kernelVersion":"24.0.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W1204 15:20:55.061711    7496 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I1204 15:20:55.066885    7496 out.go:97] [download-only-447000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	I1204 15:20:55.067048    7496 notify.go:220] Checking for updates...
	W1204 15:20:55.067058    7496 preload.go:293] Failed to list preload files: open /Users/jenkins/minikube-integration/20045-6982/.minikube/cache/preloaded-tarball: no such file or directory
	I1204 15:20:55.070874    7496 out.go:169] MINIKUBE_LOCATION=20045
	I1204 15:20:55.073889    7496 out.go:169] KUBECONFIG=/Users/jenkins/minikube-integration/20045-6982/kubeconfig
	I1204 15:20:55.078880    7496 out.go:169] MINIKUBE_BIN=out/minikube-darwin-arm64
	I1204 15:20:55.081916    7496 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1204 15:20:55.085896    7496 out.go:169] MINIKUBE_HOME=/Users/jenkins/minikube-integration/20045-6982/.minikube
	W1204 15:20:55.091838    7496 out.go:321] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I1204 15:20:55.092100    7496 driver.go:394] Setting default libvirt URI to qemu:///system
	I1204 15:20:55.095838    7496 out.go:97] Using the qemu2 driver based on user configuration
	I1204 15:20:55.095856    7496 start.go:297] selected driver: qemu2
	I1204 15:20:55.095871    7496 start.go:901] validating driver "qemu2" against <nil>
	I1204 15:20:55.095956    7496 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I1204 15:20:55.098788    7496 out.go:169] Automatically selected the socket_vmnet network
	I1204 15:20:55.105459    7496 start_flags.go:393] Using suggested 4000MB memory alloc based on sys=16384MB, container=0MB
	I1204 15:20:55.105562    7496 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I1204 15:20:55.105598    7496 cni.go:84] Creating CNI manager for ""
	I1204 15:20:55.105641    7496 cni.go:162] CNI unnecessary in this configuration, recommending no CNI
	I1204 15:20:55.105702    7496 start.go:340] cluster config:
	{Name:download-only-447000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:4000 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:download-only-447000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1204 15:20:55.110468    7496 iso.go:125] acquiring lock: {Name:mk798ab7885b095925d55c843b0600e7ea181045 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1204 15:20:55.113811    7496 out.go:97] Downloading VM boot image ...
	I1204 15:20:55.113827    7496 download.go:107] Downloading: https://storage.googleapis.com/minikube-builds/iso/19917/minikube-v1.34.0-1730913550-19917-arm64.iso?checksum=file:https://storage.googleapis.com/minikube-builds/iso/19917/minikube-v1.34.0-1730913550-19917-arm64.iso.sha256 -> /Users/jenkins/minikube-integration/20045-6982/.minikube/cache/iso/arm64/minikube-v1.34.0-1730913550-19917-arm64.iso
	I1204 15:21:04.342769    7496 out.go:97] Starting "download-only-447000" primary control-plane node in "download-only-447000" cluster
	I1204 15:21:04.342789    7496 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime docker
	I1204 15:21:04.404970    7496 preload.go:118] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.0/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4
	I1204 15:21:04.404981    7496 cache.go:56] Caching tarball of preloaded images
	I1204 15:21:04.405207    7496 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime docker
	I1204 15:21:04.411345    7496 out.go:97] Downloading Kubernetes v1.20.0 preload ...
	I1204 15:21:04.411351    7496 preload.go:236] getting checksum for preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4 ...
	I1204 15:21:04.490629    7496 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.0/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4?checksum=md5:1a3e8f9b29e6affec63d76d0d3000942 -> /Users/jenkins/minikube-integration/20045-6982/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4
	I1204 15:21:18.339509    7496 preload.go:247] saving checksum for preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4 ...
	I1204 15:21:18.339702    7496 preload.go:254] verifying checksum of /Users/jenkins/minikube-integration/20045-6982/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4 ...
	I1204 15:21:19.034444    7496 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on docker
	I1204 15:21:19.034655    7496 profile.go:143] Saving config to /Users/jenkins/minikube-integration/20045-6982/.minikube/profiles/download-only-447000/config.json ...
	I1204 15:21:19.034674    7496 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/20045-6982/.minikube/profiles/download-only-447000/config.json: {Name:mk88581c4ef10dbfffc45249df0539ce117cf9df Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1204 15:21:19.034948    7496 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime docker
	I1204 15:21:19.035205    7496 download.go:107] Downloading: https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl.sha256 -> /Users/jenkins/minikube-integration/20045-6982/.minikube/cache/darwin/arm64/v1.20.0/kubectl
	I1204 15:21:19.545370    7496 out.go:193] 
	W1204 15:21:19.550333    7496 out_reason.go:110] Failed to cache kubectl: download failed: https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl.sha256: getter: &{Ctx:context.Background Src:https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl.sha256 Dst:/Users/jenkins/minikube-integration/20045-6982/.minikube/cache/darwin/arm64/v1.20.0/kubectl.download Pwd: Mode:2 Umask:---------- Detectors:[0x109434320 0x109434320 0x109434320 0x109434320 0x109434320 0x109434320 0x109434320] Decompressors:map[bz2:0x14000797ec0 gz:0x14000797ec8 tar:0x14000797e30 tar.bz2:0x14000797e50 tar.gz:0x14000797e60 tar.xz:0x14000797e70 tar.zst:0x14000797eb0 tbz2:0x14000797e50 tgz:0x14000797e60 txz:0x14000797e70 tzst:0x14000797eb0 xz:0x14000797f10 zip:0x14000797f20 zst:0x14000797f18] Getters:map[file:0x14000bca800 http:0x140009080a0 https:0x140009080f0] Dir:false ProgressListener:<nil> Insecure:false DisableSymlinks:false Options:[]}: invalid checksum: Error downloading checksum file: bad response code: 404
	W1204 15:21:19.550360    7496 out_reason.go:110] 
	W1204 15:21:19.558354    7496 out.go:283] ╭───────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                           │
	│    If the above advice does not help, please let us know:                                 │
	│    https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                           │
	│    Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                           │
	╰───────────────────────────────────────────────────────────────────────────────────────────╯
	I1204 15:21:19.562171    7496 out.go:193] 

** /stderr **
aaa_download_only_test.go:83: failed to download only. args: ["start" "-o=json" "--download-only" "-p" "download-only-447000" "--force" "--alsologtostderr" "--kubernetes-version=v1.20.0" "--container-runtime=docker" "--driver=qemu2" ""] exit status 40
--- FAIL: TestDownloadOnly/v1.20.0/json-events (24.58s)
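Note: the root cause is the 404 when fetching the kubectl sha256 checksum for darwin/arm64 at v1.20.0; dl.k8s.io publishes no darwin/arm64 kubectl for that release, so the checksum file does not exist and the download is aborted. The getter dump in the log is hashicorp/go-getter's client state. Below is a minimal sketch of the same checksum-verified fetch, assuming the go-getter v1 API (the destination path is illustrative, not minikube's):

package main

import (
	"context"
	"fmt"

	getter "github.com/hashicorp/go-getter"
)

func main() {
	// Appending ?checksum=file:<url> makes go-getter fetch the .sha256 file
	// first and verify the artifact against it. A 404 on that checksum file
	// fails the whole download, which is exactly the failure mode above.
	src := "https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl" +
		"?checksum=file:https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl.sha256"

	client := &getter.Client{
		Ctx:  context.Background(),
		Src:  src,
		Dst:  "/tmp/kubectl.download", // illustrative destination
		Mode: getter.ClientModeFile,
	}
	if err := client.Get(); err != nil {
		// Expected: "invalid checksum: Error downloading checksum file: bad response code: 404"
		fmt.Println("download failed:", err)
	}
}

This also explains the TestDownloadOnly/v1.20.0/kubectl failure below: the binary was never cached, so the stat on the cache path cannot succeed.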

TestDownloadOnly/v1.20.0/kubectl (0s)

=== RUN   TestDownloadOnly/v1.20.0/kubectl
aaa_download_only_test.go:175: expected the file for binary exist at "/Users/jenkins/minikube-integration/20045-6982/.minikube/cache/darwin/arm64/v1.20.0/kubectl" but got error stat /Users/jenkins/minikube-integration/20045-6982/.minikube/cache/darwin/arm64/v1.20.0/kubectl: no such file or directory
--- FAIL: TestDownloadOnly/v1.20.0/kubectl (0.00s)

TestOffline (10.05s)

=== RUN   TestOffline
=== PAUSE TestOffline
=== CONT  TestOffline
aab_offline_test.go:55: (dbg) Run:  out/minikube-darwin-arm64 start -p offline-docker-547000 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=qemu2 
aab_offline_test.go:55: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p offline-docker-547000 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=qemu2 : exit status 80 (9.891404291s)

-- stdout --
	* [offline-docker-547000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	  - MINIKUBE_LOCATION=20045
	  - KUBECONFIG=/Users/jenkins/minikube-integration/20045-6982/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/20045-6982/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "offline-docker-547000" primary control-plane node in "offline-docker-547000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "offline-docker-547000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I1204 15:32:50.539286    9562 out.go:345] Setting OutFile to fd 1 ...
	I1204 15:32:50.539484    9562 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1204 15:32:50.539487    9562 out.go:358] Setting ErrFile to fd 2...
	I1204 15:32:50.539490    9562 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1204 15:32:50.539640    9562 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/20045-6982/.minikube/bin
	I1204 15:32:50.540987    9562 out.go:352] Setting JSON to false
	I1204 15:32:50.561064    9562 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":5540,"bootTime":1733349630,"procs":546,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"15.0.1","kernelVersion":"24.0.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W1204 15:32:50.561150    9562 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I1204 15:32:50.566459    9562 out.go:177] * [offline-docker-547000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	I1204 15:32:50.572455    9562 out.go:177]   - MINIKUBE_LOCATION=20045
	I1204 15:32:50.572476    9562 notify.go:220] Checking for updates...
	I1204 15:32:50.580441    9562 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/20045-6982/kubeconfig
	I1204 15:32:50.583419    9562 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I1204 15:32:50.587439    9562 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1204 15:32:50.590480    9562 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/20045-6982/.minikube
	I1204 15:32:50.593423    9562 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1204 15:32:50.596867    9562 config.go:182] Loaded profile config "multinode-093000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.2
	I1204 15:32:50.596935    9562 driver.go:394] Setting default libvirt URI to qemu:///system
	I1204 15:32:50.601391    9562 out.go:177] * Using the qemu2 driver based on user configuration
	I1204 15:32:50.608462    9562 start.go:297] selected driver: qemu2
	I1204 15:32:50.608469    9562 start.go:901] validating driver "qemu2" against <nil>
	I1204 15:32:50.608476    9562 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1204 15:32:50.610775    9562 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I1204 15:32:50.613427    9562 out.go:177] * Automatically selected the socket_vmnet network
	I1204 15:32:50.616521    9562 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1204 15:32:50.616539    9562 cni.go:84] Creating CNI manager for ""
	I1204 15:32:50.616562    9562 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I1204 15:32:50.616571    9562 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I1204 15:32:50.616606    9562 start.go:340] cluster config:
	{Name:offline-docker-547000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:2048 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:offline-docker-547000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1204 15:32:50.621313    9562 iso.go:125] acquiring lock: {Name:mk798ab7885b095925d55c843b0600e7ea181045 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1204 15:32:50.628433    9562 out.go:177] * Starting "offline-docker-547000" primary control-plane node in "offline-docker-547000" cluster
	I1204 15:32:50.632428    9562 preload.go:131] Checking if preload exists for k8s version v1.31.2 and runtime docker
	I1204 15:32:50.632465    9562 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/20045-6982/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-docker-overlay2-arm64.tar.lz4
	I1204 15:32:50.632483    9562 cache.go:56] Caching tarball of preloaded images
	I1204 15:32:50.632578    9562 preload.go:172] Found /Users/jenkins/minikube-integration/20045-6982/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I1204 15:32:50.632584    9562 cache.go:59] Finished verifying existence of preloaded tar for v1.31.2 on docker
	I1204 15:32:50.632653    9562 profile.go:143] Saving config to /Users/jenkins/minikube-integration/20045-6982/.minikube/profiles/offline-docker-547000/config.json ...
	I1204 15:32:50.632664    9562 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/20045-6982/.minikube/profiles/offline-docker-547000/config.json: {Name:mk10bd43c3c3af28747adcc1ea42e9db0d0ab22a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1204 15:32:50.633031    9562 start.go:360] acquireMachinesLock for offline-docker-547000: {Name:mkab5eeb7049d26d12029fc21a1411c94c7c7493 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1204 15:32:50.633077    9562 start.go:364] duration metric: took 37.542µs to acquireMachinesLock for "offline-docker-547000"
	I1204 15:32:50.633088    9562 start.go:93] Provisioning new machine with config: &{Name:offline-docker-547000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19917/minikube-v1.34.0-1730913550-19917-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:2048 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:offline-docker-547000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1204 15:32:50.633116    9562 start.go:125] createHost starting for "" (driver="qemu2")
	I1204 15:32:50.637441    9562 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	I1204 15:32:50.653751    9562 start.go:159] libmachine.API.Create for "offline-docker-547000" (driver="qemu2")
	I1204 15:32:50.653799    9562 client.go:168] LocalClient.Create starting
	I1204 15:32:50.653872    9562 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/20045-6982/.minikube/certs/ca.pem
	I1204 15:32:50.653906    9562 main.go:141] libmachine: Decoding PEM data...
	I1204 15:32:50.653918    9562 main.go:141] libmachine: Parsing certificate...
	I1204 15:32:50.653963    9562 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/20045-6982/.minikube/certs/cert.pem
	I1204 15:32:50.653995    9562 main.go:141] libmachine: Decoding PEM data...
	I1204 15:32:50.654003    9562 main.go:141] libmachine: Parsing certificate...
	I1204 15:32:50.654449    9562 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/20045-6982/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/20045-6982/.minikube/cache/iso/arm64/minikube-v1.34.0-1730913550-19917-arm64.iso...
	I1204 15:32:50.816953    9562 main.go:141] libmachine: Creating SSH key...
	I1204 15:32:50.917613    9562 main.go:141] libmachine: Creating Disk image...
	I1204 15:32:50.917623    9562 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I1204 15:32:50.917841    9562 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/20045-6982/.minikube/machines/offline-docker-547000/disk.qcow2.raw /Users/jenkins/minikube-integration/20045-6982/.minikube/machines/offline-docker-547000/disk.qcow2
	I1204 15:32:50.929134    9562 main.go:141] libmachine: STDOUT: 
	I1204 15:32:50.929157    9562 main.go:141] libmachine: STDERR: 
	I1204 15:32:50.929266    9562 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/20045-6982/.minikube/machines/offline-docker-547000/disk.qcow2 +20000M
	I1204 15:32:50.938713    9562 main.go:141] libmachine: STDOUT: Image resized.
	
	I1204 15:32:50.938731    9562 main.go:141] libmachine: STDERR: 
	I1204 15:32:50.938746    9562 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/20045-6982/.minikube/machines/offline-docker-547000/disk.qcow2.raw and /Users/jenkins/minikube-integration/20045-6982/.minikube/machines/offline-docker-547000/disk.qcow2
	I1204 15:32:50.938751    9562 main.go:141] libmachine: Starting QEMU VM...
	I1204 15:32:50.938769    9562 qemu.go:418] Using hvf for hardware acceleration
	I1204 15:32:50.938797    9562 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2048 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/20045-6982/.minikube/machines/offline-docker-547000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/20045-6982/.minikube/machines/offline-docker-547000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/20045-6982/.minikube/machines/offline-docker-547000/qemu.pid -device virtio-net-pci,netdev=net0,mac=16:e5:29:88:06:76 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/20045-6982/.minikube/machines/offline-docker-547000/disk.qcow2
	I1204 15:32:50.940776    9562 main.go:141] libmachine: STDOUT: 
	I1204 15:32:50.940789    9562 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1204 15:32:50.940808    9562 client.go:171] duration metric: took 286.999375ms to LocalClient.Create
	I1204 15:32:52.941150    9562 start.go:128] duration metric: took 2.3080075s to createHost
	I1204 15:32:52.941165    9562 start.go:83] releasing machines lock for "offline-docker-547000", held for 2.308062709s
	W1204 15:32:52.941181    9562 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1204 15:32:52.948569    9562 out.go:177] * Deleting "offline-docker-547000" in qemu2 ...
	W1204 15:32:52.959835    9562 out.go:270] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1204 15:32:52.959847    9562 start.go:729] Will try again in 5 seconds ...
	I1204 15:32:57.962078    9562 start.go:360] acquireMachinesLock for offline-docker-547000: {Name:mkab5eeb7049d26d12029fc21a1411c94c7c7493 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1204 15:32:57.962749    9562 start.go:364] duration metric: took 521.291µs to acquireMachinesLock for "offline-docker-547000"
	I1204 15:32:57.962902    9562 start.go:93] Provisioning new machine with config: &{Name:offline-docker-547000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19917/minikube-v1.34.0-1730913550-19917-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:2048 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:offline-docker-547000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1204 15:32:57.963240    9562 start.go:125] createHost starting for "" (driver="qemu2")
	I1204 15:32:57.970907    9562 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	I1204 15:32:58.021862    9562 start.go:159] libmachine.API.Create for "offline-docker-547000" (driver="qemu2")
	I1204 15:32:58.021917    9562 client.go:168] LocalClient.Create starting
	I1204 15:32:58.022049    9562 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/20045-6982/.minikube/certs/ca.pem
	I1204 15:32:58.022140    9562 main.go:141] libmachine: Decoding PEM data...
	I1204 15:32:58.022158    9562 main.go:141] libmachine: Parsing certificate...
	I1204 15:32:58.022232    9562 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/20045-6982/.minikube/certs/cert.pem
	I1204 15:32:58.022289    9562 main.go:141] libmachine: Decoding PEM data...
	I1204 15:32:58.022305    9562 main.go:141] libmachine: Parsing certificate...
	I1204 15:32:58.022980    9562 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/20045-6982/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/20045-6982/.minikube/cache/iso/arm64/minikube-v1.34.0-1730913550-19917-arm64.iso...
	I1204 15:32:58.194901    9562 main.go:141] libmachine: Creating SSH key...
	I1204 15:32:58.317226    9562 main.go:141] libmachine: Creating Disk image...
	I1204 15:32:58.317235    9562 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I1204 15:32:58.317450    9562 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/20045-6982/.minikube/machines/offline-docker-547000/disk.qcow2.raw /Users/jenkins/minikube-integration/20045-6982/.minikube/machines/offline-docker-547000/disk.qcow2
	I1204 15:32:58.327063    9562 main.go:141] libmachine: STDOUT: 
	I1204 15:32:58.327086    9562 main.go:141] libmachine: STDERR: 
	I1204 15:32:58.327153    9562 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/20045-6982/.minikube/machines/offline-docker-547000/disk.qcow2 +20000M
	I1204 15:32:58.335643    9562 main.go:141] libmachine: STDOUT: Image resized.
	
	I1204 15:32:58.335661    9562 main.go:141] libmachine: STDERR: 
	I1204 15:32:58.335675    9562 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/20045-6982/.minikube/machines/offline-docker-547000/disk.qcow2.raw and /Users/jenkins/minikube-integration/20045-6982/.minikube/machines/offline-docker-547000/disk.qcow2
	I1204 15:32:58.335679    9562 main.go:141] libmachine: Starting QEMU VM...
	I1204 15:32:58.335688    9562 qemu.go:418] Using hvf for hardware acceleration
	I1204 15:32:58.335718    9562 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2048 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/20045-6982/.minikube/machines/offline-docker-547000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/20045-6982/.minikube/machines/offline-docker-547000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/20045-6982/.minikube/machines/offline-docker-547000/qemu.pid -device virtio-net-pci,netdev=net0,mac=ba:64:59:cf:0d:c4 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/20045-6982/.minikube/machines/offline-docker-547000/disk.qcow2
	I1204 15:32:58.337568    9562 main.go:141] libmachine: STDOUT: 
	I1204 15:32:58.337581    9562 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1204 15:32:58.337595    9562 client.go:171] duration metric: took 315.670334ms to LocalClient.Create
	I1204 15:33:00.339826    9562 start.go:128] duration metric: took 2.376525958s to createHost
	I1204 15:33:00.339911    9562 start.go:83] releasing machines lock for "offline-docker-547000", held for 2.377101541s
	W1204 15:33:00.340432    9562 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p offline-docker-547000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p offline-docker-547000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1204 15:33:00.359178    9562 out.go:201] 
	W1204 15:33:00.365244    9562 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W1204 15:33:00.365296    9562 out.go:270] * 
	* 
	W1204 15:33:00.367902    9562 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1204 15:33:00.381168    9562 out.go:201] 

** /stderr **
aab_offline_test.go:58: out/minikube-darwin-arm64 start -p offline-docker-547000 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=qemu2  failed: exit status 80
panic.go:629: *** TestOffline FAILED at 2024-12-04 15:33:00.396955 -0800 PST m=+725.424195751
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p offline-docker-547000 -n offline-docker-547000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p offline-docker-547000 -n offline-docker-547000: exit status 7 (72.017917ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "offline-docker-547000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:175: Cleaning up "offline-docker-547000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p offline-docker-547000
--- FAIL: TestOffline (10.05s)
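Note: every qemu2 start in this run fails the same way: socket_vmnet_client cannot connect to /var/run/socket_vmnet, i.e. no socket_vmnet daemon is listening on the CI host. minikube retries once, then exits with GUEST_PROVISION, which accounts for the many ~10-second failures in the table above. The following is a minimal sketch that probes the same unix socket before attempting a start (the two-second timeout is an assumption, not minikube behavior):

package main

import (
	"fmt"
	"net"
	"time"
)

func main() {
	// minikube's qemu2 driver launches qemu-system-aarch64 through
	// socket_vmnet_client, which needs a daemon accepting on this socket.
	// Dialing it directly reproduces the "Connection refused" seen above.
	const sock = "/var/run/socket_vmnet" // path taken from the logs

	conn, err := net.DialTimeout("unix", sock, 2*time.Second)
	if err != nil {
		// "connection refused": the socket file exists but nothing is listening.
		// "no such file or directory": socket_vmnet was never started at all.
		fmt.Println("socket_vmnet not reachable:", err)
		return
	}
	conn.Close()
	fmt.Println("socket_vmnet is accepting connections")
}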

TestAddons/Setup (10.11s)

=== RUN   TestAddons/Setup
addons_test.go:107: (dbg) Run:  out/minikube-darwin-arm64 start -p addons-057000 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --addons=amd-gpu-device-plugin --driver=qemu2  --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher
addons_test.go:107: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p addons-057000 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --addons=amd-gpu-device-plugin --driver=qemu2  --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher: exit status 80 (10.106668416s)

-- stdout --
	* [addons-057000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	  - MINIKUBE_LOCATION=20045
	  - KUBECONFIG=/Users/jenkins/minikube-integration/20045-6982/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/20045-6982/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "addons-057000" primary control-plane node in "addons-057000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "addons-057000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I1204 15:21:30.804685    7579 out.go:345] Setting OutFile to fd 1 ...
	I1204 15:21:30.804844    7579 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1204 15:21:30.804847    7579 out.go:358] Setting ErrFile to fd 2...
	I1204 15:21:30.804850    7579 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1204 15:21:30.804985    7579 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/20045-6982/.minikube/bin
	I1204 15:21:30.806139    7579 out.go:352] Setting JSON to false
	I1204 15:21:30.823757    7579 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":4860,"bootTime":1733349630,"procs":547,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"15.0.1","kernelVersion":"24.0.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W1204 15:21:30.823829    7579 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I1204 15:21:30.828429    7579 out.go:177] * [addons-057000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	I1204 15:21:30.835456    7579 out.go:177]   - MINIKUBE_LOCATION=20045
	I1204 15:21:30.835489    7579 notify.go:220] Checking for updates...
	I1204 15:21:30.843446    7579 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/20045-6982/kubeconfig
	I1204 15:21:30.844918    7579 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I1204 15:21:30.848405    7579 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1204 15:21:30.851452    7579 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/20045-6982/.minikube
	I1204 15:21:30.854448    7579 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1204 15:21:30.857656    7579 driver.go:394] Setting default libvirt URI to qemu:///system
	I1204 15:21:30.862442    7579 out.go:177] * Using the qemu2 driver based on user configuration
	I1204 15:21:30.869433    7579 start.go:297] selected driver: qemu2
	I1204 15:21:30.869441    7579 start.go:901] validating driver "qemu2" against <nil>
	I1204 15:21:30.869448    7579 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1204 15:21:30.872025    7579 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I1204 15:21:30.875429    7579 out.go:177] * Automatically selected the socket_vmnet network
	I1204 15:21:30.878593    7579 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1204 15:21:30.878625    7579 cni.go:84] Creating CNI manager for ""
	I1204 15:21:30.878653    7579 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I1204 15:21:30.878661    7579 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I1204 15:21:30.878701    7579 start.go:340] cluster config:
	{Name:addons-057000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:4000 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:addons-057000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1204 15:21:30.883333    7579 iso.go:125] acquiring lock: {Name:mk798ab7885b095925d55c843b0600e7ea181045 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1204 15:21:30.891436    7579 out.go:177] * Starting "addons-057000" primary control-plane node in "addons-057000" cluster
	I1204 15:21:30.895338    7579 preload.go:131] Checking if preload exists for k8s version v1.31.2 and runtime docker
	I1204 15:21:30.895353    7579 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/20045-6982/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-docker-overlay2-arm64.tar.lz4
	I1204 15:21:30.895361    7579 cache.go:56] Caching tarball of preloaded images
	I1204 15:21:30.895438    7579 preload.go:172] Found /Users/jenkins/minikube-integration/20045-6982/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I1204 15:21:30.895445    7579 cache.go:59] Finished verifying existence of preloaded tar for v1.31.2 on docker
	I1204 15:21:30.895657    7579 profile.go:143] Saving config to /Users/jenkins/minikube-integration/20045-6982/.minikube/profiles/addons-057000/config.json ...
	I1204 15:21:30.895670    7579 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/20045-6982/.minikube/profiles/addons-057000/config.json: {Name:mk99595e45dae9a71e49c31353e97c1dc3ca9084 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1204 15:21:30.896127    7579 start.go:360] acquireMachinesLock for addons-057000: {Name:mkab5eeb7049d26d12029fc21a1411c94c7c7493 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1204 15:21:30.896225    7579 start.go:364] duration metric: took 91.875µs to acquireMachinesLock for "addons-057000"
	I1204 15:21:30.896242    7579 start.go:93] Provisioning new machine with config: &{Name:addons-057000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19917/minikube-v1.34.0-1730913550-19917-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:4000 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:addons-057000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1204 15:21:30.896286    7579 start.go:125] createHost starting for "" (driver="qemu2")
	I1204 15:21:30.901502    7579 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	I1204 15:21:30.919891    7579 start.go:159] libmachine.API.Create for "addons-057000" (driver="qemu2")
	I1204 15:21:30.919918    7579 client.go:168] LocalClient.Create starting
	I1204 15:21:30.920070    7579 main.go:141] libmachine: Creating CA: /Users/jenkins/minikube-integration/20045-6982/.minikube/certs/ca.pem
	I1204 15:21:31.044050    7579 main.go:141] libmachine: Creating client certificate: /Users/jenkins/minikube-integration/20045-6982/.minikube/certs/cert.pem
	I1204 15:21:31.103083    7579 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/20045-6982/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/20045-6982/.minikube/cache/iso/arm64/minikube-v1.34.0-1730913550-19917-arm64.iso...
	I1204 15:21:31.298463    7579 main.go:141] libmachine: Creating SSH key...
	I1204 15:21:31.435011    7579 main.go:141] libmachine: Creating Disk image...
	I1204 15:21:31.435018    7579 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I1204 15:21:31.435263    7579 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/20045-6982/.minikube/machines/addons-057000/disk.qcow2.raw /Users/jenkins/minikube-integration/20045-6982/.minikube/machines/addons-057000/disk.qcow2
	I1204 15:21:31.445732    7579 main.go:141] libmachine: STDOUT: 
	I1204 15:21:31.445759    7579 main.go:141] libmachine: STDERR: 
	I1204 15:21:31.445813    7579 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/20045-6982/.minikube/machines/addons-057000/disk.qcow2 +20000M
	I1204 15:21:31.454224    7579 main.go:141] libmachine: STDOUT: Image resized.
	
	I1204 15:21:31.454239    7579 main.go:141] libmachine: STDERR: 
	I1204 15:21:31.454266    7579 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/20045-6982/.minikube/machines/addons-057000/disk.qcow2.raw and /Users/jenkins/minikube-integration/20045-6982/.minikube/machines/addons-057000/disk.qcow2
	I1204 15:21:31.454271    7579 main.go:141] libmachine: Starting QEMU VM...
	I1204 15:21:31.454309    7579 qemu.go:418] Using hvf for hardware acceleration
	I1204 15:21:31.454330    7579 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 4000 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/20045-6982/.minikube/machines/addons-057000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/20045-6982/.minikube/machines/addons-057000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/20045-6982/.minikube/machines/addons-057000/qemu.pid -device virtio-net-pci,netdev=net0,mac=56:a6:a1:02:aa:75 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/20045-6982/.minikube/machines/addons-057000/disk.qcow2
	I1204 15:21:31.456083    7579 main.go:141] libmachine: STDOUT: 
	I1204 15:21:31.456095    7579 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1204 15:21:31.456123    7579 client.go:171] duration metric: took 536.167916ms to LocalClient.Create
	I1204 15:21:33.458384    7579 start.go:128] duration metric: took 2.561966333s to createHost
	I1204 15:21:33.458442    7579 start.go:83] releasing machines lock for "addons-057000", held for 2.5621005s
	W1204 15:21:33.458502    7579 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1204 15:21:33.469686    7579 out.go:177] * Deleting "addons-057000" in qemu2 ...
	W1204 15:21:33.499510    7579 out.go:270] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1204 15:21:33.499531    7579 start.go:729] Will try again in 5 seconds ...
	I1204 15:21:38.501949    7579 start.go:360] acquireMachinesLock for addons-057000: {Name:mkab5eeb7049d26d12029fc21a1411c94c7c7493 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1204 15:21:38.502630    7579 start.go:364] duration metric: took 510.75µs to acquireMachinesLock for "addons-057000"
	I1204 15:21:38.502779    7579 start.go:93] Provisioning new machine with config: &{Name:addons-057000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19917/minikube-v1.34.0-1730913550-19917-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:4000 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:addons-057000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1204 15:21:38.503059    7579 start.go:125] createHost starting for "" (driver="qemu2")
	I1204 15:21:38.513735    7579 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	I1204 15:21:38.563195    7579 start.go:159] libmachine.API.Create for "addons-057000" (driver="qemu2")
	I1204 15:21:38.563267    7579 client.go:168] LocalClient.Create starting
	I1204 15:21:38.563440    7579 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/20045-6982/.minikube/certs/ca.pem
	I1204 15:21:38.563519    7579 main.go:141] libmachine: Decoding PEM data...
	I1204 15:21:38.563534    7579 main.go:141] libmachine: Parsing certificate...
	I1204 15:21:38.563646    7579 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/20045-6982/.minikube/certs/cert.pem
	I1204 15:21:38.563703    7579 main.go:141] libmachine: Decoding PEM data...
	I1204 15:21:38.563715    7579 main.go:141] libmachine: Parsing certificate...
	I1204 15:21:38.564515    7579 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/20045-6982/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/20045-6982/.minikube/cache/iso/arm64/minikube-v1.34.0-1730913550-19917-arm64.iso...
	I1204 15:21:38.737741    7579 main.go:141] libmachine: Creating SSH key...
	I1204 15:21:38.811915    7579 main.go:141] libmachine: Creating Disk image...
	I1204 15:21:38.811921    7579 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I1204 15:21:38.812123    7579 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/20045-6982/.minikube/machines/addons-057000/disk.qcow2.raw /Users/jenkins/minikube-integration/20045-6982/.minikube/machines/addons-057000/disk.qcow2
	I1204 15:21:38.822344    7579 main.go:141] libmachine: STDOUT: 
	I1204 15:21:38.822360    7579 main.go:141] libmachine: STDERR: 
	I1204 15:21:38.822415    7579 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/20045-6982/.minikube/machines/addons-057000/disk.qcow2 +20000M
	I1204 15:21:38.830882    7579 main.go:141] libmachine: STDOUT: Image resized.
	
	I1204 15:21:38.830915    7579 main.go:141] libmachine: STDERR: 
	I1204 15:21:38.830928    7579 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/20045-6982/.minikube/machines/addons-057000/disk.qcow2.raw and /Users/jenkins/minikube-integration/20045-6982/.minikube/machines/addons-057000/disk.qcow2
	I1204 15:21:38.830933    7579 main.go:141] libmachine: Starting QEMU VM...
	I1204 15:21:38.830941    7579 qemu.go:418] Using hvf for hardware acceleration
	I1204 15:21:38.830978    7579 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 4000 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/20045-6982/.minikube/machines/addons-057000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/20045-6982/.minikube/machines/addons-057000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/20045-6982/.minikube/machines/addons-057000/qemu.pid -device virtio-net-pci,netdev=net0,mac=0e:f3:23:62:18:4f -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/20045-6982/.minikube/machines/addons-057000/disk.qcow2
	I1204 15:21:38.832803    7579 main.go:141] libmachine: STDOUT: 
	I1204 15:21:38.832827    7579 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1204 15:21:38.832839    7579 client.go:171] duration metric: took 269.549292ms to LocalClient.Create
	I1204 15:21:40.835182    7579 start.go:128] duration metric: took 2.33198725s to createHost
	I1204 15:21:40.835258    7579 start.go:83] releasing machines lock for "addons-057000", held for 2.332532875s
	W1204 15:21:40.835606    7579 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p addons-057000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p addons-057000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1204 15:21:40.843753    7579 out.go:201] 
	W1204 15:21:40.852781    7579 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W1204 15:21:40.852831    7579 out.go:270] * 
	* 
	W1204 15:21:40.855795    7579 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1204 15:21:40.863623    7579 out.go:201] 

                                                
                                                
** /stderr **
addons_test.go:109: out/minikube-darwin-arm64 start -p addons-057000 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --addons=amd-gpu-device-plugin --driver=qemu2  --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher failed: exit status 80
--- FAIL: TestAddons/Setup (10.11s)
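
The stderr above records the exact launch command: socket_vmnet_client connects to the daemon's socket and hands the connection to qemu-system-aarch64 as fd 3 (matching the -netdev socket,id=net0,fd=3 argument). The refusal can be isolated from QEMU by pointing the same client at a no-op command; a sketch reusing only paths that appear in the log:

    # Connect to the socket and exec a trivial command instead of QEMU (sketch).
    # A zero exit means the socket accepted the connection; the "Failed to
    # connect" line above corresponds to a non-zero exit here.
    /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet true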

                                                
                                    
TestCertOptions (10.24s)

                                                
                                                
=== RUN   TestCertOptions
=== PAUSE TestCertOptions

                                                
                                                

                                                
                                                
=== CONT  TestCertOptions
cert_options_test.go:49: (dbg) Run:  out/minikube-darwin-arm64 start -p cert-options-100000 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=qemu2 
cert_options_test.go:49: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p cert-options-100000 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=qemu2 : exit status 80 (9.9543675s)

                                                
                                                
-- stdout --
	* [cert-options-100000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	  - MINIKUBE_LOCATION=20045
	  - KUBECONFIG=/Users/jenkins/minikube-integration/20045-6982/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/20045-6982/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "cert-options-100000" primary control-plane node in "cert-options-100000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "cert-options-100000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p cert-options-100000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
cert_options_test.go:51: failed to start minikube with args: "out/minikube-darwin-arm64 start -p cert-options-100000 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=qemu2 " : exit status 80
cert_options_test.go:60: (dbg) Run:  out/minikube-darwin-arm64 -p cert-options-100000 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt"
cert_options_test.go:60: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p cert-options-100000 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt": exit status 83 (86.186625ms)

                                                
                                                
-- stdout --
	* The control-plane node cert-options-100000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p cert-options-100000"

                                                
                                                
-- /stdout --
cert_options_test.go:62: failed to read apiserver cert inside minikube. args "out/minikube-darwin-arm64 -p cert-options-100000 ssh \"openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt\"": exit status 83
cert_options_test.go:69: apiserver cert does not include 127.0.0.1 in SAN.
cert_options_test.go:69: apiserver cert does not include 192.168.15.15 in SAN.
cert_options_test.go:69: apiserver cert does not include localhost in SAN.
cert_options_test.go:69: apiserver cert does not include www.google.com in SAN.
cert_options_test.go:88: (dbg) Run:  kubectl --context cert-options-100000 config view
cert_options_test.go:93: Kubeconfig apiserver server port incorrect. Output of 
'kubectl config view' = "\n-- stdout --\n\tapiVersion: v1\n\tclusters: null\n\tcontexts: null\n\tcurrent-context: \"\"\n\tkind: Config\n\tpreferences: {}\n\tusers: null\n\n-- /stdout --"
cert_options_test.go:100: (dbg) Run:  out/minikube-darwin-arm64 ssh -p cert-options-100000 -- "sudo cat /etc/kubernetes/admin.conf"
cert_options_test.go:100: (dbg) Non-zero exit: out/minikube-darwin-arm64 ssh -p cert-options-100000 -- "sudo cat /etc/kubernetes/admin.conf": exit status 83 (47.173375ms)

                                                
                                                
-- stdout --
	* The control-plane node cert-options-100000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p cert-options-100000"

                                                
                                                
-- /stdout --
cert_options_test.go:102: failed to SSH to minikube with args: "out/minikube-darwin-arm64 ssh -p cert-options-100000 -- \"sudo cat /etc/kubernetes/admin.conf\"" : exit status 83
cert_options_test.go:106: Internal minikube kubeconfig (admin.conf) does not contains the right api port. 
-- stdout --
	* The control-plane node cert-options-100000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p cert-options-100000"

                                                
                                                
-- /stdout --
cert_options_test.go:109: *** TestCertOptions FAILED at 2024-12-04 15:33:31.863219 -0800 PST m=+756.890168959
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p cert-options-100000 -n cert-options-100000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p cert-options-100000 -n cert-options-100000: exit status 7 (35.320417ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "cert-options-100000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:175: Cleaning up "cert-options-100000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p cert-options-100000
--- FAIL: TestCertOptions (10.24s)
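
Note that the SAN assertions at cert_options_test.go:69 never inspected a certificate: the ssh step returned the "host is not running" message instead of PEM text, so all four names were reported missing. On a cluster that does start, the same check can be repeated by hand with the command from the log, e.g.:

    # Sketch: reproduce the test's SAN check manually on a running cluster.
    out/minikube-darwin-arm64 -p cert-options-100000 ssh \
      "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt" \
      | grep -A 1 "Subject Alternative Name"

The requested entries (127.0.0.1, 192.168.15.15, localhost, www.google.com) should then appear on the line after the SAN header.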

                                                
                                    
TestCertExpiration (195.27s)

                                                
                                                
=== RUN   TestCertExpiration
=== PAUSE TestCertExpiration

                                                
                                                

                                                
                                                
=== CONT  TestCertExpiration
cert_options_test.go:123: (dbg) Run:  out/minikube-darwin-arm64 start -p cert-expiration-397000 --memory=2048 --cert-expiration=3m --driver=qemu2 
cert_options_test.go:123: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p cert-expiration-397000 --memory=2048 --cert-expiration=3m --driver=qemu2 : exit status 80 (9.902708208s)

                                                
                                                
-- stdout --
	* [cert-expiration-397000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	  - MINIKUBE_LOCATION=20045
	  - KUBECONFIG=/Users/jenkins/minikube-integration/20045-6982/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/20045-6982/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "cert-expiration-397000" primary control-plane node in "cert-expiration-397000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "cert-expiration-397000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p cert-expiration-397000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
cert_options_test.go:125: failed to start minikube with args: "out/minikube-darwin-arm64 start -p cert-expiration-397000 --memory=2048 --cert-expiration=3m --driver=qemu2 " : exit status 80
cert_options_test.go:131: (dbg) Run:  out/minikube-darwin-arm64 start -p cert-expiration-397000 --memory=2048 --cert-expiration=8760h --driver=qemu2 
cert_options_test.go:131: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p cert-expiration-397000 --memory=2048 --cert-expiration=8760h --driver=qemu2 : exit status 80 (5.218383167s)

                                                
                                                
-- stdout --
	* [cert-expiration-397000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	  - MINIKUBE_LOCATION=20045
	  - KUBECONFIG=/Users/jenkins/minikube-integration/20045-6982/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/20045-6982/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting "cert-expiration-397000" primary control-plane node in "cert-expiration-397000" cluster
	* Restarting existing qemu2 VM for "cert-expiration-397000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "cert-expiration-397000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p cert-expiration-397000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
cert_options_test.go:133: failed to start minikube after cert expiration: "out/minikube-darwin-arm64 start -p cert-expiration-397000 --memory=2048 --cert-expiration=8760h --driver=qemu2 " : exit status 80
cert_options_test.go:136: minikube start output did not warn about expired certs: 
-- stdout --
	* [cert-expiration-397000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	  - MINIKUBE_LOCATION=20045
	  - KUBECONFIG=/Users/jenkins/minikube-integration/20045-6982/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/20045-6982/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting "cert-expiration-397000" primary control-plane node in "cert-expiration-397000" cluster
	* Restarting existing qemu2 VM for "cert-expiration-397000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "cert-expiration-397000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p cert-expiration-397000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
cert_options_test.go:138: *** TestCertExpiration FAILED at 2024-12-04 15:36:31.804117 -0800 PST m=+936.829407376
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p cert-expiration-397000 -n cert-expiration-397000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p cert-expiration-397000 -n cert-expiration-397000: exit status 7 (65.465041ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "cert-expiration-397000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:175: Cleaning up "cert-expiration-397000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p cert-expiration-397000
--- FAIL: TestCertExpiration (195.27s)
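
Both starts failed before any certificates were generated, so the expired-cert warning the test waits for could never be printed. Once a VM does boot, the effect of --cert-expiration can be checked directly; a sketch, assuming the same certificate path that TestCertOptions reads above:

    # Sketch: inspect the apiserver certificate's notAfter date inside the node.
    out/minikube-darwin-arm64 -p cert-expiration-397000 ssh \
      "openssl x509 -enddate -noout -in /var/lib/minikube/certs/apiserver.crt"

With --cert-expiration=3m the date should land minutes after creation; with 8760h, a year out.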

                                                
                                    
TestDockerFlags (10.19s)

                                                
                                                
=== RUN   TestDockerFlags
=== PAUSE TestDockerFlags

                                                
                                                

                                                
                                                
=== CONT  TestDockerFlags
docker_test.go:51: (dbg) Run:  out/minikube-darwin-arm64 start -p docker-flags-438000 --cache-images=false --memory=2048 --install-addons=false --wait=false --docker-env=FOO=BAR --docker-env=BAZ=BAT --docker-opt=debug --docker-opt=icc=true --alsologtostderr -v=5 --driver=qemu2 
docker_test.go:51: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p docker-flags-438000 --cache-images=false --memory=2048 --install-addons=false --wait=false --docker-env=FOO=BAR --docker-env=BAZ=BAT --docker-opt=debug --docker-opt=icc=true --alsologtostderr -v=5 --driver=qemu2 : exit status 80 (9.9364215s)

                                                
                                                
-- stdout --
	* [docker-flags-438000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	  - MINIKUBE_LOCATION=20045
	  - KUBECONFIG=/Users/jenkins/minikube-integration/20045-6982/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/20045-6982/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "docker-flags-438000" primary control-plane node in "docker-flags-438000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "docker-flags-438000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1204 15:33:11.581179    9763 out.go:345] Setting OutFile to fd 1 ...
	I1204 15:33:11.581352    9763 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1204 15:33:11.581355    9763 out.go:358] Setting ErrFile to fd 2...
	I1204 15:33:11.581358    9763 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1204 15:33:11.581476    9763 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/20045-6982/.minikube/bin
	I1204 15:33:11.582653    9763 out.go:352] Setting JSON to false
	I1204 15:33:11.600703    9763 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":5561,"bootTime":1733349630,"procs":545,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"15.0.1","kernelVersion":"24.0.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W1204 15:33:11.600766    9763 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I1204 15:33:11.606549    9763 out.go:177] * [docker-flags-438000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	I1204 15:33:11.613450    9763 out.go:177]   - MINIKUBE_LOCATION=20045
	I1204 15:33:11.613498    9763 notify.go:220] Checking for updates...
	I1204 15:33:11.620449    9763 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/20045-6982/kubeconfig
	I1204 15:33:11.623432    9763 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I1204 15:33:11.626428    9763 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1204 15:33:11.629368    9763 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/20045-6982/.minikube
	I1204 15:33:11.632355    9763 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1204 15:33:11.635772    9763 config.go:182] Loaded profile config "force-systemd-flag-064000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.2
	I1204 15:33:11.635850    9763 config.go:182] Loaded profile config "multinode-093000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.2
	I1204 15:33:11.635905    9763 driver.go:394] Setting default libvirt URI to qemu:///system
	I1204 15:33:11.640356    9763 out.go:177] * Using the qemu2 driver based on user configuration
	I1204 15:33:11.647399    9763 start.go:297] selected driver: qemu2
	I1204 15:33:11.647406    9763 start.go:901] validating driver "qemu2" against <nil>
	I1204 15:33:11.647411    9763 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1204 15:33:11.649975    9763 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I1204 15:33:11.654382    9763 out.go:177] * Automatically selected the socket_vmnet network
	I1204 15:33:11.657463    9763 start_flags.go:942] Waiting for no components: map[apiserver:false apps_running:false default_sa:false extra:false kubelet:false node_ready:false system_pods:false]
	I1204 15:33:11.657477    9763 cni.go:84] Creating CNI manager for ""
	I1204 15:33:11.657500    9763 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I1204 15:33:11.657510    9763 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I1204 15:33:11.657539    9763 start.go:340] cluster config:
	{Name:docker-flags-438000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:2048 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[FOO=BAR BAZ=BAT] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[debug icc=true] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:docker-flags-438000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:false EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:false apps_running:false default_sa:false extra:false kubelet:false node_ready:false system_pods:false] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1204 15:33:11.662287    9763 iso.go:125] acquiring lock: {Name:mk798ab7885b095925d55c843b0600e7ea181045 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1204 15:33:11.669400    9763 out.go:177] * Starting "docker-flags-438000" primary control-plane node in "docker-flags-438000" cluster
	I1204 15:33:11.673462    9763 preload.go:131] Checking if preload exists for k8s version v1.31.2 and runtime docker
	I1204 15:33:11.673481    9763 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/20045-6982/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-docker-overlay2-arm64.tar.lz4
	I1204 15:33:11.673492    9763 cache.go:56] Caching tarball of preloaded images
	I1204 15:33:11.673598    9763 preload.go:172] Found /Users/jenkins/minikube-integration/20045-6982/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I1204 15:33:11.673605    9763 cache.go:59] Finished verifying existence of preloaded tar for v1.31.2 on docker
	I1204 15:33:11.673673    9763 profile.go:143] Saving config to /Users/jenkins/minikube-integration/20045-6982/.minikube/profiles/docker-flags-438000/config.json ...
	I1204 15:33:11.673684    9763 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/20045-6982/.minikube/profiles/docker-flags-438000/config.json: {Name:mk1c3a2215f2621bbf92965b8db1e0b93e2c4ad5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1204 15:33:11.674134    9763 start.go:360] acquireMachinesLock for docker-flags-438000: {Name:mkab5eeb7049d26d12029fc21a1411c94c7c7493 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1204 15:33:11.674188    9763 start.go:364] duration metric: took 44.917µs to acquireMachinesLock for "docker-flags-438000"
	I1204 15:33:11.674202    9763 start.go:93] Provisioning new machine with config: &{Name:docker-flags-438000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19917/minikube-v1.34.0-1730913550-19917-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:2048 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[FOO=BAR BAZ=BAT] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[debug icc=true] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:docker-flags-438000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:false EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:false apps_running:false default_sa:false extra:false kubelet:false node_ready:false system_pods:false] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1204 15:33:11.674239    9763 start.go:125] createHost starting for "" (driver="qemu2")
	I1204 15:33:11.678475    9763 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	I1204 15:33:11.695664    9763 start.go:159] libmachine.API.Create for "docker-flags-438000" (driver="qemu2")
	I1204 15:33:11.695697    9763 client.go:168] LocalClient.Create starting
	I1204 15:33:11.695770    9763 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/20045-6982/.minikube/certs/ca.pem
	I1204 15:33:11.695810    9763 main.go:141] libmachine: Decoding PEM data...
	I1204 15:33:11.695821    9763 main.go:141] libmachine: Parsing certificate...
	I1204 15:33:11.695863    9763 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/20045-6982/.minikube/certs/cert.pem
	I1204 15:33:11.695893    9763 main.go:141] libmachine: Decoding PEM data...
	I1204 15:33:11.695901    9763 main.go:141] libmachine: Parsing certificate...
	I1204 15:33:11.696293    9763 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/20045-6982/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/20045-6982/.minikube/cache/iso/arm64/minikube-v1.34.0-1730913550-19917-arm64.iso...
	I1204 15:33:11.857638    9763 main.go:141] libmachine: Creating SSH key...
	I1204 15:33:11.947299    9763 main.go:141] libmachine: Creating Disk image...
	I1204 15:33:11.947305    9763 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I1204 15:33:11.947506    9763 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/20045-6982/.minikube/machines/docker-flags-438000/disk.qcow2.raw /Users/jenkins/minikube-integration/20045-6982/.minikube/machines/docker-flags-438000/disk.qcow2
	I1204 15:33:11.957365    9763 main.go:141] libmachine: STDOUT: 
	I1204 15:33:11.957384    9763 main.go:141] libmachine: STDERR: 
	I1204 15:33:11.957437    9763 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/20045-6982/.minikube/machines/docker-flags-438000/disk.qcow2 +20000M
	I1204 15:33:11.966033    9763 main.go:141] libmachine: STDOUT: Image resized.
	
	I1204 15:33:11.966048    9763 main.go:141] libmachine: STDERR: 
	I1204 15:33:11.966066    9763 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/20045-6982/.minikube/machines/docker-flags-438000/disk.qcow2.raw and /Users/jenkins/minikube-integration/20045-6982/.minikube/machines/docker-flags-438000/disk.qcow2
	I1204 15:33:11.966071    9763 main.go:141] libmachine: Starting QEMU VM...
	I1204 15:33:11.966086    9763 qemu.go:418] Using hvf for hardware acceleration
	I1204 15:33:11.966113    9763 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2048 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/20045-6982/.minikube/machines/docker-flags-438000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/20045-6982/.minikube/machines/docker-flags-438000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/20045-6982/.minikube/machines/docker-flags-438000/qemu.pid -device virtio-net-pci,netdev=net0,mac=d2:17:83:ab:3f:03 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/20045-6982/.minikube/machines/docker-flags-438000/disk.qcow2
	I1204 15:33:11.967971    9763 main.go:141] libmachine: STDOUT: 
	I1204 15:33:11.967994    9763 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1204 15:33:11.968015    9763 client.go:171] duration metric: took 272.309834ms to LocalClient.Create
	I1204 15:33:13.970266    9763 start.go:128] duration metric: took 2.295978542s to createHost
	I1204 15:33:13.970357    9763 start.go:83] releasing machines lock for "docker-flags-438000", held for 2.296137209s
	W1204 15:33:13.970491    9763 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1204 15:33:13.988021    9763 out.go:177] * Deleting "docker-flags-438000" in qemu2 ...
	W1204 15:33:14.026671    9763 out.go:270] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1204 15:33:14.026709    9763 start.go:729] Will try again in 5 seconds ...
	I1204 15:33:19.029027    9763 start.go:360] acquireMachinesLock for docker-flags-438000: {Name:mkab5eeb7049d26d12029fc21a1411c94c7c7493 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1204 15:33:19.058310    9763 start.go:364] duration metric: took 29.154125ms to acquireMachinesLock for "docker-flags-438000"
	I1204 15:33:19.058495    9763 start.go:93] Provisioning new machine with config: &{Name:docker-flags-438000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19917/minikube-v1.34.0-1730913550-19917-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:2048 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[FOO=BAR BAZ=BAT] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[debug icc=true] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey
: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:docker-flags-438000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:false EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:false apps_running:false default_sa:false extra:false kubelet:false node_ready:false system_pods:false] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:dock
er MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1204 15:33:19.058774    9763 start.go:125] createHost starting for "" (driver="qemu2")
	I1204 15:33:19.074475    9763 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	I1204 15:33:19.120713    9763 start.go:159] libmachine.API.Create for "docker-flags-438000" (driver="qemu2")
	I1204 15:33:19.120754    9763 client.go:168] LocalClient.Create starting
	I1204 15:33:19.120891    9763 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/20045-6982/.minikube/certs/ca.pem
	I1204 15:33:19.120973    9763 main.go:141] libmachine: Decoding PEM data...
	I1204 15:33:19.120992    9763 main.go:141] libmachine: Parsing certificate...
	I1204 15:33:19.121066    9763 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/20045-6982/.minikube/certs/cert.pem
	I1204 15:33:19.121121    9763 main.go:141] libmachine: Decoding PEM data...
	I1204 15:33:19.121135    9763 main.go:141] libmachine: Parsing certificate...
	I1204 15:33:19.121806    9763 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/20045-6982/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/20045-6982/.minikube/cache/iso/arm64/minikube-v1.34.0-1730913550-19917-arm64.iso...
	I1204 15:33:19.296490    9763 main.go:141] libmachine: Creating SSH key...
	I1204 15:33:19.407474    9763 main.go:141] libmachine: Creating Disk image...
	I1204 15:33:19.407480    9763 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I1204 15:33:19.407702    9763 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/20045-6982/.minikube/machines/docker-flags-438000/disk.qcow2.raw /Users/jenkins/minikube-integration/20045-6982/.minikube/machines/docker-flags-438000/disk.qcow2
	I1204 15:33:19.417903    9763 main.go:141] libmachine: STDOUT: 
	I1204 15:33:19.417924    9763 main.go:141] libmachine: STDERR: 
	I1204 15:33:19.417989    9763 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/20045-6982/.minikube/machines/docker-flags-438000/disk.qcow2 +20000M
	I1204 15:33:19.426375    9763 main.go:141] libmachine: STDOUT: Image resized.
	
	I1204 15:33:19.426392    9763 main.go:141] libmachine: STDERR: 
	I1204 15:33:19.426401    9763 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/20045-6982/.minikube/machines/docker-flags-438000/disk.qcow2.raw and /Users/jenkins/minikube-integration/20045-6982/.minikube/machines/docker-flags-438000/disk.qcow2
	I1204 15:33:19.426410    9763 main.go:141] libmachine: Starting QEMU VM...
	I1204 15:33:19.426421    9763 qemu.go:418] Using hvf for hardware acceleration
	I1204 15:33:19.426455    9763 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2048 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/20045-6982/.minikube/machines/docker-flags-438000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/20045-6982/.minikube/machines/docker-flags-438000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/20045-6982/.minikube/machines/docker-flags-438000/qemu.pid -device virtio-net-pci,netdev=net0,mac=66:c6:5c:98:ba:44 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/20045-6982/.minikube/machines/docker-flags-438000/disk.qcow2
	I1204 15:33:19.428338    9763 main.go:141] libmachine: STDOUT: 
	I1204 15:33:19.428352    9763 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1204 15:33:19.428364    9763 client.go:171] duration metric: took 307.602417ms to LocalClient.Create
	I1204 15:33:21.430614    9763 start.go:128] duration metric: took 2.371775541s to createHost
	I1204 15:33:21.430712    9763 start.go:83] releasing machines lock for "docker-flags-438000", held for 2.372323958s
	W1204 15:33:21.431267    9763 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p docker-flags-438000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p docker-flags-438000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1204 15:33:21.444957    9763 out.go:201] 
	W1204 15:33:21.458078    9763 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W1204 15:33:21.458114    9763 out.go:270] * 
	* 
	W1204 15:33:21.459962    9763 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1204 15:33:21.468950    9763 out.go:201] 

** /stderr **
docker_test.go:53: failed to start minikube with args: "out/minikube-darwin-arm64 start -p docker-flags-438000 --cache-images=false --memory=2048 --install-addons=false --wait=false --docker-env=FOO=BAR --docker-env=BAZ=BAT --docker-opt=debug --docker-opt=icc=true --alsologtostderr -v=5 --driver=qemu2 " : exit status 80
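Root cause, visible in the stderr above: every socket_vmnet_client invocation fails with Connection refused on /var/run/socket_vmnet, meaning the socket_vmnet daemon was not listening on the host, so QEMU never received a network file descriptor and both create attempts aborted. A minimal health check and restart on the host, assuming socket_vmnet was installed via Homebrew as the minikube qemu2 driver docs describe (illustrative, not part of the test run):

$ ls -l /var/run/socket_vmnet
$ HOMEBREW=$(which brew) && sudo ${HOMEBREW} services restart socket_vmnet

The daemon must run as root to create the vmnet interface, which is why the docs invoke brew services under sudo.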
docker_test.go:56: (dbg) Run:  out/minikube-darwin-arm64 -p docker-flags-438000 ssh "sudo systemctl show docker --property=Environment --no-pager"
docker_test.go:56: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p docker-flags-438000 ssh "sudo systemctl show docker --property=Environment --no-pager": exit status 83 (80.732666ms)

-- stdout --
	* The control-plane node docker-flags-438000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p docker-flags-438000"

-- /stdout --
docker_test.go:58: failed to 'systemctl show docker' inside minikube. args "out/minikube-darwin-arm64 -p docker-flags-438000 ssh \"sudo systemctl show docker --property=Environment --no-pager\"": exit status 83
docker_test.go:63: expected env key/value "FOO=BAR" to be passed to minikube's docker and be included in: *"* The control-plane node docker-flags-438000 host is not running: state=Stopped\n  To start a cluster, run: \"minikube start -p docker-flags-438000\"\n"*.
docker_test.go:63: expected env key/value "BAZ=BAT" to be passed to minikube's docker and be included in: *"* The control-plane node docker-flags-438000 host is not running: state=Stopped\n  To start a cluster, run: \"minikube start -p docker-flags-438000\"\n"*.
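For context: the assertion reads the Environment property of the docker systemd unit inside the guest, where minikube writes the --docker-env values. Had the VM booted, the exchange would look roughly like this (illustrative output, not captured from this run):

$ out/minikube-darwin-arm64 -p docker-flags-438000 ssh "sudo systemctl show docker --property=Environment --no-pager"
Environment=FOO=BAR BAZ=BAT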
docker_test.go:67: (dbg) Run:  out/minikube-darwin-arm64 -p docker-flags-438000 ssh "sudo systemctl show docker --property=ExecStart --no-pager"
docker_test.go:67: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p docker-flags-438000 ssh "sudo systemctl show docker --property=ExecStart --no-pager": exit status 83 (47.765166ms)

-- stdout --
	* The control-plane node docker-flags-438000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p docker-flags-438000"

-- /stdout --
docker_test.go:69: failed on the second 'systemctl show docker' inside minikube. args "out/minikube-darwin-arm64 -p docker-flags-438000 ssh \"sudo systemctl show docker --property=ExecStart --no-pager\"": exit status 83
docker_test.go:73: expected "out/minikube-darwin-arm64 -p docker-flags-438000 ssh \"sudo systemctl show docker --property=ExecStart --no-pager\"" output to have include *--debug* . output: "* The control-plane node docker-flags-438000 host is not running: state=Stopped\n  To start a cluster, run: \"minikube start -p docker-flags-438000\"\n"
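Likewise, the --docker-opt values should surface as dockerd command-line flags in the unit's ExecStart property. A passing run would print something of this shape (illustrative; the dockerd path and the elided flags are assumptions):

$ out/minikube-darwin-arm64 -p docker-flags-438000 ssh "sudo systemctl show docker --property=ExecStart --no-pager"
ExecStart={ path=/usr/bin/dockerd ; argv[]=/usr/bin/dockerd ... --debug --icc=true ... }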
panic.go:629: *** TestDockerFlags FAILED at 2024-12-04 15:33:21.616851 -0800 PST m=+746.643895876
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p docker-flags-438000 -n docker-flags-438000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p docker-flags-438000 -n docker-flags-438000: exit status 7 (33.450584ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "docker-flags-438000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:175: Cleaning up "docker-flags-438000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p docker-flags-438000
--- FAIL: TestDockerFlags (10.19s)

TestForceSystemdFlag (10.23s)

=== RUN   TestForceSystemdFlag
=== PAUSE TestForceSystemdFlag

=== CONT  TestForceSystemdFlag
docker_test.go:91: (dbg) Run:  out/minikube-darwin-arm64 start -p force-systemd-flag-064000 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=qemu2 
docker_test.go:91: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p force-systemd-flag-064000 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=qemu2 : exit status 80 (10.016859375s)

-- stdout --
	* [force-systemd-flag-064000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	  - MINIKUBE_LOCATION=20045
	  - KUBECONFIG=/Users/jenkins/minikube-integration/20045-6982/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/20045-6982/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "force-systemd-flag-064000" primary control-plane node in "force-systemd-flag-064000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "force-systemd-flag-064000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I1204 15:33:06.480722    9738 out.go:345] Setting OutFile to fd 1 ...
	I1204 15:33:06.480896    9738 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1204 15:33:06.480899    9738 out.go:358] Setting ErrFile to fd 2...
	I1204 15:33:06.480902    9738 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1204 15:33:06.481031    9738 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/20045-6982/.minikube/bin
	I1204 15:33:06.482224    9738 out.go:352] Setting JSON to false
	I1204 15:33:06.499990    9738 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":5556,"bootTime":1733349630,"procs":545,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"15.0.1","kernelVersion":"24.0.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W1204 15:33:06.500063    9738 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I1204 15:33:06.507151    9738 out.go:177] * [force-systemd-flag-064000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	I1204 15:33:06.517146    9738 out.go:177]   - MINIKUBE_LOCATION=20045
	I1204 15:33:06.517179    9738 notify.go:220] Checking for updates...
	I1204 15:33:06.527054    9738 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/20045-6982/kubeconfig
	I1204 15:33:06.531110    9738 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I1204 15:33:06.534122    9738 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1204 15:33:06.537017    9738 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/20045-6982/.minikube
	I1204 15:33:06.540112    9738 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1204 15:33:06.543447    9738 config.go:182] Loaded profile config "force-systemd-env-829000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.2
	I1204 15:33:06.543528    9738 config.go:182] Loaded profile config "multinode-093000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.2
	I1204 15:33:06.543583    9738 driver.go:394] Setting default libvirt URI to qemu:///system
	I1204 15:33:06.547090    9738 out.go:177] * Using the qemu2 driver based on user configuration
	I1204 15:33:06.554129    9738 start.go:297] selected driver: qemu2
	I1204 15:33:06.554138    9738 start.go:901] validating driver "qemu2" against <nil>
	I1204 15:33:06.554147    9738 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1204 15:33:06.556901    9738 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I1204 15:33:06.558232    9738 out.go:177] * Automatically selected the socket_vmnet network
	I1204 15:33:06.562150    9738 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I1204 15:33:06.562162    9738 cni.go:84] Creating CNI manager for ""
	I1204 15:33:06.562194    9738 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I1204 15:33:06.562201    9738 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I1204 15:33:06.562239    9738 start.go:340] cluster config:
	{Name:force-systemd-flag-064000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:2048 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:force-systemd-flag-064000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1204 15:33:06.567248    9738 iso.go:125] acquiring lock: {Name:mk798ab7885b095925d55c843b0600e7ea181045 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1204 15:33:06.575093    9738 out.go:177] * Starting "force-systemd-flag-064000" primary control-plane node in "force-systemd-flag-064000" cluster
	I1204 15:33:06.579145    9738 preload.go:131] Checking if preload exists for k8s version v1.31.2 and runtime docker
	I1204 15:33:06.579163    9738 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/20045-6982/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-docker-overlay2-arm64.tar.lz4
	I1204 15:33:06.579171    9738 cache.go:56] Caching tarball of preloaded images
	I1204 15:33:06.579260    9738 preload.go:172] Found /Users/jenkins/minikube-integration/20045-6982/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I1204 15:33:06.579266    9738 cache.go:59] Finished verifying existence of preloaded tar for v1.31.2 on docker
	I1204 15:33:06.579329    9738 profile.go:143] Saving config to /Users/jenkins/minikube-integration/20045-6982/.minikube/profiles/force-systemd-flag-064000/config.json ...
	I1204 15:33:06.579340    9738 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/20045-6982/.minikube/profiles/force-systemd-flag-064000/config.json: {Name:mk3ecf9314cd6efd75a7eee9038838991e1a59a5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1204 15:33:06.579898    9738 start.go:360] acquireMachinesLock for force-systemd-flag-064000: {Name:mkab5eeb7049d26d12029fc21a1411c94c7c7493 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1204 15:33:06.579952    9738 start.go:364] duration metric: took 44.708µs to acquireMachinesLock for "force-systemd-flag-064000"
	I1204 15:33:06.579966    9738 start.go:93] Provisioning new machine with config: &{Name:force-systemd-flag-064000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19917/minikube-v1.34.0-1730913550-19917-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:2048 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:force-systemd-flag-064000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1204 15:33:06.579999    9738 start.go:125] createHost starting for "" (driver="qemu2")
	I1204 15:33:06.589111    9738 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	I1204 15:33:06.607718    9738 start.go:159] libmachine.API.Create for "force-systemd-flag-064000" (driver="qemu2")
	I1204 15:33:06.607749    9738 client.go:168] LocalClient.Create starting
	I1204 15:33:06.607828    9738 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/20045-6982/.minikube/certs/ca.pem
	I1204 15:33:06.607869    9738 main.go:141] libmachine: Decoding PEM data...
	I1204 15:33:06.607881    9738 main.go:141] libmachine: Parsing certificate...
	I1204 15:33:06.607929    9738 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/20045-6982/.minikube/certs/cert.pem
	I1204 15:33:06.607960    9738 main.go:141] libmachine: Decoding PEM data...
	I1204 15:33:06.607969    9738 main.go:141] libmachine: Parsing certificate...
	I1204 15:33:06.608491    9738 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/20045-6982/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/20045-6982/.minikube/cache/iso/arm64/minikube-v1.34.0-1730913550-19917-arm64.iso...
	I1204 15:33:06.768986    9738 main.go:141] libmachine: Creating SSH key...
	I1204 15:33:06.893673    9738 main.go:141] libmachine: Creating Disk image...
	I1204 15:33:06.893681    9738 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I1204 15:33:06.893880    9738 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/20045-6982/.minikube/machines/force-systemd-flag-064000/disk.qcow2.raw /Users/jenkins/minikube-integration/20045-6982/.minikube/machines/force-systemd-flag-064000/disk.qcow2
	I1204 15:33:06.904033    9738 main.go:141] libmachine: STDOUT: 
	I1204 15:33:06.904055    9738 main.go:141] libmachine: STDERR: 
	I1204 15:33:06.904111    9738 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/20045-6982/.minikube/machines/force-systemd-flag-064000/disk.qcow2 +20000M
	I1204 15:33:06.912615    9738 main.go:141] libmachine: STDOUT: Image resized.
	
	I1204 15:33:06.912630    9738 main.go:141] libmachine: STDERR: 
	I1204 15:33:06.912643    9738 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/20045-6982/.minikube/machines/force-systemd-flag-064000/disk.qcow2.raw and /Users/jenkins/minikube-integration/20045-6982/.minikube/machines/force-systemd-flag-064000/disk.qcow2
	I1204 15:33:06.912648    9738 main.go:141] libmachine: Starting QEMU VM...
	I1204 15:33:06.912659    9738 qemu.go:418] Using hvf for hardware acceleration
	I1204 15:33:06.912689    9738 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2048 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/20045-6982/.minikube/machines/force-systemd-flag-064000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/20045-6982/.minikube/machines/force-systemd-flag-064000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/20045-6982/.minikube/machines/force-systemd-flag-064000/qemu.pid -device virtio-net-pci,netdev=net0,mac=aa:7d:18:0a:d7:fd -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/20045-6982/.minikube/machines/force-systemd-flag-064000/disk.qcow2
	I1204 15:33:06.914515    9738 main.go:141] libmachine: STDOUT: 
	I1204 15:33:06.914525    9738 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1204 15:33:06.914545    9738 client.go:171] duration metric: took 306.788584ms to LocalClient.Create
	I1204 15:33:08.916774    9738 start.go:128] duration metric: took 2.336723791s to createHost
	I1204 15:33:08.916879    9738 start.go:83] releasing machines lock for "force-systemd-flag-064000", held for 2.33689525s
	W1204 15:33:08.917006    9738 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1204 15:33:08.945280    9738 out.go:177] * Deleting "force-systemd-flag-064000" in qemu2 ...
	W1204 15:33:08.970279    9738 out.go:270] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1204 15:33:08.970308    9738 start.go:729] Will try again in 5 seconds ...
	I1204 15:33:13.972539    9738 start.go:360] acquireMachinesLock for force-systemd-flag-064000: {Name:mkab5eeb7049d26d12029fc21a1411c94c7c7493 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1204 15:33:13.972915    9738 start.go:364] duration metric: took 287.5µs to acquireMachinesLock for "force-systemd-flag-064000"
	I1204 15:33:13.972979    9738 start.go:93] Provisioning new machine with config: &{Name:force-systemd-flag-064000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19917/minikube-v1.34.0-1730913550-19917-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:2048 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:force-systemd-flag-064000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1204 15:33:13.973252    9738 start.go:125] createHost starting for "" (driver="qemu2")
	I1204 15:33:14.001023    9738 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	I1204 15:33:14.054258    9738 start.go:159] libmachine.API.Create for "force-systemd-flag-064000" (driver="qemu2")
	I1204 15:33:14.054327    9738 client.go:168] LocalClient.Create starting
	I1204 15:33:14.054484    9738 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/20045-6982/.minikube/certs/ca.pem
	I1204 15:33:14.054579    9738 main.go:141] libmachine: Decoding PEM data...
	I1204 15:33:14.054597    9738 main.go:141] libmachine: Parsing certificate...
	I1204 15:33:14.054662    9738 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/20045-6982/.minikube/certs/cert.pem
	I1204 15:33:14.054718    9738 main.go:141] libmachine: Decoding PEM data...
	I1204 15:33:14.054735    9738 main.go:141] libmachine: Parsing certificate...
	I1204 15:33:14.055434    9738 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/20045-6982/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/20045-6982/.minikube/cache/iso/arm64/minikube-v1.34.0-1730913550-19917-arm64.iso...
	I1204 15:33:14.226745    9738 main.go:141] libmachine: Creating SSH key...
	I1204 15:33:14.389984    9738 main.go:141] libmachine: Creating Disk image...
	I1204 15:33:14.389991    9738 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I1204 15:33:14.390201    9738 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/20045-6982/.minikube/machines/force-systemd-flag-064000/disk.qcow2.raw /Users/jenkins/minikube-integration/20045-6982/.minikube/machines/force-systemd-flag-064000/disk.qcow2
	I1204 15:33:14.400646    9738 main.go:141] libmachine: STDOUT: 
	I1204 15:33:14.400668    9738 main.go:141] libmachine: STDERR: 
	I1204 15:33:14.400731    9738 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/20045-6982/.minikube/machines/force-systemd-flag-064000/disk.qcow2 +20000M
	I1204 15:33:14.409125    9738 main.go:141] libmachine: STDOUT: Image resized.
	
	I1204 15:33:14.409143    9738 main.go:141] libmachine: STDERR: 
	I1204 15:33:14.409169    9738 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/20045-6982/.minikube/machines/force-systemd-flag-064000/disk.qcow2.raw and /Users/jenkins/minikube-integration/20045-6982/.minikube/machines/force-systemd-flag-064000/disk.qcow2
	I1204 15:33:14.409174    9738 main.go:141] libmachine: Starting QEMU VM...
	I1204 15:33:14.409185    9738 qemu.go:418] Using hvf for hardware acceleration
	I1204 15:33:14.409215    9738 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2048 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/20045-6982/.minikube/machines/force-systemd-flag-064000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/20045-6982/.minikube/machines/force-systemd-flag-064000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/20045-6982/.minikube/machines/force-systemd-flag-064000/qemu.pid -device virtio-net-pci,netdev=net0,mac=a6:6c:13:8d:a7:d0 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/20045-6982/.minikube/machines/force-systemd-flag-064000/disk.qcow2
	I1204 15:33:14.411023    9738 main.go:141] libmachine: STDOUT: 
	I1204 15:33:14.411043    9738 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1204 15:33:14.411056    9738 client.go:171] duration metric: took 356.72ms to LocalClient.Create
	I1204 15:33:16.413331    9738 start.go:128] duration metric: took 2.440018375s to createHost
	I1204 15:33:16.413399    9738 start.go:83] releasing machines lock for "force-systemd-flag-064000", held for 2.440438s
	W1204 15:33:16.413813    9738 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p force-systemd-flag-064000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p force-systemd-flag-064000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1204 15:33:16.427517    9738 out.go:201] 
	W1204 15:33:16.439854    9738 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W1204 15:33:16.439901    9738 out.go:270] * 
	* 
	W1204 15:33:16.442555    9738 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1204 15:33:16.451514    9738 out.go:201] 

** /stderr **
docker_test.go:93: failed to start minikube with args: "out/minikube-darwin-arm64 start -p force-systemd-flag-064000 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=qemu2 " : exit status 80
docker_test.go:110: (dbg) Run:  out/minikube-darwin-arm64 -p force-systemd-flag-064000 ssh "docker info --format {{.CgroupDriver}}"
docker_test.go:110: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p force-systemd-flag-064000 ssh "docker info --format {{.CgroupDriver}}": exit status 83 (88.485833ms)

-- stdout --
	* The control-plane node force-systemd-flag-064000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p force-systemd-flag-064000"

-- /stdout --
docker_test.go:112: failed to get docker cgroup driver. args "out/minikube-darwin-arm64 -p force-systemd-flag-064000 ssh \"docker info --format {{.CgroupDriver}}\"": exit status 83
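The verification that never ran here is the cgroup driver check: with --force-systemd, docker inside the guest is expected to report systemd rather than cgroupfs. On a cluster that actually boots, the exchange would be (illustrative):

$ out/minikube-darwin-arm64 -p force-systemd-flag-064000 ssh "docker info --format {{.CgroupDriver}}"
systemd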
docker_test.go:106: *** TestForceSystemdFlag FAILED at 2024-12-04 15:33:16.55763 -0800 PST m=+741.584721459
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p force-systemd-flag-064000 -n force-systemd-flag-064000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p force-systemd-flag-064000 -n force-systemd-flag-064000: exit status 7 (37.194875ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "force-systemd-flag-064000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:175: Cleaning up "force-systemd-flag-064000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p force-systemd-flag-064000
--- FAIL: TestForceSystemdFlag (10.23s)

TestForceSystemdEnv (10.99s)

=== RUN   TestForceSystemdEnv
=== PAUSE TestForceSystemdEnv

=== CONT  TestForceSystemdEnv
docker_test.go:155: (dbg) Run:  out/minikube-darwin-arm64 start -p force-systemd-env-829000 --memory=2048 --alsologtostderr -v=5 --driver=qemu2 
I1204 15:33:00.891347    7495 install.go:79] stdout: 
W1204 15:33:00.891513    7495 out.go:174] [unset outFile]: * The 'hyperkit' driver requires elevated permissions. The following commands will be executed:

$ sudo chown root:wheel /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestHyperKitDriverInstallOrUpdate2448553324/001/docker-machine-driver-hyperkit 
$ sudo chmod u+s /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestHyperKitDriverInstallOrUpdate2448553324/001/docker-machine-driver-hyperkit 

I1204 15:33:00.891530    7495 install.go:99] testing: [sudo -n chown root:wheel /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestHyperKitDriverInstallOrUpdate2448553324/001/docker-machine-driver-hyperkit]
I1204 15:33:00.906840    7495 install.go:106] running: [sudo chown root:wheel /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestHyperKitDriverInstallOrUpdate2448553324/001/docker-machine-driver-hyperkit]
I1204 15:33:00.919173    7495 install.go:99] testing: [sudo -n chmod u+s /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestHyperKitDriverInstallOrUpdate2448553324/001/docker-machine-driver-hyperkit]
I1204 15:33:00.930318    7495 install.go:106] running: [sudo chmod u+s /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestHyperKitDriverInstallOrUpdate2448553324/001/docker-machine-driver-hyperkit]
I1204 15:33:00.953203    7495 install.go:52] acquiring lock: {Name:mk4023283b30b374c3f04c8805d539e68824c0b8 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
I1204 15:33:00.953344    7495 install.go:117] Validating docker-machine-driver-hyperkit, PATH=/Users/jenkins/workspace/testdata/hyperkit-driver-older-version:/Users/jenkins/workspace/out/:/usr/bin:/bin:/usr/sbin:/sbin:/Users/jenkins/google-cloud-sdk/bin:/usr/local/bin/:/usr/local/go/bin/:/Users/jenkins/go/bin:/opt/homebrew/bin
I1204 15:33:02.739273    7495 install.go:137] /Users/jenkins/workspace/testdata/hyperkit-driver-older-version/docker-machine-driver-hyperkit version is 1.2.0
W1204 15:33:02.739306    7495 install.go:62] docker-machine-driver-hyperkit: docker-machine-driver-hyperkit is version 1.2.0, want 1.11.0
W1204 15:33:02.739355    7495 out.go:174] [unset outFile]: * Downloading driver docker-machine-driver-hyperkit:
I1204 15:33:02.739399    7495 download.go:107] Downloading: https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-hyperkit-arm64?checksum=file:https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-hyperkit-arm64.sha256 -> /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestHyperKitDriverInstallOrUpdate2448553324/002/docker-machine-driver-hyperkit
I1204 15:33:03.131836    7495 driver.go:46] failed to download arch specific driver: getter: &{Ctx:context.Background Src:https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-hyperkit-arm64?checksum=file:https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-hyperkit-arm64.sha256 Dst:/var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestHyperKitDriverInstallOrUpdate2448553324/002/docker-machine-driver-hyperkit.download Pwd: Mode:2 Umask:---------- Detectors:[0x1074e56e0 0x1074e56e0 0x1074e56e0 0x1074e56e0 0x1074e56e0 0x1074e56e0 0x1074e56e0] Decompressors:map[bz2:0x14000907140 gz:0x14000907148 tar:0x140009070e0 tar.bz2:0x140009070f0 tar.gz:0x14000907100 tar.xz:0x14000907110 tar.zst:0x14000907130 tbz2:0x140009070f0 tgz:0x14000907100 txz:0x14000907110 tzst:0x14000907130 xz:0x14000907160 zip:0x14000907170 zst:0x14000907168] Getters:map[file:0x140015625f0 http:0x140004fe500 https:0x140004fe550] Dir:false ProgressListener:<nil> Insecure:false DisableSymlinks:false Options:[]}: invalid checksum: Error downloading checksum file: bad response code: 404. trying to get the common version
I1204 15:33:03.131944    7495 download.go:107] Downloading: https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-hyperkit?checksum=file:https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-hyperkit.sha256 -> /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestHyperKitDriverInstallOrUpdate2448553324/002/docker-machine-driver-hyperkit
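Note the fallback above: the arm64-suffixed hyperkit driver asset has no checksum file in the v1.3.0 release, so the checksum fetch returns 404 and the installer retries the common, unsuffixed asset name. The 404 can be confirmed by hand (illustrative check):

$ curl -sL -o /dev/null -w "%{http_code}\n" https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-hyperkit-arm64.sha256
404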
I1204 15:33:06.394165    7495 install.go:79] stdout: 
W1204 15:33:06.394359    7495 out.go:174] [unset outFile]: * The 'hyperkit' driver requires elevated permissions. The following commands will be executed:

$ sudo chown root:wheel /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestHyperKitDriverInstallOrUpdate2448553324/002/docker-machine-driver-hyperkit 
$ sudo chmod u+s /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestHyperKitDriverInstallOrUpdate2448553324/002/docker-machine-driver-hyperkit 

I1204 15:33:06.394404    7495 install.go:99] testing: [sudo -n chown root:wheel /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestHyperKitDriverInstallOrUpdate2448553324/002/docker-machine-driver-hyperkit]
I1204 15:33:06.411996    7495 install.go:106] running: [sudo chown root:wheel /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestHyperKitDriverInstallOrUpdate2448553324/002/docker-machine-driver-hyperkit]
I1204 15:33:06.426613    7495 install.go:99] testing: [sudo -n chmod u+s /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestHyperKitDriverInstallOrUpdate2448553324/002/docker-machine-driver-hyperkit]
I1204 15:33:06.437693    7495 install.go:106] running: [sudo chmod u+s /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestHyperKitDriverInstallOrUpdate2448553324/002/docker-machine-driver-hyperkit]
docker_test.go:155: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p force-systemd-env-829000 --memory=2048 --alsologtostderr -v=5 --driver=qemu2 : exit status 80 (10.786914917s)

-- stdout --
	* [force-systemd-env-829000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	  - MINIKUBE_LOCATION=20045
	  - KUBECONFIG=/Users/jenkins/minikube-integration/20045-6982/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/20045-6982/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=true
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "force-systemd-env-829000" primary control-plane node in "force-systemd-env-829000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "force-systemd-env-829000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I1204 15:33:00.588689    9702 out.go:345] Setting OutFile to fd 1 ...
	I1204 15:33:00.588842    9702 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1204 15:33:00.588846    9702 out.go:358] Setting ErrFile to fd 2...
	I1204 15:33:00.588848    9702 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1204 15:33:00.588966    9702 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/20045-6982/.minikube/bin
	I1204 15:33:00.590147    9702 out.go:352] Setting JSON to false
	I1204 15:33:00.608404    9702 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":5550,"bootTime":1733349630,"procs":544,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"15.0.1","kernelVersion":"24.0.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W1204 15:33:00.608475    9702 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I1204 15:33:00.614146    9702 out.go:177] * [force-systemd-env-829000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	I1204 15:33:00.622154    9702 notify.go:220] Checking for updates...
	I1204 15:33:00.625936    9702 out.go:177]   - MINIKUBE_LOCATION=20045
	I1204 15:33:00.634088    9702 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/20045-6982/kubeconfig
	I1204 15:33:00.641063    9702 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I1204 15:33:00.648044    9702 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1204 15:33:00.660143    9702 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/20045-6982/.minikube
	I1204 15:33:00.669067    9702 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=true
	I1204 15:33:00.672442    9702 config.go:182] Loaded profile config "multinode-093000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.2
	I1204 15:33:00.672491    9702 driver.go:394] Setting default libvirt URI to qemu:///system
	I1204 15:33:00.676083    9702 out.go:177] * Using the qemu2 driver based on user configuration
	I1204 15:33:00.684049    9702 start.go:297] selected driver: qemu2
	I1204 15:33:00.684054    9702 start.go:901] validating driver "qemu2" against <nil>
	I1204 15:33:00.684059    9702 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1204 15:33:00.686644    9702 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I1204 15:33:00.691080    9702 out.go:177] * Automatically selected the socket_vmnet network
	I1204 15:33:00.695107    9702 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I1204 15:33:00.695127    9702 cni.go:84] Creating CNI manager for ""
	I1204 15:33:00.695154    9702 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I1204 15:33:00.695159    9702 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I1204 15:33:00.695195    9702 start.go:340] cluster config:
	{Name:force-systemd-env-829000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:2048 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:force-systemd-env-829000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1204 15:33:00.699867    9702 iso.go:125] acquiring lock: {Name:mk798ab7885b095925d55c843b0600e7ea181045 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1204 15:33:00.707952    9702 out.go:177] * Starting "force-systemd-env-829000" primary control-plane node in "force-systemd-env-829000" cluster
	I1204 15:33:00.712052    9702 preload.go:131] Checking if preload exists for k8s version v1.31.2 and runtime docker
	I1204 15:33:00.712066    9702 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/20045-6982/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-docker-overlay2-arm64.tar.lz4
	I1204 15:33:00.712072    9702 cache.go:56] Caching tarball of preloaded images
	I1204 15:33:00.712135    9702 preload.go:172] Found /Users/jenkins/minikube-integration/20045-6982/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I1204 15:33:00.712141    9702 cache.go:59] Finished verifying existence of preloaded tar for v1.31.2 on docker
	I1204 15:33:00.712198    9702 profile.go:143] Saving config to /Users/jenkins/minikube-integration/20045-6982/.minikube/profiles/force-systemd-env-829000/config.json ...
	I1204 15:33:00.712209    9702 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/20045-6982/.minikube/profiles/force-systemd-env-829000/config.json: {Name:mkb7aff84bbbf236ab3b743aa771c547d2573fe5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1204 15:33:00.712511    9702 start.go:360] acquireMachinesLock for force-systemd-env-829000: {Name:mkab5eeb7049d26d12029fc21a1411c94c7c7493 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1204 15:33:00.712563    9702 start.go:364] duration metric: took 43.333µs to acquireMachinesLock for "force-systemd-env-829000"
	I1204 15:33:00.712577    9702 start.go:93] Provisioning new machine with config: &{Name:force-systemd-env-829000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19917/minikube-v1.34.0-1730913550-19917-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:2048 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:force-systemd-env-829000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1204 15:33:00.712600    9702 start.go:125] createHost starting for "" (driver="qemu2")
	I1204 15:33:00.720095    9702 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	I1204 15:33:00.736051    9702 start.go:159] libmachine.API.Create for "force-systemd-env-829000" (driver="qemu2")
	I1204 15:33:00.736078    9702 client.go:168] LocalClient.Create starting
	I1204 15:33:00.736151    9702 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/20045-6982/.minikube/certs/ca.pem
	I1204 15:33:00.736194    9702 main.go:141] libmachine: Decoding PEM data...
	I1204 15:33:00.736209    9702 main.go:141] libmachine: Parsing certificate...
	I1204 15:33:00.736247    9702 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/20045-6982/.minikube/certs/cert.pem
	I1204 15:33:00.736276    9702 main.go:141] libmachine: Decoding PEM data...
	I1204 15:33:00.736285    9702 main.go:141] libmachine: Parsing certificate...
	I1204 15:33:00.736662    9702 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/20045-6982/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/20045-6982/.minikube/cache/iso/arm64/minikube-v1.34.0-1730913550-19917-arm64.iso...
	I1204 15:33:00.898963    9702 main.go:141] libmachine: Creating SSH key...
	I1204 15:33:01.055892    9702 main.go:141] libmachine: Creating Disk image...
	I1204 15:33:01.055905    9702 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I1204 15:33:01.056172    9702 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/20045-6982/.minikube/machines/force-systemd-env-829000/disk.qcow2.raw /Users/jenkins/minikube-integration/20045-6982/.minikube/machines/force-systemd-env-829000/disk.qcow2
	I1204 15:33:01.066561    9702 main.go:141] libmachine: STDOUT: 
	I1204 15:33:01.066581    9702 main.go:141] libmachine: STDERR: 
	I1204 15:33:01.066638    9702 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/20045-6982/.minikube/machines/force-systemd-env-829000/disk.qcow2 +20000M
	I1204 15:33:01.075592    9702 main.go:141] libmachine: STDOUT: Image resized.
	
	I1204 15:33:01.075616    9702 main.go:141] libmachine: STDERR: 
	I1204 15:33:01.075633    9702 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/20045-6982/.minikube/machines/force-systemd-env-829000/disk.qcow2.raw and /Users/jenkins/minikube-integration/20045-6982/.minikube/machines/force-systemd-env-829000/disk.qcow2
	I1204 15:33:01.075638    9702 main.go:141] libmachine: Starting QEMU VM...
	I1204 15:33:01.075648    9702 qemu.go:418] Using hvf for hardware acceleration
	I1204 15:33:01.075677    9702 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2048 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/20045-6982/.minikube/machines/force-systemd-env-829000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/20045-6982/.minikube/machines/force-systemd-env-829000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/20045-6982/.minikube/machines/force-systemd-env-829000/qemu.pid -device virtio-net-pci,netdev=net0,mac=96:b7:d4:19:6b:36 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/20045-6982/.minikube/machines/force-systemd-env-829000/disk.qcow2
	I1204 15:33:01.077596    9702 main.go:141] libmachine: STDOUT: 
	I1204 15:33:01.077610    9702 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1204 15:33:01.077635    9702 client.go:171] duration metric: took 341.547708ms to LocalClient.Create
	I1204 15:33:03.079871    9702 start.go:128] duration metric: took 2.36721975s to createHost
	I1204 15:33:03.079944    9702 start.go:83] releasing machines lock for "force-systemd-env-829000", held for 2.367348875s
	W1204 15:33:03.080009    9702 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1204 15:33:03.093357    9702 out.go:177] * Deleting "force-systemd-env-829000" in qemu2 ...
	W1204 15:33:03.123135    9702 out.go:270] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1204 15:33:03.123159    9702 start.go:729] Will try again in 5 seconds ...
	I1204 15:33:08.125476    9702 start.go:360] acquireMachinesLock for force-systemd-env-829000: {Name:mkab5eeb7049d26d12029fc21a1411c94c7c7493 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1204 15:33:08.917115    9702 start.go:364] duration metric: took 791.47825ms to acquireMachinesLock for "force-systemd-env-829000"
	I1204 15:33:08.917264    9702 start.go:93] Provisioning new machine with config: &{Name:force-systemd-env-829000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19917/minikube-v1.34.0-1730913550-19917-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:2048 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:force-systemd-env-829000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1204 15:33:08.917508    9702 start.go:125] createHost starting for "" (driver="qemu2")
	I1204 15:33:08.932194    9702 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	I1204 15:33:08.982473    9702 start.go:159] libmachine.API.Create for "force-systemd-env-829000" (driver="qemu2")
	I1204 15:33:08.982525    9702 client.go:168] LocalClient.Create starting
	I1204 15:33:08.982658    9702 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/20045-6982/.minikube/certs/ca.pem
	I1204 15:33:08.982729    9702 main.go:141] libmachine: Decoding PEM data...
	I1204 15:33:08.982746    9702 main.go:141] libmachine: Parsing certificate...
	I1204 15:33:08.982808    9702 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/20045-6982/.minikube/certs/cert.pem
	I1204 15:33:08.982864    9702 main.go:141] libmachine: Decoding PEM data...
	I1204 15:33:08.982879    9702 main.go:141] libmachine: Parsing certificate...
	I1204 15:33:08.983481    9702 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/20045-6982/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/20045-6982/.minikube/cache/iso/arm64/minikube-v1.34.0-1730913550-19917-arm64.iso...
	I1204 15:33:09.158027    9702 main.go:141] libmachine: Creating SSH key...
	I1204 15:33:09.265075    9702 main.go:141] libmachine: Creating Disk image...
	I1204 15:33:09.265082    9702 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I1204 15:33:09.265290    9702 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/20045-6982/.minikube/machines/force-systemd-env-829000/disk.qcow2.raw /Users/jenkins/minikube-integration/20045-6982/.minikube/machines/force-systemd-env-829000/disk.qcow2
	I1204 15:33:09.275469    9702 main.go:141] libmachine: STDOUT: 
	I1204 15:33:09.275492    9702 main.go:141] libmachine: STDERR: 
	I1204 15:33:09.275554    9702 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/20045-6982/.minikube/machines/force-systemd-env-829000/disk.qcow2 +20000M
	I1204 15:33:09.283998    9702 main.go:141] libmachine: STDOUT: Image resized.
	
	I1204 15:33:09.284013    9702 main.go:141] libmachine: STDERR: 
	I1204 15:33:09.284024    9702 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/20045-6982/.minikube/machines/force-systemd-env-829000/disk.qcow2.raw and /Users/jenkins/minikube-integration/20045-6982/.minikube/machines/force-systemd-env-829000/disk.qcow2
	I1204 15:33:09.284028    9702 main.go:141] libmachine: Starting QEMU VM...
	I1204 15:33:09.284038    9702 qemu.go:418] Using hvf for hardware acceleration
	I1204 15:33:09.284074    9702 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2048 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/20045-6982/.minikube/machines/force-systemd-env-829000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/20045-6982/.minikube/machines/force-systemd-env-829000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/20045-6982/.minikube/machines/force-systemd-env-829000/qemu.pid -device virtio-net-pci,netdev=net0,mac=b6:9c:69:23:11:82 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/20045-6982/.minikube/machines/force-systemd-env-829000/disk.qcow2
	I1204 15:33:09.285912    9702 main.go:141] libmachine: STDOUT: 
	I1204 15:33:09.285926    9702 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1204 15:33:09.285939    9702 client.go:171] duration metric: took 303.405625ms to LocalClient.Create
	I1204 15:33:11.288312    9702 start.go:128] duration metric: took 2.370703917s to createHost
	I1204 15:33:11.288492    9702 start.go:83] releasing machines lock for "force-systemd-env-829000", held for 2.371264792s
	W1204 15:33:11.288876    9702 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p force-systemd-env-829000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p force-systemd-env-829000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1204 15:33:11.305600    9702 out.go:201] 
	W1204 15:33:11.313571    9702 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W1204 15:33:11.313599    9702 out.go:270] * 
	* 
	W1204 15:33:11.316617    9702 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1204 15:33:11.327386    9702 out.go:201] 

** /stderr **
docker_test.go:157: failed to start minikube with args: "out/minikube-darwin-arm64 start -p force-systemd-env-829000 --memory=2048 --alsologtostderr -v=5 --driver=qemu2 " : exit status 80
docker_test.go:110: (dbg) Run:  out/minikube-darwin-arm64 -p force-systemd-env-829000 ssh "docker info --format {{.CgroupDriver}}"
docker_test.go:110: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p force-systemd-env-829000 ssh "docker info --format {{.CgroupDriver}}": exit status 83 (89.41625ms)

-- stdout --
	* The control-plane node force-systemd-env-829000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p force-systemd-env-829000"

-- /stdout --
docker_test.go:112: failed to get docker cgroup driver. args "out/minikube-darwin-arm64 -p force-systemd-env-829000 ssh \"docker info --format {{.CgroupDriver}}\"": exit status 83
docker_test.go:166: *** TestForceSystemdEnv FAILED at 2024-12-04 15:33:11.434497 -0800 PST m=+736.461636084
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p force-systemd-env-829000 -n force-systemd-env-829000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p force-systemd-env-829000 -n force-systemd-env-829000: exit status 7 (34.607125ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "force-systemd-env-829000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:175: Cleaning up "force-systemd-env-829000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p force-systemd-env-829000
--- FAIL: TestForceSystemdEnv (10.99s)
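
Every failure in this block reduces to the same root cause: socket_vmnet_client cannot reach the socket_vmnet daemon's Unix socket at /var/run/socket_vmnet. "Connection refused" on a Unix socket means the socket file exists but nothing is accepting connections on it (a missing file would surface as "no such file or directory" instead), which points at a socket_vmnet service that is not running on this CI host. A minimal Go sketch, separate from the test suite, that reproduces the exact dial the qemu2 driver fails on:

	package main

	import (
		"fmt"
		"net"
	)

	func main() {
		// Dial the same Unix socket socket_vmnet_client uses (path taken from the log above).
		conn, err := net.Dial("unix", "/var/run/socket_vmnet")
		if err != nil {
			// On this host the dial returns ECONNREFUSED: the file exists, nothing listens.
			fmt.Println("socket_vmnet unreachable:", err)
			return
		}
		defer conn.Close()
		fmt.Println("socket_vmnet is accepting connections")
	}

The same dead dial recurs in every remaining failure below; no VM ever boots, so every later assertion runs against a stopped host.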

TestErrorSpam/setup (9.81s)

=== RUN   TestErrorSpam/setup
error_spam_test.go:81: (dbg) Run:  out/minikube-darwin-arm64 start -p nospam-875000 -n=1 --memory=2250 --wait=false --log_dir=/var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-875000 --driver=qemu2 
error_spam_test.go:81: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p nospam-875000 -n=1 --memory=2250 --wait=false --log_dir=/var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-875000 --driver=qemu2 : exit status 80 (9.804145875s)

-- stdout --
	* [nospam-875000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	  - MINIKUBE_LOCATION=20045
	  - KUBECONFIG=/Users/jenkins/minikube-integration/20045-6982/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/20045-6982/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "nospam-875000" primary control-plane node in "nospam-875000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2250MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "nospam-875000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2250MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p nospam-875000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
error_spam_test.go:83: "out/minikube-darwin-arm64 start -p nospam-875000 -n=1 --memory=2250 --wait=false --log_dir=/var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-875000 --driver=qemu2 " failed: exit status 80
error_spam_test.go:96: unexpected stderr: "! StartHost failed, but will try again: creating host: create: creating: Failed to connect to \"/var/run/socket_vmnet\": Connection refused: exit status 1"
error_spam_test.go:96: unexpected stderr: "* Failed to start qemu2 VM. Running \"minikube delete -p nospam-875000\" may fix it: creating host: create: creating: Failed to connect to \"/var/run/socket_vmnet\": Connection refused: exit status 1"
error_spam_test.go:96: unexpected stderr: "X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to \"/var/run/socket_vmnet\": Connection refused: exit status 1"
error_spam_test.go:96: unexpected stderr: "* "
error_spam_test.go:96: unexpected stderr: "╭─────────────────────────────────────────────────────────────────────────────────────────────╮"
error_spam_test.go:96: unexpected stderr: "│                                                                                             │"
error_spam_test.go:96: unexpected stderr: "│    * If the above advice does not help, please let us know:                                 │"
error_spam_test.go:96: unexpected stderr: "│      https://github.com/kubernetes/minikube/issues/new/choose                               │"
error_spam_test.go:96: unexpected stderr: "│                                                                                             │"
error_spam_test.go:96: unexpected stderr: "│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │"
error_spam_test.go:96: unexpected stderr: "│                                                                                             │"
error_spam_test.go:96: unexpected stderr: "╰─────────────────────────────────────────────────────────────────────────────────────────────╯"
error_spam_test.go:110: minikube stdout:
* [nospam-875000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
- MINIKUBE_LOCATION=20045
- KUBECONFIG=/Users/jenkins/minikube-integration/20045-6982/kubeconfig
- MINIKUBE_BIN=out/minikube-darwin-arm64
- MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
- MINIKUBE_HOME=/Users/jenkins/minikube-integration/20045-6982/.minikube
- MINIKUBE_FORCE_SYSTEMD=
* Using the qemu2 driver based on user configuration
* Automatically selected the socket_vmnet network
* Starting "nospam-875000" primary control-plane node in "nospam-875000" cluster
* Creating qemu2 VM (CPUs=2, Memory=2250MB, Disk=20000MB) ...
OUTPUT: 
ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused

* Deleting "nospam-875000" in qemu2 ...
* Creating qemu2 VM (CPUs=2, Memory=2250MB, Disk=20000MB) ...
OUTPUT: 
ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused

error_spam_test.go:111: minikube stderr:
! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
* Failed to start qemu2 VM. Running "minikube delete -p nospam-875000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
* 
╭─────────────────────────────────────────────────────────────────────────────────────────────╮
│                                                                                             │
│    * If the above advice does not help, please let us know:                                 │
│      https://github.com/kubernetes/minikube/issues/new/choose                               │
│                                                                                             │
│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
│                                                                                             │
╰─────────────────────────────────────────────────────────────────────────────────────────────╯
error_spam_test.go:121: missing kubeadm init sub-step "Generating certificates and keys ..."
error_spam_test.go:121: missing kubeadm init sub-step "Booting up control plane ..."
error_spam_test.go:121: missing kubeadm init sub-step "Configuring RBAC rules ..."
--- FAIL: TestErrorSpam/setup (9.81s)
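
Two assertions fail above: error_spam_test.go:96 flags every stderr line that is not on the test's allowlist, and error_spam_test.go:121 requires the kubeadm init sub-steps in stdout, which never appear because the VM never boots. A sketch of the shape of the first check; the allowlist below is a hypothetical placeholder, not the real list in error_spam_test.go:

	package main

	import (
		"fmt"
		"strings"
	)

	// unexpectedStderr returns every stderr line that matches no allowed prefix.
	func unexpectedStderr(stderr string, allowed []string) []string {
		var bad []string
		for _, line := range strings.Split(strings.TrimSpace(stderr), "\n") {
			line = strings.TrimSpace(line)
			if line == "" {
				continue
			}
			ok := false
			for _, prefix := range allowed {
				if strings.HasPrefix(line, prefix) {
					ok = true
					break
				}
			}
			if !ok {
				bad = append(bad, line)
			}
		}
		return bad
	}

	func main() {
		stderr := "! StartHost failed, but will try again: ...\n* Deleting \"nospam-875000\" in qemu2 ..."
		// With this placeholder allowlist, only the StartHost warning is reported.
		for _, line := range unexpectedStderr(stderr, []string{"* Deleting"}) {
			fmt.Println("unexpected stderr:", line)
		}
	}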

TestFunctional/serial/StartWithProxy (10.03s)

=== RUN   TestFunctional/serial/StartWithProxy
functional_test.go:2234: (dbg) Run:  out/minikube-darwin-arm64 start -p functional-014000 --memory=4000 --apiserver-port=8441 --wait=all --driver=qemu2 
functional_test.go:2234: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p functional-014000 --memory=4000 --apiserver-port=8441 --wait=all --driver=qemu2 : exit status 80 (9.951881333s)

-- stdout --
	* [functional-014000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	  - MINIKUBE_LOCATION=20045
	  - KUBECONFIG=/Users/jenkins/minikube-integration/20045-6982/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/20045-6982/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "functional-014000" primary control-plane node in "functional-014000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "functional-014000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! Local proxy ignored: not passing HTTP_PROXY=localhost:61400 to docker env.
	! Local proxy ignored: not passing HTTP_PROXY=localhost:61400 to docker env.
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! Local proxy ignored: not passing HTTP_PROXY=localhost:61400 to docker env.
	* Failed to start qemu2 VM. Running "minikube delete -p functional-014000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
functional_test.go:2236: failed minikube start. args "out/minikube-darwin-arm64 start -p functional-014000 --memory=4000 --apiserver-port=8441 --wait=all --driver=qemu2 ": exit status 80
functional_test.go:2241: start stdout=* [functional-014000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
- MINIKUBE_LOCATION=20045
- KUBECONFIG=/Users/jenkins/minikube-integration/20045-6982/kubeconfig
- MINIKUBE_BIN=out/minikube-darwin-arm64
- MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
- MINIKUBE_HOME=/Users/jenkins/minikube-integration/20045-6982/.minikube
- MINIKUBE_FORCE_SYSTEMD=
* Using the qemu2 driver based on user configuration
* Automatically selected the socket_vmnet network
* Starting "functional-014000" primary control-plane node in "functional-014000" cluster
* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
OUTPUT: 
ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused

* Deleting "functional-014000" in qemu2 ...
* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
OUTPUT: 
ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused

, want: *Found network options:*
functional_test.go:2246: start stderr=! Local proxy ignored: not passing HTTP_PROXY=localhost:61400 to docker env.
! Local proxy ignored: not passing HTTP_PROXY=localhost:61400 to docker env.
! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
! Local proxy ignored: not passing HTTP_PROXY=localhost:61400 to docker env.
* Failed to start qemu2 VM. Running "minikube delete -p functional-014000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
* 
╭─────────────────────────────────────────────────────────────────────────────────────────────╮
│                                                                                             │
│    * If the above advice does not help, please let us know:                                 │
│      https://github.com/kubernetes/minikube/issues/new/choose                               │
│                                                                                             │
│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
│                                                                                             │
╰─────────────────────────────────────────────────────────────────────────────────────────────╯
, want: *You appear to be using a proxy*
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p functional-014000 -n functional-014000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p functional-014000 -n functional-014000: exit status 7 (74.991417ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "functional-014000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestFunctional/serial/StartWithProxy (10.03s)
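
The proxy assertions are collateral damage: the test exports HTTP_PROXY=localhost:61400 and expects a "You appear to be using a proxy" warning, but start aborts at the socket_vmnet dial first. The repeated "Local proxy ignored" lines reflect minikube declining to forward a loopback-bound proxy into the guest, where localhost would resolve to the VM itself rather than the host. A rough sketch of such a loopback check; isLocalProxy is an illustrative name, not minikube's actual helper:

	package main

	import (
		"fmt"
		"net"
		"net/url"
		"strings"
	)

	// isLocalProxy reports whether a proxy value points at the host's loopback
	// interface, which would be unreachable from inside the VM.
	func isLocalProxy(value string) bool {
		if !strings.Contains(value, "://") {
			value = "http://" + value // tolerate bare host:port values like the one in the log
		}
		u, err := url.Parse(value)
		if err != nil {
			return false
		}
		if u.Hostname() == "localhost" {
			return true
		}
		ip := net.ParseIP(u.Hostname())
		return ip != nil && ip.IsLoopback()
	}

	func main() {
		fmt.Println(isLocalProxy("localhost:61400"))      // true: the value from this run
		fmt.Println(isLocalProxy("http://10.0.0.7:3128")) // false: reachable from the guest
	}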

TestFunctional/serial/SoftStart (5.27s)

=== RUN   TestFunctional/serial/SoftStart
I1204 15:22:12.807814    7495 config.go:182] Loaded profile config "functional-014000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.2
functional_test.go:659: (dbg) Run:  out/minikube-darwin-arm64 start -p functional-014000 --alsologtostderr -v=8
functional_test.go:659: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p functional-014000 --alsologtostderr -v=8: exit status 80 (5.194503792s)

-- stdout --
	* [functional-014000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	  - MINIKUBE_LOCATION=20045
	  - KUBECONFIG=/Users/jenkins/minikube-integration/20045-6982/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/20045-6982/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting "functional-014000" primary control-plane node in "functional-014000" cluster
	* Restarting existing qemu2 VM for "functional-014000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "functional-014000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I1204 15:22:12.842374    7736 out.go:345] Setting OutFile to fd 1 ...
	I1204 15:22:12.842534    7736 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1204 15:22:12.842537    7736 out.go:358] Setting ErrFile to fd 2...
	I1204 15:22:12.842540    7736 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1204 15:22:12.842692    7736 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/20045-6982/.minikube/bin
	I1204 15:22:12.843785    7736 out.go:352] Setting JSON to false
	I1204 15:22:12.861617    7736 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":4902,"bootTime":1733349630,"procs":534,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"15.0.1","kernelVersion":"24.0.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W1204 15:22:12.861685    7736 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I1204 15:22:12.866385    7736 out.go:177] * [functional-014000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	I1204 15:22:12.873381    7736 out.go:177]   - MINIKUBE_LOCATION=20045
	I1204 15:22:12.873466    7736 notify.go:220] Checking for updates...
	I1204 15:22:12.881191    7736 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/20045-6982/kubeconfig
	I1204 15:22:12.885198    7736 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I1204 15:22:12.886757    7736 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1204 15:22:12.890201    7736 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/20045-6982/.minikube
	I1204 15:22:12.893248    7736 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1204 15:22:12.896489    7736 config.go:182] Loaded profile config "functional-014000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.2
	I1204 15:22:12.896538    7736 driver.go:394] Setting default libvirt URI to qemu:///system
	I1204 15:22:12.901211    7736 out.go:177] * Using the qemu2 driver based on existing profile
	I1204 15:22:12.908214    7736 start.go:297] selected driver: qemu2
	I1204 15:22:12.908220    7736 start.go:901] validating driver "qemu2" against &{Name:functional-014000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19917/minikube-v1.34.0-1730913550-19917-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:4000 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:functional-014000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8441 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1204 15:22:12.908267    7736 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1204 15:22:12.910818    7736 cni.go:84] Creating CNI manager for ""
	I1204 15:22:12.910855    7736 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I1204 15:22:12.910888    7736 start.go:340] cluster config:
	{Name:functional-014000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19917/minikube-v1.34.0-1730913550-19917-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:4000 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:functional-014000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8441 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1204 15:22:12.915331    7736 iso.go:125] acquiring lock: {Name:mk798ab7885b095925d55c843b0600e7ea181045 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1204 15:22:12.923225    7736 out.go:177] * Starting "functional-014000" primary control-plane node in "functional-014000" cluster
	I1204 15:22:12.927191    7736 preload.go:131] Checking if preload exists for k8s version v1.31.2 and runtime docker
	I1204 15:22:12.927212    7736 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/20045-6982/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-docker-overlay2-arm64.tar.lz4
	I1204 15:22:12.927220    7736 cache.go:56] Caching tarball of preloaded images
	I1204 15:22:12.927306    7736 preload.go:172] Found /Users/jenkins/minikube-integration/20045-6982/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I1204 15:22:12.927311    7736 cache.go:59] Finished verifying existence of preloaded tar for v1.31.2 on docker
	I1204 15:22:12.927364    7736 profile.go:143] Saving config to /Users/jenkins/minikube-integration/20045-6982/.minikube/profiles/functional-014000/config.json ...
	I1204 15:22:12.927866    7736 start.go:360] acquireMachinesLock for functional-014000: {Name:mkab5eeb7049d26d12029fc21a1411c94c7c7493 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1204 15:22:12.927896    7736 start.go:364] duration metric: took 24.417µs to acquireMachinesLock for "functional-014000"
	I1204 15:22:12.927906    7736 start.go:96] Skipping create...Using existing machine configuration
	I1204 15:22:12.927910    7736 fix.go:54] fixHost starting: 
	I1204 15:22:12.928041    7736 fix.go:112] recreateIfNeeded on functional-014000: state=Stopped err=<nil>
	W1204 15:22:12.928049    7736 fix.go:138] unexpected machine state, will restart: <nil>
	I1204 15:22:12.936203    7736 out.go:177] * Restarting existing qemu2 VM for "functional-014000" ...
	I1204 15:22:12.940205    7736 qemu.go:418] Using hvf for hardware acceleration
	I1204 15:22:12.940240    7736 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 4000 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/20045-6982/.minikube/machines/functional-014000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/20045-6982/.minikube/machines/functional-014000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/20045-6982/.minikube/machines/functional-014000/qemu.pid -device virtio-net-pci,netdev=net0,mac=d2:c5:05:9d:48:27 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/20045-6982/.minikube/machines/functional-014000/disk.qcow2
	I1204 15:22:12.942538    7736 main.go:141] libmachine: STDOUT: 
	I1204 15:22:12.942557    7736 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1204 15:22:12.942588    7736 fix.go:56] duration metric: took 14.675958ms for fixHost
	I1204 15:22:12.942593    7736 start.go:83] releasing machines lock for "functional-014000", held for 14.692791ms
	W1204 15:22:12.942601    7736 start.go:714] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W1204 15:22:12.942643    7736 out.go:270] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1204 15:22:12.942648    7736 start.go:729] Will try again in 5 seconds ...
	I1204 15:22:17.943512    7736 start.go:360] acquireMachinesLock for functional-014000: {Name:mkab5eeb7049d26d12029fc21a1411c94c7c7493 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1204 15:22:17.943953    7736 start.go:364] duration metric: took 320.291µs to acquireMachinesLock for "functional-014000"
	I1204 15:22:17.944089    7736 start.go:96] Skipping create...Using existing machine configuration
	I1204 15:22:17.944110    7736 fix.go:54] fixHost starting: 
	I1204 15:22:17.944838    7736 fix.go:112] recreateIfNeeded on functional-014000: state=Stopped err=<nil>
	W1204 15:22:17.944864    7736 fix.go:138] unexpected machine state, will restart: <nil>
	I1204 15:22:17.953262    7736 out.go:177] * Restarting existing qemu2 VM for "functional-014000" ...
	I1204 15:22:17.957277    7736 qemu.go:418] Using hvf for hardware acceleration
	I1204 15:22:17.957584    7736 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 4000 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/20045-6982/.minikube/machines/functional-014000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/20045-6982/.minikube/machines/functional-014000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/20045-6982/.minikube/machines/functional-014000/qemu.pid -device virtio-net-pci,netdev=net0,mac=d2:c5:05:9d:48:27 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/20045-6982/.minikube/machines/functional-014000/disk.qcow2
	I1204 15:22:17.967970    7736 main.go:141] libmachine: STDOUT: 
	I1204 15:22:17.968020    7736 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1204 15:22:17.968120    7736 fix.go:56] duration metric: took 24.012334ms for fixHost
	I1204 15:22:17.968137    7736 start.go:83] releasing machines lock for "functional-014000", held for 24.16175ms
	W1204 15:22:17.968325    7736 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p functional-014000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p functional-014000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1204 15:22:17.975266    7736 out.go:201] 
	W1204 15:22:17.979429    7736 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W1204 15:22:17.979460    7736 out.go:270] * 
	* 
	W1204 15:22:17.982039    7736 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1204 15:22:17.989236    7736 out.go:201] 

** /stderr **
functional_test.go:661: failed to soft start minikube. args "out/minikube-darwin-arm64 start -p functional-014000 --alsologtostderr -v=8": exit status 80
functional_test.go:663: soft start took 5.196006375s for "functional-014000" cluster.
I1204 15:22:18.004145    7495 config.go:182] Loaded profile config "functional-014000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.2
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p functional-014000 -n functional-014000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p functional-014000 -n functional-014000: exit status 7 (76.847833ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "functional-014000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestFunctional/serial/SoftStart (5.27s)
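
Unlike the fresh creates above, this run finds the existing functional-014000 profile: fix.go sees state=Stopped, restarts the existing VM instead of provisioning a new one, and fails on the same socket dial; start.go:729 then grants a single retry after five seconds. A hypothetical sketch of that retry shape, not minikube's actual code:

	package main

	import (
		"errors"
		"fmt"
		"time"
	)

	// startWithRetry mirrors the pattern in the log: one failed host start is
	// retried once after a fixed delay before the run gives up.
	func startWithRetry(start func() error, delay time.Duration) error {
		if err := start(); err != nil {
			fmt.Println("! StartHost failed, but will try again:", err)
			time.Sleep(delay)
			return start()
		}
		return nil
	}

	func main() {
		err := startWithRetry(func() error {
			return errors.New(`Failed to connect to "/var/run/socket_vmnet": Connection refused`)
		}, 5*time.Second)
		fmt.Println("final:", err)
	}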

TestFunctional/serial/KubeContext (0.06s)

=== RUN   TestFunctional/serial/KubeContext
functional_test.go:681: (dbg) Run:  kubectl config current-context
functional_test.go:681: (dbg) Non-zero exit: kubectl config current-context: exit status 1 (28.955583ms)

** stderr ** 
	error: current-context is not set

** /stderr **
functional_test.go:683: failed to get current-context. args "kubectl config current-context" : exit status 1
functional_test.go:687: expected current-context = "functional-014000", but got *""*
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p functional-014000 -n functional-014000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p functional-014000 -n functional-014000: exit status 7 (34.775791ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "functional-014000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestFunctional/serial/KubeContext (0.06s)
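
current-context is unset because minikube only writes the functional-014000 context into the kubeconfig once a cluster actually comes up, and no start in this run got that far. A minimal sketch using client-go (requires the external module k8s.io/client-go) that resolves and reads the same kubeconfig kubectl consults:

	package main

	import (
		"fmt"

		"k8s.io/client-go/tools/clientcmd"
	)

	func main() {
		// Resolve the kubeconfig the way kubectl does (KUBECONFIG, then the default path).
		rules := clientcmd.NewDefaultClientConfigLoadingRules()
		cfg, err := rules.Load()
		if err != nil {
			panic(err)
		}
		// An empty string here reproduces the "current-context is not set" failure above.
		fmt.Printf("current-context: %q\n", cfg.CurrentContext)
	}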

TestFunctional/serial/KubectlGetPods (0.06s)

=== RUN   TestFunctional/serial/KubectlGetPods
functional_test.go:696: (dbg) Run:  kubectl --context functional-014000 get po -A
functional_test.go:696: (dbg) Non-zero exit: kubectl --context functional-014000 get po -A: exit status 1 (27.039083ms)

** stderr ** 
	Error in configuration: context was not found for specified context: functional-014000

** /stderr **
functional_test.go:698: failed to get kubectl pods: args "kubectl --context functional-014000 get po -A" : exit status 1
functional_test.go:702: expected stderr to be empty but got *"Error in configuration: context was not found for specified context: functional-014000\n"*: args "kubectl --context functional-014000 get po -A"
functional_test.go:705: expected stdout to include *kube-system* but got *""*. args: "kubectl --context functional-014000 get po -A"
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p functional-014000 -n functional-014000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p functional-014000 -n functional-014000: exit status 7 (35.183ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "functional-014000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestFunctional/serial/KubectlGetPods (0.06s)

TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.05s)

=== RUN   TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node
functional_test.go:1124: (dbg) Run:  out/minikube-darwin-arm64 -p functional-014000 ssh sudo crictl images
functional_test.go:1124: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-014000 ssh sudo crictl images: exit status 83 (45.95ms)

-- stdout --
	* The control-plane node functional-014000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-014000"

-- /stdout --
functional_test.go:1126: failed to get images by "out/minikube-darwin-arm64 -p functional-014000 ssh sudo crictl images" ssh exit status 83
functional_test.go:1130: expected sha for pause:3.3 "3d18732f8686c" to be in the output but got *
-- stdout --
	* The control-plane node functional-014000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-014000"

-- /stdout --*
--- FAIL: TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.05s)
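Exit status 83 here accompanies minikube's "host is not running" advice output, so the ssh command never reaches crictl at all. On a healthy profile the check reduces to listing CRI images inside the node and looking for the cached pause tag; a minimal sketch of the same probe, assuming the host is running:

    # List images known to the container runtime inside the node
    out/minikube-darwin-arm64 -p functional-014000 ssh sudo crictl images

    # The assertion looks for the pause:3.3 image ID prefix 3d18732f8686c in that output
    out/minikube-darwin-arm64 -p functional-014000 ssh sudo crictl images | grep pause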

TestFunctional/serial/CacheCmd/cache/cache_reload (0.18s)

=== RUN   TestFunctional/serial/CacheCmd/cache/cache_reload
functional_test.go:1147: (dbg) Run:  out/minikube-darwin-arm64 -p functional-014000 ssh sudo docker rmi registry.k8s.io/pause:latest
functional_test.go:1147: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-014000 ssh sudo docker rmi registry.k8s.io/pause:latest: exit status 83 (44.8985ms)

-- stdout --
	* The control-plane node functional-014000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-014000"

-- /stdout --
functional_test.go:1150: failed to manually delete image "out/minikube-darwin-arm64 -p functional-014000 ssh sudo docker rmi registry.k8s.io/pause:latest" : exit status 83
functional_test.go:1153: (dbg) Run:  out/minikube-darwin-arm64 -p functional-014000 ssh sudo crictl inspecti registry.k8s.io/pause:latest
functional_test.go:1153: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-014000 ssh sudo crictl inspecti registry.k8s.io/pause:latest: exit status 83 (45.971625ms)

-- stdout --
	* The control-plane node functional-014000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-014000"

-- /stdout --
functional_test.go:1158: (dbg) Run:  out/minikube-darwin-arm64 -p functional-014000 cache reload
functional_test.go:1163: (dbg) Run:  out/minikube-darwin-arm64 -p functional-014000 ssh sudo crictl inspecti registry.k8s.io/pause:latest
functional_test.go:1163: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-014000 ssh sudo crictl inspecti registry.k8s.io/pause:latest: exit status 83 (44.710208ms)

-- stdout --
	* The control-plane node functional-014000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-014000"

-- /stdout --
functional_test.go:1165: expected "out/minikube-darwin-arm64 -p functional-014000 ssh sudo crictl inspecti registry.k8s.io/pause:latest" to run successfully but got error: exit status 83
--- FAIL: TestFunctional/serial/CacheCmd/cache/cache_reload (0.18s)
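The intended round trip, visible in the commands above, is: delete the image inside the node, confirm the CRI no longer knows it, repopulate it from minikube's on-host cache, then confirm it is back. Run by hand against a running profile it would look like this (the same commands the test issues):

    out/minikube-darwin-arm64 -p functional-014000 ssh sudo docker rmi registry.k8s.io/pause:latest
    out/minikube-darwin-arm64 -p functional-014000 ssh sudo crictl inspecti registry.k8s.io/pause:latest   # expected to fail: image removed
    out/minikube-darwin-arm64 -p functional-014000 cache reload
    out/minikube-darwin-arm64 -p functional-014000 ssh sudo crictl inspecti registry.k8s.io/pause:latest   # expected to succeed again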

TestFunctional/serial/MinikubeKubectlCmd (0.75s)

=== RUN   TestFunctional/serial/MinikubeKubectlCmd
functional_test.go:716: (dbg) Run:  out/minikube-darwin-arm64 -p functional-014000 kubectl -- --context functional-014000 get pods
functional_test.go:716: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-014000 kubectl -- --context functional-014000 get pods: exit status 1 (710.390541ms)

** stderr ** 
	Error in configuration: 
	* context was not found for specified context: functional-014000
	* no server found for cluster "functional-014000"

** /stderr **
functional_test.go:719: failed to get pods. args "out/minikube-darwin-arm64 -p functional-014000 kubectl -- --context functional-014000 get pods": exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p functional-014000 -n functional-014000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p functional-014000 -n functional-014000: exit status 7 (37.555667ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "functional-014000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestFunctional/serial/MinikubeKubectlCmd (0.75s)
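`minikube kubectl --` forwards everything after the `--` to a kubectl binary matched to the cluster's Kubernetes version, so the configuration errors above are the same missing-context errors as in TestFunctional/serial/KubeContext, just reached through the wrapper. Typical usage, assuming a started profile:

    # Arguments after -- are passed through to kubectl unchanged
    out/minikube-darwin-arm64 -p functional-014000 kubectl -- get pods -A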

TestFunctional/serial/MinikubeKubectlCmdDirectly (1.19s)

=== RUN   TestFunctional/serial/MinikubeKubectlCmdDirectly
functional_test.go:741: (dbg) Run:  out/kubectl --context functional-014000 get pods
functional_test.go:741: (dbg) Non-zero exit: out/kubectl --context functional-014000 get pods: exit status 1 (1.159628208s)

** stderr ** 
	Error in configuration: 
	* context was not found for specified context: functional-014000
	* no server found for cluster "functional-014000"

** /stderr **
functional_test.go:744: failed to run kubectl directly. args "out/kubectl --context functional-014000 get pods": exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p functional-014000 -n functional-014000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p functional-014000 -n functional-014000: exit status 7 (34.405709ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "functional-014000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestFunctional/serial/MinikubeKubectlCmdDirectly (1.19s)

TestFunctional/serial/ExtraConfig (5.27s)

=== RUN   TestFunctional/serial/ExtraConfig
functional_test.go:757: (dbg) Run:  out/minikube-darwin-arm64 start -p functional-014000 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
functional_test.go:757: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p functional-014000 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all: exit status 80 (5.194301s)

-- stdout --
	* [functional-014000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	  - MINIKUBE_LOCATION=20045
	  - KUBECONFIG=/Users/jenkins/minikube-integration/20045-6982/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/20045-6982/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting "functional-014000" primary control-plane node in "functional-014000" cluster
	* Restarting existing qemu2 VM for "functional-014000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "functional-014000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p functional-014000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
functional_test.go:759: failed to restart minikube. args "out/minikube-darwin-arm64 start -p functional-014000 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all": exit status 80
functional_test.go:761: restart took 5.194850417s for "functional-014000" cluster.
I1204 15:22:28.791204    7495 config.go:182] Loaded profile config "functional-014000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.2
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p functional-014000 -n functional-014000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p functional-014000 -n functional-014000: exit status 7 (76.205583ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "functional-014000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestFunctional/serial/ExtraConfig (5.27s)
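The stderr above pins down the root cause for this whole section: minikube launches qemu through /opt/socket_vmnet/bin/socket_vmnet_client, and the connection to /var/run/socket_vmnet is refused, meaning no socket_vmnet daemon is listening on that path on this agent. A hedged checklist for the machine itself; the Homebrew service step is an assumption about how socket_vmnet was installed, not something this log confirms:

    # Does the socket exist, and with what ownership?
    ls -l /var/run/socket_vmnet

    # Is a socket_vmnet process running at all?
    pgrep -fl socket_vmnet

    # If socket_vmnet was installed via Homebrew, restarting its root-owned service is one plausible fix (assumption)
    sudo brew services restart socket_vmnet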

TestFunctional/serial/ComponentHealth (0.06s)

=== RUN   TestFunctional/serial/ComponentHealth
functional_test.go:810: (dbg) Run:  kubectl --context functional-014000 get po -l tier=control-plane -n kube-system -o=json
functional_test.go:810: (dbg) Non-zero exit: kubectl --context functional-014000 get po -l tier=control-plane -n kube-system -o=json: exit status 1 (29.064167ms)

** stderr ** 
	error: context "functional-014000" does not exist

** /stderr **
functional_test.go:812: failed to get components. args "kubectl --context functional-014000 get po -l tier=control-plane -n kube-system -o=json": exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p functional-014000 -n functional-014000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p functional-014000 -n functional-014000: exit status 7 (34.480459ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "functional-014000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestFunctional/serial/ComponentHealth (0.06s)
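This test selects the control-plane pods by label and then inspects their status conditions; with no usable context the query never reaches an API server. The equivalent manual query on a working cluster is a one-liner:

    # kube-apiserver, etcd, scheduler and controller-manager all carry tier=control-plane
    kubectl --context functional-014000 -n kube-system get po -l tier=control-plane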

TestFunctional/serial/LogsCmd (0.08s)

=== RUN   TestFunctional/serial/LogsCmd
functional_test.go:1236: (dbg) Run:  out/minikube-darwin-arm64 -p functional-014000 logs
functional_test.go:1236: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-014000 logs: exit status 83 (78.707541ms)

-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| Command |                                   Args                                   |       Profile        |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| start   | -o=json --download-only                                                  | download-only-447000 | jenkins | v1.34.0 | 04 Dec 24 15:20 PST |                     |
	|         | -p download-only-447000                                                  |                      |         |         |                     |                     |
	|         | --force --alsologtostderr                                                |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0                                             |                      |         |         |                     |                     |
	|         | --container-runtime=docker                                               |                      |         |         |                     |                     |
	|         | --driver=qemu2                                                           |                      |         |         |                     |                     |
	| delete  | --all                                                                    | minikube             | jenkins | v1.34.0 | 04 Dec 24 15:21 PST | 04 Dec 24 15:21 PST |
	| delete  | -p download-only-447000                                                  | download-only-447000 | jenkins | v1.34.0 | 04 Dec 24 15:21 PST | 04 Dec 24 15:21 PST |
	| start   | -o=json --download-only                                                  | download-only-914000 | jenkins | v1.34.0 | 04 Dec 24 15:21 PST |                     |
	|         | -p download-only-914000                                                  |                      |         |         |                     |                     |
	|         | --force --alsologtostderr                                                |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.2                                             |                      |         |         |                     |                     |
	|         | --container-runtime=docker                                               |                      |         |         |                     |                     |
	|         | --driver=qemu2                                                           |                      |         |         |                     |                     |
	| delete  | --all                                                                    | minikube             | jenkins | v1.34.0 | 04 Dec 24 15:21 PST | 04 Dec 24 15:21 PST |
	| delete  | -p download-only-914000                                                  | download-only-914000 | jenkins | v1.34.0 | 04 Dec 24 15:21 PST | 04 Dec 24 15:21 PST |
	| delete  | -p download-only-447000                                                  | download-only-447000 | jenkins | v1.34.0 | 04 Dec 24 15:21 PST | 04 Dec 24 15:21 PST |
	| delete  | -p download-only-914000                                                  | download-only-914000 | jenkins | v1.34.0 | 04 Dec 24 15:21 PST | 04 Dec 24 15:21 PST |
	| start   | --download-only -p                                                       | binary-mirror-489000 | jenkins | v1.34.0 | 04 Dec 24 15:21 PST |                     |
	|         | binary-mirror-489000                                                     |                      |         |         |                     |                     |
	|         | --alsologtostderr                                                        |                      |         |         |                     |                     |
	|         | --binary-mirror                                                          |                      |         |         |                     |                     |
	|         | http://127.0.0.1:61364                                                   |                      |         |         |                     |                     |
	|         | --driver=qemu2                                                           |                      |         |         |                     |                     |
	| delete  | -p binary-mirror-489000                                                  | binary-mirror-489000 | jenkins | v1.34.0 | 04 Dec 24 15:21 PST | 04 Dec 24 15:21 PST |
	| addons  | enable dashboard -p                                                      | addons-057000        | jenkins | v1.34.0 | 04 Dec 24 15:21 PST |                     |
	|         | addons-057000                                                            |                      |         |         |                     |                     |
	| addons  | disable dashboard -p                                                     | addons-057000        | jenkins | v1.34.0 | 04 Dec 24 15:21 PST |                     |
	|         | addons-057000                                                            |                      |         |         |                     |                     |
	| start   | -p addons-057000 --wait=true                                             | addons-057000        | jenkins | v1.34.0 | 04 Dec 24 15:21 PST |                     |
	|         | --memory=4000 --alsologtostderr                                          |                      |         |         |                     |                     |
	|         | --addons=registry                                                        |                      |         |         |                     |                     |
	|         | --addons=metrics-server                                                  |                      |         |         |                     |                     |
	|         | --addons=volumesnapshots                                                 |                      |         |         |                     |                     |
	|         | --addons=csi-hostpath-driver                                             |                      |         |         |                     |                     |
	|         | --addons=gcp-auth                                                        |                      |         |         |                     |                     |
	|         | --addons=cloud-spanner                                                   |                      |         |         |                     |                     |
	|         | --addons=inspektor-gadget                                                |                      |         |         |                     |                     |
	|         | --addons=nvidia-device-plugin                                            |                      |         |         |                     |                     |
	|         | --addons=yakd --addons=volcano                                           |                      |         |         |                     |                     |
	|         | --addons=amd-gpu-device-plugin                                           |                      |         |         |                     |                     |
	|         | --driver=qemu2  --addons=ingress                                         |                      |         |         |                     |                     |
	|         | --addons=ingress-dns                                                     |                      |         |         |                     |                     |
	|         | --addons=storage-provisioner-rancher                                     |                      |         |         |                     |                     |
	| delete  | -p addons-057000                                                         | addons-057000        | jenkins | v1.34.0 | 04 Dec 24 15:21 PST | 04 Dec 24 15:21 PST |
	| start   | -p nospam-875000 -n=1 --memory=2250 --wait=false                         | nospam-875000        | jenkins | v1.34.0 | 04 Dec 24 15:21 PST |                     |
	|         | --log_dir=/var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-875000 |                      |         |         |                     |                     |
	|         | --driver=qemu2                                                           |                      |         |         |                     |                     |
	| start   | nospam-875000 --log_dir                                                  | nospam-875000        | jenkins | v1.34.0 | 04 Dec 24 15:21 PST |                     |
	|         | /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-875000           |                      |         |         |                     |                     |
	|         | start --dry-run                                                          |                      |         |         |                     |                     |
	| start   | nospam-875000 --log_dir                                                  | nospam-875000        | jenkins | v1.34.0 | 04 Dec 24 15:21 PST |                     |
	|         | /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-875000           |                      |         |         |                     |                     |
	|         | start --dry-run                                                          |                      |         |         |                     |                     |
	| start   | nospam-875000 --log_dir                                                  | nospam-875000        | jenkins | v1.34.0 | 04 Dec 24 15:21 PST |                     |
	|         | /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-875000           |                      |         |         |                     |                     |
	|         | start --dry-run                                                          |                      |         |         |                     |                     |
	| pause   | nospam-875000 --log_dir                                                  | nospam-875000        | jenkins | v1.34.0 | 04 Dec 24 15:21 PST |                     |
	|         | /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-875000           |                      |         |         |                     |                     |
	|         | pause                                                                    |                      |         |         |                     |                     |
	| pause   | nospam-875000 --log_dir                                                  | nospam-875000        | jenkins | v1.34.0 | 04 Dec 24 15:21 PST |                     |
	|         | /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-875000           |                      |         |         |                     |                     |
	|         | pause                                                                    |                      |         |         |                     |                     |
	| pause   | nospam-875000 --log_dir                                                  | nospam-875000        | jenkins | v1.34.0 | 04 Dec 24 15:21 PST |                     |
	|         | /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-875000           |                      |         |         |                     |                     |
	|         | pause                                                                    |                      |         |         |                     |                     |
	| unpause | nospam-875000 --log_dir                                                  | nospam-875000        | jenkins | v1.34.0 | 04 Dec 24 15:21 PST |                     |
	|         | /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-875000           |                      |         |         |                     |                     |
	|         | unpause                                                                  |                      |         |         |                     |                     |
	| unpause | nospam-875000 --log_dir                                                  | nospam-875000        | jenkins | v1.34.0 | 04 Dec 24 15:21 PST |                     |
	|         | /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-875000           |                      |         |         |                     |                     |
	|         | unpause                                                                  |                      |         |         |                     |                     |
	| unpause | nospam-875000 --log_dir                                                  | nospam-875000        | jenkins | v1.34.0 | 04 Dec 24 15:21 PST |                     |
	|         | /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-875000           |                      |         |         |                     |                     |
	|         | unpause                                                                  |                      |         |         |                     |                     |
	| stop    | nospam-875000 --log_dir                                                  | nospam-875000        | jenkins | v1.34.0 | 04 Dec 24 15:21 PST | 04 Dec 24 15:21 PST |
	|         | /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-875000           |                      |         |         |                     |                     |
	|         | stop                                                                     |                      |         |         |                     |                     |
	| stop    | nospam-875000 --log_dir                                                  | nospam-875000        | jenkins | v1.34.0 | 04 Dec 24 15:21 PST | 04 Dec 24 15:21 PST |
	|         | /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-875000           |                      |         |         |                     |                     |
	|         | stop                                                                     |                      |         |         |                     |                     |
	| stop    | nospam-875000 --log_dir                                                  | nospam-875000        | jenkins | v1.34.0 | 04 Dec 24 15:21 PST | 04 Dec 24 15:22 PST |
	|         | /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-875000           |                      |         |         |                     |                     |
	|         | stop                                                                     |                      |         |         |                     |                     |
	| delete  | -p nospam-875000                                                         | nospam-875000        | jenkins | v1.34.0 | 04 Dec 24 15:22 PST | 04 Dec 24 15:22 PST |
	| start   | -p functional-014000                                                     | functional-014000    | jenkins | v1.34.0 | 04 Dec 24 15:22 PST |                     |
	|         | --memory=4000                                                            |                      |         |         |                     |                     |
	|         | --apiserver-port=8441                                                    |                      |         |         |                     |                     |
	|         | --wait=all --driver=qemu2                                                |                      |         |         |                     |                     |
	| start   | -p functional-014000                                                     | functional-014000    | jenkins | v1.34.0 | 04 Dec 24 15:22 PST |                     |
	|         | --alsologtostderr -v=8                                                   |                      |         |         |                     |                     |
	| cache   | functional-014000 cache add                                              | functional-014000    | jenkins | v1.34.0 | 04 Dec 24 15:22 PST | 04 Dec 24 15:22 PST |
	|         | registry.k8s.io/pause:3.1                                                |                      |         |         |                     |                     |
	| cache   | functional-014000 cache add                                              | functional-014000    | jenkins | v1.34.0 | 04 Dec 24 15:22 PST | 04 Dec 24 15:22 PST |
	|         | registry.k8s.io/pause:3.3                                                |                      |         |         |                     |                     |
	| cache   | functional-014000 cache add                                              | functional-014000    | jenkins | v1.34.0 | 04 Dec 24 15:22 PST | 04 Dec 24 15:22 PST |
	|         | registry.k8s.io/pause:latest                                             |                      |         |         |                     |                     |
	| cache   | functional-014000 cache add                                              | functional-014000    | jenkins | v1.34.0 | 04 Dec 24 15:22 PST | 04 Dec 24 15:22 PST |
	|         | minikube-local-cache-test:functional-014000                              |                      |         |         |                     |                     |
	| cache   | functional-014000 cache delete                                           | functional-014000    | jenkins | v1.34.0 | 04 Dec 24 15:22 PST | 04 Dec 24 15:22 PST |
	|         | minikube-local-cache-test:functional-014000                              |                      |         |         |                     |                     |
	| cache   | delete                                                                   | minikube             | jenkins | v1.34.0 | 04 Dec 24 15:22 PST | 04 Dec 24 15:22 PST |
	|         | registry.k8s.io/pause:3.3                                                |                      |         |         |                     |                     |
	| cache   | list                                                                     | minikube             | jenkins | v1.34.0 | 04 Dec 24 15:22 PST | 04 Dec 24 15:22 PST |
	| ssh     | functional-014000 ssh sudo                                               | functional-014000    | jenkins | v1.34.0 | 04 Dec 24 15:22 PST |                     |
	|         | crictl images                                                            |                      |         |         |                     |                     |
	| ssh     | functional-014000                                                        | functional-014000    | jenkins | v1.34.0 | 04 Dec 24 15:22 PST |                     |
	|         | ssh sudo docker rmi                                                      |                      |         |         |                     |                     |
	|         | registry.k8s.io/pause:latest                                             |                      |         |         |                     |                     |
	| ssh     | functional-014000 ssh                                                    | functional-014000    | jenkins | v1.34.0 | 04 Dec 24 15:22 PST |                     |
	|         | sudo crictl inspecti                                                     |                      |         |         |                     |                     |
	|         | registry.k8s.io/pause:latest                                             |                      |         |         |                     |                     |
	| cache   | functional-014000 cache reload                                           | functional-014000    | jenkins | v1.34.0 | 04 Dec 24 15:22 PST | 04 Dec 24 15:22 PST |
	| ssh     | functional-014000 ssh                                                    | functional-014000    | jenkins | v1.34.0 | 04 Dec 24 15:22 PST |                     |
	|         | sudo crictl inspecti                                                     |                      |         |         |                     |                     |
	|         | registry.k8s.io/pause:latest                                             |                      |         |         |                     |                     |
	| cache   | delete                                                                   | minikube             | jenkins | v1.34.0 | 04 Dec 24 15:22 PST | 04 Dec 24 15:22 PST |
	|         | registry.k8s.io/pause:3.1                                                |                      |         |         |                     |                     |
	| cache   | delete                                                                   | minikube             | jenkins | v1.34.0 | 04 Dec 24 15:22 PST | 04 Dec 24 15:22 PST |
	|         | registry.k8s.io/pause:latest                                             |                      |         |         |                     |                     |
	| kubectl | functional-014000 kubectl --                                             | functional-014000    | jenkins | v1.34.0 | 04 Dec 24 15:22 PST |                     |
	|         | --context functional-014000                                              |                      |         |         |                     |                     |
	|         | get pods                                                                 |                      |         |         |                     |                     |
	| start   | -p functional-014000                                                     | functional-014000    | jenkins | v1.34.0 | 04 Dec 24 15:22 PST |                     |
	|         | --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision |                      |         |         |                     |                     |
	|         | --wait=all                                                               |                      |         |         |                     |                     |
	|---------|--------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/12/04 15:22:23
	Running on machine: MacOS-M1-Agent-1
	Binary: Built with gc go1.23.2 for darwin/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1204 15:22:23.626986    7818 out.go:345] Setting OutFile to fd 1 ...
	I1204 15:22:23.627130    7818 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1204 15:22:23.627132    7818 out.go:358] Setting ErrFile to fd 2...
	I1204 15:22:23.627133    7818 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1204 15:22:23.627243    7818 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/20045-6982/.minikube/bin
	I1204 15:22:23.628349    7818 out.go:352] Setting JSON to false
	I1204 15:22:23.645699    7818 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":4913,"bootTime":1733349630,"procs":540,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"15.0.1","kernelVersion":"24.0.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W1204 15:22:23.645779    7818 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I1204 15:22:23.651132    7818 out.go:177] * [functional-014000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	I1204 15:22:23.659141    7818 out.go:177]   - MINIKUBE_LOCATION=20045
	I1204 15:22:23.659175    7818 notify.go:220] Checking for updates...
	I1204 15:22:23.668089    7818 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/20045-6982/kubeconfig
	I1204 15:22:23.671096    7818 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I1204 15:22:23.674054    7818 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1204 15:22:23.677075    7818 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/20045-6982/.minikube
	I1204 15:22:23.680081    7818 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1204 15:22:23.683357    7818 config.go:182] Loaded profile config "functional-014000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.2
	I1204 15:22:23.683399    7818 driver.go:394] Setting default libvirt URI to qemu:///system
	I1204 15:22:23.688099    7818 out.go:177] * Using the qemu2 driver based on existing profile
	I1204 15:22:23.695007    7818 start.go:297] selected driver: qemu2
	I1204 15:22:23.695011    7818 start.go:901] validating driver "qemu2" against &{Name:functional-014000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19917/minikube-v1.34.0-1730913550-19917-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:4000 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:functional-014000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8441 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1204 15:22:23.695058    7818 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1204 15:22:23.697577    7818 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1204 15:22:23.697596    7818 cni.go:84] Creating CNI manager for ""
	I1204 15:22:23.697620    7818 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I1204 15:22:23.697683    7818 start.go:340] cluster config:
	{Name:functional-014000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19917/minikube-v1.34.0-1730913550-19917-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:4000 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:functional-014000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8441 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1204 15:22:23.702257    7818 iso.go:125] acquiring lock: {Name:mk798ab7885b095925d55c843b0600e7ea181045 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1204 15:22:23.709033    7818 out.go:177] * Starting "functional-014000" primary control-plane node in "functional-014000" cluster
	I1204 15:22:23.713044    7818 preload.go:131] Checking if preload exists for k8s version v1.31.2 and runtime docker
	I1204 15:22:23.713055    7818 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/20045-6982/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-docker-overlay2-arm64.tar.lz4
	I1204 15:22:23.713068    7818 cache.go:56] Caching tarball of preloaded images
	I1204 15:22:23.713134    7818 preload.go:172] Found /Users/jenkins/minikube-integration/20045-6982/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I1204 15:22:23.713137    7818 cache.go:59] Finished verifying existence of preloaded tar for v1.31.2 on docker
	I1204 15:22:23.713186    7818 profile.go:143] Saving config to /Users/jenkins/minikube-integration/20045-6982/.minikube/profiles/functional-014000/config.json ...
	I1204 15:22:23.713735    7818 start.go:360] acquireMachinesLock for functional-014000: {Name:mkab5eeb7049d26d12029fc21a1411c94c7c7493 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1204 15:22:23.713784    7818 start.go:364] duration metric: took 44.375µs to acquireMachinesLock for "functional-014000"
	I1204 15:22:23.713792    7818 start.go:96] Skipping create...Using existing machine configuration
	I1204 15:22:23.713794    7818 fix.go:54] fixHost starting: 
	I1204 15:22:23.713914    7818 fix.go:112] recreateIfNeeded on functional-014000: state=Stopped err=<nil>
	W1204 15:22:23.713921    7818 fix.go:138] unexpected machine state, will restart: <nil>
	I1204 15:22:23.722075    7818 out.go:177] * Restarting existing qemu2 VM for "functional-014000" ...
	I1204 15:22:23.726077    7818 qemu.go:418] Using hvf for hardware acceleration
	I1204 15:22:23.726112    7818 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 4000 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/20045-6982/.minikube/machines/functional-014000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/20045-6982/.minikube/machines/functional-014000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/20045-6982/.minikube/machines/functional-014000/qemu.pid -device virtio-net-pci,netdev=net0,mac=d2:c5:05:9d:48:27 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/20045-6982/.minikube/machines/functional-014000/disk.qcow2
	I1204 15:22:23.728396    7818 main.go:141] libmachine: STDOUT: 
	I1204 15:22:23.728411    7818 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1204 15:22:23.728442    7818 fix.go:56] duration metric: took 14.645958ms for fixHost
	I1204 15:22:23.728445    7818 start.go:83] releasing machines lock for "functional-014000", held for 14.658ms
	W1204 15:22:23.728451    7818 start.go:714] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W1204 15:22:23.728493    7818 out.go:270] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1204 15:22:23.728497    7818 start.go:729] Will try again in 5 seconds ...
	I1204 15:22:28.730779    7818 start.go:360] acquireMachinesLock for functional-014000: {Name:mkab5eeb7049d26d12029fc21a1411c94c7c7493 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1204 15:22:28.731150    7818 start.go:364] duration metric: took 295.25µs to acquireMachinesLock for "functional-014000"
	I1204 15:22:28.731259    7818 start.go:96] Skipping create...Using existing machine configuration
	I1204 15:22:28.731270    7818 fix.go:54] fixHost starting: 
	I1204 15:22:28.732001    7818 fix.go:112] recreateIfNeeded on functional-014000: state=Stopped err=<nil>
	W1204 15:22:28.732017    7818 fix.go:138] unexpected machine state, will restart: <nil>
	I1204 15:22:28.737680    7818 out.go:177] * Restarting existing qemu2 VM for "functional-014000" ...
	I1204 15:22:28.745553    7818 qemu.go:418] Using hvf for hardware acceleration
	I1204 15:22:28.745759    7818 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 4000 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/20045-6982/.minikube/machines/functional-014000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/20045-6982/.minikube/machines/functional-014000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/20045-6982/.minikube/machines/functional-014000/qemu.pid -device virtio-net-pci,netdev=net0,mac=d2:c5:05:9d:48:27 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/20045-6982/.minikube/machines/functional-014000/disk.qcow2
	I1204 15:22:28.755158    7818 main.go:141] libmachine: STDOUT: 
	I1204 15:22:28.755207    7818 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1204 15:22:28.755337    7818 fix.go:56] duration metric: took 24.066708ms for fixHost
	I1204 15:22:28.755349    7818 start.go:83] releasing machines lock for "functional-014000", held for 24.185875ms
	W1204 15:22:28.755510    7818 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p functional-014000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1204 15:22:28.762541    7818 out.go:201] 
	W1204 15:22:28.765661    7818 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W1204 15:22:28.765702    7818 out.go:270] * 
	W1204 15:22:28.768364    7818 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1204 15:22:28.777549    7818 out.go:201] 
	
	
	* The control-plane node functional-014000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-014000"

-- /stdout --
functional_test.go:1238: out/minikube-darwin-arm64 -p functional-014000 logs failed: exit status 83
functional_test.go:1228: expected minikube logs to include word: -"Linux"- but got 
***
==> Audit <==
|---------|--------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
| Command |                                   Args                                   |       Profile        |  User   | Version |     Start Time      |      End Time       |
|---------|--------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
| start   | -o=json --download-only                                                  | download-only-447000 | jenkins | v1.34.0 | 04 Dec 24 15:20 PST |                     |
|         | -p download-only-447000                                                  |                      |         |         |                     |                     |
|         | --force --alsologtostderr                                                |                      |         |         |                     |                     |
|         | --kubernetes-version=v1.20.0                                             |                      |         |         |                     |                     |
|         | --container-runtime=docker                                               |                      |         |         |                     |                     |
|         | --driver=qemu2                                                           |                      |         |         |                     |                     |
| delete  | --all                                                                    | minikube             | jenkins | v1.34.0 | 04 Dec 24 15:21 PST | 04 Dec 24 15:21 PST |
| delete  | -p download-only-447000                                                  | download-only-447000 | jenkins | v1.34.0 | 04 Dec 24 15:21 PST | 04 Dec 24 15:21 PST |
| start   | -o=json --download-only                                                  | download-only-914000 | jenkins | v1.34.0 | 04 Dec 24 15:21 PST |                     |
|         | -p download-only-914000                                                  |                      |         |         |                     |                     |
|         | --force --alsologtostderr                                                |                      |         |         |                     |                     |
|         | --kubernetes-version=v1.31.2                                             |                      |         |         |                     |                     |
|         | --container-runtime=docker                                               |                      |         |         |                     |                     |
|         | --driver=qemu2                                                           |                      |         |         |                     |                     |
| delete  | --all                                                                    | minikube             | jenkins | v1.34.0 | 04 Dec 24 15:21 PST | 04 Dec 24 15:21 PST |
| delete  | -p download-only-914000                                                  | download-only-914000 | jenkins | v1.34.0 | 04 Dec 24 15:21 PST | 04 Dec 24 15:21 PST |
| delete  | -p download-only-447000                                                  | download-only-447000 | jenkins | v1.34.0 | 04 Dec 24 15:21 PST | 04 Dec 24 15:21 PST |
| delete  | -p download-only-914000                                                  | download-only-914000 | jenkins | v1.34.0 | 04 Dec 24 15:21 PST | 04 Dec 24 15:21 PST |
| start   | --download-only -p                                                       | binary-mirror-489000 | jenkins | v1.34.0 | 04 Dec 24 15:21 PST |                     |
|         | binary-mirror-489000                                                     |                      |         |         |                     |                     |
|         | --alsologtostderr                                                        |                      |         |         |                     |                     |
|         | --binary-mirror                                                          |                      |         |         |                     |                     |
|         | http://127.0.0.1:61364                                                   |                      |         |         |                     |                     |
|         | --driver=qemu2                                                           |                      |         |         |                     |                     |
| delete  | -p binary-mirror-489000                                                  | binary-mirror-489000 | jenkins | v1.34.0 | 04 Dec 24 15:21 PST | 04 Dec 24 15:21 PST |
| addons  | enable dashboard -p                                                      | addons-057000        | jenkins | v1.34.0 | 04 Dec 24 15:21 PST |                     |
|         | addons-057000                                                            |                      |         |         |                     |                     |
| addons  | disable dashboard -p                                                     | addons-057000        | jenkins | v1.34.0 | 04 Dec 24 15:21 PST |                     |
|         | addons-057000                                                            |                      |         |         |                     |                     |
| start   | -p addons-057000 --wait=true                                             | addons-057000        | jenkins | v1.34.0 | 04 Dec 24 15:21 PST |                     |
|         | --memory=4000 --alsologtostderr                                          |                      |         |         |                     |                     |
|         | --addons=registry                                                        |                      |         |         |                     |                     |
|         | --addons=metrics-server                                                  |                      |         |         |                     |                     |
|         | --addons=volumesnapshots                                                 |                      |         |         |                     |                     |
|         | --addons=csi-hostpath-driver                                             |                      |         |         |                     |                     |
|         | --addons=gcp-auth                                                        |                      |         |         |                     |                     |
|         | --addons=cloud-spanner                                                   |                      |         |         |                     |                     |
|         | --addons=inspektor-gadget                                                |                      |         |         |                     |                     |
|         | --addons=nvidia-device-plugin                                            |                      |         |         |                     |                     |
|         | --addons=yakd --addons=volcano                                           |                      |         |         |                     |                     |
|         | --addons=amd-gpu-device-plugin                                           |                      |         |         |                     |                     |
|         | --driver=qemu2  --addons=ingress                                         |                      |         |         |                     |                     |
|         | --addons=ingress-dns                                                     |                      |         |         |                     |                     |
|         | --addons=storage-provisioner-rancher                                     |                      |         |         |                     |                     |
| delete  | -p addons-057000                                                         | addons-057000        | jenkins | v1.34.0 | 04 Dec 24 15:21 PST | 04 Dec 24 15:21 PST |
| start   | -p nospam-875000 -n=1 --memory=2250 --wait=false                         | nospam-875000        | jenkins | v1.34.0 | 04 Dec 24 15:21 PST |                     |
|         | --log_dir=/var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-875000 |                      |         |         |                     |                     |
|         | --driver=qemu2                                                           |                      |         |         |                     |                     |
| start   | nospam-875000 --log_dir                                                  | nospam-875000        | jenkins | v1.34.0 | 04 Dec 24 15:21 PST |                     |
|         | /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-875000           |                      |         |         |                     |                     |
|         | start --dry-run                                                          |                      |         |         |                     |                     |
| start   | nospam-875000 --log_dir                                                  | nospam-875000        | jenkins | v1.34.0 | 04 Dec 24 15:21 PST |                     |
|         | /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-875000           |                      |         |         |                     |                     |
|         | start --dry-run                                                          |                      |         |         |                     |                     |
| start   | nospam-875000 --log_dir                                                  | nospam-875000        | jenkins | v1.34.0 | 04 Dec 24 15:21 PST |                     |
|         | /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-875000           |                      |         |         |                     |                     |
|         | start --dry-run                                                          |                      |         |         |                     |                     |
| pause   | nospam-875000 --log_dir                                                  | nospam-875000        | jenkins | v1.34.0 | 04 Dec 24 15:21 PST |                     |
|         | /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-875000           |                      |         |         |                     |                     |
|         | pause                                                                    |                      |         |         |                     |                     |
| pause   | nospam-875000 --log_dir                                                  | nospam-875000        | jenkins | v1.34.0 | 04 Dec 24 15:21 PST |                     |
|         | /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-875000           |                      |         |         |                     |                     |
|         | pause                                                                    |                      |         |         |                     |                     |
| pause   | nospam-875000 --log_dir                                                  | nospam-875000        | jenkins | v1.34.0 | 04 Dec 24 15:21 PST |                     |
|         | /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-875000           |                      |         |         |                     |                     |
|         | pause                                                                    |                      |         |         |                     |                     |
| unpause | nospam-875000 --log_dir                                                  | nospam-875000        | jenkins | v1.34.0 | 04 Dec 24 15:21 PST |                     |
|         | /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-875000           |                      |         |         |                     |                     |
|         | unpause                                                                  |                      |         |         |                     |                     |
| unpause | nospam-875000 --log_dir                                                  | nospam-875000        | jenkins | v1.34.0 | 04 Dec 24 15:21 PST |                     |
|         | /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-875000           |                      |         |         |                     |                     |
|         | unpause                                                                  |                      |         |         |                     |                     |
| unpause | nospam-875000 --log_dir                                                  | nospam-875000        | jenkins | v1.34.0 | 04 Dec 24 15:21 PST |                     |
|         | /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-875000           |                      |         |         |                     |                     |
|         | unpause                                                                  |                      |         |         |                     |                     |
| stop    | nospam-875000 --log_dir                                                  | nospam-875000        | jenkins | v1.34.0 | 04 Dec 24 15:21 PST | 04 Dec 24 15:21 PST |
|         | /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-875000           |                      |         |         |                     |                     |
|         | stop                                                                     |                      |         |         |                     |                     |
| stop    | nospam-875000 --log_dir                                                  | nospam-875000        | jenkins | v1.34.0 | 04 Dec 24 15:21 PST | 04 Dec 24 15:21 PST |
|         | /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-875000           |                      |         |         |                     |                     |
|         | stop                                                                     |                      |         |         |                     |                     |
| stop    | nospam-875000 --log_dir                                                  | nospam-875000        | jenkins | v1.34.0 | 04 Dec 24 15:21 PST | 04 Dec 24 15:22 PST |
|         | /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-875000           |                      |         |         |                     |                     |
|         | stop                                                                     |                      |         |         |                     |                     |
| delete  | -p nospam-875000                                                         | nospam-875000        | jenkins | v1.34.0 | 04 Dec 24 15:22 PST | 04 Dec 24 15:22 PST |
| start   | -p functional-014000                                                     | functional-014000    | jenkins | v1.34.0 | 04 Dec 24 15:22 PST |                     |
|         | --memory=4000                                                            |                      |         |         |                     |                     |
|         | --apiserver-port=8441                                                    |                      |         |         |                     |                     |
|         | --wait=all --driver=qemu2                                                |                      |         |         |                     |                     |
| start   | -p functional-014000                                                     | functional-014000    | jenkins | v1.34.0 | 04 Dec 24 15:22 PST |                     |
|         | --alsologtostderr -v=8                                                   |                      |         |         |                     |                     |
| cache   | functional-014000 cache add                                              | functional-014000    | jenkins | v1.34.0 | 04 Dec 24 15:22 PST | 04 Dec 24 15:22 PST |
|         | registry.k8s.io/pause:3.1                                                |                      |         |         |                     |                     |
| cache   | functional-014000 cache add                                              | functional-014000    | jenkins | v1.34.0 | 04 Dec 24 15:22 PST | 04 Dec 24 15:22 PST |
|         | registry.k8s.io/pause:3.3                                                |                      |         |         |                     |                     |
| cache   | functional-014000 cache add                                              | functional-014000    | jenkins | v1.34.0 | 04 Dec 24 15:22 PST | 04 Dec 24 15:22 PST |
|         | registry.k8s.io/pause:latest                                             |                      |         |         |                     |                     |
| cache   | functional-014000 cache add                                              | functional-014000    | jenkins | v1.34.0 | 04 Dec 24 15:22 PST | 04 Dec 24 15:22 PST |
|         | minikube-local-cache-test:functional-014000                              |                      |         |         |                     |                     |
| cache   | functional-014000 cache delete                                           | functional-014000    | jenkins | v1.34.0 | 04 Dec 24 15:22 PST | 04 Dec 24 15:22 PST |
|         | minikube-local-cache-test:functional-014000                              |                      |         |         |                     |                     |
| cache   | delete                                                                   | minikube             | jenkins | v1.34.0 | 04 Dec 24 15:22 PST | 04 Dec 24 15:22 PST |
|         | registry.k8s.io/pause:3.3                                                |                      |         |         |                     |                     |
| cache   | list                                                                     | minikube             | jenkins | v1.34.0 | 04 Dec 24 15:22 PST | 04 Dec 24 15:22 PST |
| ssh     | functional-014000 ssh sudo                                               | functional-014000    | jenkins | v1.34.0 | 04 Dec 24 15:22 PST |                     |
|         | crictl images                                                            |                      |         |         |                     |                     |
| ssh     | functional-014000                                                        | functional-014000    | jenkins | v1.34.0 | 04 Dec 24 15:22 PST |                     |
|         | ssh sudo docker rmi                                                      |                      |         |         |                     |                     |
|         | registry.k8s.io/pause:latest                                             |                      |         |         |                     |                     |
| ssh     | functional-014000 ssh                                                    | functional-014000    | jenkins | v1.34.0 | 04 Dec 24 15:22 PST |                     |
|         | sudo crictl inspecti                                                     |                      |         |         |                     |                     |
|         | registry.k8s.io/pause:latest                                             |                      |         |         |                     |                     |
| cache   | functional-014000 cache reload                                           | functional-014000    | jenkins | v1.34.0 | 04 Dec 24 15:22 PST | 04 Dec 24 15:22 PST |
| ssh     | functional-014000 ssh                                                    | functional-014000    | jenkins | v1.34.0 | 04 Dec 24 15:22 PST |                     |
|         | sudo crictl inspecti                                                     |                      |         |         |                     |                     |
|         | registry.k8s.io/pause:latest                                             |                      |         |         |                     |                     |
| cache   | delete                                                                   | minikube             | jenkins | v1.34.0 | 04 Dec 24 15:22 PST | 04 Dec 24 15:22 PST |
|         | registry.k8s.io/pause:3.1                                                |                      |         |         |                     |                     |
| cache   | delete                                                                   | minikube             | jenkins | v1.34.0 | 04 Dec 24 15:22 PST | 04 Dec 24 15:22 PST |
|         | registry.k8s.io/pause:latest                                             |                      |         |         |                     |                     |
| kubectl | functional-014000 kubectl --                                             | functional-014000    | jenkins | v1.34.0 | 04 Dec 24 15:22 PST |                     |
|         | --context functional-014000                                              |                      |         |         |                     |                     |
|         | get pods                                                                 |                      |         |         |                     |                     |
| start   | -p functional-014000                                                     | functional-014000    | jenkins | v1.34.0 | 04 Dec 24 15:22 PST |                     |
|         | --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision |                      |         |         |                     |                     |
|         | --wait=all                                                               |                      |         |         |                     |                     |
|---------|--------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|

==> Last Start <==
Log file created at: 2024/12/04 15:22:23
Running on machine: MacOS-M1-Agent-1
Binary: Built with gc go1.23.2 for darwin/arm64
Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
I1204 15:22:23.626986    7818 out.go:345] Setting OutFile to fd 1 ...
I1204 15:22:23.627130    7818 out.go:392] TERM=,COLORTERM=, which probably does not support color
I1204 15:22:23.627132    7818 out.go:358] Setting ErrFile to fd 2...
I1204 15:22:23.627133    7818 out.go:392] TERM=,COLORTERM=, which probably does not support color
I1204 15:22:23.627243    7818 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/20045-6982/.minikube/bin
I1204 15:22:23.628349    7818 out.go:352] Setting JSON to false
I1204 15:22:23.645699    7818 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":4913,"bootTime":1733349630,"procs":540,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"15.0.1","kernelVersion":"24.0.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
W1204 15:22:23.645779    7818 start.go:137] gopshost.Virtualization returned error: not implemented yet
I1204 15:22:23.651132    7818 out.go:177] * [functional-014000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
I1204 15:22:23.659141    7818 out.go:177]   - MINIKUBE_LOCATION=20045
I1204 15:22:23.659175    7818 notify.go:220] Checking for updates...
I1204 15:22:23.668089    7818 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/20045-6982/kubeconfig
I1204 15:22:23.671096    7818 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
I1204 15:22:23.674054    7818 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
I1204 15:22:23.677075    7818 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/20045-6982/.minikube
I1204 15:22:23.680081    7818 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
I1204 15:22:23.683357    7818 config.go:182] Loaded profile config "functional-014000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.2
I1204 15:22:23.683399    7818 driver.go:394] Setting default libvirt URI to qemu:///system
I1204 15:22:23.688099    7818 out.go:177] * Using the qemu2 driver based on existing profile
I1204 15:22:23.695007    7818 start.go:297] selected driver: qemu2
I1204 15:22:23.695011    7818 start.go:901] validating driver "qemu2" against &{Name:functional-014000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19917/minikube-v1.34.0-1730913550-19917-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:4000 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:functional-014000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8441 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
I1204 15:22:23.695058    7818 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
I1204 15:22:23.697577    7818 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
I1204 15:22:23.697596    7818 cni.go:84] Creating CNI manager for ""
I1204 15:22:23.697620    7818 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
I1204 15:22:23.697683    7818 start.go:340] cluster config:
{Name:functional-014000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19917/minikube-v1.34.0-1730913550-19917-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:4000 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:functional-014000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8441 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
I1204 15:22:23.702257    7818 iso.go:125] acquiring lock: {Name:mk798ab7885b095925d55c843b0600e7ea181045 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
I1204 15:22:23.709033    7818 out.go:177] * Starting "functional-014000" primary control-plane node in "functional-014000" cluster
I1204 15:22:23.713044    7818 preload.go:131] Checking if preload exists for k8s version v1.31.2 and runtime docker
I1204 15:22:23.713055    7818 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/20045-6982/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-docker-overlay2-arm64.tar.lz4
I1204 15:22:23.713068    7818 cache.go:56] Caching tarball of preloaded images
I1204 15:22:23.713134    7818 preload.go:172] Found /Users/jenkins/minikube-integration/20045-6982/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-docker-overlay2-arm64.tar.lz4 in cache, skipping download
I1204 15:22:23.713137    7818 cache.go:59] Finished verifying existence of preloaded tar for v1.31.2 on docker
I1204 15:22:23.713186    7818 profile.go:143] Saving config to /Users/jenkins/minikube-integration/20045-6982/.minikube/profiles/functional-014000/config.json ...
I1204 15:22:23.713735    7818 start.go:360] acquireMachinesLock for functional-014000: {Name:mkab5eeb7049d26d12029fc21a1411c94c7c7493 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
I1204 15:22:23.713784    7818 start.go:364] duration metric: took 44.375µs to acquireMachinesLock for "functional-014000"
I1204 15:22:23.713792    7818 start.go:96] Skipping create...Using existing machine configuration
I1204 15:22:23.713794    7818 fix.go:54] fixHost starting: 
I1204 15:22:23.713914    7818 fix.go:112] recreateIfNeeded on functional-014000: state=Stopped err=<nil>
W1204 15:22:23.713921    7818 fix.go:138] unexpected machine state, will restart: <nil>
I1204 15:22:23.722075    7818 out.go:177] * Restarting existing qemu2 VM for "functional-014000" ...
I1204 15:22:23.726077    7818 qemu.go:418] Using hvf for hardware acceleration
I1204 15:22:23.726112    7818 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 4000 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/20045-6982/.minikube/machines/functional-014000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/20045-6982/.minikube/machines/functional-014000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/20045-6982/.minikube/machines/functional-014000/qemu.pid -device virtio-net-pci,netdev=net0,mac=d2:c5:05:9d:48:27 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/20045-6982/.minikube/machines/functional-014000/disk.qcow2
I1204 15:22:23.728396    7818 main.go:141] libmachine: STDOUT: 
I1204 15:22:23.728411    7818 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused

I1204 15:22:23.728442    7818 fix.go:56] duration metric: took 14.645958ms for fixHost
I1204 15:22:23.728445    7818 start.go:83] releasing machines lock for "functional-014000", held for 14.658ms
W1204 15:22:23.728451    7818 start.go:714] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
W1204 15:22:23.728493    7818 out.go:270] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
I1204 15:22:23.728497    7818 start.go:729] Will try again in 5 seconds ...
I1204 15:22:28.730779    7818 start.go:360] acquireMachinesLock for functional-014000: {Name:mkab5eeb7049d26d12029fc21a1411c94c7c7493 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
I1204 15:22:28.731150    7818 start.go:364] duration metric: took 295.25µs to acquireMachinesLock for "functional-014000"
I1204 15:22:28.731259    7818 start.go:96] Skipping create...Using existing machine configuration
I1204 15:22:28.731270    7818 fix.go:54] fixHost starting: 
I1204 15:22:28.732001    7818 fix.go:112] recreateIfNeeded on functional-014000: state=Stopped err=<nil>
W1204 15:22:28.732017    7818 fix.go:138] unexpected machine state, will restart: <nil>
I1204 15:22:28.737680    7818 out.go:177] * Restarting existing qemu2 VM for "functional-014000" ...
I1204 15:22:28.745553    7818 qemu.go:418] Using hvf for hardware acceleration
I1204 15:22:28.745759    7818 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 4000 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/20045-6982/.minikube/machines/functional-014000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/20045-6982/.minikube/machines/functional-014000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/20045-6982/.minikube/machines/functional-014000/qemu.pid -device virtio-net-pci,netdev=net0,mac=d2:c5:05:9d:48:27 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/20045-6982/.minikube/machines/functional-014000/disk.qcow2
I1204 15:22:28.755158    7818 main.go:141] libmachine: STDOUT: 
I1204 15:22:28.755207    7818 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused

I1204 15:22:28.755337    7818 fix.go:56] duration metric: took 24.066708ms for fixHost
I1204 15:22:28.755349    7818 start.go:83] releasing machines lock for "functional-014000", held for 24.185875ms
W1204 15:22:28.755510    7818 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p functional-014000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
I1204 15:22:28.762541    7818 out.go:201] 
W1204 15:22:28.765661    7818 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
W1204 15:22:28.765702    7818 out.go:270] * 
W1204 15:22:28.768364    7818 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
│                                                                                             │
│    * If the above advice does not help, please let us know:                                 │
│      https://github.com/kubernetes/minikube/issues/new/choose                               │
│                                                                                             │
│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
│                                                                                             │
╰─────────────────────────────────────────────────────────────────────────────────────────────╯
I1204 15:22:28.777549    7818 out.go:201] 

* The control-plane node functional-014000 host is not running: state=Stopped
To start a cluster, run: "minikube start -p functional-014000"
***
--- FAIL: TestFunctional/serial/LogsCmd (0.08s)
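This failure and the LogsFileCmd failure below share the root cause visible in the Last Start log above: the qemu2 VM was never restarted because socket_vmnet_client could not reach /var/run/socket_vmnet ("Connection refused"), so `minikube logs` never contains guest kernel output. The check at functional_test.go:1228 amounts to a substring assertion on the command output; the following is a minimal sketch inferred from the failure message alone, not the actual test source.

	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	func main() {
		// Run `minikube logs` for the profile under test.
		out, err := exec.Command("out/minikube-darwin-arm64",
			"-p", "functional-014000", "logs").CombinedOutput()
		if err != nil {
			fmt.Printf("minikube logs failed: %v\n", err) // here: exit status 83
		}
		// The assertion that failed above: kernel output mentions "Linux".
		if !strings.Contains(string(out), "Linux") {
			fmt.Println(`expected minikube logs to include word: -"Linux"-`)
		}
	}

On this CI host the usual remedy would be restarting the socket_vmnet daemon so that /var/run/socket_vmnet accepts connections again (see the lima-vm/socket_vmnet documentation); that is an operational guess from the error text, not something this log confirms.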

TestFunctional/serial/LogsFileCmd (0.08s)

=== RUN   TestFunctional/serial/LogsFileCmd
functional_test.go:1250: (dbg) Run:  out/minikube-darwin-arm64 -p functional-014000 logs --file /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalserialLogsFileCmd323836830/001/logs.txt
functional_test.go:1228: expected minikube logs to include word: -"Linux"- but got 
***
==> Audit <==
|---------|--------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
| Command |                                   Args                                   |       Profile        |  User   | Version |     Start Time      |      End Time       |
|---------|--------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
| start   | -o=json --download-only                                                  | download-only-447000 | jenkins | v1.34.0 | 04 Dec 24 15:20 PST |                     |
|         | -p download-only-447000                                                  |                      |         |         |                     |                     |
|         | --force --alsologtostderr                                                |                      |         |         |                     |                     |
|         | --kubernetes-version=v1.20.0                                             |                      |         |         |                     |                     |
|         | --container-runtime=docker                                               |                      |         |         |                     |                     |
|         | --driver=qemu2                                                           |                      |         |         |                     |                     |
| delete  | --all                                                                    | minikube             | jenkins | v1.34.0 | 04 Dec 24 15:21 PST | 04 Dec 24 15:21 PST |
| delete  | -p download-only-447000                                                  | download-only-447000 | jenkins | v1.34.0 | 04 Dec 24 15:21 PST | 04 Dec 24 15:21 PST |
| start   | -o=json --download-only                                                  | download-only-914000 | jenkins | v1.34.0 | 04 Dec 24 15:21 PST |                     |
|         | -p download-only-914000                                                  |                      |         |         |                     |                     |
|         | --force --alsologtostderr                                                |                      |         |         |                     |                     |
|         | --kubernetes-version=v1.31.2                                             |                      |         |         |                     |                     |
|         | --container-runtime=docker                                               |                      |         |         |                     |                     |
|         | --driver=qemu2                                                           |                      |         |         |                     |                     |
| delete  | --all                                                                    | minikube             | jenkins | v1.34.0 | 04 Dec 24 15:21 PST | 04 Dec 24 15:21 PST |
| delete  | -p download-only-914000                                                  | download-only-914000 | jenkins | v1.34.0 | 04 Dec 24 15:21 PST | 04 Dec 24 15:21 PST |
| delete  | -p download-only-447000                                                  | download-only-447000 | jenkins | v1.34.0 | 04 Dec 24 15:21 PST | 04 Dec 24 15:21 PST |
| delete  | -p download-only-914000                                                  | download-only-914000 | jenkins | v1.34.0 | 04 Dec 24 15:21 PST | 04 Dec 24 15:21 PST |
| start   | --download-only -p                                                       | binary-mirror-489000 | jenkins | v1.34.0 | 04 Dec 24 15:21 PST |                     |
|         | binary-mirror-489000                                                     |                      |         |         |                     |                     |
|         | --alsologtostderr                                                        |                      |         |         |                     |                     |
|         | --binary-mirror                                                          |                      |         |         |                     |                     |
|         | http://127.0.0.1:61364                                                   |                      |         |         |                     |                     |
|         | --driver=qemu2                                                           |                      |         |         |                     |                     |
| delete  | -p binary-mirror-489000                                                  | binary-mirror-489000 | jenkins | v1.34.0 | 04 Dec 24 15:21 PST | 04 Dec 24 15:21 PST |
| addons  | enable dashboard -p                                                      | addons-057000        | jenkins | v1.34.0 | 04 Dec 24 15:21 PST |                     |
|         | addons-057000                                                            |                      |         |         |                     |                     |
| addons  | disable dashboard -p                                                     | addons-057000        | jenkins | v1.34.0 | 04 Dec 24 15:21 PST |                     |
|         | addons-057000                                                            |                      |         |         |                     |                     |
| start   | -p addons-057000 --wait=true                                             | addons-057000        | jenkins | v1.34.0 | 04 Dec 24 15:21 PST |                     |
|         | --memory=4000 --alsologtostderr                                          |                      |         |         |                     |                     |
|         | --addons=registry                                                        |                      |         |         |                     |                     |
|         | --addons=metrics-server                                                  |                      |         |         |                     |                     |
|         | --addons=volumesnapshots                                                 |                      |         |         |                     |                     |
|         | --addons=csi-hostpath-driver                                             |                      |         |         |                     |                     |
|         | --addons=gcp-auth                                                        |                      |         |         |                     |                     |
|         | --addons=cloud-spanner                                                   |                      |         |         |                     |                     |
|         | --addons=inspektor-gadget                                                |                      |         |         |                     |                     |
|         | --addons=nvidia-device-plugin                                            |                      |         |         |                     |                     |
|         | --addons=yakd --addons=volcano                                           |                      |         |         |                     |                     |
|         | --addons=amd-gpu-device-plugin                                           |                      |         |         |                     |                     |
|         | --driver=qemu2  --addons=ingress                                         |                      |         |         |                     |                     |
|         | --addons=ingress-dns                                                     |                      |         |         |                     |                     |
|         | --addons=storage-provisioner-rancher                                     |                      |         |         |                     |                     |
| delete  | -p addons-057000                                                         | addons-057000        | jenkins | v1.34.0 | 04 Dec 24 15:21 PST | 04 Dec 24 15:21 PST |
| start   | -p nospam-875000 -n=1 --memory=2250 --wait=false                         | nospam-875000        | jenkins | v1.34.0 | 04 Dec 24 15:21 PST |                     |
|         | --log_dir=/var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-875000 |                      |         |         |                     |                     |
|         | --driver=qemu2                                                           |                      |         |         |                     |                     |
| start   | nospam-875000 --log_dir                                                  | nospam-875000        | jenkins | v1.34.0 | 04 Dec 24 15:21 PST |                     |
|         | /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-875000           |                      |         |         |                     |                     |
|         | start --dry-run                                                          |                      |         |         |                     |                     |
| start   | nospam-875000 --log_dir                                                  | nospam-875000        | jenkins | v1.34.0 | 04 Dec 24 15:21 PST |                     |
|         | /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-875000           |                      |         |         |                     |                     |
|         | start --dry-run                                                          |                      |         |         |                     |                     |
| start   | nospam-875000 --log_dir                                                  | nospam-875000        | jenkins | v1.34.0 | 04 Dec 24 15:21 PST |                     |
|         | /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-875000           |                      |         |         |                     |                     |
|         | start --dry-run                                                          |                      |         |         |                     |                     |
| pause   | nospam-875000 --log_dir                                                  | nospam-875000        | jenkins | v1.34.0 | 04 Dec 24 15:21 PST |                     |
|         | /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-875000           |                      |         |         |                     |                     |
|         | pause                                                                    |                      |         |         |                     |                     |
| pause   | nospam-875000 --log_dir                                                  | nospam-875000        | jenkins | v1.34.0 | 04 Dec 24 15:21 PST |                     |
|         | /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-875000           |                      |         |         |                     |                     |
|         | pause                                                                    |                      |         |         |                     |                     |
| pause   | nospam-875000 --log_dir                                                  | nospam-875000        | jenkins | v1.34.0 | 04 Dec 24 15:21 PST |                     |
|         | /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-875000           |                      |         |         |                     |                     |
|         | pause                                                                    |                      |         |         |                     |                     |
| unpause | nospam-875000 --log_dir                                                  | nospam-875000        | jenkins | v1.34.0 | 04 Dec 24 15:21 PST |                     |
|         | /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-875000           |                      |         |         |                     |                     |
|         | unpause                                                                  |                      |         |         |                     |                     |
| unpause | nospam-875000 --log_dir                                                  | nospam-875000        | jenkins | v1.34.0 | 04 Dec 24 15:21 PST |                     |
|         | /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-875000           |                      |         |         |                     |                     |
|         | unpause                                                                  |                      |         |         |                     |                     |
| unpause | nospam-875000 --log_dir                                                  | nospam-875000        | jenkins | v1.34.0 | 04 Dec 24 15:21 PST |                     |
|         | /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-875000           |                      |         |         |                     |                     |
|         | unpause                                                                  |                      |         |         |                     |                     |
| stop    | nospam-875000 --log_dir                                                  | nospam-875000        | jenkins | v1.34.0 | 04 Dec 24 15:21 PST | 04 Dec 24 15:21 PST |
|         | /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-875000           |                      |         |         |                     |                     |
|         | stop                                                                     |                      |         |         |                     |                     |
| stop    | nospam-875000 --log_dir                                                  | nospam-875000        | jenkins | v1.34.0 | 04 Dec 24 15:21 PST | 04 Dec 24 15:21 PST |
|         | /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-875000           |                      |         |         |                     |                     |
|         | stop                                                                     |                      |         |         |                     |                     |
| stop    | nospam-875000 --log_dir                                                  | nospam-875000        | jenkins | v1.34.0 | 04 Dec 24 15:21 PST | 04 Dec 24 15:22 PST |
|         | /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-875000           |                      |         |         |                     |                     |
|         | stop                                                                     |                      |         |         |                     |                     |
| delete  | -p nospam-875000                                                         | nospam-875000        | jenkins | v1.34.0 | 04 Dec 24 15:22 PST | 04 Dec 24 15:22 PST |
| start   | -p functional-014000                                                     | functional-014000    | jenkins | v1.34.0 | 04 Dec 24 15:22 PST |                     |
|         | --memory=4000                                                            |                      |         |         |                     |                     |
|         | --apiserver-port=8441                                                    |                      |         |         |                     |                     |
|         | --wait=all --driver=qemu2                                                |                      |         |         |                     |                     |
| start   | -p functional-014000                                                     | functional-014000    | jenkins | v1.34.0 | 04 Dec 24 15:22 PST |                     |
|         | --alsologtostderr -v=8                                                   |                      |         |         |                     |                     |
| cache   | functional-014000 cache add                                              | functional-014000    | jenkins | v1.34.0 | 04 Dec 24 15:22 PST | 04 Dec 24 15:22 PST |
|         | registry.k8s.io/pause:3.1                                                |                      |         |         |                     |                     |
| cache   | functional-014000 cache add                                              | functional-014000    | jenkins | v1.34.0 | 04 Dec 24 15:22 PST | 04 Dec 24 15:22 PST |
|         | registry.k8s.io/pause:3.3                                                |                      |         |         |                     |                     |
| cache   | functional-014000 cache add                                              | functional-014000    | jenkins | v1.34.0 | 04 Dec 24 15:22 PST | 04 Dec 24 15:22 PST |
|         | registry.k8s.io/pause:latest                                             |                      |         |         |                     |                     |
| cache   | functional-014000 cache add                                              | functional-014000    | jenkins | v1.34.0 | 04 Dec 24 15:22 PST | 04 Dec 24 15:22 PST |
|         | minikube-local-cache-test:functional-014000                              |                      |         |         |                     |                     |
| cache   | functional-014000 cache delete                                           | functional-014000    | jenkins | v1.34.0 | 04 Dec 24 15:22 PST | 04 Dec 24 15:22 PST |
|         | minikube-local-cache-test:functional-014000                              |                      |         |         |                     |                     |
| cache   | delete                                                                   | minikube             | jenkins | v1.34.0 | 04 Dec 24 15:22 PST | 04 Dec 24 15:22 PST |
|         | registry.k8s.io/pause:3.3                                                |                      |         |         |                     |                     |
| cache   | list                                                                     | minikube             | jenkins | v1.34.0 | 04 Dec 24 15:22 PST | 04 Dec 24 15:22 PST |
| ssh     | functional-014000 ssh sudo                                               | functional-014000    | jenkins | v1.34.0 | 04 Dec 24 15:22 PST |                     |
|         | crictl images                                                            |                      |         |         |                     |                     |
| ssh     | functional-014000                                                        | functional-014000    | jenkins | v1.34.0 | 04 Dec 24 15:22 PST |                     |
|         | ssh sudo docker rmi                                                      |                      |         |         |                     |                     |
|         | registry.k8s.io/pause:latest                                             |                      |         |         |                     |                     |
| ssh     | functional-014000 ssh                                                    | functional-014000    | jenkins | v1.34.0 | 04 Dec 24 15:22 PST |                     |
|         | sudo crictl inspecti                                                     |                      |         |         |                     |                     |
|         | registry.k8s.io/pause:latest                                             |                      |         |         |                     |                     |
| cache   | functional-014000 cache reload                                           | functional-014000    | jenkins | v1.34.0 | 04 Dec 24 15:22 PST | 04 Dec 24 15:22 PST |
| ssh     | functional-014000 ssh                                                    | functional-014000    | jenkins | v1.34.0 | 04 Dec 24 15:22 PST |                     |
|         | sudo crictl inspecti                                                     |                      |         |         |                     |                     |
|         | registry.k8s.io/pause:latest                                             |                      |         |         |                     |                     |
| cache   | delete                                                                   | minikube             | jenkins | v1.34.0 | 04 Dec 24 15:22 PST | 04 Dec 24 15:22 PST |
|         | registry.k8s.io/pause:3.1                                                |                      |         |         |                     |                     |
| cache   | delete                                                                   | minikube             | jenkins | v1.34.0 | 04 Dec 24 15:22 PST | 04 Dec 24 15:22 PST |
|         | registry.k8s.io/pause:latest                                             |                      |         |         |                     |                     |
| kubectl | functional-014000 kubectl --                                             | functional-014000    | jenkins | v1.34.0 | 04 Dec 24 15:22 PST |                     |
|         | --context functional-014000                                              |                      |         |         |                     |                     |
|         | get pods                                                                 |                      |         |         |                     |                     |
| start   | -p functional-014000                                                     | functional-014000    | jenkins | v1.34.0 | 04 Dec 24 15:22 PST |                     |
|         | --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision |                      |         |         |                     |                     |
|         | --wait=all                                                               |                      |         |         |                     |                     |
|---------|--------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
==> Last Start <==
Log file created at: 2024/12/04 15:22:23
Running on machine: MacOS-M1-Agent-1
Binary: Built with gc go1.23.2 for darwin/arm64
Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
I1204 15:22:23.626986    7818 out.go:345] Setting OutFile to fd 1 ...
I1204 15:22:23.627130    7818 out.go:392] TERM=,COLORTERM=, which probably does not support color
I1204 15:22:23.627132    7818 out.go:358] Setting ErrFile to fd 2...
I1204 15:22:23.627133    7818 out.go:392] TERM=,COLORTERM=, which probably does not support color
I1204 15:22:23.627243    7818 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/20045-6982/.minikube/bin
I1204 15:22:23.628349    7818 out.go:352] Setting JSON to false
I1204 15:22:23.645699    7818 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":4913,"bootTime":1733349630,"procs":540,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"15.0.1","kernelVersion":"24.0.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
W1204 15:22:23.645779    7818 start.go:137] gopshost.Virtualization returned error: not implemented yet
I1204 15:22:23.651132    7818 out.go:177] * [functional-014000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
I1204 15:22:23.659141    7818 out.go:177]   - MINIKUBE_LOCATION=20045
I1204 15:22:23.659175    7818 notify.go:220] Checking for updates...
I1204 15:22:23.668089    7818 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/20045-6982/kubeconfig
I1204 15:22:23.671096    7818 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
I1204 15:22:23.674054    7818 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
I1204 15:22:23.677075    7818 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/20045-6982/.minikube
I1204 15:22:23.680081    7818 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
I1204 15:22:23.683357    7818 config.go:182] Loaded profile config "functional-014000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.2
I1204 15:22:23.683399    7818 driver.go:394] Setting default libvirt URI to qemu:///system
I1204 15:22:23.688099    7818 out.go:177] * Using the qemu2 driver based on existing profile
I1204 15:22:23.695007    7818 start.go:297] selected driver: qemu2
I1204 15:22:23.695011    7818 start.go:901] validating driver "qemu2" against &{Name:functional-014000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19917/minikube-v1.34.0-1730913550-19917-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:4000 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:functional-014000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8441 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
I1204 15:22:23.695058    7818 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
I1204 15:22:23.697577    7818 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
I1204 15:22:23.697596    7818 cni.go:84] Creating CNI manager for ""
I1204 15:22:23.697620    7818 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
I1204 15:22:23.697683    7818 start.go:340] cluster config:
{Name:functional-014000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19917/minikube-v1.34.0-1730913550-19917-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:4000 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:functional-014000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8441 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
I1204 15:22:23.702257    7818 iso.go:125] acquiring lock: {Name:mk798ab7885b095925d55c843b0600e7ea181045 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
I1204 15:22:23.709033    7818 out.go:177] * Starting "functional-014000" primary control-plane node in "functional-014000" cluster
I1204 15:22:23.713044    7818 preload.go:131] Checking if preload exists for k8s version v1.31.2 and runtime docker
I1204 15:22:23.713055    7818 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/20045-6982/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-docker-overlay2-arm64.tar.lz4
I1204 15:22:23.713068    7818 cache.go:56] Caching tarball of preloaded images
I1204 15:22:23.713134    7818 preload.go:172] Found /Users/jenkins/minikube-integration/20045-6982/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-docker-overlay2-arm64.tar.lz4 in cache, skipping download
I1204 15:22:23.713137    7818 cache.go:59] Finished verifying existence of preloaded tar for v1.31.2 on docker
I1204 15:22:23.713186    7818 profile.go:143] Saving config to /Users/jenkins/minikube-integration/20045-6982/.minikube/profiles/functional-014000/config.json ...
I1204 15:22:23.713735    7818 start.go:360] acquireMachinesLock for functional-014000: {Name:mkab5eeb7049d26d12029fc21a1411c94c7c7493 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
I1204 15:22:23.713784    7818 start.go:364] duration metric: took 44.375µs to acquireMachinesLock for "functional-014000"
I1204 15:22:23.713792    7818 start.go:96] Skipping create...Using existing machine configuration
I1204 15:22:23.713794    7818 fix.go:54] fixHost starting: 
I1204 15:22:23.713914    7818 fix.go:112] recreateIfNeeded on functional-014000: state=Stopped err=<nil>
W1204 15:22:23.713921    7818 fix.go:138] unexpected machine state, will restart: <nil>
I1204 15:22:23.722075    7818 out.go:177] * Restarting existing qemu2 VM for "functional-014000" ...
I1204 15:22:23.726077    7818 qemu.go:418] Using hvf for hardware acceleration
I1204 15:22:23.726112    7818 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 4000 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/20045-6982/.minikube/machines/functional-014000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/20045-6982/.minikube/machines/functional-014000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/20045-6982/.minikube/machines/functional-014000/qemu.pid -device virtio-net-pci,netdev=net0,mac=d2:c5:05:9d:48:27 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/20045-6982/.minikube/machines/functional-014000/disk.qcow2
I1204 15:22:23.728396    7818 main.go:141] libmachine: STDOUT: 
I1204 15:22:23.728411    7818 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
I1204 15:22:23.728442    7818 fix.go:56] duration metric: took 14.645958ms for fixHost
I1204 15:22:23.728445    7818 start.go:83] releasing machines lock for "functional-014000", held for 14.658ms
W1204 15:22:23.728451    7818 start.go:714] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
W1204 15:22:23.728493    7818 out.go:270] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
I1204 15:22:23.728497    7818 start.go:729] Will try again in 5 seconds ...
I1204 15:22:28.730779    7818 start.go:360] acquireMachinesLock for functional-014000: {Name:mkab5eeb7049d26d12029fc21a1411c94c7c7493 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
I1204 15:22:28.731150    7818 start.go:364] duration metric: took 295.25µs to acquireMachinesLock for "functional-014000"
I1204 15:22:28.731259    7818 start.go:96] Skipping create...Using existing machine configuration
I1204 15:22:28.731270    7818 fix.go:54] fixHost starting: 
I1204 15:22:28.732001    7818 fix.go:112] recreateIfNeeded on functional-014000: state=Stopped err=<nil>
W1204 15:22:28.732017    7818 fix.go:138] unexpected machine state, will restart: <nil>
I1204 15:22:28.737680    7818 out.go:177] * Restarting existing qemu2 VM for "functional-014000" ...
I1204 15:22:28.745553    7818 qemu.go:418] Using hvf for hardware acceleration
I1204 15:22:28.745759    7818 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 4000 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/20045-6982/.minikube/machines/functional-014000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/20045-6982/.minikube/machines/functional-014000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/20045-6982/.minikube/machines/functional-014000/qemu.pid -device virtio-net-pci,netdev=net0,mac=d2:c5:05:9d:48:27 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/20045-6982/.minikube/machines/functional-014000/disk.qcow2
I1204 15:22:28.755158    7818 main.go:141] libmachine: STDOUT: 
I1204 15:22:28.755207    7818 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
I1204 15:22:28.755337    7818 fix.go:56] duration metric: took 24.066708ms for fixHost
I1204 15:22:28.755349    7818 start.go:83] releasing machines lock for "functional-014000", held for 24.185875ms
W1204 15:22:28.755510    7818 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p functional-014000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
I1204 15:22:28.762541    7818 out.go:201] 
W1204 15:22:28.765661    7818 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
W1204 15:22:28.765702    7818 out.go:270] * 
W1204 15:22:28.768364    7818 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
│                                                                                             │
│    * If the above advice does not help, please let us know:                                 │
│      https://github.com/kubernetes/minikube/issues/new/choose                               │
│                                                                                             │
│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
│                                                                                             │
╰─────────────────────────────────────────────────────────────────────────────────────────────╯
I1204 15:22:28.777549    7818 out.go:201] 
--- FAIL: TestFunctional/serial/LogsFileCmd (0.08s)
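Both restart attempts above die on the same unix-socket dial error before QEMU ever launches. A minimal standalone probe of that socket, sketched in Go (the path and the 2-second timeout are taken from the log above, not from the test suite), reproduces the failure mode:

package main

import (
	"fmt"
	"net"
	"os"
	"time"
)

// Probe the socket_vmnet control socket that every "minikube start"
// retry in the log above fails to reach.
func main() {
	const sock = "/var/run/socket_vmnet" // path copied from the log; adjust if yours differs
	conn, err := net.DialTimeout("unix", sock, 2*time.Second)
	if err != nil {
		// A refused or absent socket here matches the repeated
		// `Failed to connect to "/var/run/socket_vmnet": Connection refused`.
		fmt.Fprintf(os.Stderr, "dial %s: %v\n", sock, err)
		os.Exit(1)
	}
	conn.Close()
	fmt.Printf("%s is accepting connections\n", sock)
}

If the dial fails, the problem is likely on the host side (the socket_vmnet service not running or its socket path misconfigured) rather than in minikube itself.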

TestFunctional/serial/InvalidService (0.03s)

=== RUN   TestFunctional/serial/InvalidService
functional_test.go:2321: (dbg) Run:  kubectl --context functional-014000 apply -f testdata/invalidsvc.yaml
functional_test.go:2321: (dbg) Non-zero exit: kubectl --context functional-014000 apply -f testdata/invalidsvc.yaml: exit status 1 (27.989125ms)

** stderr ** 
	error: context "functional-014000" does not exist

** /stderr **
functional_test.go:2323: kubectl --context functional-014000 apply -f testdata/invalidsvc.yaml failed: exit status 1
--- FAIL: TestFunctional/serial/InvalidService (0.03s)

TestFunctional/parallel/DashboardCmd (0.21s)

=== RUN   TestFunctional/parallel/DashboardCmd
=== PAUSE TestFunctional/parallel/DashboardCmd
=== CONT  TestFunctional/parallel/DashboardCmd
functional_test.go:905: (dbg) daemon: [out/minikube-darwin-arm64 dashboard --url --port 36195 -p functional-014000 --alsologtostderr -v=1]
functional_test.go:918: output didn't produce a URL
functional_test.go:910: (dbg) stopping [out/minikube-darwin-arm64 dashboard --url --port 36195 -p functional-014000 --alsologtostderr -v=1] ...
functional_test.go:910: (dbg) [out/minikube-darwin-arm64 dashboard --url --port 36195 -p functional-014000 --alsologtostderr -v=1] stdout:
functional_test.go:910: (dbg) [out/minikube-darwin-arm64 dashboard --url --port 36195 -p functional-014000 --alsologtostderr -v=1] stderr:
I1204 15:23:09.995092    8130 out.go:345] Setting OutFile to fd 1 ...
I1204 15:23:09.995524    8130 out.go:392] TERM=,COLORTERM=, which probably does not support color
I1204 15:23:09.995527    8130 out.go:358] Setting ErrFile to fd 2...
I1204 15:23:09.995529    8130 out.go:392] TERM=,COLORTERM=, which probably does not support color
I1204 15:23:09.995680    8130 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/20045-6982/.minikube/bin
I1204 15:23:09.995903    8130 mustload.go:65] Loading cluster: functional-014000
I1204 15:23:09.996115    8130 config.go:182] Loaded profile config "functional-014000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.2
I1204 15:23:10.000693    8130 out.go:177] * The control-plane node functional-014000 host is not running: state=Stopped
I1204 15:23:10.004636    8130 out.go:177]   To start a cluster, run: "minikube start -p functional-014000"
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p functional-014000 -n functional-014000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p functional-014000 -n functional-014000: exit status 7 (46.057125ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "functional-014000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestFunctional/parallel/DashboardCmd (0.21s)

TestFunctional/parallel/StatusCmd (0.14s)

=== RUN   TestFunctional/parallel/StatusCmd
=== PAUSE TestFunctional/parallel/StatusCmd
=== CONT  TestFunctional/parallel/StatusCmd
functional_test.go:854: (dbg) Run:  out/minikube-darwin-arm64 -p functional-014000 status
functional_test.go:854: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-014000 status: exit status 7 (34.59525ms)

-- stdout --
	functional-014000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
functional_test.go:856: failed to run minikube status. args "out/minikube-darwin-arm64 -p functional-014000 status" : exit status 7
functional_test.go:860: (dbg) Run:  out/minikube-darwin-arm64 -p functional-014000 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}
functional_test.go:860: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-014000 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}: exit status 7 (34.714209ms)

-- stdout --
	host:Stopped,kublet:Stopped,apiserver:Stopped,kubeconfig:Stopped

-- /stdout --
functional_test.go:862: failed to run minikube status with custom format: args "out/minikube-darwin-arm64 -p functional-014000 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}": exit status 7
functional_test.go:872: (dbg) Run:  out/minikube-darwin-arm64 -p functional-014000 status -o json
functional_test.go:872: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-014000 status -o json: exit status 7 (34.974459ms)

-- stdout --
	{"Name":"functional-014000","Host":"Stopped","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Stopped","Worker":false}

-- /stdout --
functional_test.go:874: failed to run minikube status with json output. args "out/minikube-darwin-arm64 -p functional-014000 status -o json" : exit status 7
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p functional-014000 -n functional-014000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p functional-014000 -n functional-014000: exit status 7 (34.183792ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "functional-014000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestFunctional/parallel/StatusCmd (0.14s)
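The -f argument exercised above is an ordinary Go text/template rendered over minikube's status struct; a small sketch of the same rendering (field names copied from the -o json output above, everything else an assumption for illustration):

package main

import (
	"os"
	"text/template"
)

// Status mirrors the fields visible in the `status -o json` output above.
type Status struct {
	Name, Host, Kubelet, APIServer, Kubeconfig string
	Worker                                     bool
}

func main() {
	s := Status{Name: "functional-014000", Host: "Stopped", Kubelet: "Stopped", APIServer: "Stopped", Kubeconfig: "Stopped"}
	// The same template string the test passes via -f; the test's own
	// "kublet" spelling is preserved verbatim since it is only a literal label.
	tmpl := template.Must(template.New("status").Parse(
		"host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}\n"))
	tmpl.Execute(os.Stdout, s) // prints the same line seen in the stdout block above
}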

TestFunctional/parallel/ServiceCmdConnect (0.14s)

=== RUN   TestFunctional/parallel/ServiceCmdConnect
=== PAUSE TestFunctional/parallel/ServiceCmdConnect
=== CONT  TestFunctional/parallel/ServiceCmdConnect
functional_test.go:1627: (dbg) Run:  kubectl --context functional-014000 create deployment hello-node-connect --image=registry.k8s.io/echoserver-arm:1.8
functional_test.go:1627: (dbg) Non-zero exit: kubectl --context functional-014000 create deployment hello-node-connect --image=registry.k8s.io/echoserver-arm:1.8: exit status 1 (27.109458ms)

** stderr ** 
	error: context "functional-014000" does not exist

** /stderr **
functional_test.go:1633: failed to create hello-node deployment with this command "kubectl --context functional-014000 create deployment hello-node-connect --image=registry.k8s.io/echoserver-arm:1.8": exit status 1.
functional_test.go:1598: service test failed - dumping debug information
functional_test.go:1599: -----------------------service failure post-mortem--------------------------------
functional_test.go:1602: (dbg) Run:  kubectl --context functional-014000 describe po hello-node-connect
functional_test.go:1602: (dbg) Non-zero exit: kubectl --context functional-014000 describe po hello-node-connect: exit status 1 (26.527375ms)

** stderr ** 
	Error in configuration: context was not found for specified context: functional-014000

** /stderr **
functional_test.go:1604: "kubectl --context functional-014000 describe po hello-node-connect" failed: exit status 1
functional_test.go:1606: hello-node pod describe:
functional_test.go:1608: (dbg) Run:  kubectl --context functional-014000 logs -l app=hello-node-connect
functional_test.go:1608: (dbg) Non-zero exit: kubectl --context functional-014000 logs -l app=hello-node-connect: exit status 1 (26.42675ms)

** stderr ** 
	Error in configuration: context was not found for specified context: functional-014000

** /stderr **
functional_test.go:1610: "kubectl --context functional-014000 logs -l app=hello-node-connect" failed: exit status 1
functional_test.go:1612: hello-node logs:
functional_test.go:1614: (dbg) Run:  kubectl --context functional-014000 describe svc hello-node-connect
functional_test.go:1614: (dbg) Non-zero exit: kubectl --context functional-014000 describe svc hello-node-connect: exit status 1 (26.3775ms)

** stderr ** 
	Error in configuration: context was not found for specified context: functional-014000

** /stderr **
functional_test.go:1616: "kubectl --context functional-014000 describe svc hello-node-connect" failed: exit status 1
functional_test.go:1618: hello-node svc describe:
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p functional-014000 -n functional-014000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p functional-014000 -n functional-014000: exit status 7 (34.726792ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "functional-014000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestFunctional/parallel/ServiceCmdConnect (0.14s)

TestFunctional/parallel/PersistentVolumeClaim (0.04s)

=== RUN   TestFunctional/parallel/PersistentVolumeClaim
=== PAUSE TestFunctional/parallel/PersistentVolumeClaim
=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:45: failed waiting for storage-provisioner: client config: context "functional-014000" does not exist
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p functional-014000 -n functional-014000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p functional-014000 -n functional-014000: exit status 7 (35.29225ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "functional-014000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestFunctional/parallel/PersistentVolumeClaim (0.04s)

TestFunctional/parallel/SSHCmd (0.13s)

=== RUN   TestFunctional/parallel/SSHCmd
=== PAUSE TestFunctional/parallel/SSHCmd
=== CONT  TestFunctional/parallel/SSHCmd
functional_test.go:1725: (dbg) Run:  out/minikube-darwin-arm64 -p functional-014000 ssh "echo hello"
functional_test.go:1725: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-014000 ssh "echo hello": exit status 83 (45.539625ms)

-- stdout --
	* The control-plane node functional-014000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-014000"

-- /stdout --
functional_test.go:1730: failed to run an ssh command. args "out/minikube-darwin-arm64 -p functional-014000 ssh \"echo hello\"" : exit status 83
functional_test.go:1734: expected minikube ssh command output to be -"hello"- but got *"* The control-plane node functional-014000 host is not running: state=Stopped\n  To start a cluster, run: \"minikube start -p functional-014000\"\n"*. args "out/minikube-darwin-arm64 -p functional-014000 ssh \"echo hello\""
functional_test.go:1742: (dbg) Run:  out/minikube-darwin-arm64 -p functional-014000 ssh "cat /etc/hostname"
functional_test.go:1742: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-014000 ssh "cat /etc/hostname": exit status 83 (46.84575ms)

-- stdout --
	* The control-plane node functional-014000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-014000"

-- /stdout --
functional_test.go:1748: failed to run an ssh command. args "out/minikube-darwin-arm64 -p functional-014000 ssh \"cat /etc/hostname\"" : exit status 83
functional_test.go:1752: expected minikube ssh command output to be -"functional-014000"- but got *"* The control-plane node functional-014000 host is not running: state=Stopped\n  To start a cluster, run: \"minikube start -p functional-014000\"\n"*. args "out/minikube-darwin-arm64 -p functional-014000 ssh \"cat /etc/hostname\""
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p functional-014000 -n functional-014000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p functional-014000 -n functional-014000: exit status 7 (35.209042ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "functional-014000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestFunctional/parallel/SSHCmd (0.13s)

TestFunctional/parallel/CpCmd (0.29s)

=== RUN   TestFunctional/parallel/CpCmd
=== PAUSE TestFunctional/parallel/CpCmd
=== CONT  TestFunctional/parallel/CpCmd
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-arm64 -p functional-014000 cp testdata/cp-test.txt /home/docker/cp-test.txt
helpers_test.go:556: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-014000 cp testdata/cp-test.txt /home/docker/cp-test.txt: exit status 83 (58.306375ms)

-- stdout --
	* The control-plane node functional-014000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-014000"

-- /stdout --
helpers_test.go:561: failed to run an cp command. args "out/minikube-darwin-arm64 -p functional-014000 cp testdata/cp-test.txt /home/docker/cp-test.txt" : exit status 83
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p functional-014000 ssh -n functional-014000 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-014000 ssh -n functional-014000 "sudo cat /home/docker/cp-test.txt": exit status 83 (44.798125ms)

-- stdout --
	* The control-plane node functional-014000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-014000"

-- /stdout --
helpers_test.go:539: failed to run an cp command. args "out/minikube-darwin-arm64 -p functional-014000 ssh -n functional-014000 \"sudo cat /home/docker/cp-test.txt\"" : exit status 83
helpers_test.go:573: /testdata/cp-test.txt content mismatch (-want +got):
  strings.Join({
+ 	"* ",
  	"T",
- 	"est file for checking file cp process",
+ 	"he control-plane node functional-014000 host is not running: sta",
+ 	"te=Stopped\n  To start a cluster, run: \"minikube start -p functio",
+ 	"nal-014000\"\n",
  }, "")
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-arm64 -p functional-014000 cp functional-014000:/home/docker/cp-test.txt /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalparallelCpCmd3735427970/001/cp-test.txt
helpers_test.go:556: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-014000 cp functional-014000:/home/docker/cp-test.txt /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalparallelCpCmd3735427970/001/cp-test.txt: exit status 83 (45.762958ms)

-- stdout --
	* The control-plane node functional-014000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-014000"

-- /stdout --
helpers_test.go:561: failed to run an cp command. args "out/minikube-darwin-arm64 -p functional-014000 cp functional-014000:/home/docker/cp-test.txt /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalparallelCpCmd3735427970/001/cp-test.txt" : exit status 83
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p functional-014000 ssh -n functional-014000 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-014000 ssh -n functional-014000 "sudo cat /home/docker/cp-test.txt": exit status 83 (43.041ms)

-- stdout --
	* The control-plane node functional-014000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-014000"

-- /stdout --
helpers_test.go:539: failed to run an cp command. args "out/minikube-darwin-arm64 -p functional-014000 ssh -n functional-014000 \"sudo cat /home/docker/cp-test.txt\"" : exit status 83
helpers_test.go:528: failed to read test file 'testdata/cp-test.txt' : open /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalparallelCpCmd3735427970/001/cp-test.txt: no such file or directory
helpers_test.go:573: /testdata/cp-test.txt content mismatch (-want +got):
  string(
- 	"* The control-plane node functional-014000 host is not running: state=Stopped\n  To start a cluster, run: \"minikube start -p functional-014000\"\n",
+ 	"",
  )
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-arm64 -p functional-014000 cp testdata/cp-test.txt /tmp/does/not/exist/cp-test.txt
helpers_test.go:556: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-014000 cp testdata/cp-test.txt /tmp/does/not/exist/cp-test.txt: exit status 83 (46.650625ms)

-- stdout --
	* The control-plane node functional-014000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-014000"

-- /stdout --
helpers_test.go:561: failed to run an cp command. args "out/minikube-darwin-arm64 -p functional-014000 cp testdata/cp-test.txt /tmp/does/not/exist/cp-test.txt" : exit status 83
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p functional-014000 ssh -n functional-014000 "sudo cat /tmp/does/not/exist/cp-test.txt"
helpers_test.go:534: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-014000 ssh -n functional-014000 "sudo cat /tmp/does/not/exist/cp-test.txt": exit status 83 (47.844625ms)

-- stdout --
	* The control-plane node functional-014000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-014000"

-- /stdout --
helpers_test.go:539: failed to run an cp command. args "out/minikube-darwin-arm64 -p functional-014000 ssh -n functional-014000 \"sudo cat /tmp/does/not/exist/cp-test.txt\"" : exit status 83
helpers_test.go:573: /testdata/cp-test.txt content mismatch (-want +got):
  strings.Join({
+ 	"* ",
  	"T",
- 	"est file for checking file cp process",
+ 	"he control-plane node functional-014000 host is not running: sta",
+ 	"te=Stopped\n  To start a cluster, run: \"minikube start -p functio",
+ 	"nal-014000\"\n",
  }, "")
--- FAIL: TestFunctional/parallel/CpCmd (0.29s)
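The -want/+got hunks in this failure are go-cmp output: cmp.Diff renders the minimal edit between the expected file contents and the stdout that actually came back. A minimal reproduction of that rendering (a sketch assuming the github.com/google/go-cmp module is available; the strings are taken from the failure above):

package main

import (
	"fmt"

	"github.com/google/go-cmp/cmp"
)

func main() {
	want := "Test file for checking file cp process"
	got := "* The control-plane node functional-014000 host is not running: state=Stopped\n" +
		"  To start a cluster, run: \"minikube start -p functional-014000\"\n"
	// cmp.Diff prints hunks in the same "-want +got" strings.Join form
	// seen in the CpCmd failure above.
	fmt.Println(cmp.Diff(want, got))
}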

TestFunctional/parallel/FileSync (0.08s)

=== RUN   TestFunctional/parallel/FileSync
=== PAUSE TestFunctional/parallel/FileSync
=== CONT  TestFunctional/parallel/FileSync
functional_test.go:1929: Checking for existence of /etc/test/nested/copy/7495/hosts within VM
functional_test.go:1931: (dbg) Run:  out/minikube-darwin-arm64 -p functional-014000 ssh "sudo cat /etc/test/nested/copy/7495/hosts"
functional_test.go:1931: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-014000 ssh "sudo cat /etc/test/nested/copy/7495/hosts": exit status 83 (42.573ms)

-- stdout --
	* The control-plane node functional-014000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-014000"

-- /stdout --
functional_test.go:1933: out/minikube-darwin-arm64 -p functional-014000 ssh "sudo cat /etc/test/nested/copy/7495/hosts" failed: exit status 83
functional_test.go:1936: file sync test content: * The control-plane node functional-014000 host is not running: state=Stopped
To start a cluster, run: "minikube start -p functional-014000"
functional_test.go:1946: /etc/sync.test content mismatch (-want +got):
  strings.Join({
+ 	"* ",
  	"T",
- 	"est file for checking file sync process",
+ 	"he control-plane node functional-014000 host is not running: sta",
+ 	"te=Stopped\n  To start a cluster, run: \"minikube start -p functio",
+ 	"nal-014000\"\n",
  }, "")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p functional-014000 -n functional-014000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p functional-014000 -n functional-014000: exit status 7 (34.837791ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "functional-014000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestFunctional/parallel/FileSync (0.08s)

TestFunctional/parallel/CertSync (0.31s)

=== RUN   TestFunctional/parallel/CertSync
=== PAUSE TestFunctional/parallel/CertSync
=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1972: Checking for existence of /etc/ssl/certs/7495.pem within VM
functional_test.go:1973: (dbg) Run:  out/minikube-darwin-arm64 -p functional-014000 ssh "sudo cat /etc/ssl/certs/7495.pem"
functional_test.go:1973: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-014000 ssh "sudo cat /etc/ssl/certs/7495.pem": exit status 83 (49.417167ms)

-- stdout --
	* The control-plane node functional-014000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-014000"

-- /stdout --
functional_test.go:1975: failed to check existence of "/etc/ssl/certs/7495.pem" inside minikube. args "out/minikube-darwin-arm64 -p functional-014000 ssh \"sudo cat /etc/ssl/certs/7495.pem\"": exit status 83
functional_test.go:1981: failed verify pem file. minikube_test.pem -> /etc/ssl/certs/7495.pem mismatch (-want +got):
  (
  	"""
- 	-----BEGIN CERTIFICATE-----
- 	MIIDsDCCApgCCQD5n0OIsOYIjDANBgkqhkiG9w0BAQsFADCBmTELMAkGA1UEBhMC
- 	VVMxEzARBgNVBAgMCkNhbGlmb3JuaWExFjAUBgNVBAcMDVNhbiBGcmFuY2lzY28x
- 	ETAPBgNVBAoMCG1pbmlrdWJlMRYwFAYDVQQLDA1QYXJ0eSBQYXJyb3RzMREwDwYD
- 	VQQDDAhtaW5pa3ViZTEfMB0GCSqGSIb3DQEJARYQbWluaWt1YmVAY25jZi5pbzAe
- 	Fw0yMDAzMDQyMTU2MjZaFw0yMTAzMDQyMTU2MjZaMIGZMQswCQYDVQQGEwJVUzET
- 	MBEGA1UECAwKQ2FsaWZvcm5pYTEWMBQGA1UEBwwNU2FuIEZyYW5jaXNjbzERMA8G
- 	A1UECgwIbWluaWt1YmUxFjAUBgNVBAsMDVBhcnR5IFBhcnJvdHMxETAPBgNVBAMM
- 	CG1pbmlrdWJlMR8wHQYJKoZIhvcNAQkBFhBtaW5pa3ViZUBjbmNmLmlvMIIBIjAN
- 	BgkqhkiG9w0BAQEFAAOCAQ8AMIIBCgKCAQEA/qVMQ/234ul5yWI1yaHvV4pZ5Ffy
- 	M0bSMjzZUwlsvzerXzF3WrdpeZs5GzBNBWL/Db9KziGHCtfX9j5okJqPvB2lxdL5
- 	d5hFIYSORSemYLX2kdlnlykY5fzmFLKIUO9xXs0YNF4JUMEBgGK6n/BdLvXDUULZ
- 	26QOKs6+iH7TAL4RtozxQ8YXKQArdmpeAvxy2PSZGvVk1htKtyuKQsiFqH3oRleK
- 	3mljXfC5LsoIJHqd/8lAsckH87+IfwYnJ1CNJM2gueaCf+HmudVrvXfHaszh1Wh1
- 	9HKPE95Azi6CKoBGlRGFxt8UR72YIcTjC/lYxzbHeCpU7RCiXfsC0iMTlQIDAQAB
- 	MA0GCSqGSIb3DQEBCwUAA4IBAQBhsKnghyBki4NOnK5gHm7ow+7S+xvkjJhXBQ6i
- 	/xQD4/GCZ1tH5iFHXmo+bt4jB9hvKLyN5M5a8TlDwwqTLIoPDQJh37UpSCwbY/6z
- 	nE2aP3N2ue1/DeY60tgAh1c1uJDMeTiFbalJqSkneaHpNfvEQhUORFoN4yQSVEYg
- 	+T9mzTAWQ55TeBgbRevmA25nXHdPAae1MvJWDeG+aJfhq1I2hCwaitJ3iSwgn2ew
- 	637It/aBkMLhsCKTHxlXDGUX401ddbc0ZiC308cyMbis3iBeh4RBjkFxP8eIWFmK
- 	sos/dyqdua742L1cOKYFbLJfjA1VyxJQUxQvWKkbaq0xi7ao
- 	-----END CERTIFICATE-----
+ 	* The control-plane node functional-014000 host is not running: state=Stopped
+ 	  To start a cluster, run: "minikube start -p functional-014000"
  	"""
  )
functional_test.go:1972: Checking for existence of /usr/share/ca-certificates/7495.pem within VM
functional_test.go:1973: (dbg) Run:  out/minikube-darwin-arm64 -p functional-014000 ssh "sudo cat /usr/share/ca-certificates/7495.pem"
functional_test.go:1973: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-014000 ssh "sudo cat /usr/share/ca-certificates/7495.pem": exit status 83 (42.8835ms)

-- stdout --
	* The control-plane node functional-014000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-014000"

-- /stdout --
functional_test.go:1975: failed to check existence of "/usr/share/ca-certificates/7495.pem" inside minikube. args "out/minikube-darwin-arm64 -p functional-014000 ssh \"sudo cat /usr/share/ca-certificates/7495.pem\"": exit status 83
functional_test.go:1981: failed verify pem file. minikube_test.pem -> /usr/share/ca-certificates/7495.pem mismatch (-want +got):
  (
  	"""
- 	-----BEGIN CERTIFICATE-----
- 	MIIDsDCCApgCCQD5n0OIsOYIjDANBgkqhkiG9w0BAQsFADCBmTELMAkGA1UEBhMC
- 	VVMxEzARBgNVBAgMCkNhbGlmb3JuaWExFjAUBgNVBAcMDVNhbiBGcmFuY2lzY28x
- 	ETAPBgNVBAoMCG1pbmlrdWJlMRYwFAYDVQQLDA1QYXJ0eSBQYXJyb3RzMREwDwYD
- 	VQQDDAhtaW5pa3ViZTEfMB0GCSqGSIb3DQEJARYQbWluaWt1YmVAY25jZi5pbzAe
- 	Fw0yMDAzMDQyMTU2MjZaFw0yMTAzMDQyMTU2MjZaMIGZMQswCQYDVQQGEwJVUzET
- 	MBEGA1UECAwKQ2FsaWZvcm5pYTEWMBQGA1UEBwwNU2FuIEZyYW5jaXNjbzERMA8G
- 	A1UECgwIbWluaWt1YmUxFjAUBgNVBAsMDVBhcnR5IFBhcnJvdHMxETAPBgNVBAMM
- 	CG1pbmlrdWJlMR8wHQYJKoZIhvcNAQkBFhBtaW5pa3ViZUBjbmNmLmlvMIIBIjAN
- 	BgkqhkiG9w0BAQEFAAOCAQ8AMIIBCgKCAQEA/qVMQ/234ul5yWI1yaHvV4pZ5Ffy
- 	M0bSMjzZUwlsvzerXzF3WrdpeZs5GzBNBWL/Db9KziGHCtfX9j5okJqPvB2lxdL5
- 	d5hFIYSORSemYLX2kdlnlykY5fzmFLKIUO9xXs0YNF4JUMEBgGK6n/BdLvXDUULZ
- 	26QOKs6+iH7TAL4RtozxQ8YXKQArdmpeAvxy2PSZGvVk1htKtyuKQsiFqH3oRleK
- 	3mljXfC5LsoIJHqd/8lAsckH87+IfwYnJ1CNJM2gueaCf+HmudVrvXfHaszh1Wh1
- 	9HKPE95Azi6CKoBGlRGFxt8UR72YIcTjC/lYxzbHeCpU7RCiXfsC0iMTlQIDAQAB
- 	MA0GCSqGSIb3DQEBCwUAA4IBAQBhsKnghyBki4NOnK5gHm7ow+7S+xvkjJhXBQ6i
- 	/xQD4/GCZ1tH5iFHXmo+bt4jB9hvKLyN5M5a8TlDwwqTLIoPDQJh37UpSCwbY/6z
- 	nE2aP3N2ue1/DeY60tgAh1c1uJDMeTiFbalJqSkneaHpNfvEQhUORFoN4yQSVEYg
- 	+T9mzTAWQ55TeBgbRevmA25nXHdPAae1MvJWDeG+aJfhq1I2hCwaitJ3iSwgn2ew
- 	637It/aBkMLhsCKTHxlXDGUX401ddbc0ZiC308cyMbis3iBeh4RBjkFxP8eIWFmK
- 	sos/dyqdua742L1cOKYFbLJfjA1VyxJQUxQvWKkbaq0xi7ao
- 	-----END CERTIFICATE-----
+ 	* The control-plane node functional-014000 host is not running: state=Stopped
+ 	  To start a cluster, run: "minikube start -p functional-014000"
  	"""
  )
functional_test.go:1972: Checking for existence of /etc/ssl/certs/51391683.0 within VM
functional_test.go:1973: (dbg) Run:  out/minikube-darwin-arm64 -p functional-014000 ssh "sudo cat /etc/ssl/certs/51391683.0"
functional_test.go:1973: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-014000 ssh "sudo cat /etc/ssl/certs/51391683.0": exit status 83 (47.600334ms)

-- stdout --
	* The control-plane node functional-014000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-014000"

-- /stdout --
functional_test.go:1975: failed to check existence of "/etc/ssl/certs/51391683.0" inside minikube. args "out/minikube-darwin-arm64 -p functional-014000 ssh \"sudo cat /etc/ssl/certs/51391683.0\"": exit status 83
functional_test.go:1981: failed verify pem file. minikube_test.pem -> /etc/ssl/certs/51391683.0 mismatch (-want +got):
  (
  	"""
- 	-----BEGIN CERTIFICATE-----
- 	MIIDsDCCApgCCQD5n0OIsOYIjDANBgkqhkiG9w0BAQsFADCBmTELMAkGA1UEBhMC
- 	VVMxEzARBgNVBAgMCkNhbGlmb3JuaWExFjAUBgNVBAcMDVNhbiBGcmFuY2lzY28x
- 	ETAPBgNVBAoMCG1pbmlrdWJlMRYwFAYDVQQLDA1QYXJ0eSBQYXJyb3RzMREwDwYD
- 	VQQDDAhtaW5pa3ViZTEfMB0GCSqGSIb3DQEJARYQbWluaWt1YmVAY25jZi5pbzAe
- 	Fw0yMDAzMDQyMTU2MjZaFw0yMTAzMDQyMTU2MjZaMIGZMQswCQYDVQQGEwJVUzET
- 	MBEGA1UECAwKQ2FsaWZvcm5pYTEWMBQGA1UEBwwNU2FuIEZyYW5jaXNjbzERMA8G
- 	A1UECgwIbWluaWt1YmUxFjAUBgNVBAsMDVBhcnR5IFBhcnJvdHMxETAPBgNVBAMM
- 	CG1pbmlrdWJlMR8wHQYJKoZIhvcNAQkBFhBtaW5pa3ViZUBjbmNmLmlvMIIBIjAN
- 	BgkqhkiG9w0BAQEFAAOCAQ8AMIIBCgKCAQEA/qVMQ/234ul5yWI1yaHvV4pZ5Ffy
- 	M0bSMjzZUwlsvzerXzF3WrdpeZs5GzBNBWL/Db9KziGHCtfX9j5okJqPvB2lxdL5
- 	d5hFIYSORSemYLX2kdlnlykY5fzmFLKIUO9xXs0YNF4JUMEBgGK6n/BdLvXDUULZ
- 	26QOKs6+iH7TAL4RtozxQ8YXKQArdmpeAvxy2PSZGvVk1htKtyuKQsiFqH3oRleK
- 	3mljXfC5LsoIJHqd/8lAsckH87+IfwYnJ1CNJM2gueaCf+HmudVrvXfHaszh1Wh1
- 	9HKPE95Azi6CKoBGlRGFxt8UR72YIcTjC/lYxzbHeCpU7RCiXfsC0iMTlQIDAQAB
- 	MA0GCSqGSIb3DQEBCwUAA4IBAQBhsKnghyBki4NOnK5gHm7ow+7S+xvkjJhXBQ6i
- 	/xQD4/GCZ1tH5iFHXmo+bt4jB9hvKLyN5M5a8TlDwwqTLIoPDQJh37UpSCwbY/6z
- 	nE2aP3N2ue1/DeY60tgAh1c1uJDMeTiFbalJqSkneaHpNfvEQhUORFoN4yQSVEYg
- 	+T9mzTAWQ55TeBgbRevmA25nXHdPAae1MvJWDeG+aJfhq1I2hCwaitJ3iSwgn2ew
- 	637It/aBkMLhsCKTHxlXDGUX401ddbc0ZiC308cyMbis3iBeh4RBjkFxP8eIWFmK
- 	sos/dyqdua742L1cOKYFbLJfjA1VyxJQUxQvWKkbaq0xi7ao
- 	-----END CERTIFICATE-----
+ 	* The control-plane node functional-014000 host is not running: state=Stopped
+ 	  To start a cluster, run: "minikube start -p functional-014000"
  	"""
  )
functional_test.go:1999: Checking for existence of /etc/ssl/certs/74952.pem within VM
functional_test.go:2000: (dbg) Run:  out/minikube-darwin-arm64 -p functional-014000 ssh "sudo cat /etc/ssl/certs/74952.pem"
functional_test.go:2000: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-014000 ssh "sudo cat /etc/ssl/certs/74952.pem": exit status 83 (43.542417ms)

-- stdout --
	* The control-plane node functional-014000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-014000"

-- /stdout --
functional_test.go:2002: failed to check existence of "/etc/ssl/certs/74952.pem" inside minikube. args "out/minikube-darwin-arm64 -p functional-014000 ssh \"sudo cat /etc/ssl/certs/74952.pem\"": exit status 83
functional_test.go:2008: failed verify pem file. minikube_test2.pem -> /etc/ssl/certs/74952.pem mismatch (-want +got):
  (
  	"""
- 	-----BEGIN CERTIFICATE-----
- 	MIIEwDCCAqgCCQCUeXrVemI4eTANBgkqhkiG9w0BAQsFADAiMQswCQYDVQQGEwJV
- 	UzETMBEGA1UECAwKQ2FsaWZvcm5pYTAeFw0yMTA3MjEyMDM4MDdaFw0yMTA4MjAy
- 	MDM4MDdaMCIxCzAJBgNVBAYTAlVTMRMwEQYDVQQIDApDYWxpZm9ybmlhMIICIjAN
- 	BgkqhkiG9w0BAQEFAAOCAg8AMIICCgKCAgEAu1+sCiosrYIy83a+KtPdoGsKun+O
- 	jKhETWQrum5FGxqKyQzhHN8q6iZBI94m1sZb2xIJDcKaAsHpKf1z/5pkrWQW4JeL
- 	riqJ5U0kENy6mWGoFzPLQHkKMXSYUEkRjrgJphh5zLkWDzx6uwxHIrs5yaIwn71Z
- 	enYezHL8NyMd10+up1lNvH+xGqnSJ3ge7Rj+r5XbA3avvaJ9AVdiJMU5ArMw4Snu
- 	dLDSUueWFedmXoQud083EvDbMY61blNKUR6BKBJLPopH+9NnUP7FRU25lvTsA8qT
- 	zz/KertMFhCfwvK4y7a7+GbIPD3fItpm23GORuViEGgeRMH7GGSNVfm4VtK5mI4c
- 	XK9BrCE+FXowHOWU3MTH1qsvTigd3JZc8BKTaePJGMIDT1HIAaoK7SOoZOLka7bi
- 	IiLaojrIWY6tE5IIl46cQWLg09G+kjKVfEEvEHNe0t22I9etGeUUEwlcMITHmEdE
- 	WrXytC8dSNFObSG5B2MX2Ovm+yNblcK7TI7lW/tkbxtKs56J1mNmi4LXXBM8FdF8
- 	w9MpJc+ySVj2+f+eBE08amKjC9VVbBzNSw7MRaI9fFY5AAifJ8g55F3/KCNq5aTd
- 	rBADtAa5kQkTVjfMBhG++0Ow4f55hm73oJAy/qxb09OY7Vk9ky/K3l8GfWv8ozIF
- 	w+Oq6vdsspvtVJ8CAwEAATANBgkqhkiG9w0BAQsFAAOCAgEAGKVxsf13kYGaQJ+J
- 	6eyRZXlV5Bp+9EGtMPGsuVv2HJa4oxMBn7Xc/bUhjY9Is/ZwfOpPUPO/nQtSSPmO
- 	aozQj/27p8HDTW201fwLNiZMcppBdJvIQdDzCh6e2ikg3lqsw2BoLX1vbgc9HPml
- 	P8QCHEz2lricGdTuMRtBgH5x/ZkZGLbADQBeyoPTsPaQceRt5hPYXWifqiHhcJoL
- 	2T+XgbaHJ4lEhCU0IXJG0vlLuAyxQzO3gMeHK8BlLt/h/JCiDndo63a4XCkenmY8
- 	8/6Y9Lgh+O3954YgwdXBIS33CzhY7c+tfpag1hwpDHro/zsyLwdN2JxZqWymfg8T
- 	RyIeJ5VpY+CGm1fVTx84twQbiM241eZDYaW6Ap5/ZuxPbzY++KDMZHMuJYURorAU
- 	JE/SE6WltXpKTvtzyPNKt8fgPQmFzlBosDZCf/5EiyVqbLzideF1l+Rd//5gRBg0
- 	B63fOlCdxGtDeX1gKzekV4sXQIsWKW1BSCEInJt5lOS8Ex4JTXy8crwKo7hv0zPc
- 	sOjAXbtDYlPf/jPFGKzkgFACB87Bx4ZUZMp/ShOzjMt20MLGLJvCGCQIHetUz+GG
- 	/LTwSNhgWCheYkbDwXhCmqbN249xE3fNHC6zQ/IMJ30v/UWN4RldGfFzmoVBRUUX
- 	eQ7g5kXER7H/Lh/2V9FyaRhPpZM=
- 	-----END CERTIFICATE-----
+ 	* The control-plane node functional-014000 host is not running: state=Stopped
+ 	  To start a cluster, run: "minikube start -p functional-014000"
  	"""
  )
functional_test.go:1999: Checking for existence of /usr/share/ca-certificates/74952.pem within VM
functional_test.go:2000: (dbg) Run:  out/minikube-darwin-arm64 -p functional-014000 ssh "sudo cat /usr/share/ca-certificates/74952.pem"
functional_test.go:2000: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-014000 ssh "sudo cat /usr/share/ca-certificates/74952.pem": exit status 83 (44.656542ms)

-- stdout --
	* The control-plane node functional-014000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-014000"

-- /stdout --
functional_test.go:2002: failed to check existence of "/usr/share/ca-certificates/74952.pem" inside minikube. args "out/minikube-darwin-arm64 -p functional-014000 ssh \"sudo cat /usr/share/ca-certificates/74952.pem\"": exit status 83
functional_test.go:2008: failed verify pem file. minikube_test2.pem -> /usr/share/ca-certificates/74952.pem mismatch (-want +got):
  (
  	"""
- 	-----BEGIN CERTIFICATE-----
- 	MIIEwDCCAqgCCQCUeXrVemI4eTANBgkqhkiG9w0BAQsFADAiMQswCQYDVQQGEwJV
- 	UzETMBEGA1UECAwKQ2FsaWZvcm5pYTAeFw0yMTA3MjEyMDM4MDdaFw0yMTA4MjAy
- 	MDM4MDdaMCIxCzAJBgNVBAYTAlVTMRMwEQYDVQQIDApDYWxpZm9ybmlhMIICIjAN
- 	BgkqhkiG9w0BAQEFAAOCAg8AMIICCgKCAgEAu1+sCiosrYIy83a+KtPdoGsKun+O
- 	jKhETWQrum5FGxqKyQzhHN8q6iZBI94m1sZb2xIJDcKaAsHpKf1z/5pkrWQW4JeL
- 	riqJ5U0kENy6mWGoFzPLQHkKMXSYUEkRjrgJphh5zLkWDzx6uwxHIrs5yaIwn71Z
- 	enYezHL8NyMd10+up1lNvH+xGqnSJ3ge7Rj+r5XbA3avvaJ9AVdiJMU5ArMw4Snu
- 	dLDSUueWFedmXoQud083EvDbMY61blNKUR6BKBJLPopH+9NnUP7FRU25lvTsA8qT
- 	zz/KertMFhCfwvK4y7a7+GbIPD3fItpm23GORuViEGgeRMH7GGSNVfm4VtK5mI4c
- 	XK9BrCE+FXowHOWU3MTH1qsvTigd3JZc8BKTaePJGMIDT1HIAaoK7SOoZOLka7bi
- 	IiLaojrIWY6tE5IIl46cQWLg09G+kjKVfEEvEHNe0t22I9etGeUUEwlcMITHmEdE
- 	WrXytC8dSNFObSG5B2MX2Ovm+yNblcK7TI7lW/tkbxtKs56J1mNmi4LXXBM8FdF8
- 	w9MpJc+ySVj2+f+eBE08amKjC9VVbBzNSw7MRaI9fFY5AAifJ8g55F3/KCNq5aTd
- 	rBADtAa5kQkTVjfMBhG++0Ow4f55hm73oJAy/qxb09OY7Vk9ky/K3l8GfWv8ozIF
- 	w+Oq6vdsspvtVJ8CAwEAATANBgkqhkiG9w0BAQsFAAOCAgEAGKVxsf13kYGaQJ+J
- 	6eyRZXlV5Bp+9EGtMPGsuVv2HJa4oxMBn7Xc/bUhjY9Is/ZwfOpPUPO/nQtSSPmO
- 	aozQj/27p8HDTW201fwLNiZMcppBdJvIQdDzCh6e2ikg3lqsw2BoLX1vbgc9HPml
- 	P8QCHEz2lricGdTuMRtBgH5x/ZkZGLbADQBeyoPTsPaQceRt5hPYXWifqiHhcJoL
- 	2T+XgbaHJ4lEhCU0IXJG0vlLuAyxQzO3gMeHK8BlLt/h/JCiDndo63a4XCkenmY8
- 	8/6Y9Lgh+O3954YgwdXBIS33CzhY7c+tfpag1hwpDHro/zsyLwdN2JxZqWymfg8T
- 	RyIeJ5VpY+CGm1fVTx84twQbiM241eZDYaW6Ap5/ZuxPbzY++KDMZHMuJYURorAU
- 	JE/SE6WltXpKTvtzyPNKt8fgPQmFzlBosDZCf/5EiyVqbLzideF1l+Rd//5gRBg0
- 	B63fOlCdxGtDeX1gKzekV4sXQIsWKW1BSCEInJt5lOS8Ex4JTXy8crwKo7hv0zPc
- 	sOjAXbtDYlPf/jPFGKzkgFACB87Bx4ZUZMp/ShOzjMt20MLGLJvCGCQIHetUz+GG
- 	/LTwSNhgWCheYkbDwXhCmqbN249xE3fNHC6zQ/IMJ30v/UWN4RldGfFzmoVBRUUX
- 	eQ7g5kXER7H/Lh/2V9FyaRhPpZM=
- 	-----END CERTIFICATE-----
+ 	* The control-plane node functional-014000 host is not running: state=Stopped
+ 	  To start a cluster, run: "minikube start -p functional-014000"
  	"""
  )
functional_test.go:1999: Checking for existence of /etc/ssl/certs/3ec20f2e.0 within VM
functional_test.go:2000: (dbg) Run:  out/minikube-darwin-arm64 -p functional-014000 ssh "sudo cat /etc/ssl/certs/3ec20f2e.0"
functional_test.go:2000: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-014000 ssh "sudo cat /etc/ssl/certs/3ec20f2e.0": exit status 83 (45.858958ms)

                                                
                                                
-- stdout --
	* The control-plane node functional-014000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-014000"

                                                
                                                
-- /stdout --
functional_test.go:2002: failed to check existence of "/etc/ssl/certs/3ec20f2e.0" inside minikube. args "out/minikube-darwin-arm64 -p functional-014000 ssh \"sudo cat /etc/ssl/certs/3ec20f2e.0\"": exit status 83
functional_test.go:2008: failed verify pem file. minikube_test2.pem -> /etc/ssl/certs/3ec20f2e.0 mismatch (-want +got):
  (
  	"""
- 	-----BEGIN CERTIFICATE-----
- 	MIIEwDCCAqgCCQCUeXrVemI4eTANBgkqhkiG9w0BAQsFADAiMQswCQYDVQQGEwJV
- 	UzETMBEGA1UECAwKQ2FsaWZvcm5pYTAeFw0yMTA3MjEyMDM4MDdaFw0yMTA4MjAy
- 	MDM4MDdaMCIxCzAJBgNVBAYTAlVTMRMwEQYDVQQIDApDYWxpZm9ybmlhMIICIjAN
- 	BgkqhkiG9w0BAQEFAAOCAg8AMIICCgKCAgEAu1+sCiosrYIy83a+KtPdoGsKun+O
- 	jKhETWQrum5FGxqKyQzhHN8q6iZBI94m1sZb2xIJDcKaAsHpKf1z/5pkrWQW4JeL
- 	riqJ5U0kENy6mWGoFzPLQHkKMXSYUEkRjrgJphh5zLkWDzx6uwxHIrs5yaIwn71Z
- 	enYezHL8NyMd10+up1lNvH+xGqnSJ3ge7Rj+r5XbA3avvaJ9AVdiJMU5ArMw4Snu
- 	dLDSUueWFedmXoQud083EvDbMY61blNKUR6BKBJLPopH+9NnUP7FRU25lvTsA8qT
- 	zz/KertMFhCfwvK4y7a7+GbIPD3fItpm23GORuViEGgeRMH7GGSNVfm4VtK5mI4c
- 	XK9BrCE+FXowHOWU3MTH1qsvTigd3JZc8BKTaePJGMIDT1HIAaoK7SOoZOLka7bi
- 	IiLaojrIWY6tE5IIl46cQWLg09G+kjKVfEEvEHNe0t22I9etGeUUEwlcMITHmEdE
- 	WrXytC8dSNFObSG5B2MX2Ovm+yNblcK7TI7lW/tkbxtKs56J1mNmi4LXXBM8FdF8
- 	w9MpJc+ySVj2+f+eBE08amKjC9VVbBzNSw7MRaI9fFY5AAifJ8g55F3/KCNq5aTd
- 	rBADtAa5kQkTVjfMBhG++0Ow4f55hm73oJAy/qxb09OY7Vk9ky/K3l8GfWv8ozIF
- 	w+Oq6vdsspvtVJ8CAwEAATANBgkqhkiG9w0BAQsFAAOCAgEAGKVxsf13kYGaQJ+J
- 	6eyRZXlV5Bp+9EGtMPGsuVv2HJa4oxMBn7Xc/bUhjY9Is/ZwfOpPUPO/nQtSSPmO
- 	aozQj/27p8HDTW201fwLNiZMcppBdJvIQdDzCh6e2ikg3lqsw2BoLX1vbgc9HPml
- 	P8QCHEz2lricGdTuMRtBgH5x/ZkZGLbADQBeyoPTsPaQceRt5hPYXWifqiHhcJoL
- 	2T+XgbaHJ4lEhCU0IXJG0vlLuAyxQzO3gMeHK8BlLt/h/JCiDndo63a4XCkenmY8
- 	8/6Y9Lgh+O3954YgwdXBIS33CzhY7c+tfpag1hwpDHro/zsyLwdN2JxZqWymfg8T
- 	RyIeJ5VpY+CGm1fVTx84twQbiM241eZDYaW6Ap5/ZuxPbzY++KDMZHMuJYURorAU
- 	JE/SE6WltXpKTvtzyPNKt8fgPQmFzlBosDZCf/5EiyVqbLzideF1l+Rd//5gRBg0
- 	B63fOlCdxGtDeX1gKzekV4sXQIsWKW1BSCEInJt5lOS8Ex4JTXy8crwKo7hv0zPc
- 	sOjAXbtDYlPf/jPFGKzkgFACB87Bx4ZUZMp/ShOzjMt20MLGLJvCGCQIHetUz+GG
- 	/LTwSNhgWCheYkbDwXhCmqbN249xE3fNHC6zQ/IMJ30v/UWN4RldGfFzmoVBRUUX
- 	eQ7g5kXER7H/Lh/2V9FyaRhPpZM=
- 	-----END CERTIFICATE-----
+ 	* The control-plane node functional-014000 host is not running: state=Stopped
+ 	  To start a cluster, run: "minikube start -p functional-014000"
  	"""
  )
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p functional-014000 -n functional-014000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p functional-014000 -n functional-014000: exit status 7 (36.0615ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "functional-014000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestFunctional/parallel/CertSync (0.31s)
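
Every CertSync assertion above fails the same way: with the guest Stopped, `minikube ssh` exits with status 83 and prints the start hint, which the test then diffs against the expected PEM. A minimal Go sketch of this style of check, assuming the binary path, profile name, and cert path shown in the log (an illustration, not the test's actual code):

package main

import (
	"bytes"
	"fmt"
	"os"
	"os/exec"
)

func main() {
	// Local copy of the certificate the test expects to find in the guest.
	want, err := os.ReadFile("minikube_test2.pem")
	if err != nil {
		panic(err)
	}
	// With the host Stopped this exits 83 and prints the
	// "To start a cluster" hint instead of the certificate.
	got, _ := exec.Command("out/minikube-darwin-arm64", "-p", "functional-014000",
		"ssh", "sudo cat /etc/ssl/certs/74952.pem").Output()
	if !bytes.Equal(bytes.TrimSpace(want), bytes.TrimSpace(got)) {
		fmt.Println("mismatch: guest copy does not match minikube_test2.pem")
	}
}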

                                                
                                    
TestFunctional/parallel/NodeLabels (0.06s)

                                                
                                                
=== RUN   TestFunctional/parallel/NodeLabels
=== PAUSE TestFunctional/parallel/NodeLabels

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/NodeLabels
functional_test.go:219: (dbg) Run:  kubectl --context functional-014000 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'"
functional_test.go:219: (dbg) Non-zero exit: kubectl --context functional-014000 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'": exit status 1 (27.187041ms)

                                                
                                                
** stderr ** 
	Error in configuration: context was not found for specified context: functional-014000

                                                
                                                
** /stderr **
functional_test.go:221: failed to 'kubectl get nodes' with args "kubectl --context functional-014000 get nodes --output=go-template \"--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'\"": exit status 1
functional_test.go:227: expected to have label "minikube.k8s.io/commit" in node labels but got : 
** stderr ** 
	Error in configuration: context was not found for specified context: functional-014000

                                                
                                                
** /stderr **
functional_test.go:227: expected to have label "minikube.k8s.io/version" in node labels but got : 
** stderr ** 
	Error in configuration: context was not found for specified context: functional-014000

                                                
                                                
** /stderr **
functional_test.go:227: expected to have label "minikube.k8s.io/updated_at" in node labels but got : 
** stderr ** 
	Error in configuration: context was not found for specified context: functional-014000

                                                
                                                
** /stderr **
functional_test.go:227: expected to have label "minikube.k8s.io/name" in node labels but got : 
** stderr ** 
	Error in configuration: context was not found for specified context: functional-014000

                                                
                                                
** /stderr **
functional_test.go:227: expected to have label "minikube.k8s.io/primary" in node labels but got : 
** stderr ** 
	Error in configuration: context was not found for specified context: functional-014000

                                                
                                                
** /stderr **
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p functional-014000 -n functional-014000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p functional-014000 -n functional-014000: exit status 7 (35.017209ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "functional-014000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestFunctional/parallel/NodeLabels (0.06s)
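
NodeLabels fails one step earlier than the ssh-based tests: the kubeconfig has no functional-014000 context at all, so kubectl exits 1 before any labels can be read. A minimal sketch of the label check, reusing the go-template from the log; the loop over expected keys is an illustration of the assertions, not the test's code:

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	// List the label keys of the first node via a go-template.
	out, err := exec.Command("kubectl", "--context", "functional-014000",
		"get", "nodes", "--output=go-template",
		"--template={{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}").CombinedOutput()
	if err != nil {
		// With no such context, kubectl fails here with
		// "context was not found for specified context".
		fmt.Println("kubectl failed:", err)
		return
	}
	for _, label := range []string{"minikube.k8s.io/commit", "minikube.k8s.io/version",
		"minikube.k8s.io/updated_at", "minikube.k8s.io/name", "minikube.k8s.io/primary"} {
		if !strings.Contains(string(out), label) {
			fmt.Println("missing label:", label)
		}
	}
}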

                                                
                                    
TestFunctional/parallel/NonActiveRuntimeDisabled (0.04s)

                                                
                                                
=== RUN   TestFunctional/parallel/NonActiveRuntimeDisabled
=== PAUSE TestFunctional/parallel/NonActiveRuntimeDisabled

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/NonActiveRuntimeDisabled
functional_test.go:2027: (dbg) Run:  out/minikube-darwin-arm64 -p functional-014000 ssh "sudo systemctl is-active crio"
functional_test.go:2027: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-014000 ssh "sudo systemctl is-active crio": exit status 83 (42.517584ms)

                                                
                                                
-- stdout --
	* The control-plane node functional-014000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-014000"

                                                
                                                
-- /stdout --
functional_test.go:2030: output of 
-- stdout --
	* The control-plane node functional-014000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-014000"

                                                
                                                
-- /stdout --: exit status 83
functional_test.go:2033: For runtime "docker": expected "crio" to be inactive but got "* The control-plane node functional-014000 host is not running: state=Stopped\n  To start a cluster, run: \"minikube start -p functional-014000\"\n" 
--- FAIL: TestFunctional/parallel/NonActiveRuntimeDisabled (0.04s)
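
NonActiveRuntimeDisabled needs only one ssh round-trip: for a Docker-runtime profile it expects `systemctl is-active crio` to report "inactive", but here the ssh itself aborts with the start hint. A minimal sketch of the check, under the same path and profile assumptions as above:

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	// systemctl is-active prints the unit state on stdout; "inactive"
	// is the expected answer for crio on a Docker-runtime profile.
	out, _ := exec.Command("out/minikube-darwin-arm64", "-p", "functional-014000",
		"ssh", "sudo systemctl is-active crio").CombinedOutput()
	state := strings.TrimSpace(string(out))
	if state != "inactive" {
		fmt.Printf("expected crio to be inactive, got %q\n", state)
	}
}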

                                                
                                    
TestFunctional/parallel/Version/components (0.05s)

                                                
                                                
=== RUN   TestFunctional/parallel/Version/components
=== PAUSE TestFunctional/parallel/Version/components

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/Version/components
functional_test.go:2270: (dbg) Run:  out/minikube-darwin-arm64 -p functional-014000 version -o=json --components
functional_test.go:2270: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-014000 version -o=json --components: exit status 83 (46.988792ms)

                                                
                                                
-- stdout --
	* The control-plane node functional-014000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-014000"

                                                
                                                
-- /stdout --
functional_test.go:2272: error version: exit status 83
functional_test.go:2277: expected to see "buildctl" in the minikube version --components but got:
* The control-plane node functional-014000 host is not running: state=Stopped
To start a cluster, run: "minikube start -p functional-014000"
functional_test.go:2277: expected to see "commit" in the minikube version --components but got:
* The control-plane node functional-014000 host is not running: state=Stopped
To start a cluster, run: "minikube start -p functional-014000"
functional_test.go:2277: expected to see "containerd" in the minikube version --components but got:
* The control-plane node functional-014000 host is not running: state=Stopped
To start a cluster, run: "minikube start -p functional-014000"
functional_test.go:2277: expected to see "crictl" in the minikube version --components but got:
* The control-plane node functional-014000 host is not running: state=Stopped
To start a cluster, run: "minikube start -p functional-014000"
functional_test.go:2277: expected to see "crio" in the minikube version --components but got:
* The control-plane node functional-014000 host is not running: state=Stopped
To start a cluster, run: "minikube start -p functional-014000"
functional_test.go:2277: expected to see "ctr" in the minikube version --components but got:
* The control-plane node functional-014000 host is not running: state=Stopped
To start a cluster, run: "minikube start -p functional-014000"
functional_test.go:2277: expected to see "docker" in the minikube version --components but got:
* The control-plane node functional-014000 host is not running: state=Stopped
To start a cluster, run: "minikube start -p functional-014000"
functional_test.go:2277: expected to see "minikubeVersion" in the minikube version --components but got:
* The control-plane node functional-014000 host is not running: state=Stopped
To start a cluster, run: "minikube start -p functional-014000"
functional_test.go:2277: expected to see "podman" in the minikube version --components but got:
* The control-plane node functional-014000 host is not running: state=Stopped
To start a cluster, run: "minikube start -p functional-014000"
functional_test.go:2277: expected to see "crun" in the minikube version --components but got:
* The control-plane node functional-014000 host is not running: state=Stopped
To start a cluster, run: "minikube start -p functional-014000"
--- FAIL: TestFunctional/parallel/Version/components (0.05s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageListShort (0.04s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListShort
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListShort

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListShort
functional_test.go:261: (dbg) Run:  out/minikube-darwin-arm64 -p functional-014000 image ls --format short --alsologtostderr
functional_test.go:266: (dbg) Stdout: out/minikube-darwin-arm64 -p functional-014000 image ls --format short --alsologtostderr:

                                                
                                                
functional_test.go:269: (dbg) Stderr: out/minikube-darwin-arm64 -p functional-014000 image ls --format short --alsologtostderr:
I1204 15:23:10.442517    8147 out.go:345] Setting OutFile to fd 1 ...
I1204 15:23:10.442718    8147 out.go:392] TERM=,COLORTERM=, which probably does not support color
I1204 15:23:10.442722    8147 out.go:358] Setting ErrFile to fd 2...
I1204 15:23:10.442724    8147 out.go:392] TERM=,COLORTERM=, which probably does not support color
I1204 15:23:10.442854    8147 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/20045-6982/.minikube/bin
I1204 15:23:10.443272    8147 config.go:182] Loaded profile config "functional-014000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.2
I1204 15:23:10.443330    8147 config.go:182] Loaded profile config "functional-014000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.2
functional_test.go:275: expected registry.k8s.io/pause to be listed with minikube but the image is not there
--- FAIL: TestFunctional/parallel/ImageCommands/ImageListShort (0.04s)
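
ImageListShort, and the table/json/yaml variants that follow, fail identically: with no running guest, `image ls` returns an empty list, and each test looks for registry.k8s.io/pause in its format. A sketch of such a check against the JSON format; the Image struct's repoTags field is an assumed shape for illustration only:

package main

import (
	"encoding/json"
	"fmt"
	"os/exec"
	"strings"
)

type Image struct {
	RepoTags []string `json:"repoTags"` // assumed field name for this sketch
}

func main() {
	out, err := exec.Command("out/minikube-darwin-arm64", "-p", "functional-014000",
		"image", "ls", "--format", "json").Output()
	if err != nil {
		panic(err)
	}
	var images []Image
	if err := json.Unmarshal(out, &images); err != nil {
		panic(err)
	}
	for _, img := range images { // empty slice here while the guest is stopped
		for _, tag := range img.RepoTags {
			if strings.HasPrefix(tag, "registry.k8s.io/pause") {
				fmt.Println("found:", tag)
				return
			}
		}
	}
	fmt.Println("registry.k8s.io/pause not listed")
}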

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageListTable (0.04s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListTable
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListTable

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListTable
functional_test.go:261: (dbg) Run:  out/minikube-darwin-arm64 -p functional-014000 image ls --format table --alsologtostderr
functional_test.go:266: (dbg) Stdout: out/minikube-darwin-arm64 -p functional-014000 image ls --format table --alsologtostderr:
|-------|-----|----------|------|
| Image | Tag | Image ID | Size |
|-------|-----|----------|------|
|-------|-----|----------|------|
functional_test.go:269: (dbg) Stderr: out/minikube-darwin-arm64 -p functional-014000 image ls --format table --alsologtostderr:
I1204 15:23:10.688910    8159 out.go:345] Setting OutFile to fd 1 ...
I1204 15:23:10.689118    8159 out.go:392] TERM=,COLORTERM=, which probably does not support color
I1204 15:23:10.689121    8159 out.go:358] Setting ErrFile to fd 2...
I1204 15:23:10.689124    8159 out.go:392] TERM=,COLORTERM=, which probably does not support color
I1204 15:23:10.689265    8159 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/20045-6982/.minikube/bin
I1204 15:23:10.689685    8159 config.go:182] Loaded profile config "functional-014000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.2
I1204 15:23:10.689748    8159 config.go:182] Loaded profile config "functional-014000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.2
functional_test.go:275: expected | registry.k8s.io/pause to be listed with minikube but the image is not there
I1204 15:23:13.035167    7495 retry.go:31] will retry after 29.792638681s: Temporary Error: Get "http:": http: no Host in request URL
I1204 15:23:42.830490    7495 retry.go:31] will retry after 34.998183273s: Temporary Error: Get "http:": http: no Host in request URL
--- FAIL: TestFunctional/parallel/ImageCommands/ImageListTable (0.04s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageListJson (0.04s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListJson
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListJson

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListJson
functional_test.go:261: (dbg) Run:  out/minikube-darwin-arm64 -p functional-014000 image ls --format json --alsologtostderr
functional_test.go:266: (dbg) Stdout: out/minikube-darwin-arm64 -p functional-014000 image ls --format json --alsologtostderr:
[]
functional_test.go:269: (dbg) Stderr: out/minikube-darwin-arm64 -p functional-014000 image ls --format json --alsologtostderr:
I1204 15:23:10.648038    8157 out.go:345] Setting OutFile to fd 1 ...
I1204 15:23:10.648217    8157 out.go:392] TERM=,COLORTERM=, which probably does not support color
I1204 15:23:10.648220    8157 out.go:358] Setting ErrFile to fd 2...
I1204 15:23:10.648223    8157 out.go:392] TERM=,COLORTERM=, which probably does not support color
I1204 15:23:10.648359    8157 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/20045-6982/.minikube/bin
I1204 15:23:10.648805    8157 config.go:182] Loaded profile config "functional-014000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.2
I1204 15:23:10.648865    8157 config.go:182] Loaded profile config "functional-014000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.2
functional_test.go:275: expected ["registry.k8s.io/pause to be listed with minikube but the image is not there
--- FAIL: TestFunctional/parallel/ImageCommands/ImageListJson (0.04s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageListYaml (0.04s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListYaml
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListYaml

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListYaml
functional_test.go:261: (dbg) Run:  out/minikube-darwin-arm64 -p functional-014000 image ls --format yaml --alsologtostderr
functional_test.go:266: (dbg) Stdout: out/minikube-darwin-arm64 -p functional-014000 image ls --format yaml --alsologtostderr:
[]

                                                
                                                
functional_test.go:269: (dbg) Stderr: out/minikube-darwin-arm64 -p functional-014000 image ls --format yaml --alsologtostderr:
I1204 15:23:10.481949    8149 out.go:345] Setting OutFile to fd 1 ...
I1204 15:23:10.482107    8149 out.go:392] TERM=,COLORTERM=, which probably does not support color
I1204 15:23:10.482110    8149 out.go:358] Setting ErrFile to fd 2...
I1204 15:23:10.482113    8149 out.go:392] TERM=,COLORTERM=, which probably does not support color
I1204 15:23:10.482250    8149 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/20045-6982/.minikube/bin
I1204 15:23:10.482751    8149 config.go:182] Loaded profile config "functional-014000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.2
I1204 15:23:10.482810    8149 config.go:182] Loaded profile config "functional-014000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.2
functional_test.go:275: expected - registry.k8s.io/pause to be listed with minikube but the image is not there
--- FAIL: TestFunctional/parallel/ImageCommands/ImageListYaml (0.04s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageBuild (0.13s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageBuild
=== PAUSE TestFunctional/parallel/ImageCommands/ImageBuild

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageBuild
functional_test.go:308: (dbg) Run:  out/minikube-darwin-arm64 -p functional-014000 ssh pgrep buildkitd
functional_test.go:308: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-014000 ssh pgrep buildkitd: exit status 83 (44.8765ms)

                                                
                                                
-- stdout --
	* The control-plane node functional-014000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-014000"

                                                
                                                
-- /stdout --
functional_test.go:315: (dbg) Run:  out/minikube-darwin-arm64 -p functional-014000 image build -t localhost/my-image:functional-014000 testdata/build --alsologtostderr
functional_test.go:323: (dbg) Stderr: out/minikube-darwin-arm64 -p functional-014000 image build -t localhost/my-image:functional-014000 testdata/build --alsologtostderr:
I1204 15:23:10.566769    8153 out.go:345] Setting OutFile to fd 1 ...
I1204 15:23:10.567292    8153 out.go:392] TERM=,COLORTERM=, which probably does not support color
I1204 15:23:10.567295    8153 out.go:358] Setting ErrFile to fd 2...
I1204 15:23:10.567298    8153 out.go:392] TERM=,COLORTERM=, which probably does not support color
I1204 15:23:10.567447    8153 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/20045-6982/.minikube/bin
I1204 15:23:10.567847    8153 config.go:182] Loaded profile config "functional-014000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.2
I1204 15:23:10.568305    8153 config.go:182] Loaded profile config "functional-014000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.2
I1204 15:23:10.568576    8153 build_images.go:133] succeeded building to: 
I1204 15:23:10.568579    8153 build_images.go:134] failed building to: 
functional_test.go:451: (dbg) Run:  out/minikube-darwin-arm64 -p functional-014000 image ls
functional_test.go:446: expected "localhost/my-image:functional-014000" to be loaded into minikube but the image is not there
--- FAIL: TestFunctional/parallel/ImageCommands/ImageBuild (0.13s)

                                                
                                    
TestFunctional/parallel/DockerEnv/bash (0.05s)

                                                
                                                
=== RUN   TestFunctional/parallel/DockerEnv/bash
functional_test.go:499: (dbg) Run:  /bin/bash -c "eval $(out/minikube-darwin-arm64 -p functional-014000 docker-env) && out/minikube-darwin-arm64 status -p functional-014000"
functional_test.go:499: (dbg) Non-zero exit: /bin/bash -c "eval $(out/minikube-darwin-arm64 -p functional-014000 docker-env) && out/minikube-darwin-arm64 status -p functional-014000": exit status 1 (46.821542ms)
functional_test.go:505: failed to do status after eval-ing docker-env. error: exit status 1
--- FAIL: TestFunctional/parallel/DockerEnv/bash (0.05s)

                                                
                                    
TestFunctional/parallel/UpdateContextCmd/no_changes (0.05s)

                                                
                                                
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_changes
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_changes

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_changes
functional_test.go:2119: (dbg) Run:  out/minikube-darwin-arm64 -p functional-014000 update-context --alsologtostderr -v=2
functional_test.go:2119: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-014000 update-context --alsologtostderr -v=2: exit status 83 (52.700667ms)

                                                
                                                
-- stdout --
	* The control-plane node functional-014000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-014000"

                                                
                                                
-- /stdout --
** stderr ** 
	I1204 15:23:10.294056    8139 out.go:345] Setting OutFile to fd 1 ...
	I1204 15:23:10.294852    8139 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1204 15:23:10.294855    8139 out.go:358] Setting ErrFile to fd 2...
	I1204 15:23:10.294858    8139 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1204 15:23:10.294984    8139 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/20045-6982/.minikube/bin
	I1204 15:23:10.295176    8139 mustload.go:65] Loading cluster: functional-014000
	I1204 15:23:10.295374    8139 config.go:182] Loaded profile config "functional-014000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.2
	I1204 15:23:10.299907    8139 out.go:177] * The control-plane node functional-014000 host is not running: state=Stopped
	I1204 15:23:10.307914    8139 out.go:177]   To start a cluster, run: "minikube start -p functional-014000"

                                                
                                                
** /stderr **
functional_test.go:2121: failed to run minikube update-context: args "out/minikube-darwin-arm64 -p functional-014000 update-context --alsologtostderr -v=2": exit status 83
functional_test.go:2126: update-context: got="* The control-plane node functional-014000 host is not running: state=Stopped\n  To start a cluster, run: \"minikube start -p functional-014000\"\n", want=*"No changes"*
--- FAIL: TestFunctional/parallel/UpdateContextCmd/no_changes (0.05s)

                                                
                                    
TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.05s)

                                                
                                                
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
functional_test.go:2119: (dbg) Run:  out/minikube-darwin-arm64 -p functional-014000 update-context --alsologtostderr -v=2
functional_test.go:2119: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-014000 update-context --alsologtostderr -v=2: exit status 83 (47.481666ms)

                                                
                                                
-- stdout --
	* The control-plane node functional-014000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-014000"

                                                
                                                
-- /stdout --
** stderr ** 
	I1204 15:23:10.394359    8145 out.go:345] Setting OutFile to fd 1 ...
	I1204 15:23:10.394550    8145 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1204 15:23:10.394553    8145 out.go:358] Setting ErrFile to fd 2...
	I1204 15:23:10.394555    8145 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1204 15:23:10.394708    8145 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/20045-6982/.minikube/bin
	I1204 15:23:10.394935    8145 mustload.go:65] Loading cluster: functional-014000
	I1204 15:23:10.395150    8145 config.go:182] Loaded profile config "functional-014000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.2
	I1204 15:23:10.399938    8145 out.go:177] * The control-plane node functional-014000 host is not running: state=Stopped
	I1204 15:23:10.403938    8145 out.go:177]   To start a cluster, run: "minikube start -p functional-014000"

                                                
                                                
** /stderr **
functional_test.go:2121: failed to run minikube update-context: args "out/minikube-darwin-arm64 -p functional-014000 update-context --alsologtostderr -v=2": exit status 83
functional_test.go:2126: update-context: got="* The control-plane node functional-014000 host is not running: state=Stopped\n  To start a cluster, run: \"minikube start -p functional-014000\"\n", want=*"context has been updated"*
--- FAIL: TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.05s)

                                                
                                    
TestFunctional/parallel/UpdateContextCmd/no_clusters (0.05s)

                                                
                                                
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_clusters
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_clusters

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_clusters
functional_test.go:2119: (dbg) Run:  out/minikube-darwin-arm64 -p functional-014000 update-context --alsologtostderr -v=2
functional_test.go:2119: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-014000 update-context --alsologtostderr -v=2: exit status 83 (47.757458ms)

                                                
                                                
-- stdout --
	* The control-plane node functional-014000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-014000"

                                                
                                                
-- /stdout --
** stderr ** 
	I1204 15:23:10.346520    8142 out.go:345] Setting OutFile to fd 1 ...
	I1204 15:23:10.346725    8142 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1204 15:23:10.346729    8142 out.go:358] Setting ErrFile to fd 2...
	I1204 15:23:10.346731    8142 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1204 15:23:10.346883    8142 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/20045-6982/.minikube/bin
	I1204 15:23:10.347088    8142 mustload.go:65] Loading cluster: functional-014000
	I1204 15:23:10.347296    8142 config.go:182] Loaded profile config "functional-014000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.2
	I1204 15:23:10.351785    8142 out.go:177] * The control-plane node functional-014000 host is not running: state=Stopped
	I1204 15:23:10.355897    8142 out.go:177]   To start a cluster, run: "minikube start -p functional-014000"

                                                
                                                
** /stderr **
functional_test.go:2121: failed to run minikube update-context: args "out/minikube-darwin-arm64 -p functional-014000 update-context --alsologtostderr -v=2": exit status 83
functional_test.go:2126: update-context: got="* The control-plane node functional-014000 host is not running: state=Stopped\n  To start a cluster, run: \"minikube start -p functional-014000\"\n", want=*"context has been updated"*
--- FAIL: TestFunctional/parallel/UpdateContextCmd/no_clusters (0.05s)
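
All three UpdateContextCmd subtests reduce to the same substring match: run `minikube update-context` and look for "No changes" or "context has been updated" in the output; with the host Stopped they all receive the start hint instead. A minimal sketch of that match, with the wanted strings taken from the log:

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	out, _ := exec.Command("out/minikube-darwin-arm64", "-p", "functional-014000",
		"update-context", "--alsologtostderr", "-v=2").Output()
	got := string(out)
	// The stopped-host hint matches neither wanted pattern, so all
	// three subtests report the same failure.
	if !strings.Contains(got, "No changes") &&
		!strings.Contains(got, "context has been updated") {
		fmt.Printf("unexpected update-context output: %q\n", got)
	}
}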

                                                
                                    
TestFunctional/parallel/ServiceCmd/DeployApp (0.03s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/DeployApp
functional_test.go:1437: (dbg) Run:  kubectl --context functional-014000 create deployment hello-node --image=registry.k8s.io/echoserver-arm:1.8
functional_test.go:1437: (dbg) Non-zero exit: kubectl --context functional-014000 create deployment hello-node --image=registry.k8s.io/echoserver-arm:1.8: exit status 1 (27.5835ms)

                                                
                                                
** stderr ** 
	error: context "functional-014000" does not exist

                                                
                                                
** /stderr **
functional_test.go:1443: failed to create hello-node deployment with this command "kubectl --context functional-014000 create deployment hello-node --image=registry.k8s.io/echoserver-arm:1.8": exit status 1.
--- FAIL: TestFunctional/parallel/ServiceCmd/DeployApp (0.03s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/List (0.05s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/List
functional_test.go:1459: (dbg) Run:  out/minikube-darwin-arm64 -p functional-014000 service list
functional_test.go:1459: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-014000 service list: exit status 83 (47.167292ms)

                                                
                                                
-- stdout --
	* The control-plane node functional-014000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-014000"

                                                
                                                
-- /stdout --
functional_test.go:1461: failed to do service list. args "out/minikube-darwin-arm64 -p functional-014000 service list" : exit status 83
functional_test.go:1464: expected 'service list' to contain *hello-node* but got -"* The control-plane node functional-014000 host is not running: state=Stopped\n  To start a cluster, run: \"minikube start -p functional-014000\"\n"-
--- FAIL: TestFunctional/parallel/ServiceCmd/List (0.05s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/JSONOutput (0.05s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/JSONOutput
functional_test.go:1489: (dbg) Run:  out/minikube-darwin-arm64 -p functional-014000 service list -o json
functional_test.go:1489: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-014000 service list -o json: exit status 83 (51.823666ms)

                                                
                                                
-- stdout --
	* The control-plane node functional-014000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-014000"

                                                
                                                
-- /stdout --
functional_test.go:1491: failed to list services with json format. args "out/minikube-darwin-arm64 -p functional-014000 service list -o json": exit status 83
--- FAIL: TestFunctional/parallel/ServiceCmd/JSONOutput (0.05s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/HTTPS (0.05s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/HTTPS
functional_test.go:1509: (dbg) Run:  out/minikube-darwin-arm64 -p functional-014000 service --namespace=default --https --url hello-node
functional_test.go:1509: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-014000 service --namespace=default --https --url hello-node: exit status 83 (47.931791ms)

                                                
                                                
-- stdout --
	* The control-plane node functional-014000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-014000"

                                                
                                                
-- /stdout --
functional_test.go:1511: failed to get service url. args "out/minikube-darwin-arm64 -p functional-014000 service --namespace=default --https --url hello-node" : exit status 83
--- FAIL: TestFunctional/parallel/ServiceCmd/HTTPS (0.05s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/Format (0.05s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/Format
functional_test.go:1540: (dbg) Run:  out/minikube-darwin-arm64 -p functional-014000 service hello-node --url --format={{.IP}}
functional_test.go:1540: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-014000 service hello-node --url --format={{.IP}}: exit status 83 (46.777917ms)

                                                
                                                
-- stdout --
	* The control-plane node functional-014000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-014000"

                                                
                                                
-- /stdout --
functional_test.go:1542: failed to get service url with custom format. args "out/minikube-darwin-arm64 -p functional-014000 service hello-node --url --format={{.IP}}": exit status 83
functional_test.go:1548: "* The control-plane node functional-014000 host is not running: state=Stopped\n  To start a cluster, run: \"minikube start -p functional-014000\"" is not a valid IP
--- FAIL: TestFunctional/parallel/ServiceCmd/Format (0.05s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/URL (0.05s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/URL
functional_test.go:1559: (dbg) Run:  out/minikube-darwin-arm64 -p functional-014000 service hello-node --url
functional_test.go:1559: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-014000 service hello-node --url: exit status 83 (46.800791ms)

                                                
                                                
-- stdout --
	* The control-plane node functional-014000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-014000"

                                                
                                                
-- /stdout --
functional_test.go:1561: failed to get service url. args: "out/minikube-darwin-arm64 -p functional-014000 service hello-node --url": exit status 83
functional_test.go:1565: found endpoint for hello-node: * The control-plane node functional-014000 host is not running: state=Stopped
To start a cluster, run: "minikube start -p functional-014000"
functional_test.go:1569: failed to parse "* The control-plane node functional-014000 host is not running: state=Stopped\n  To start a cluster, run: \"minikube start -p functional-014000\"": parse "* The control-plane node functional-014000 host is not running: state=Stopped\n  To start a cluster, run: \"minikube start -p functional-014000\"": net/url: invalid control character in URL
--- FAIL: TestFunctional/parallel/ServiceCmd/URL (0.05s)
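
The URL subtest shows how exit status 83 cascades into a second error: the test feeds whatever `service --url` printed into URL parsing, and the multi-line start hint is rejected because its embedded newline is a control character. A small self-contained sketch reproducing that parse failure:

package main

import (
	"fmt"
	"net/url"
)

func main() {
	// This is the stopped-host hint from the log, not a URL.
	got := "* The control-plane node functional-014000 host is not running: state=Stopped\n" +
		"  To start a cluster, run: \"minikube start -p functional-014000\""
	if _, err := url.Parse(got); err != nil {
		fmt.Println("not a URL:", err) // net/url: invalid control character in URL
	}
}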

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.08s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-darwin-arm64 -p functional-014000 tunnel --alsologtostderr]
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-darwin-arm64 -p functional-014000 tunnel --alsologtostderr]
functional_test_tunnel_test.go:190: tunnel command failed with unexpected error: exit code 83. stderr: I1204 15:22:30.737859    7935 out.go:345] Setting OutFile to fd 1 ...
I1204 15:22:30.738092    7935 out.go:392] TERM=,COLORTERM=, which probably does not support color
I1204 15:22:30.738095    7935 out.go:358] Setting ErrFile to fd 2...
I1204 15:22:30.738097    7935 out.go:392] TERM=,COLORTERM=, which probably does not support color
I1204 15:22:30.738293    7935 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/20045-6982/.minikube/bin
I1204 15:22:30.738501    7935 mustload.go:65] Loading cluster: functional-014000
I1204 15:22:30.738726    7935 config.go:182] Loaded profile config "functional-014000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.2
I1204 15:22:30.743328    7935 out.go:177] * The control-plane node functional-014000 host is not running: state=Stopped
I1204 15:22:30.751351    7935 out.go:177]   To start a cluster, run: "minikube start -p functional-014000"

                                                
                                                
stdout: * The control-plane node functional-014000 host is not running: state=Stopped
To start a cluster, run: "minikube start -p functional-014000"
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-darwin-arm64 -p functional-014000 tunnel --alsologtostderr] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_tunnel_test.go:194: read stdout failed: read |0: file already closed
functional_test_tunnel_test.go:194: (dbg) [out/minikube-darwin-arm64 -p functional-014000 tunnel --alsologtostderr] stdout:
functional_test_tunnel_test.go:194: read stderr failed: read |0: file already closed
functional_test_tunnel_test.go:194: (dbg) [out/minikube-darwin-arm64 -p functional-014000 tunnel --alsologtostderr] stderr:
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-darwin-arm64 -p functional-014000 tunnel --alsologtostderr] ...
helpers_test.go:502: unable to terminate pid 7936: os: process already finished
functional_test_tunnel_test.go:194: read stdout failed: read |0: file already closed
functional_test_tunnel_test.go:194: (dbg) [out/minikube-darwin-arm64 -p functional-014000 tunnel --alsologtostderr] stdout:
functional_test_tunnel_test.go:194: read stderr failed: read |0: file already closed
functional_test_tunnel_test.go:194: (dbg) [out/minikube-darwin-arm64 -p functional-014000 tunnel --alsologtostderr] stderr:
--- FAIL: TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.08s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup
functional_test_tunnel_test.go:208: failed to get Kubernetes client for "functional-014000": client config: context "functional-014000" does not exist
--- FAIL: TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (0.00s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/AccessDirect (107.1s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessDirect
I1204 15:22:30.814829    7495 retry.go:31] will retry after 2.551597081s: Temporary Error: Get "http:": http: no Host in request URL
functional_test_tunnel_test.go:288: failed to hit nginx at "http://": Temporary Error: Get "http:": http: no Host in request URL
functional_test_tunnel_test.go:290: (dbg) Run:  kubectl --context functional-014000 get svc nginx-svc
functional_test_tunnel_test.go:290: (dbg) Non-zero exit: kubectl --context functional-014000 get svc nginx-svc: exit status 1 (70.494417ms)

                                                
                                                
** stderr ** 
	Error in configuration: context was not found for specified context: functional-014000

                                                
                                                
** /stderr **
functional_test_tunnel_test.go:292: kubectl --context functional-014000 get svc nginx-svc failed: exit status 1
functional_test_tunnel_test.go:294: failed to kubectl get svc nginx-svc:
functional_test_tunnel_test.go:301: expected body to contain "Welcome to nginx!", but got *""*
--- FAIL: TestFunctional/parallel/TunnelCmd/serial/AccessDirect (107.10s)
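
AccessDirect spends its 107 s in a retry loop: tunnel setup left the service URL empty, so every attempt issues a GET against the bare scheme "http:" and fails client-side before any network I/O happens. A one-line sketch reproducing the exact error from the log:

package main

import (
	"fmt"
	"net/http"
)

func main() {
	// A URL with a scheme but no host never leaves the process.
	_, err := http.Get("http:")
	fmt.Println(err) // Get "http:": http: no Host in request URL
}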

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageLoadDaemon (0.32s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadDaemon
functional_test.go:355: (dbg) Run:  out/minikube-darwin-arm64 -p functional-014000 image load --daemon kicbase/echo-server:functional-014000 --alsologtostderr
functional_test.go:451: (dbg) Run:  out/minikube-darwin-arm64 -p functional-014000 image ls
functional_test.go:446: expected "kicbase/echo-server:functional-014000" to be loaded into minikube but the image is not there
--- FAIL: TestFunctional/parallel/ImageCommands/ImageLoadDaemon (0.32s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageReloadDaemon (0.29s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageReloadDaemon
functional_test.go:365: (dbg) Run:  out/minikube-darwin-arm64 -p functional-014000 image load --daemon kicbase/echo-server:functional-014000 --alsologtostderr
functional_test.go:451: (dbg) Run:  out/minikube-darwin-arm64 -p functional-014000 image ls
functional_test.go:446: expected "kicbase/echo-server:functional-014000" to be loaded into minikube but the image is not there
--- FAIL: TestFunctional/parallel/ImageCommands/ImageReloadDaemon (0.29s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (1.17s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon
functional_test.go:235: (dbg) Run:  docker pull kicbase/echo-server:latest
functional_test.go:240: (dbg) Run:  docker tag kicbase/echo-server:latest kicbase/echo-server:functional-014000
functional_test.go:245: (dbg) Run:  out/minikube-darwin-arm64 -p functional-014000 image load --daemon kicbase/echo-server:functional-014000 --alsologtostderr
functional_test.go:451: (dbg) Run:  out/minikube-darwin-arm64 -p functional-014000 image ls
functional_test.go:446: expected "kicbase/echo-server:functional-014000" to be loaded into minikube but the image is not there
--- FAIL: TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (1.17s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.04s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveToFile
functional_test.go:380: (dbg) Run:  out/minikube-darwin-arm64 -p functional-014000 image save kicbase/echo-server:functional-014000 /Users/jenkins/workspace/echo-server-save.tar --alsologtostderr
functional_test.go:386: expected "/Users/jenkins/workspace/echo-server-save.tar" to exist after `image save`, but doesn't exist
--- FAIL: TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.04s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageLoadFromFile (0.08s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadFromFile
functional_test.go:409: (dbg) Run:  out/minikube-darwin-arm64 -p functional-014000 image load /Users/jenkins/workspace/echo-server-save.tar --alsologtostderr
functional_test.go:451: (dbg) Run:  out/minikube-darwin-arm64 -p functional-014000 image ls
functional_test.go:446: expected "kicbase/echo-server:functional-014000" to be loaded into minikube but the image is not there
--- FAIL: TestFunctional/parallel/ImageCommands/ImageLoadFromFile (0.08s)
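
The save/load pair above chains: `image save` should write echo-server-save.tar and `image load` should read it back, but with the guest stopped the save never produces the archive, so the load has nothing to import. A sketch of the round-trip, using the tag and path from the log (an illustration, not the test's code):

package main

import (
	"fmt"
	"os"
	"os/exec"
)

func main() {
	tar := "/Users/jenkins/workspace/echo-server-save.tar"
	_ = exec.Command("out/minikube-darwin-arm64", "-p", "functional-014000",
		"image", "save", "kicbase/echo-server:functional-014000", tar).Run()
	if _, err := os.Stat(tar); err != nil {
		// With the guest stopped, save exits without writing the archive.
		fmt.Println("no archive written:", err)
		return
	}
	_ = exec.Command("out/minikube-darwin-arm64", "-p", "functional-014000",
		"image", "load", tar).Run()
	fmt.Println("round-trip attempted; verify with `image ls`")
}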

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (15.07s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig
I1204 15:24:17.918900    7495 config.go:182] Loaded profile config "functional-014000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.2
functional_test_tunnel_test.go:319: (dbg) Run:  dig +time=5 +tries=3 @10.96.0.10 nginx-svc.default.svc.cluster.local. A
functional_test_tunnel_test.go:319: (dbg) Non-zero exit: dig +time=5 +tries=3 @10.96.0.10 nginx-svc.default.svc.cluster.local. A: exit status 9 (15.035053167s)

                                                
                                                
-- stdout --
	
	; <<>> DiG 9.10.6 <<>> +time=5 +tries=3 @10.96.0.10 nginx-svc.default.svc.cluster.local. A
	; (1 server found)
	;; global options: +cmd
	;; connection timed out; no servers could be reached

                                                
                                                
-- /stdout --
functional_test_tunnel_test.go:322: failed to resolve DNS name: exit status 9
functional_test_tunnel_test.go:329: expected body to contain "ANSWER: 1", but got *"\n; <<>> DiG 9.10.6 <<>> +time=5 +tries=3 @10.96.0.10 nginx-svc.default.svc.cluster.local. A\n; (1 server found)\n;; global options: +cmd\n;; connection timed out; no servers could be reached\n"*
functional_test_tunnel_test.go:332: (dbg) Run:  scutil --dns
functional_test_tunnel_test.go:336: debug for DNS configuration:
DNS configuration

                                                
                                                
resolver #1
nameserver[0] : 207.254.72.253
nameserver[1] : 207.254.72.254
nameserver[2] : 8.8.8.8
flags    : Request A records
reach    : 0x00000002 (Reachable)

                                                
                                                
resolver #2
domain   : local
options  : mdns
timeout  : 5
flags    : Request A records
reach    : 0x00000000 (Not Reachable)
order    : 300000

                                                
                                                
resolver #3
domain   : 254.169.in-addr.arpa
options  : mdns
timeout  : 5
flags    : Request A records
reach    : 0x00000000 (Not Reachable)
order    : 300200

                                                
                                                
resolver #4
domain   : 8.e.f.ip6.arpa
options  : mdns
timeout  : 5
flags    : Request A records
reach    : 0x00000000 (Not Reachable)
order    : 300400

                                                
                                                
resolver #5
domain   : 9.e.f.ip6.arpa
options  : mdns
timeout  : 5
flags    : Request A records
reach    : 0x00000000 (Not Reachable)
order    : 300600

                                                
                                                
resolver #6
domain   : a.e.f.ip6.arpa
options  : mdns
timeout  : 5
flags    : Request A records
reach    : 0x00000000 (Not Reachable)
order    : 300800

                                                
                                                
resolver #7
domain   : b.e.f.ip6.arpa
options  : mdns
timeout  : 5
flags    : Request A records
reach    : 0x00000000 (Not Reachable)
order    : 301000

                                                
                                                
resolver #8
domain   : cluster.local
nameserver[0] : 10.96.0.10
flags    : Request A records
reach    : 0x00000002 (Reachable)
order    : 1

                                                
                                                
DNS configuration (for scoped queries)

resolver #1
nameserver[0] : 207.254.72.253
nameserver[1] : 207.254.72.254
nameserver[2] : 8.8.8.8
if_index : 10 (en0)
flags    : Scoped, Request A records
reach    : 0x00000002 (Reachable)
--- FAIL: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (15.07s)
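The scutil dump above shows that the tunnel did install resolver #8, scoping cluster.local to the kube-dns ClusterIP (10.96.0.10), so host-side DNS configuration is not the problem: queries to 10.96.0.10:53 simply never reach the VM, consistent with the socket_vmnet connection failures in the cluster tests below. A minimal manual reproduction of what this test drives, assuming a running functional-014000 cluster and an active `minikube tunnel` in a separate terminal (all names taken from the log above):

    minikube -p functional-014000 tunnel        # separate terminal; installs the route and the cluster.local resolver
    scutil --dns | grep -A 3 'cluster.local'    # the resolver should list nameserver 10.96.0.10
    dig +time=5 +tries=3 @10.96.0.10 nginx-svc.default.svc.cluster.local. A
    # a healthy run returns status NOERROR with "ANSWER: 1" in the dig header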

TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (30.05s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS
I1204 15:24:43.061009    7495 config.go:182] Loaded profile config "functional-014000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.2
I1204 15:24:53.063411    7495 retry.go:31] will retry after 3.17980044s: Temporary Error: Get "http://nginx-svc.default.svc.cluster.local.": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
I1204 15:25:06.246410    7495 retry.go:31] will retry after 5.485068597s: Temporary Error: Get "http://nginx-svc.default.svc.cluster.local.": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
functional_test_tunnel_test.go:419: failed to hit nginx with DNS forwarded "http://nginx-svc.default.svc.cluster.local.": Temporary Error: Get "http://nginx-svc.default.svc.cluster.local.": dial tcp: lookup nginx-svc.default.svc.cluster.local. on 8.8.8.8:53: read udp 207.254.73.72:56642->10.96.0.10:53: i/o timeout
functional_test_tunnel_test.go:426: expected body to contain "Welcome to nginx!", but got *""*
--- FAIL: TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (30.05s)
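Same root cause seen from HTTP: the forwarded lookup for nginx-svc.default.svc.cluster.local. ends up reading from 10.96.0.10:53 and times out, so no request is ever sent. A hedged hand check, assuming the tunnel and the nginx-svc service from the preceding tests are up:

    curl -sS --max-time 10 http://nginx-svc.default.svc.cluster.local./
    # expected to print a page containing "Welcome to nginx!"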

TestMultiControlPlane/serial/StartCluster (10.02s)

=== RUN   TestMultiControlPlane/serial/StartCluster
ha_test.go:101: (dbg) Run:  out/minikube-darwin-arm64 start -p ha-310000 --wait=true --memory=2200 --ha -v=7 --alsologtostderr --driver=qemu2 
ha_test.go:101: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p ha-310000 --wait=true --memory=2200 --ha -v=7 --alsologtostderr --driver=qemu2 : exit status 80 (9.943128125s)

-- stdout --
	* [ha-310000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	  - MINIKUBE_LOCATION=20045
	  - KUBECONFIG=/Users/jenkins/minikube-integration/20045-6982/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/20045-6982/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "ha-310000" primary control-plane node in "ha-310000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "ha-310000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I1204 15:25:13.474297    8227 out.go:345] Setting OutFile to fd 1 ...
	I1204 15:25:13.474436    8227 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1204 15:25:13.474440    8227 out.go:358] Setting ErrFile to fd 2...
	I1204 15:25:13.474442    8227 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1204 15:25:13.474598    8227 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/20045-6982/.minikube/bin
	I1204 15:25:13.475774    8227 out.go:352] Setting JSON to false
	I1204 15:25:13.493659    8227 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":5083,"bootTime":1733349630,"procs":543,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"15.0.1","kernelVersion":"24.0.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W1204 15:25:13.493727    8227 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I1204 15:25:13.501346    8227 out.go:177] * [ha-310000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	I1204 15:25:13.509303    8227 out.go:177]   - MINIKUBE_LOCATION=20045
	I1204 15:25:13.509353    8227 notify.go:220] Checking for updates...
	I1204 15:25:13.517284    8227 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/20045-6982/kubeconfig
	I1204 15:25:13.520204    8227 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I1204 15:25:13.524259    8227 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1204 15:25:13.527343    8227 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/20045-6982/.minikube
	I1204 15:25:13.530252    8227 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1204 15:25:13.533469    8227 driver.go:394] Setting default libvirt URI to qemu:///system
	I1204 15:25:13.537428    8227 out.go:177] * Using the qemu2 driver based on user configuration
	I1204 15:25:13.544269    8227 start.go:297] selected driver: qemu2
	I1204 15:25:13.544275    8227 start.go:901] validating driver "qemu2" against <nil>
	I1204 15:25:13.544283    8227 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1204 15:25:13.546901    8227 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I1204 15:25:13.551286    8227 out.go:177] * Automatically selected the socket_vmnet network
	I1204 15:25:13.554317    8227 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1204 15:25:13.554332    8227 cni.go:84] Creating CNI manager for ""
	I1204 15:25:13.554351    8227 cni.go:136] multinode detected (0 nodes found), recommending kindnet
	I1204 15:25:13.554358    8227 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I1204 15:25:13.554395    8227 start.go:340] cluster config:
	{Name:ha-310000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:ha-310000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1204 15:25:13.559051    8227 iso.go:125] acquiring lock: {Name:mk798ab7885b095925d55c843b0600e7ea181045 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1204 15:25:13.566242    8227 out.go:177] * Starting "ha-310000" primary control-plane node in "ha-310000" cluster
	I1204 15:25:13.570278    8227 preload.go:131] Checking if preload exists for k8s version v1.31.2 and runtime docker
	I1204 15:25:13.570292    8227 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/20045-6982/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-docker-overlay2-arm64.tar.lz4
	I1204 15:25:13.570299    8227 cache.go:56] Caching tarball of preloaded images
	I1204 15:25:13.570372    8227 preload.go:172] Found /Users/jenkins/minikube-integration/20045-6982/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I1204 15:25:13.570378    8227 cache.go:59] Finished verifying existence of preloaded tar for v1.31.2 on docker
	I1204 15:25:13.570605    8227 profile.go:143] Saving config to /Users/jenkins/minikube-integration/20045-6982/.minikube/profiles/ha-310000/config.json ...
	I1204 15:25:13.570617    8227 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/20045-6982/.minikube/profiles/ha-310000/config.json: {Name:mk6b3ab9e02c711f3c29dfa0578ce1c892c170b2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1204 15:25:13.571097    8227 start.go:360] acquireMachinesLock for ha-310000: {Name:mkab5eeb7049d26d12029fc21a1411c94c7c7493 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1204 15:25:13.571148    8227 start.go:364] duration metric: took 44.709µs to acquireMachinesLock for "ha-310000"
	I1204 15:25:13.571162    8227 start.go:93] Provisioning new machine with config: &{Name:ha-310000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19917/minikube-v1.34.0-1730913550-19917-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:ha-310000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1204 15:25:13.571193    8227 start.go:125] createHost starting for "" (driver="qemu2")
	I1204 15:25:13.579272    8227 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I1204 15:25:13.597266    8227 start.go:159] libmachine.API.Create for "ha-310000" (driver="qemu2")
	I1204 15:25:13.597295    8227 client.go:168] LocalClient.Create starting
	I1204 15:25:13.597375    8227 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/20045-6982/.minikube/certs/ca.pem
	I1204 15:25:13.597414    8227 main.go:141] libmachine: Decoding PEM data...
	I1204 15:25:13.597424    8227 main.go:141] libmachine: Parsing certificate...
	I1204 15:25:13.597465    8227 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/20045-6982/.minikube/certs/cert.pem
	I1204 15:25:13.597497    8227 main.go:141] libmachine: Decoding PEM data...
	I1204 15:25:13.597507    8227 main.go:141] libmachine: Parsing certificate...
	I1204 15:25:13.597917    8227 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/20045-6982/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/20045-6982/.minikube/cache/iso/arm64/minikube-v1.34.0-1730913550-19917-arm64.iso...
	I1204 15:25:13.757474    8227 main.go:141] libmachine: Creating SSH key...
	I1204 15:25:13.815077    8227 main.go:141] libmachine: Creating Disk image...
	I1204 15:25:13.815083    8227 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I1204 15:25:13.815275    8227 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/20045-6982/.minikube/machines/ha-310000/disk.qcow2.raw /Users/jenkins/minikube-integration/20045-6982/.minikube/machines/ha-310000/disk.qcow2
	I1204 15:25:13.825052    8227 main.go:141] libmachine: STDOUT: 
	I1204 15:25:13.825068    8227 main.go:141] libmachine: STDERR: 
	I1204 15:25:13.825121    8227 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/20045-6982/.minikube/machines/ha-310000/disk.qcow2 +20000M
	I1204 15:25:13.833535    8227 main.go:141] libmachine: STDOUT: Image resized.
	
	I1204 15:25:13.833551    8227 main.go:141] libmachine: STDERR: 
	I1204 15:25:13.833565    8227 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/20045-6982/.minikube/machines/ha-310000/disk.qcow2.raw and /Users/jenkins/minikube-integration/20045-6982/.minikube/machines/ha-310000/disk.qcow2
	I1204 15:25:13.833569    8227 main.go:141] libmachine: Starting QEMU VM...
	I1204 15:25:13.833578    8227 qemu.go:418] Using hvf for hardware acceleration
	I1204 15:25:13.833614    8227 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/20045-6982/.minikube/machines/ha-310000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/20045-6982/.minikube/machines/ha-310000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/20045-6982/.minikube/machines/ha-310000/qemu.pid -device virtio-net-pci,netdev=net0,mac=76:f7:c1:3a:d1:92 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/20045-6982/.minikube/machines/ha-310000/disk.qcow2
	I1204 15:25:13.835419    8227 main.go:141] libmachine: STDOUT: 
	I1204 15:25:13.835434    8227 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1204 15:25:13.835452    8227 client.go:171] duration metric: took 238.147708ms to LocalClient.Create
	I1204 15:25:15.837646    8227 start.go:128] duration metric: took 2.266407167s to createHost
	I1204 15:25:15.837726    8227 start.go:83] releasing machines lock for "ha-310000", held for 2.26654775s
	W1204 15:25:15.837796    8227 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1204 15:25:15.853228    8227 out.go:177] * Deleting "ha-310000" in qemu2 ...
	W1204 15:25:15.880832    8227 out.go:270] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1204 15:25:15.880876    8227 start.go:729] Will try again in 5 seconds ...
	I1204 15:25:20.883178    8227 start.go:360] acquireMachinesLock for ha-310000: {Name:mkab5eeb7049d26d12029fc21a1411c94c7c7493 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1204 15:25:20.883829    8227 start.go:364] duration metric: took 528.25µs to acquireMachinesLock for "ha-310000"
	I1204 15:25:20.883971    8227 start.go:93] Provisioning new machine with config: &{Name:ha-310000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19917/minikube-v1.34.0-1730913550-19917-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:ha-310000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1204 15:25:20.884290    8227 start.go:125] createHost starting for "" (driver="qemu2")
	I1204 15:25:20.903211    8227 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I1204 15:25:20.953007    8227 start.go:159] libmachine.API.Create for "ha-310000" (driver="qemu2")
	I1204 15:25:20.953124    8227 client.go:168] LocalClient.Create starting
	I1204 15:25:20.953288    8227 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/20045-6982/.minikube/certs/ca.pem
	I1204 15:25:20.953375    8227 main.go:141] libmachine: Decoding PEM data...
	I1204 15:25:20.953399    8227 main.go:141] libmachine: Parsing certificate...
	I1204 15:25:20.953479    8227 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/20045-6982/.minikube/certs/cert.pem
	I1204 15:25:20.953536    8227 main.go:141] libmachine: Decoding PEM data...
	I1204 15:25:20.953550    8227 main.go:141] libmachine: Parsing certificate...
	I1204 15:25:20.954403    8227 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/20045-6982/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/20045-6982/.minikube/cache/iso/arm64/minikube-v1.34.0-1730913550-19917-arm64.iso...
	I1204 15:25:21.126141    8227 main.go:141] libmachine: Creating SSH key...
	I1204 15:25:21.313691    8227 main.go:141] libmachine: Creating Disk image...
	I1204 15:25:21.313698    8227 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I1204 15:25:21.313937    8227 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/20045-6982/.minikube/machines/ha-310000/disk.qcow2.raw /Users/jenkins/minikube-integration/20045-6982/.minikube/machines/ha-310000/disk.qcow2
	I1204 15:25:21.324341    8227 main.go:141] libmachine: STDOUT: 
	I1204 15:25:21.324357    8227 main.go:141] libmachine: STDERR: 
	I1204 15:25:21.324423    8227 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/20045-6982/.minikube/machines/ha-310000/disk.qcow2 +20000M
	I1204 15:25:21.332956    8227 main.go:141] libmachine: STDOUT: Image resized.
	
	I1204 15:25:21.332972    8227 main.go:141] libmachine: STDERR: 
	I1204 15:25:21.332982    8227 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/20045-6982/.minikube/machines/ha-310000/disk.qcow2.raw and /Users/jenkins/minikube-integration/20045-6982/.minikube/machines/ha-310000/disk.qcow2
	I1204 15:25:21.332986    8227 main.go:141] libmachine: Starting QEMU VM...
	I1204 15:25:21.332993    8227 qemu.go:418] Using hvf for hardware acceleration
	I1204 15:25:21.333023    8227 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/20045-6982/.minikube/machines/ha-310000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/20045-6982/.minikube/machines/ha-310000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/20045-6982/.minikube/machines/ha-310000/qemu.pid -device virtio-net-pci,netdev=net0,mac=42:43:12:47:f1:0b -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/20045-6982/.minikube/machines/ha-310000/disk.qcow2
	I1204 15:25:21.334898    8227 main.go:141] libmachine: STDOUT: 
	I1204 15:25:21.334918    8227 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1204 15:25:21.334933    8227 client.go:171] duration metric: took 381.786625ms to LocalClient.Create
	I1204 15:25:23.337120    8227 start.go:128] duration metric: took 2.452777959s to createHost
	I1204 15:25:23.337247    8227 start.go:83] releasing machines lock for "ha-310000", held for 2.453310916s
	W1204 15:25:23.337543    8227 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p ha-310000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p ha-310000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1204 15:25:23.354254    8227 out.go:201] 
	W1204 15:25:23.357286    8227 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W1204 15:25:23.357314    8227 out.go:270] * 
	* 
	W1204 15:25:23.360181    8227 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1204 15:25:23.370178    8227 out.go:201] 

** /stderr **
ha_test.go:103: failed to fresh-start ha (multi-control plane) cluster. args "out/minikube-darwin-arm64 start -p ha-310000 --wait=true --memory=2200 --ha -v=7 --alsologtostderr --driver=qemu2 " : exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-310000 -n ha-310000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-310000 -n ha-310000: exit status 7 (71.911167ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "ha-310000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/StartCluster (10.02s)
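This failure is the root cause of the remaining TestMultiControlPlane results: the qemu2 driver starts each VM through /opt/socket_vmnet/bin/socket_vmnet_client, and nothing is listening on /var/run/socket_vmnet, so both create attempts die with "Connection refused" and the profile is left Stopped. A recovery sketch, assuming the usual Homebrew install of socket_vmnet (paths and the service name may differ on other setups):

    ls -l /var/run/socket_vmnet              # the daemon's socket; missing or refusing means it is not running
    sudo brew services start socket_vmnet    # one way to (re)start the daemon when installed via Homebrew
    minikube delete -p ha-310000
    minikube start -p ha-310000 --driver=qemu2 --network=socket_vmnet --ha --memory=2200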

TestMultiControlPlane/serial/DeployApp (104.17s)

=== RUN   TestMultiControlPlane/serial/DeployApp
ha_test.go:128: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-310000 -- apply -f ./testdata/ha/ha-pod-dns-test.yaml
ha_test.go:128: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p ha-310000 -- apply -f ./testdata/ha/ha-pod-dns-test.yaml: exit status 1 (64.440333ms)

** stderr ** 
	error: cluster "ha-310000" does not exist

** /stderr **
ha_test.go:130: failed to create busybox deployment to ha (multi-control plane) cluster
ha_test.go:133: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-310000 -- rollout status deployment/busybox
ha_test.go:133: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p ha-310000 -- rollout status deployment/busybox: exit status 1 (62.242041ms)

** stderr ** 
	error: no server found for cluster "ha-310000"

** /stderr **
ha_test.go:135: failed to deploy busybox to ha (multi-control plane) cluster
ha_test.go:140: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-310000 -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:140: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p ha-310000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (62.216917ms)

** stderr ** 
	error: no server found for cluster "ha-310000"

** /stderr **
ha_test.go:143: failed to retrieve Pod IPs (may be temporary): exit status 1
I1204 15:25:23.648547    7495 retry.go:31] will retry after 682.161731ms: failed to retrieve Pod IPs (may be temporary): exit status 1
ha_test.go:140: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-310000 -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:140: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p ha-310000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (108.141167ms)

** stderr ** 
	error: no server found for cluster "ha-310000"

** /stderr **
ha_test.go:143: failed to retrieve Pod IPs (may be temporary): exit status 1
I1204 15:25:24.441298    7495 retry.go:31] will retry after 1.703691942s: failed to retrieve Pod IPs (may be temporary): exit status 1
ha_test.go:140: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-310000 -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:140: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p ha-310000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (111.722667ms)

** stderr ** 
	error: no server found for cluster "ha-310000"

** /stderr **
ha_test.go:143: failed to retrieve Pod IPs (may be temporary): exit status 1
I1204 15:25:26.259185    7495 retry.go:31] will retry after 2.599090439s: failed to retrieve Pod IPs (may be temporary): exit status 1
ha_test.go:140: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-310000 -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:140: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p ha-310000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (109.013667ms)

** stderr ** 
	error: no server found for cluster "ha-310000"

** /stderr **
ha_test.go:143: failed to retrieve Pod IPs (may be temporary): exit status 1
I1204 15:25:28.969835    7495 retry.go:31] will retry after 2.301879632s: failed to retrieve Pod IPs (may be temporary): exit status 1
ha_test.go:140: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-310000 -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:140: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p ha-310000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (107.923667ms)

** stderr ** 
	error: no server found for cluster "ha-310000"

** /stderr **
ha_test.go:143: failed to retrieve Pod IPs (may be temporary): exit status 1
I1204 15:25:31.382092    7495 retry.go:31] will retry after 3.30644622s: failed to retrieve Pod IPs (may be temporary): exit status 1
ha_test.go:140: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-310000 -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:140: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p ha-310000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (106.981875ms)

** stderr ** 
	error: no server found for cluster "ha-310000"

** /stderr **
ha_test.go:143: failed to retrieve Pod IPs (may be temporary): exit status 1
I1204 15:25:34.797922    7495 retry.go:31] will retry after 10.749211204s: failed to retrieve Pod IPs (may be temporary): exit status 1
ha_test.go:140: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-310000 -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:140: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p ha-310000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (108.88425ms)

** stderr ** 
	error: no server found for cluster "ha-310000"

** /stderr **
ha_test.go:143: failed to retrieve Pod IPs (may be temporary): exit status 1
I1204 15:25:45.658523    7495 retry.go:31] will retry after 16.949294863s: failed to retrieve Pod IPs (may be temporary): exit status 1
ha_test.go:140: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-310000 -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:140: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p ha-310000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (108.892ms)

** stderr ** 
	error: no server found for cluster "ha-310000"

** /stderr **
ha_test.go:143: failed to retrieve Pod IPs (may be temporary): exit status 1
I1204 15:26:02.718100    7495 retry.go:31] will retry after 10.923073055s: failed to retrieve Pod IPs (may be temporary): exit status 1
ha_test.go:140: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-310000 -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:140: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p ha-310000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (109.958584ms)

** stderr ** 
	error: no server found for cluster "ha-310000"

** /stderr **
ha_test.go:143: failed to retrieve Pod IPs (may be temporary): exit status 1
I1204 15:26:13.752178    7495 retry.go:31] will retry after 15.196270992s: failed to retrieve Pod IPs (may be temporary): exit status 1
ha_test.go:140: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-310000 -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:140: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p ha-310000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (109.964333ms)

** stderr ** 
	error: no server found for cluster "ha-310000"

** /stderr **
ha_test.go:143: failed to retrieve Pod IPs (may be temporary): exit status 1
I1204 15:26:29.061088    7495 retry.go:31] will retry after 38.176110447s: failed to retrieve Pod IPs (may be temporary): exit status 1
ha_test.go:140: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-310000 -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:140: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p ha-310000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (108.530208ms)

** stderr ** 
	error: no server found for cluster "ha-310000"

** /stderr **
ha_test.go:143: failed to retrieve Pod IPs (may be temporary): exit status 1
ha_test.go:159: failed to resolve pod IPs: failed to retrieve Pod IPs (may be temporary): exit status 1
ha_test.go:163: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-310000 -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:163: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p ha-310000 -- get pods -o jsonpath='{.items[*].metadata.name}': exit status 1 (62.299875ms)

** stderr ** 
	error: no server found for cluster "ha-310000"

** /stderr **
ha_test.go:165: failed get Pod names
ha_test.go:171: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-310000 -- exec  -- nslookup kubernetes.io
ha_test.go:171: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p ha-310000 -- exec  -- nslookup kubernetes.io: exit status 1 (62.995917ms)

** stderr ** 
	error: no server found for cluster "ha-310000"

** /stderr **
ha_test.go:173: Pod  could not resolve 'kubernetes.io': exit status 1
ha_test.go:181: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-310000 -- exec  -- nslookup kubernetes.default
ha_test.go:181: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p ha-310000 -- exec  -- nslookup kubernetes.default: exit status 1 (62.054542ms)

** stderr ** 
	error: no server found for cluster "ha-310000"

** /stderr **
ha_test.go:183: Pod  could not resolve 'kubernetes.default': exit status 1
ha_test.go:189: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-310000 -- exec  -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p ha-310000 -- exec  -- nslookup kubernetes.default.svc.cluster.local: exit status 1 (62.26475ms)

** stderr ** 
	error: no server found for cluster "ha-310000"

** /stderr **
ha_test.go:191: Pod  could not resolve local service (kubernetes.default.svc.cluster.local): exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-310000 -n ha-310000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-310000 -n ha-310000: exit status 7 (35.072542ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "ha-310000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/DeployApp (104.17s)
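Every kubectl invocation here fails client-side: because StartCluster never provisioned a VM, no "ha-310000" cluster or context was ever written to the kubeconfig, so error: cluster "ha-310000" does not exist / no server found is returned before any API call is made, and the ~100 s of pod-IP retries can never converge. A quick confirmation of the missing context (standard kubectl subcommands):

    kubectl config get-contexts    # "ha-310000" will be absent until the cluster starts
    kubectl config get-clusters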

TestMultiControlPlane/serial/PingHostFromPods (0.1s)

=== RUN   TestMultiControlPlane/serial/PingHostFromPods
ha_test.go:199: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-310000 -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:199: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p ha-310000 -- get pods -o jsonpath='{.items[*].metadata.name}': exit status 1 (61.625208ms)

** stderr ** 
	error: no server found for cluster "ha-310000"

** /stderr **
ha_test.go:201: failed to get Pod names: exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-310000 -n ha-310000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-310000 -n ha-310000: exit status 7 (35.179958ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "ha-310000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/PingHostFromPods (0.10s)

TestMultiControlPlane/serial/AddWorkerNode (0.08s)

=== RUN   TestMultiControlPlane/serial/AddWorkerNode
ha_test.go:228: (dbg) Run:  out/minikube-darwin-arm64 node add -p ha-310000 -v=7 --alsologtostderr
ha_test.go:228: (dbg) Non-zero exit: out/minikube-darwin-arm64 node add -p ha-310000 -v=7 --alsologtostderr: exit status 83 (45.144875ms)

-- stdout --
	* The control-plane node ha-310000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p ha-310000"

-- /stdout --
** stderr ** 
	I1204 15:27:07.763775    8350 out.go:345] Setting OutFile to fd 1 ...
	I1204 15:27:07.764195    8350 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1204 15:27:07.764198    8350 out.go:358] Setting ErrFile to fd 2...
	I1204 15:27:07.764201    8350 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1204 15:27:07.764382    8350 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/20045-6982/.minikube/bin
	I1204 15:27:07.764635    8350 mustload.go:65] Loading cluster: ha-310000
	I1204 15:27:07.764872    8350 config.go:182] Loaded profile config "ha-310000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.2
	I1204 15:27:07.768887    8350 out.go:177] * The control-plane node ha-310000 host is not running: state=Stopped
	I1204 15:27:07.771776    8350 out.go:177]   To start a cluster, run: "minikube start -p ha-310000"

** /stderr **
ha_test.go:230: failed to add worker node to current ha (multi-control plane) cluster. args "out/minikube-darwin-arm64 node add -p ha-310000 -v=7 --alsologtostderr" : exit status 83
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-310000 -n ha-310000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-310000 -n ha-310000: exit status 7 (34.822292ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "ha-310000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/AddWorkerNode (0.08s)

TestMultiControlPlane/serial/NodeLabels (0.06s)

=== RUN   TestMultiControlPlane/serial/NodeLabels
ha_test.go:255: (dbg) Run:  kubectl --context ha-310000 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
ha_test.go:255: (dbg) Non-zero exit: kubectl --context ha-310000 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]": exit status 1 (27.254916ms)

** stderr ** 
	Error in configuration: context was not found for specified context: ha-310000

** /stderr **
ha_test.go:257: failed to 'kubectl get nodes' with args "kubectl --context ha-310000 get nodes -o \"jsonpath=[{range .items[*]}{.metadata.labels},{end}]\"": exit status 1
ha_test.go:264: failed to decode json from label list: args "kubectl --context ha-310000 get nodes -o \"jsonpath=[{range .items[*]}{.metadata.labels},{end}]\"": unexpected end of JSON input
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-310000 -n ha-310000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-310000 -n ha-310000: exit status 7 (35.195625ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "ha-310000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/NodeLabels (0.06s)

TestMultiControlPlane/serial/HAppyAfterClusterStart (0.09s)

=== RUN   TestMultiControlPlane/serial/HAppyAfterClusterStart
ha_test.go:281: (dbg) Run:  out/minikube-darwin-arm64 profile list --output json
ha_test.go:305: expected profile "ha-310000" in json of 'profile list' to include 4 nodes but have 1 nodes. got *"{\"invalid\":[],\"valid\":[{\"Name\":\"ha-310000\",\"Status\":\"Starting\",\"Config\":{\"Name\":\"ha-310000\",\"KeepContext\":false,\"EmbedCerts\":false,\"MinikubeISO\":\"https://storage.googleapis.com/minikube-builds/iso/19917/minikube-v1.34.0-1730913550-19917-arm64.iso\",\"KicBaseImage\":\"gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615\",\"Memory\":2200,\"CPUs\":2,\"DiskSize\":20000,\"Driver\":\"qemu2\",\"HyperkitVpnKitSock\":\"\",\"HyperkitVSockPorts\":[],\"DockerEnv\":null,\"ContainerVolumeMounts\":null,\"InsecureRegistry\":null,\"RegistryMirror\":[],\"HostOnlyCIDR\":\"192.168.59.1/24\",\"HypervVirtualSwitch\":\"\",\"HypervUseExternalSwitch\":false,\"HypervExternalAdapter\":\"\",\"KVMNetwork\":\"default\",\"KVMQemuURI\":\"qemu:///system\",\"KVMGPU\":false,\"KVMHidden\":false,\"KVMNUMACount\":1,\"APIServerPort\":8443,\"DockerOpt\":null,\"DisableDriverMounts\":false,\"NFSShare\":[],\"NFSSharesRoot\":\"/nfsshares\",\"UUID\":\"\",\"NoVTXCheck\":false,\"DNSProxy\":false,\"HostDNSResolver\":true,\"HostOnlyNicType\":\"virtio\",\"NatNicType\":\"virtio\",\"SSHIPAddress\":\"\",\"SSHUser\":\"root\",\"SSHKey\":\"\",\"SSHPort\":22,\"KubernetesConfig\":{\"KubernetesVersion\":\"v1.31.2\",\"ClusterName\":\"ha-310000\",\"Namespace\":\"default\",\"APIServerHAVIP\":\"\",\"APIServerName\":\"minikubeCA\",\"APIServerNames\":null,\"APIServerIPs\":null,\"DNSDomain\":\"cluster.local\",\"ContainerRuntime\":\"docker\",\"CRISocket\":\"\",\"NetworkPlugin\":\"cni\",\"FeatureGates\":\"\",\"ServiceCIDR\":\"10.96.0.0/12\",\"ImageRepository\":\"\",\"LoadBalancerStartIP\":\"\",\"LoadBalancerEndIP\":\"\",\"CustomIngressCert\":\"\",\"RegistryAliases\":\"\",\"ExtraOptions\":null,\"ShouldLoadCachedImages\":true,\"EnableDefaultCNI\":false,\"CNI\":\"\"},\"Nodes\":[{\"Name\":\"\",\"IP\":\"\",\"Port\":8443,\"KubernetesVersion\":\"v1.31.2\",\"ContainerRuntime\":\"docker\",\"ControlPlane\":true,\"Worker\":true}],\"Addons\":null,\"CustomAddonImages\":null,\"CustomAddonRegistries\":null,\"VerifyComponents\":{\"apiserver\":true,\"apps_running\":true,\"default_sa\":true,\"extra\":true,\"kubelet\":true,\"node_ready\":true,\"system_pods\":true},\"StartHostTimeout\":360000000000,\"ScheduledStop\":null,\"ExposedPorts\":[],\"ListenAddress\":\"\",\"Network\":\"socket_vmnet\",\"Subnet\":\"\",\"MultiNodeRequested\":true,\"ExtraDisks\":0,\"CertExpiration\":94608000000000000,\"Mount\":false,\"MountString\":\"/Users:/minikube-host\",\"Mount9PVersion\":\"9p2000.L\",\"MountGID\":\"docker\",\"MountIP\":\"\",\"MountMSize\":262144,\"MountOptions\":[],\"MountPort\":0,\"MountType\":\"9p\",\"MountUID\":\"docker\",\"BinaryMirror\":\"\",\"DisableOptimizations\":false,\"DisableMetrics\":false,\"CustomQemuFirmwarePath\":\"\",\"SocketVMnetClientPath\":\"/opt/socket_vmnet/bin/socket_vmnet_client\",\"SocketVMnetPath\":\"/var/run/socket_vmnet\",\"StaticIP\":\"\",\"SSHAuthSock\":\"\",\"SSHAgentPID\":0,\"GPUs\":\"\",\"AutoPauseInterval\":60000000000},\"Active\":false,\"ActiveKubeContext\":false}]}"*. args: "out/minikube-darwin-arm64 profile list --output json"
ha_test.go:309: expected profile "ha-310000" in json of 'profile list' to have "HAppy" status but have "Starting" status. got *"{\"invalid\":[],\"valid\":[{\"Name\":\"ha-310000\",\"Status\":\"Starting\",\"Config\":{\"Name\":\"ha-310000\",\"KeepContext\":false,\"EmbedCerts\":false,\"MinikubeISO\":\"https://storage.googleapis.com/minikube-builds/iso/19917/minikube-v1.34.0-1730913550-19917-arm64.iso\",\"KicBaseImage\":\"gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615\",\"Memory\":2200,\"CPUs\":2,\"DiskSize\":20000,\"Driver\":\"qemu2\",\"HyperkitVpnKitSock\":\"\",\"HyperkitVSockPorts\":[],\"DockerEnv\":null,\"ContainerVolumeMounts\":null,\"InsecureRegistry\":null,\"RegistryMirror\":[],\"HostOnlyCIDR\":\"192.168.59.1/24\",\"HypervVirtualSwitch\":\"\",\"HypervUseExternalSwitch\":false,\"HypervExternalAdapter\":\"\",\"KVMNetwork\":\"default\",\"KVMQemuURI\":\"qemu:///system\",\"KVMGPU\":false,\"KVMHidden\":false,\"KVMNUMACount\":1,\"APIServerPort\":8443,\"DockerOpt\":null,\"DisableDriverMounts\":false,\"NFSShare\":[],\"NFSSharesRoot\":\"/nfsshares\",\"UUID\":\"\",\"NoVTXCheck\":false,\"DNSProxy\":false,\"HostDNSResolver\":true,\"HostOnlyNicType\":\"virtio\",\"NatNicType\":\"virtio\",\"SSHIPAddress\":\"\",\"SSHUser\":\"root\",\"SSHKey\":\"\",\"SSHPort\":22,\"KubernetesConfig\":{\"KubernetesVersion\":\"v1.31.2\",\"ClusterName\":\"ha-310000\",\"Namespace\":\"default\",\"APIServerHAVIP\":\"\",\"APIServerName\":\"minikubeCA\",\"APIServerNames\":null,\"APIServerIPs\":null,\"DNSDomain\":\"cluster.local\",\"ContainerRuntime\":\"docker\",\"CRISocket\":\"\",\"NetworkPlugin\":\"cni\",\"FeatureGates\":\"\",\"ServiceCIDR\":\"10.96.0.0/12\",\"ImageRepository\":\"\",\"LoadBalancerStartIP\":\"\",\"LoadBalancerEndIP\":\"\",\"CustomIngressCert\":\"\",\"RegistryAliases\":\"\",\"ExtraOptions\":null,\"ShouldLoadCachedImages\":true,\"EnableDefaultCNI\":false,\"CNI\":\"\"},\"Nodes\":[{\"Name\":\"\",\"IP\":\"\",\"Port\":8443,\"KubernetesVersion\":\"v1.31.2\",\"ContainerRuntime\":\"docker\",\"ControlPlane\":true,\"Worker\":true}],\"Addons\":null,\"CustomAddonImages\":null,\"CustomAddonRegistries\":null,\"VerifyComponents\":{\"apiserver\":true,\"apps_running\":true,\"default_sa\":true,\"extra\":true,\"kubelet\":true,\"node_ready\":true,\"system_pods\":true},\"StartHostTimeout\":360000000000,\"ScheduledStop\":null,\"ExposedPorts\":[],\"ListenAddress\":\"\",\"Network\":\"socket_vmnet\",\"Subnet\":\"\",\"MultiNodeRequested\":true,\"ExtraDisks\":0,\"CertExpiration\":94608000000000000,\"Mount\":false,\"MountString\":\"/Users:/minikube-host\",\"Mount9PVersion\":\"9p2000.L\",\"MountGID\":\"docker\",\"MountIP\":\"\",\"MountMSize\":262144,\"MountOptions\":[],\"MountPort\":0,\"MountType\":\"9p\",\"MountUID\":\"docker\",\"BinaryMirror\":\"\",\"DisableOptimizations\":false,\"DisableMetrics\":false,\"CustomQemuFirmwarePath\":\"\",\"SocketVMnetClientPath\":\"/opt/socket_vmnet/bin/socket_vmnet_client\",\"SocketVMnetPath\":\"/var/run/socket_vmnet\",\"StaticIP\":\"\",\"SSHAuthSock\":\"\",\"SSHAgentPID\":0,\"GPUs\":\"\",\"AutoPauseInterval\":60000000000},\"Active\":false,\"ActiveKubeContext\":false}]}"*. args: "out/minikube-darwin-arm64 profile list --output json"
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-310000 -n ha-310000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-310000 -n ha-310000: exit status 7 (34.565459ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "ha-310000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/HAppyAfterClusterStart (0.09s)
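The profile JSON above only reflects the saved config, which was written before VM creation failed: one requested control-plane node and Status "Starting". At this point in a healthy serial run the profile would show four nodes (three control planes from --ha plus the worker added earlier) and the synthesized "HAppy" status. A hedged one-liner to inspect the same fields, assuming jq is installed:

    minikube profile list --output json | jq '.valid[] | {Name, Status, nodes: (.Config.Nodes | length)}'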

TestMultiControlPlane/serial/CopyFile (0.07s)

=== RUN   TestMultiControlPlane/serial/CopyFile
ha_test.go:328: (dbg) Run:  out/minikube-darwin-arm64 -p ha-310000 status --output json -v=7 --alsologtostderr
ha_test.go:328: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p ha-310000 status --output json -v=7 --alsologtostderr: exit status 7 (34.715875ms)

-- stdout --
	{"Name":"ha-310000","Host":"Stopped","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Stopped","Worker":false}

-- /stdout --
** stderr ** 
	I1204 15:27:07.994628    8362 out.go:345] Setting OutFile to fd 1 ...
	I1204 15:27:07.994815    8362 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1204 15:27:07.994818    8362 out.go:358] Setting ErrFile to fd 2...
	I1204 15:27:07.994820    8362 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1204 15:27:07.994950    8362 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/20045-6982/.minikube/bin
	I1204 15:27:07.995082    8362 out.go:352] Setting JSON to true
	I1204 15:27:07.995094    8362 mustload.go:65] Loading cluster: ha-310000
	I1204 15:27:07.995148    8362 notify.go:220] Checking for updates...
	I1204 15:27:07.995311    8362 config.go:182] Loaded profile config "ha-310000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.2
	I1204 15:27:07.995318    8362 status.go:174] checking status of ha-310000 ...
	I1204 15:27:07.995570    8362 status.go:371] ha-310000 host status = "Stopped" (err=<nil>)
	I1204 15:27:07.995573    8362 status.go:384] host is not running, skipping remaining checks
	I1204 15:27:07.995575    8362 status.go:176] ha-310000 status: &{Name:ha-310000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
ha_test.go:335: failed to decode json from status: args "out/minikube-darwin-arm64 -p ha-310000 status --output json -v=7 --alsologtostderr": json: cannot unmarshal object into Go value of type []cluster.Status
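The decode failure above is a shape mismatch, not corrupt output: with a single node, "minikube status --output json" emits one JSON object, while the test unmarshals into []cluster.Status, which requires an array. A tolerant decoder that accepts both shapes would look like the following sketch (the Status type is trimmed to the fields shown in this report; this is not the test's actual code):

package main

import (
	"encoding/json"
	"fmt"
)

type Status struct {
	Name, Host, Kubelet, APIServer, Kubeconfig string
	Worker                                     bool
}

// decodeStatuses accepts either one status object or an array of them -
// the exact mismatch behind the []cluster.Status unmarshal error above.
func decodeStatuses(data []byte) ([]Status, error) {
	var many []Status
	if err := json.Unmarshal(data, &many); err == nil {
		return many, nil
	}
	var one Status
	if err := json.Unmarshal(data, &one); err != nil {
		return nil, err
	}
	return []Status{one}, nil
}

func main() {
	single := []byte(`{"Name":"ha-310000","Host":"Stopped","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Stopped","Worker":false}`)
	sts, err := decodeStatuses(single)
	fmt.Println(len(sts), err) // 1 <nil>
}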
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-310000 -n ha-310000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-310000 -n ha-310000: exit status 7 (34.864708ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "ha-310000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/CopyFile (0.07s)

TestMultiControlPlane/serial/StopSecondaryNode (0.13s)

=== RUN   TestMultiControlPlane/serial/StopSecondaryNode
ha_test.go:365: (dbg) Run:  out/minikube-darwin-arm64 -p ha-310000 node stop m02 -v=7 --alsologtostderr
ha_test.go:365: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p ha-310000 node stop m02 -v=7 --alsologtostderr: exit status 85 (52.53525ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	I1204 15:27:08.065426    8366 out.go:345] Setting OutFile to fd 1 ...
	I1204 15:27:08.066054    8366 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1204 15:27:08.066057    8366 out.go:358] Setting ErrFile to fd 2...
	I1204 15:27:08.066059    8366 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1204 15:27:08.066243    8366 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/20045-6982/.minikube/bin
	I1204 15:27:08.066503    8366 mustload.go:65] Loading cluster: ha-310000
	I1204 15:27:08.066723    8366 config.go:182] Loaded profile config "ha-310000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.2
	I1204 15:27:08.070826    8366 out.go:201] 
	W1204 15:27:08.073802    8366 out.go:270] X Exiting due to GUEST_NODE_RETRIEVE: retrieving node: Could not find node m02
	X Exiting due to GUEST_NODE_RETRIEVE: retrieving node: Could not find node m02
	W1204 15:27:08.073806    8366 out.go:270] * 
	* 
	W1204 15:27:08.075618    8366 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                         │
	│    * If the above advice does not help, please let us know:                                                             │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
	│                                                                                                                         │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
	│    * Please also attach the following file to the GitHub issue:                                                         │
	│    * - /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/minikube_node_8ce24bb09be8aab84076d51946735f62cbf80299_0.log    │
	│                                                                                                                         │
	╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                         │
	│    * If the above advice does not help, please let us know:                                                             │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
	│                                                                                                                         │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
	│    * Please also attach the following file to the GitHub issue:                                                         │
	│    * - /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/minikube_node_8ce24bb09be8aab84076d51946735f62cbf80299_0.log    │
	│                                                                                                                         │
	╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
	I1204 15:27:08.079668    8366 out.go:201] 

** /stderr **
ha_test.go:367: secondary control-plane node stop returned an error. args "out/minikube-darwin-arm64 -p ha-310000 node stop m02 -v=7 --alsologtostderr": exit status 85
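Exit status 85 (GUEST_NODE_RETRIEVE) is consistent with the earlier failures: minikube names the primary node after the profile and numbers additional nodes m02, m03, and so on, and this cluster never grew past its single stopped control-plane node, so there is no m02 for "node stop" to resolve. A small sketch of that naming convention (illustrative only, not minikube's code):

package main

import "fmt"

// nodeNames reproduces minikube's naming scheme: the primary node carries
// the profile name; extra nodes are m02, m03, ...
func nodeNames(profile string, count int) []string {
	names := []string{profile}
	for i := 2; i <= count; i++ {
		names = append(names, fmt.Sprintf("m%02d", i))
	}
	return names
}

func main() {
	fmt.Println(nodeNames("ha-310000", 1)) // [ha-310000] - no m02 to stop
	fmt.Println(nodeNames("ha-310000", 3)) // [ha-310000 m02 m03]
}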
ha_test.go:371: (dbg) Run:  out/minikube-darwin-arm64 -p ha-310000 status -v=7 --alsologtostderr
ha_test.go:371: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p ha-310000 status -v=7 --alsologtostderr: exit status 7 (35.112167ms)

-- stdout --
	ha-310000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
** stderr ** 
	I1204 15:27:08.117970    8368 out.go:345] Setting OutFile to fd 1 ...
	I1204 15:27:08.118197    8368 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1204 15:27:08.118200    8368 out.go:358] Setting ErrFile to fd 2...
	I1204 15:27:08.118203    8368 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1204 15:27:08.118348    8368 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/20045-6982/.minikube/bin
	I1204 15:27:08.118486    8368 out.go:352] Setting JSON to false
	I1204 15:27:08.118502    8368 mustload.go:65] Loading cluster: ha-310000
	I1204 15:27:08.118551    8368 notify.go:220] Checking for updates...
	I1204 15:27:08.118721    8368 config.go:182] Loaded profile config "ha-310000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.2
	I1204 15:27:08.118728    8368 status.go:174] checking status of ha-310000 ...
	I1204 15:27:08.118990    8368 status.go:371] ha-310000 host status = "Stopped" (err=<nil>)
	I1204 15:27:08.118993    8368 status.go:384] host is not running, skipping remaining checks
	I1204 15:27:08.118996    8368 status.go:176] ha-310000 status: &{Name:ha-310000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
ha_test.go:377: status says not all three control-plane nodes are present: args "out/minikube-darwin-arm64 -p ha-310000 status -v=7 --alsologtostderr": ha-310000
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped

ha_test.go:380: status says not three hosts are running: args "out/minikube-darwin-arm64 -p ha-310000 status -v=7 --alsologtostderr": ha-310000
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped

ha_test.go:383: status says not three kubelets are running: args "out/minikube-darwin-arm64 -p ha-310000 status -v=7 --alsologtostderr": ha-310000
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped

ha_test.go:386: status says not two apiservers are running: args "out/minikube-darwin-arm64 -p ha-310000 status -v=7 --alsologtostderr": ha-310000
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped

helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-310000 -n ha-310000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-310000 -n ha-310000: exit status 7 (37.4365ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "ha-310000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/StopSecondaryNode (0.13s)

TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (0.09s)

=== RUN   TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop
ha_test.go:392: (dbg) Run:  out/minikube-darwin-arm64 profile list --output json
ha_test.go:415: expected profile "ha-310000" in json of 'profile list' to have "Degraded" status but have "Starting" status. got *"{\"invalid\":[],\"valid\":[{\"Name\":\"ha-310000\",\"Status\":\"Starting\",\"Config\":{\"Name\":\"ha-310000\",\"KeepContext\":false,\"EmbedCerts\":false,\"MinikubeISO\":\"https://storage.googleapis.com/minikube-builds/iso/19917/minikube-v1.34.0-1730913550-19917-arm64.iso\",\"KicBaseImage\":\"gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615\",\"Memory\":2200,\"CPUs\":2,\"DiskSize\":20000,\"Driver\":\"qemu2\",\"HyperkitVpnKitSock\":\"\",\"HyperkitVSockPorts\":[],\"DockerEnv\":null,\"ContainerVolumeMounts\":null,\"InsecureRegistry\":null,\"RegistryMirror\":[],\"HostOnlyCIDR\":\"192.168.59.1/24\",\"HypervVirtualSwitch\":\"\",\"HypervUseExternalSwitch\":false,\"HypervExternalAdapter\":\"\",\"KVMNetwork\":\"default\",\"KVMQemuURI\":\"qemu:///system\",\"KVMGPU\":false,\"KVMHidden\":false,\"KVMNUMACount\":1,\"APIServerPort\":8443,\"DockerOpt\":null,\"DisableDriverMounts\":false,\"NFSShare\":[],\"NFSSharesRoot\":\"/nfsshares\",\"UUID\":\"\",\"NoVTXCheck\":false,\"DNSProxy\":false,\"HostDNSResolver\":true,\"HostOnlyNicType\":\"virtio\",\"NatNicType\":\"virtio\",\"SSHIPAddress\":\"\",\"SSHUser\":\"root\",\"SSHKey\":\"\",\"SSHPort\":22,\"KubernetesConfig\":{\"KubernetesVersion\":\"v1.31.2\",\"ClusterName\":\"ha-310000\",\"Namespace\":\"default\",\"APIServerHAVIP\":\"\",\"APIServerName\":\"minikubeCA\",\"APIServerNames\":null,\"APIServerIPs\":null,\"DNSDomain\":\"cluster.local\",\"ContainerRuntime\":\"docker\",\"CRISocket\":\"\",\"NetworkPlugin\":\"cni\",\"FeatureGates\":\"\",\"ServiceCIDR\":\"10.96.0.0/12\",\"ImageRepository\":\"\",\"LoadBalancerStartIP\":\"\",\"LoadBalancerEndIP\":\"\",\"CustomIngressCert\":\"\",\"RegistryAliases\":\"\",\"ExtraOptions\":null,\"ShouldLoadCachedImages\":true,\"EnableDefaultCNI\":false,\"CNI\":\"\"},\"Nodes\":[{\"Name\":\"\",\"IP\":\"\",\"Port\":8443,\"KubernetesVersion\":\"v1.31.2\",\"ContainerRuntime\":\"docker\",\"ControlPlane\":true,\"Worker\":true}],\"Addons\":null,\"CustomAddonImages\":null,\"CustomAddonRegistries\":null,\"VerifyComponents\":{\"apiserver\":true,\"apps_running\":true,\"default_sa\":true,\"extra\":true,\"kubelet\":true,\"node_ready\":true,\"system_pods\":true},\"StartHostTimeout\":360000000000,\"ScheduledStop\":null,\"ExposedPorts\":[],\"ListenAddress\":\"\",\"Network\":\"socket_vmnet\",\"Subnet\":\"\",\"MultiNodeRequested\":true,\"ExtraDisks\":0,\"CertExpiration\":94608000000000000,\"Mount\":false,\"MountString\":\"/Users:/minikube-host\",\"Mount9PVersion\":\"9p2000.L\",\"MountGID\":\"docker\",\"MountIP\":\"\",\"MountMSize\":262144,\"MountOptions\":[],\"MountPort\":0,\"MountType\":\"9p\",\"MountUID\":\"docker\",\"BinaryMirror\":\"\",\"DisableOptimizations\":false,\"DisableMetrics\":false,\"CustomQemuFirmwarePath\":\"\",\"SocketVMnetClientPath\":\"/opt/socket_vmnet/bin/socket_vmnet_client\",\"SocketVMnetPath\":\"/var/run/socket_vmnet\",\"StaticIP\":\"\",\"SSHAuthSock\":\"\",\"SSHAgentPID\":0,\"GPUs\":\"\",\"AutoPauseInterval\":60000000000},\"Active\":false,\"ActiveKubeContext\":false}]}"*. args: "out/minikube-darwin-arm64 profile list --output json"
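The assertion parses the "profile list --output json" payload quoted above, whose top level is {"invalid":[...],"valid":[...]}; each valid profile carries a Status string and a Config.Nodes array, and the HA checks reduce to reading those two fields. A trimmed-down sketch of that decoding (the struct is limited to the fields the assertion uses, not minikube's full config type):

package main

import (
	"encoding/json"
	"fmt"
)

// profileList models only the parts of the profile-list JSON that the
// assertions above inspect.
type profileList struct {
	Valid []struct {
		Name   string
		Status string
		Config struct {
			Nodes []struct {
				ControlPlane bool
				Worker       bool
			}
		}
	} `json:"valid"`
}

func main() {
	data := []byte(`{"invalid":[],"valid":[{"Name":"ha-310000","Status":"Starting","Config":{"Nodes":[{"ControlPlane":true,"Worker":true}]}}]}`)
	var pl profileList
	if err := json.Unmarshal(data, &pl); err != nil {
		panic(err)
	}
	for _, p := range pl.Valid {
		// The test wanted "Degraded" here; the cluster never left "Starting".
		fmt.Printf("%s: status=%s nodes=%d\n", p.Name, p.Status, len(p.Config.Nodes))
	}
}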
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-310000 -n ha-310000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-310000 -n ha-310000: exit status 7 (34.847166ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "ha-310000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (0.09s)

TestMultiControlPlane/serial/RestartSecondaryNode (52.75s)

=== RUN   TestMultiControlPlane/serial/RestartSecondaryNode
ha_test.go:422: (dbg) Run:  out/minikube-darwin-arm64 -p ha-310000 node start m02 -v=7 --alsologtostderr
ha_test.go:422: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p ha-310000 node start m02 -v=7 --alsologtostderr: exit status 85 (48.300833ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	I1204 15:27:08.278890    8377 out.go:345] Setting OutFile to fd 1 ...
	I1204 15:27:08.279337    8377 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1204 15:27:08.279341    8377 out.go:358] Setting ErrFile to fd 2...
	I1204 15:27:08.279343    8377 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1204 15:27:08.279530    8377 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/20045-6982/.minikube/bin
	I1204 15:27:08.279749    8377 mustload.go:65] Loading cluster: ha-310000
	I1204 15:27:08.279937    8377 config.go:182] Loaded profile config "ha-310000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.2
	I1204 15:27:08.282852    8377 out.go:201] 
	W1204 15:27:08.285820    8377 out.go:270] X Exiting due to GUEST_NODE_RETRIEVE: retrieving node: Could not find node m02
	X Exiting due to GUEST_NODE_RETRIEVE: retrieving node: Could not find node m02
	W1204 15:27:08.285824    8377 out.go:270] * 
	* 
	W1204 15:27:08.287399    8377 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                         │
	│    * If the above advice does not help, please let us know:                                                             │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
	│                                                                                                                         │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
	│    * Please also attach the following file to the GitHub issue:                                                         │
	│    * - /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/minikube_node_6a758bccf1d363a5d0799efcdea444172a621e97_0.log    │
	│                                                                                                                         │
	╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                         │
	│    * If the above advice does not help, please let us know:                                                             │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
	│                                                                                                                         │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
	│    * Please also attach the following file to the GitHub issue:                                                         │
	│    * - /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/minikube_node_6a758bccf1d363a5d0799efcdea444172a621e97_0.log    │
	│                                                                                                                         │
	╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
	I1204 15:27:08.291825    8377 out.go:201] 

** /stderr **
ha_test.go:424: I1204 15:27:08.278890    8377 out.go:345] Setting OutFile to fd 1 ...
I1204 15:27:08.279337    8377 out.go:392] TERM=,COLORTERM=, which probably does not support color
I1204 15:27:08.279341    8377 out.go:358] Setting ErrFile to fd 2...
I1204 15:27:08.279343    8377 out.go:392] TERM=,COLORTERM=, which probably does not support color
I1204 15:27:08.279530    8377 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/20045-6982/.minikube/bin
I1204 15:27:08.279749    8377 mustload.go:65] Loading cluster: ha-310000
I1204 15:27:08.279937    8377 config.go:182] Loaded profile config "ha-310000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.2
I1204 15:27:08.282852    8377 out.go:201] 
W1204 15:27:08.285820    8377 out.go:270] X Exiting due to GUEST_NODE_RETRIEVE: retrieving node: Could not find node m02
X Exiting due to GUEST_NODE_RETRIEVE: retrieving node: Could not find node m02
W1204 15:27:08.285824    8377 out.go:270] * 
* 
W1204 15:27:08.287399    8377 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
│                                                                                                                         │
│    * If the above advice does not help, please let us know:                                                             │
│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
│                                                                                                                         │
│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
│    * Please also attach the following file to the GitHub issue:                                                         │
│    * - /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/minikube_node_6a758bccf1d363a5d0799efcdea444172a621e97_0.log    │
│                                                                                                                         │
╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
│                                                                                                                         │
│    * If the above advice does not help, please let us know:                                                             │
│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
│                                                                                                                         │
│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
│    * Please also attach the following file to the GitHub issue:                                                         │
│    * - /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/minikube_node_6a758bccf1d363a5d0799efcdea444172a621e97_0.log    │
│                                                                                                                         │
╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
I1204 15:27:08.291825    8377 out.go:201] 

ha_test.go:425: secondary control-plane node start returned an error. args "out/minikube-darwin-arm64 -p ha-310000 node start m02 -v=7 --alsologtostderr": exit status 85
ha_test.go:430: (dbg) Run:  out/minikube-darwin-arm64 -p ha-310000 status -v=7 --alsologtostderr
ha_test.go:430: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p ha-310000 status -v=7 --alsologtostderr: exit status 7 (34.435208ms)

-- stdout --
	ha-310000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
** stderr ** 
	I1204 15:27:08.326905    8379 out.go:345] Setting OutFile to fd 1 ...
	I1204 15:27:08.327110    8379 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1204 15:27:08.327113    8379 out.go:358] Setting ErrFile to fd 2...
	I1204 15:27:08.327116    8379 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1204 15:27:08.327235    8379 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/20045-6982/.minikube/bin
	I1204 15:27:08.327349    8379 out.go:352] Setting JSON to false
	I1204 15:27:08.327364    8379 mustload.go:65] Loading cluster: ha-310000
	I1204 15:27:08.327438    8379 notify.go:220] Checking for updates...
	I1204 15:27:08.327584    8379 config.go:182] Loaded profile config "ha-310000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.2
	I1204 15:27:08.327592    8379 status.go:174] checking status of ha-310000 ...
	I1204 15:27:08.327854    8379 status.go:371] ha-310000 host status = "Stopped" (err=<nil>)
	I1204 15:27:08.327858    8379 status.go:384] host is not running, skipping remaining checks
	I1204 15:27:08.327860    8379 status.go:176] ha-310000 status: &{Name:ha-310000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
I1204 15:27:08.328779    7495 retry.go:31] will retry after 1.474507805s: exit status 7
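The retry.go lines here and below show the helper polling status with delays that grow roughly geometrically and unevenly (about 1.5s, 2.1s, 3.1s, 4.2s, 7.2s, 7.5s, 8.1s, 18.3s), i.e. exponential backoff with random jitter. A generic sketch of that pattern, assuming nothing about minikube's actual retry helper beyond what these log lines show:

package main

import (
	"errors"
	"fmt"
	"math/rand"
	"time"
)

// retryExpo retries fn with jittered exponential backoff until it succeeds
// or maxWait elapses; a generic sketch, not minikube's retry.go.
func retryExpo(maxWait time.Duration, fn func() error) error {
	delay := time.Second
	deadline := time.Now().Add(maxWait)
	for {
		err := fn()
		if err == nil {
			return nil
		}
		if time.Now().After(deadline) {
			return err
		}
		// grow the base delay by 1.5x and add up to 50% random jitter
		jittered := delay + time.Duration(rand.Int63n(int64(delay/2)+1))
		fmt.Printf("will retry after %s: %v\n", jittered, err)
		time.Sleep(jittered)
		delay = delay * 3 / 2
	}
}

func main() {
	_ = retryExpo(5*time.Second, func() error { return errors.New("exit status 7") })
}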
ha_test.go:430: (dbg) Run:  out/minikube-darwin-arm64 -p ha-310000 status -v=7 --alsologtostderr
ha_test.go:430: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p ha-310000 status -v=7 --alsologtostderr: exit status 7 (78.283125ms)

-- stdout --
	ha-310000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
** stderr ** 
	I1204 15:27:09.881790    8381 out.go:345] Setting OutFile to fd 1 ...
	I1204 15:27:09.882016    8381 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1204 15:27:09.882021    8381 out.go:358] Setting ErrFile to fd 2...
	I1204 15:27:09.882024    8381 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1204 15:27:09.882192    8381 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/20045-6982/.minikube/bin
	I1204 15:27:09.882346    8381 out.go:352] Setting JSON to false
	I1204 15:27:09.882361    8381 mustload.go:65] Loading cluster: ha-310000
	I1204 15:27:09.882432    8381 notify.go:220] Checking for updates...
	I1204 15:27:09.882628    8381 config.go:182] Loaded profile config "ha-310000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.2
	I1204 15:27:09.882637    8381 status.go:174] checking status of ha-310000 ...
	I1204 15:27:09.882941    8381 status.go:371] ha-310000 host status = "Stopped" (err=<nil>)
	I1204 15:27:09.882945    8381 status.go:384] host is not running, skipping remaining checks
	I1204 15:27:09.882947    8381 status.go:176] ha-310000 status: &{Name:ha-310000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
I1204 15:27:09.883947    7495 retry.go:31] will retry after 2.146065995s: exit status 7
ha_test.go:430: (dbg) Run:  out/minikube-darwin-arm64 -p ha-310000 status -v=7 --alsologtostderr
ha_test.go:430: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p ha-310000 status -v=7 --alsologtostderr: exit status 7 (79.029208ms)

-- stdout --
	ha-310000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
** stderr ** 
	I1204 15:27:12.109320    8385 out.go:345] Setting OutFile to fd 1 ...
	I1204 15:27:12.109558    8385 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1204 15:27:12.109562    8385 out.go:358] Setting ErrFile to fd 2...
	I1204 15:27:12.109566    8385 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1204 15:27:12.109728    8385 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/20045-6982/.minikube/bin
	I1204 15:27:12.109886    8385 out.go:352] Setting JSON to false
	I1204 15:27:12.109903    8385 mustload.go:65] Loading cluster: ha-310000
	I1204 15:27:12.109936    8385 notify.go:220] Checking for updates...
	I1204 15:27:12.110155    8385 config.go:182] Loaded profile config "ha-310000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.2
	I1204 15:27:12.110165    8385 status.go:174] checking status of ha-310000 ...
	I1204 15:27:12.110469    8385 status.go:371] ha-310000 host status = "Stopped" (err=<nil>)
	I1204 15:27:12.110473    8385 status.go:384] host is not running, skipping remaining checks
	I1204 15:27:12.110476    8385 status.go:176] ha-310000 status: &{Name:ha-310000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
I1204 15:27:12.111551    7495 retry.go:31] will retry after 3.081616615s: exit status 7
ha_test.go:430: (dbg) Run:  out/minikube-darwin-arm64 -p ha-310000 status -v=7 --alsologtostderr
ha_test.go:430: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p ha-310000 status -v=7 --alsologtostderr: exit status 7 (78.55075ms)

-- stdout --
	ha-310000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
** stderr ** 
	I1204 15:27:15.270133    8387 out.go:345] Setting OutFile to fd 1 ...
	I1204 15:27:15.270345    8387 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1204 15:27:15.270349    8387 out.go:358] Setting ErrFile to fd 2...
	I1204 15:27:15.270352    8387 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1204 15:27:15.270517    8387 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/20045-6982/.minikube/bin
	I1204 15:27:15.270659    8387 out.go:352] Setting JSON to false
	I1204 15:27:15.270672    8387 mustload.go:65] Loading cluster: ha-310000
	I1204 15:27:15.270702    8387 notify.go:220] Checking for updates...
	I1204 15:27:15.270931    8387 config.go:182] Loaded profile config "ha-310000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.2
	I1204 15:27:15.270940    8387 status.go:174] checking status of ha-310000 ...
	I1204 15:27:15.271240    8387 status.go:371] ha-310000 host status = "Stopped" (err=<nil>)
	I1204 15:27:15.271245    8387 status.go:384] host is not running, skipping remaining checks
	I1204 15:27:15.271248    8387 status.go:176] ha-310000 status: &{Name:ha-310000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
I1204 15:27:15.272368    7495 retry.go:31] will retry after 4.183301258s: exit status 7
ha_test.go:430: (dbg) Run:  out/minikube-darwin-arm64 -p ha-310000 status -v=7 --alsologtostderr
ha_test.go:430: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p ha-310000 status -v=7 --alsologtostderr: exit status 7 (79.005042ms)

-- stdout --
	ha-310000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
** stderr ** 
	I1204 15:27:19.535018    8392 out.go:345] Setting OutFile to fd 1 ...
	I1204 15:27:19.535265    8392 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1204 15:27:19.535270    8392 out.go:358] Setting ErrFile to fd 2...
	I1204 15:27:19.535273    8392 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1204 15:27:19.535437    8392 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/20045-6982/.minikube/bin
	I1204 15:27:19.535596    8392 out.go:352] Setting JSON to false
	I1204 15:27:19.535611    8392 mustload.go:65] Loading cluster: ha-310000
	I1204 15:27:19.535645    8392 notify.go:220] Checking for updates...
	I1204 15:27:19.535862    8392 config.go:182] Loaded profile config "ha-310000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.2
	I1204 15:27:19.535873    8392 status.go:174] checking status of ha-310000 ...
	I1204 15:27:19.536167    8392 status.go:371] ha-310000 host status = "Stopped" (err=<nil>)
	I1204 15:27:19.536171    8392 status.go:384] host is not running, skipping remaining checks
	I1204 15:27:19.536174    8392 status.go:176] ha-310000 status: &{Name:ha-310000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
I1204 15:27:19.537175    7495 retry.go:31] will retry after 7.152051203s: exit status 7
ha_test.go:430: (dbg) Run:  out/minikube-darwin-arm64 -p ha-310000 status -v=7 --alsologtostderr
ha_test.go:430: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p ha-310000 status -v=7 --alsologtostderr: exit status 7 (78.626541ms)

-- stdout --
	ha-310000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
** stderr ** 
	I1204 15:27:26.768313    8400 out.go:345] Setting OutFile to fd 1 ...
	I1204 15:27:26.768550    8400 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1204 15:27:26.768554    8400 out.go:358] Setting ErrFile to fd 2...
	I1204 15:27:26.768557    8400 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1204 15:27:26.768740    8400 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/20045-6982/.minikube/bin
	I1204 15:27:26.768886    8400 out.go:352] Setting JSON to false
	I1204 15:27:26.768899    8400 mustload.go:65] Loading cluster: ha-310000
	I1204 15:27:26.768933    8400 notify.go:220] Checking for updates...
	I1204 15:27:26.769143    8400 config.go:182] Loaded profile config "ha-310000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.2
	I1204 15:27:26.769152    8400 status.go:174] checking status of ha-310000 ...
	I1204 15:27:26.769433    8400 status.go:371] ha-310000 host status = "Stopped" (err=<nil>)
	I1204 15:27:26.769437    8400 status.go:384] host is not running, skipping remaining checks
	I1204 15:27:26.769440    8400 status.go:176] ha-310000 status: &{Name:ha-310000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
I1204 15:27:26.770439    7495 retry.go:31] will retry after 7.469987609s: exit status 7
ha_test.go:430: (dbg) Run:  out/minikube-darwin-arm64 -p ha-310000 status -v=7 --alsologtostderr
ha_test.go:430: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p ha-310000 status -v=7 --alsologtostderr: exit status 7 (76.340083ms)

-- stdout --
	ha-310000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
** stderr ** 
	I1204 15:27:34.317172    8408 out.go:345] Setting OutFile to fd 1 ...
	I1204 15:27:34.317420    8408 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1204 15:27:34.317425    8408 out.go:358] Setting ErrFile to fd 2...
	I1204 15:27:34.317428    8408 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1204 15:27:34.317596    8408 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/20045-6982/.minikube/bin
	I1204 15:27:34.317761    8408 out.go:352] Setting JSON to false
	I1204 15:27:34.317775    8408 mustload.go:65] Loading cluster: ha-310000
	I1204 15:27:34.317824    8408 notify.go:220] Checking for updates...
	I1204 15:27:34.318032    8408 config.go:182] Loaded profile config "ha-310000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.2
	I1204 15:27:34.318041    8408 status.go:174] checking status of ha-310000 ...
	I1204 15:27:34.318348    8408 status.go:371] ha-310000 host status = "Stopped" (err=<nil>)
	I1204 15:27:34.318352    8408 status.go:384] host is not running, skipping remaining checks
	I1204 15:27:34.318355    8408 status.go:176] ha-310000 status: &{Name:ha-310000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
I1204 15:27:34.319348    7495 retry.go:31] will retry after 8.132637036s: exit status 7
ha_test.go:430: (dbg) Run:  out/minikube-darwin-arm64 -p ha-310000 status -v=7 --alsologtostderr
ha_test.go:430: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p ha-310000 status -v=7 --alsologtostderr: exit status 7 (78.809458ms)

-- stdout --
	ha-310000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
** stderr ** 
	I1204 15:27:42.531130    8414 out.go:345] Setting OutFile to fd 1 ...
	I1204 15:27:42.531359    8414 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1204 15:27:42.531363    8414 out.go:358] Setting ErrFile to fd 2...
	I1204 15:27:42.531366    8414 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1204 15:27:42.531526    8414 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/20045-6982/.minikube/bin
	I1204 15:27:42.531672    8414 out.go:352] Setting JSON to false
	I1204 15:27:42.531686    8414 mustload.go:65] Loading cluster: ha-310000
	I1204 15:27:42.531727    8414 notify.go:220] Checking for updates...
	I1204 15:27:42.531937    8414 config.go:182] Loaded profile config "ha-310000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.2
	I1204 15:27:42.531946    8414 status.go:174] checking status of ha-310000 ...
	I1204 15:27:42.532244    8414 status.go:371] ha-310000 host status = "Stopped" (err=<nil>)
	I1204 15:27:42.532249    8414 status.go:384] host is not running, skipping remaining checks
	I1204 15:27:42.532251    8414 status.go:176] ha-310000 status: &{Name:ha-310000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
I1204 15:27:42.533284    7495 retry.go:31] will retry after 18.342763313s: exit status 7
ha_test.go:430: (dbg) Run:  out/minikube-darwin-arm64 -p ha-310000 status -v=7 --alsologtostderr
ha_test.go:430: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p ha-310000 status -v=7 --alsologtostderr: exit status 7 (77.916125ms)

-- stdout --
	ha-310000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
** stderr ** 
	I1204 15:28:00.954552    8433 out.go:345] Setting OutFile to fd 1 ...
	I1204 15:28:00.954763    8433 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1204 15:28:00.954767    8433 out.go:358] Setting ErrFile to fd 2...
	I1204 15:28:00.954770    8433 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1204 15:28:00.954937    8433 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/20045-6982/.minikube/bin
	I1204 15:28:00.955079    8433 out.go:352] Setting JSON to false
	I1204 15:28:00.955092    8433 mustload.go:65] Loading cluster: ha-310000
	I1204 15:28:00.955141    8433 notify.go:220] Checking for updates...
	I1204 15:28:00.955349    8433 config.go:182] Loaded profile config "ha-310000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.2
	I1204 15:28:00.955359    8433 status.go:174] checking status of ha-310000 ...
	I1204 15:28:00.955645    8433 status.go:371] ha-310000 host status = "Stopped" (err=<nil>)
	I1204 15:28:00.955650    8433 status.go:384] host is not running, skipping remaining checks
	I1204 15:28:00.955652    8433 status.go:176] ha-310000 status: &{Name:ha-310000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
ha_test.go:434: failed to run minikube status. args "out/minikube-darwin-arm64 -p ha-310000 status -v=7 --alsologtostderr" : exit status 7
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-310000 -n ha-310000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-310000 -n ha-310000: exit status 7 (36.679791ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "ha-310000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/RestartSecondaryNode (52.75s)

TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (0.09s)

=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart
ha_test.go:281: (dbg) Run:  out/minikube-darwin-arm64 profile list --output json
ha_test.go:305: expected profile "ha-310000" in json of 'profile list' to include 4 nodes but have 1 nodes. got *"{\"invalid\":[],\"valid\":[{\"Name\":\"ha-310000\",\"Status\":\"Starting\",\"Config\":{\"Name\":\"ha-310000\",\"KeepContext\":false,\"EmbedCerts\":false,\"MinikubeISO\":\"https://storage.googleapis.com/minikube-builds/iso/19917/minikube-v1.34.0-1730913550-19917-arm64.iso\",\"KicBaseImage\":\"gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615\",\"Memory\":2200,\"CPUs\":2,\"DiskSize\":20000,\"Driver\":\"qemu2\",\"HyperkitVpnKitSock\":\"\",\"HyperkitVSockPorts\":[],\"DockerEnv\":null,\"ContainerVolumeMounts\":null,\"InsecureRegistry\":null,\"RegistryMirror\":[],\"HostOnlyCIDR\":\"192.168.59.1/24\",\"HypervVirtualSwitch\":\"\",\"HypervUseExternalSwitch\":false,\"HypervExternalAdapter\":\"\",\"KVMNetwork\":\"default\",\"KVMQemuURI\":\"qemu:///system\",\"KVMGPU\":false,\"KVMHidden\":false,\"KVMNUMACount\":1,\"APIServerPort\":8443,\"DockerOpt\":null,\"DisableDriverMounts\":false,\"NFSShare\":[],\"NFSSharesRoot\":\"/nfsshares\",\"UUID\":\"\",\"NoVTXCheck\":false,\"DNSProxy\":false,\"HostDNSResolver\":true,\"HostOnlyNicType\":\"virtio\",\"NatNicType\":\"virtio\",\"SSHIPAddress\":\"\",\"SSHUser\":\"root\",\"SSHKey\":\"\",\"SSHPort\":22,\"KubernetesConfig\":{\"KubernetesVersion\":\"v1.31.2\",\"ClusterName\":\"ha-310000\",\"Namespace\":\"default\",\"APIServerHAVIP\":\"\",\"APIServerName\":\"minikubeCA\",\"APIServerNames\":null,\"APIServerIPs\":null,\"DNSDomain\":\"cluster.local\",\"ContainerRuntime\":\"docker\",\"CRISocket\":\"\",\"NetworkPlugin\":\"cni\",\"FeatureGates\":\"\",\"ServiceCIDR\":\"10.96.0.0/12\",\"ImageRepository\":\"\",\"LoadBalancerStartIP\":\"\",\"LoadBalancerEndIP\":\"\",\"CustomIngressCert\":\"\",\"RegistryAliases\":\"\",\"ExtraOptions\":null,\"ShouldLoadCachedImages\":true,\"EnableDefaultCNI\":false,\"CNI\":\"\"},\"Nodes\":[{\"Name\":\"\",\"IP\":\"\",\"Port\":8443,\"KubernetesVersion\":\"v1.31.2\",\"ContainerRuntime\":\"docker\",\"ControlPlane\":true,\"Worker\":true}],\"Addons\":null,\"CustomAddonImages\":null,\"CustomAddonRegistries\":null,\"VerifyComponents\":{\"apiserver\":true,\"apps_running\":true,\"default_sa\":true,\"extra\":true,\"kubelet\":true,\"node_ready\":true,\"system_pods\":true},\"StartHostTimeout\":360000000000,\"ScheduledStop\":null,\"ExposedPorts\":[],\"ListenAddress\":\"\",\"Network\":\"socket_vmnet\",\"Subnet\":\"\",\"MultiNodeRequested\":true,\"ExtraDisks\":0,\"CertExpiration\":94608000000000000,\"Mount\":false,\"MountString\":\"/Users:/minikube-host\",\"Mount9PVersion\":\"9p2000.L\",\"MountGID\":\"docker\",\"MountIP\":\"\",\"MountMSize\":262144,\"MountOptions\":[],\"MountPort\":0,\"MountType\":\"9p\",\"MountUID\":\"docker\",\"BinaryMirror\":\"\",\"DisableOptimizations\":false,\"DisableMetrics\":false,\"CustomQemuFirmwarePath\":\"\",\"SocketVMnetClientPath\":\"/opt/socket_vmnet/bin/socket_vmnet_client\",\"SocketVMnetPath\":\"/var/run/socket_vmnet\",\"StaticIP\":\"\",\"SSHAuthSock\":\"\",\"SSHAgentPID\":0,\"GPUs\":\"\",\"AutoPauseInterval\":60000000000},\"Active\":false,\"ActiveKubeContext\":false}]}"*. args: "out/minikube-darwin-arm64 profile list --output json"
ha_test.go:309: expected profile "ha-310000" in json of 'profile list' to have "HAppy" status but have "Starting" status. got *"{\"invalid\":[],\"valid\":[{\"Name\":\"ha-310000\",\"Status\":\"Starting\",\"Config\":{\"Name\":\"ha-310000\",\"KeepContext\":false,\"EmbedCerts\":false,\"MinikubeISO\":\"https://storage.googleapis.com/minikube-builds/iso/19917/minikube-v1.34.0-1730913550-19917-arm64.iso\",\"KicBaseImage\":\"gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615\",\"Memory\":2200,\"CPUs\":2,\"DiskSize\":20000,\"Driver\":\"qemu2\",\"HyperkitVpnKitSock\":\"\",\"HyperkitVSockPorts\":[],\"DockerEnv\":null,\"ContainerVolumeMounts\":null,\"InsecureRegistry\":null,\"RegistryMirror\":[],\"HostOnlyCIDR\":\"192.168.59.1/24\",\"HypervVirtualSwitch\":\"\",\"HypervUseExternalSwitch\":false,\"HypervExternalAdapter\":\"\",\"KVMNetwork\":\"default\",\"KVMQemuURI\":\"qemu:///system\",\"KVMGPU\":false,\"KVMHidden\":false,\"KVMNUMACount\":1,\"APIServerPort\":8443,\"DockerOpt\":null,\"DisableDriverMounts\":false,\"NFSShare\":[],\"NFSSharesRoot\":\"/nfsshares\",\"UUID\":\"\",\"NoVTXCheck\":false,\"DNSProxy\":false,\"HostDNSResolver\":true,\"HostOnlyNicType\":\"virtio\",\"NatNicType\":\"virtio\",\"SSHIPAddress\":\"\",\"SSHUser\":\"root\",\"SSHKey\":\"\",\"SSHPort\":22,\"KubernetesConfig\":{\"KubernetesVersion\":\"v1.31.2\",\"ClusterName\":\"ha-310000\",\"Namespace\":\"default\",\"APIServerHAVIP\":\"\",\"APIServerName\":\"minikubeCA\",\"APIServerNames\":null,\"APIServerIPs\":null,\"DNSDomain\":\"cluster.local\",\"ContainerRuntime\":\"docker\",\"CRISocket\":\"\",\"NetworkPlugin\":\"cni\",\"FeatureGates\":\"\",\"ServiceCIDR\":\"10.96.0.0/12\",\"ImageRepository\":\"\",\"LoadBalancerStartIP\":\"\",\"LoadBalancerEndIP\":\"\",\"CustomIngressCert\":\"\",\"RegistryAliases\":\"\",\"ExtraOptions\":null,\"ShouldLoadCachedImages\":true,\"EnableDefaultCNI\":false,\"CNI\":\"\"},\"Nodes\":[{\"Name\":\"\",\"IP\":\"\",\"Port\":8443,\"KubernetesVersion\":\"v1.31.2\",\"ContainerRuntime\":\"docker\",\"ControlPlane\":true,\"Worker\":true}],\"Addons\":null,\"CustomAddonImages\":null,\"CustomAddonRegistries\":null,\"VerifyComponents\":{\"apiserver\":true,\"apps_running\":true,\"default_sa\":true,\"extra\":true,\"kubelet\":true,\"node_ready\":true,\"system_pods\":true},\"StartHostTimeout\":360000000000,\"ScheduledStop\":null,\"ExposedPorts\":[],\"ListenAddress\":\"\",\"Network\":\"socket_vmnet\",\"Subnet\":\"\",\"MultiNodeRequested\":true,\"ExtraDisks\":0,\"CertExpiration\":94608000000000000,\"Mount\":false,\"MountString\":\"/Users:/minikube-host\",\"Mount9PVersion\":\"9p2000.L\",\"MountGID\":\"docker\",\"MountIP\":\"\",\"MountMSize\":262144,\"MountOptions\":[],\"MountPort\":0,\"MountType\":\"9p\",\"MountUID\":\"docker\",\"BinaryMirror\":\"\",\"DisableOptimizations\":false,\"DisableMetrics\":false,\"CustomQemuFirmwarePath\":\"\",\"SocketVMnetClientPath\":\"/opt/socket_vmnet/bin/socket_vmnet_client\",\"SocketVMnetPath\":\"/var/run/socket_vmnet\",\"StaticIP\":\"\",\"SSHAuthSock\":\"\",\"SSHAgentPID\":0,\"GPUs\":\"\",\"AutoPauseInterval\":60000000000},\"Active\":false,\"ActiveKubeContext\":false}]}"*. args: "out/minikube-darwin-arm64 profile list --output json"
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-310000 -n ha-310000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-310000 -n ha-310000: exit status 7 (34.805292ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "ha-310000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (0.09s)

TestMultiControlPlane/serial/RestartClusterKeepsNodes (9.19s)

=== RUN   TestMultiControlPlane/serial/RestartClusterKeepsNodes
ha_test.go:458: (dbg) Run:  out/minikube-darwin-arm64 node list -p ha-310000 -v=7 --alsologtostderr
ha_test.go:464: (dbg) Run:  out/minikube-darwin-arm64 stop -p ha-310000 -v=7 --alsologtostderr
ha_test.go:464: (dbg) Done: out/minikube-darwin-arm64 stop -p ha-310000 -v=7 --alsologtostderr: (3.806998417s)
ha_test.go:469: (dbg) Run:  out/minikube-darwin-arm64 start -p ha-310000 --wait=true -v=7 --alsologtostderr
ha_test.go:469: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p ha-310000 --wait=true -v=7 --alsologtostderr: exit status 80 (5.237833542s)

-- stdout --
	* [ha-310000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	  - MINIKUBE_LOCATION=20045
	  - KUBECONFIG=/Users/jenkins/minikube-integration/20045-6982/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/20045-6982/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting "ha-310000" primary control-plane node in "ha-310000" cluster
	* Restarting existing qemu2 VM for "ha-310000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "ha-310000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
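The repeated 'Failed to connect to "/var/run/socket_vmnet": Connection refused' lines mean nothing was listening on the socket_vmnet unix socket when QEMU tried to attach its network backend, so the VM restart could never bring networking up. A quick probe separates a dead daemon from a missing socket file (path taken from SocketVMnetPath in the profile config logged below; a diagnostic sketch, not part of the test suite):

package main

import (
	"fmt"
	"net"
	"time"
)

func main() {
	// Path from SocketVMnetPath in the profile config in this report.
	const sock = "/var/run/socket_vmnet"
	conn, err := net.DialTimeout("unix", sock, 2*time.Second)
	if err != nil {
		// "connection refused": socket file exists but no daemon behind it;
		// "no such file or directory": socket_vmnet was never started.
		fmt.Println("socket_vmnet unreachable:", err)
		return
	}
	conn.Close()
	fmt.Println("socket_vmnet is accepting connections")
}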
** stderr ** 
	I1204 15:28:04.999563    8466 out.go:345] Setting OutFile to fd 1 ...
	I1204 15:28:04.999769    8466 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1204 15:28:04.999773    8466 out.go:358] Setting ErrFile to fd 2...
	I1204 15:28:04.999776    8466 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1204 15:28:04.999955    8466 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/20045-6982/.minikube/bin
	I1204 15:28:05.001198    8466 out.go:352] Setting JSON to false
	I1204 15:28:05.021281    8466 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":5255,"bootTime":1733349630,"procs":545,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"15.0.1","kernelVersion":"24.0.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W1204 15:28:05.021359    8466 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I1204 15:28:05.026238    8466 out.go:177] * [ha-310000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	I1204 15:28:05.033204    8466 out.go:177]   - MINIKUBE_LOCATION=20045
	I1204 15:28:05.033248    8466 notify.go:220] Checking for updates...
	I1204 15:28:05.040121    8466 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/20045-6982/kubeconfig
	I1204 15:28:05.043116    8466 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I1204 15:28:05.046150    8466 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1204 15:28:05.047501    8466 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/20045-6982/.minikube
	I1204 15:28:05.050126    8466 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1204 15:28:05.053542    8466 config.go:182] Loaded profile config "ha-310000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.2
	I1204 15:28:05.053616    8466 driver.go:394] Setting default libvirt URI to qemu:///system
	I1204 15:28:05.057998    8466 out.go:177] * Using the qemu2 driver based on existing profile
	I1204 15:28:05.065187    8466 start.go:297] selected driver: qemu2
	I1204 15:28:05.065196    8466 start.go:901] validating driver "qemu2" against &{Name:ha-310000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19917/minikube-v1.34.0-1730913550-19917-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:ha-310000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1204 15:28:05.065257    8466 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1204 15:28:05.067883    8466 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1204 15:28:05.067907    8466 cni.go:84] Creating CNI manager for ""
	I1204 15:28:05.067935    8466 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I1204 15:28:05.067993    8466 start.go:340] cluster config:
	{Name:ha-310000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19917/minikube-v1.34.0-1730913550-19917-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:ha-310000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1204 15:28:05.072545    8466 iso.go:125] acquiring lock: {Name:mk798ab7885b095925d55c843b0600e7ea181045 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1204 15:28:05.081136    8466 out.go:177] * Starting "ha-310000" primary control-plane node in "ha-310000" cluster
	I1204 15:28:05.085168    8466 preload.go:131] Checking if preload exists for k8s version v1.31.2 and runtime docker
	I1204 15:28:05.085185    8466 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/20045-6982/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-docker-overlay2-arm64.tar.lz4
	I1204 15:28:05.085197    8466 cache.go:56] Caching tarball of preloaded images
	I1204 15:28:05.085283    8466 preload.go:172] Found /Users/jenkins/minikube-integration/20045-6982/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I1204 15:28:05.085289    8466 cache.go:59] Finished verifying existence of preloaded tar for v1.31.2 on docker
	I1204 15:28:05.085341    8466 profile.go:143] Saving config to /Users/jenkins/minikube-integration/20045-6982/.minikube/profiles/ha-310000/config.json ...
	I1204 15:28:05.085834    8466 start.go:360] acquireMachinesLock for ha-310000: {Name:mkab5eeb7049d26d12029fc21a1411c94c7c7493 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1204 15:28:05.085885    8466 start.go:364] duration metric: took 44.25µs to acquireMachinesLock for "ha-310000"
	I1204 15:28:05.085895    8466 start.go:96] Skipping create...Using existing machine configuration
	I1204 15:28:05.085899    8466 fix.go:54] fixHost starting: 
	I1204 15:28:05.086019    8466 fix.go:112] recreateIfNeeded on ha-310000: state=Stopped err=<nil>
	W1204 15:28:05.086027    8466 fix.go:138] unexpected machine state, will restart: <nil>
	I1204 15:28:05.093186    8466 out.go:177] * Restarting existing qemu2 VM for "ha-310000" ...
	I1204 15:28:05.097117    8466 qemu.go:418] Using hvf for hardware acceleration
	I1204 15:28:05.097157    8466 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/20045-6982/.minikube/machines/ha-310000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/20045-6982/.minikube/machines/ha-310000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/20045-6982/.minikube/machines/ha-310000/qemu.pid -device virtio-net-pci,netdev=net0,mac=42:43:12:47:f1:0b -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/20045-6982/.minikube/machines/ha-310000/disk.qcow2
	I1204 15:28:05.099569    8466 main.go:141] libmachine: STDOUT: 
	I1204 15:28:05.099590    8466 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1204 15:28:05.099619    8466 fix.go:56] duration metric: took 13.718333ms for fixHost
	I1204 15:28:05.099623    8466 start.go:83] releasing machines lock for "ha-310000", held for 13.733875ms
	W1204 15:28:05.099631    8466 start.go:714] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W1204 15:28:05.099672    8466 out.go:270] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1204 15:28:05.099677    8466 start.go:729] Will try again in 5 seconds ...
	I1204 15:28:10.101942    8466 start.go:360] acquireMachinesLock for ha-310000: {Name:mkab5eeb7049d26d12029fc21a1411c94c7c7493 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1204 15:28:10.102490    8466 start.go:364] duration metric: took 411.834µs to acquireMachinesLock for "ha-310000"
	I1204 15:28:10.102686    8466 start.go:96] Skipping create...Using existing machine configuration
	I1204 15:28:10.102707    8466 fix.go:54] fixHost starting: 
	I1204 15:28:10.103444    8466 fix.go:112] recreateIfNeeded on ha-310000: state=Stopped err=<nil>
	W1204 15:28:10.103472    8466 fix.go:138] unexpected machine state, will restart: <nil>
	I1204 15:28:10.113021    8466 out.go:177] * Restarting existing qemu2 VM for "ha-310000" ...
	I1204 15:28:10.117084    8466 qemu.go:418] Using hvf for hardware acceleration
	I1204 15:28:10.117392    8466 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/20045-6982/.minikube/machines/ha-310000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/20045-6982/.minikube/machines/ha-310000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/20045-6982/.minikube/machines/ha-310000/qemu.pid -device virtio-net-pci,netdev=net0,mac=42:43:12:47:f1:0b -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/20045-6982/.minikube/machines/ha-310000/disk.qcow2
	I1204 15:28:10.127475    8466 main.go:141] libmachine: STDOUT: 
	I1204 15:28:10.127529    8466 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1204 15:28:10.127608    8466 fix.go:56] duration metric: took 24.900875ms for fixHost
	I1204 15:28:10.127623    8466 start.go:83] releasing machines lock for "ha-310000", held for 25.082375ms
	W1204 15:28:10.127834    8466 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p ha-310000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p ha-310000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1204 15:28:10.136086    8466 out.go:201] 
	W1204 15:28:10.139961    8466 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W1204 15:28:10.140020    8466 out.go:270] * 
	* 
	W1204 15:28:10.143014    8466 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1204 15:28:10.151048    8466 out.go:201] 

                                                
                                                
** /stderr **
ha_test.go:471: failed to run minikube start. args "out/minikube-darwin-arm64 node list -p ha-310000 -v=7 --alsologtostderr" : exit status 80
ha_test.go:474: (dbg) Run:  out/minikube-darwin-arm64 node list -p ha-310000
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-310000 -n ha-310000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-310000 -n ha-310000: exit status 7 (36.36025ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "ha-310000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/RestartClusterKeepsNodes (9.19s)
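Every restart attempt in this report fails with the same driver error: the qemu2 driver launches QEMU through /opt/socket_vmnet/bin/socket_vmnet_client, which needs a socket_vmnet daemon listening on /var/run/socket_vmnet, and that connection is refused. A minimal Go sketch of that connectivity precondition (an illustration, not part of the test suite; the socket path is taken from the log above):

	// check_socket_vmnet.go - dials the unix socket the qemu2 driver depends on.
	package main

	import (
		"fmt"
		"net"
	)

	func main() {
		conn, err := net.Dial("unix", "/var/run/socket_vmnet")
		if err != nil {
			// With no socket_vmnet daemon running, this prints the same
			// "connection refused" seen throughout this report.
			fmt.Println("socket_vmnet not reachable:", err)
			return
		}
		conn.Close()
		fmt.Println("socket_vmnet is listening")
	}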

                                                
                                    
TestMultiControlPlane/serial/DeleteSecondaryNode (0.11s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DeleteSecondaryNode
ha_test.go:489: (dbg) Run:  out/minikube-darwin-arm64 -p ha-310000 node delete m03 -v=7 --alsologtostderr
ha_test.go:489: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p ha-310000 node delete m03 -v=7 --alsologtostderr: exit status 83 (45.214292ms)

                                                
                                                
-- stdout --
	* The control-plane node ha-310000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p ha-310000"

                                                
                                                
-- /stdout --
** stderr ** 
	I1204 15:28:10.311080    8484 out.go:345] Setting OutFile to fd 1 ...
	I1204 15:28:10.311492    8484 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1204 15:28:10.311496    8484 out.go:358] Setting ErrFile to fd 2...
	I1204 15:28:10.311499    8484 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1204 15:28:10.311668    8484 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/20045-6982/.minikube/bin
	I1204 15:28:10.311904    8484 mustload.go:65] Loading cluster: ha-310000
	I1204 15:28:10.312128    8484 config.go:182] Loaded profile config "ha-310000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.2
	I1204 15:28:10.316149    8484 out.go:177] * The control-plane node ha-310000 host is not running: state=Stopped
	I1204 15:28:10.319168    8484 out.go:177]   To start a cluster, run: "minikube start -p ha-310000"

                                                
                                                
** /stderr **
ha_test.go:491: node delete returned an error. args "out/minikube-darwin-arm64 -p ha-310000 node delete m03 -v=7 --alsologtostderr": exit status 83
ha_test.go:495: (dbg) Run:  out/minikube-darwin-arm64 -p ha-310000 status -v=7 --alsologtostderr
ha_test.go:495: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p ha-310000 status -v=7 --alsologtostderr: exit status 7 (35.068875ms)

                                                
                                                
-- stdout --
	ha-310000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1204 15:28:10.356321    8486 out.go:345] Setting OutFile to fd 1 ...
	I1204 15:28:10.356520    8486 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1204 15:28:10.356523    8486 out.go:358] Setting ErrFile to fd 2...
	I1204 15:28:10.356525    8486 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1204 15:28:10.356666    8486 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/20045-6982/.minikube/bin
	I1204 15:28:10.356793    8486 out.go:352] Setting JSON to false
	I1204 15:28:10.356805    8486 mustload.go:65] Loading cluster: ha-310000
	I1204 15:28:10.356852    8486 notify.go:220] Checking for updates...
	I1204 15:28:10.357014    8486 config.go:182] Loaded profile config "ha-310000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.2
	I1204 15:28:10.357023    8486 status.go:174] checking status of ha-310000 ...
	I1204 15:28:10.357274    8486 status.go:371] ha-310000 host status = "Stopped" (err=<nil>)
	I1204 15:28:10.357277    8486 status.go:384] host is not running, skipping remaining checks
	I1204 15:28:10.357279    8486 status.go:176] ha-310000 status: &{Name:ha-310000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
ha_test.go:497: failed to run minikube status. args "out/minikube-darwin-arm64 -p ha-310000 status -v=7 --alsologtostderr" : exit status 7
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-310000 -n ha-310000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-310000 -n ha-310000: exit status 7 (34.204334ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "ha-310000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/DeleteSecondaryNode (0.11s)

                                                
                                    
TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.09s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete
ha_test.go:392: (dbg) Run:  out/minikube-darwin-arm64 profile list --output json
ha_test.go:415: expected profile "ha-310000" in json of 'profile list' to have "Degraded" status but have "Starting" status. got *"{\"invalid\":[],\"valid\":[{\"Name\":\"ha-310000\",\"Status\":\"Starting\",\"Config\":{\"Name\":\"ha-310000\",\"KeepContext\":false,\"EmbedCerts\":false,\"MinikubeISO\":\"https://storage.googleapis.com/minikube-builds/iso/19917/minikube-v1.34.0-1730913550-19917-arm64.iso\",\"KicBaseImage\":\"gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615\",\"Memory\":2200,\"CPUs\":2,\"DiskSize\":20000,\"Driver\":\"qemu2\",\"HyperkitVpnKitSock\":\"\",\"HyperkitVSockPorts\":[],\"DockerEnv\":null,\"ContainerVolumeMounts\":null,\"InsecureRegistry\":null,\"RegistryMirror\":[],\"HostOnlyCIDR\":\"192.168.59.1/24\",\"HypervVirtualSwitch\":\"\",\"HypervUseExternalSwitch\":false,\"HypervExternalAdapter\":\"\",\"KVMNetwork\":\"default\",\"KVMQemuURI\":\"qemu:///system\",\"KVMGPU\":false,\"KVMHidden\":false,\"KVMNUMACount\":1,\"APIServerPort\":8443,\"DockerOpt\":null,\"DisableDriverMounts\":false,\"NFSShare\":[],\"NFSSharesRoot\":\"/nfsshares\",\"UUID\":\"\",\"NoVTXCheck\":false,\"DNSProxy\":false,\"HostDNSResolver\":true,\"HostOnlyNicType\":\"virtio\",\"NatNicType\":\"virtio\",\"SSHIPAddress\":\"\",\"SSHUser\":\"root\",\"SSHKey\":\"\",\"SSHPort\":22,\"KubernetesConfig\":{\"KubernetesVersion\":\"v1.31.2\",\"ClusterName\":\"ha-310000\",\"Namespace\":\"default\",\"APIServerHAVIP\":\"\",\"APIServerName\":\"minikubeCA\",\"APIServerNames\":null,\"APIServerIPs\":null,\"DNSDomain\":\"cluster.local\",\"ContainerRuntime\":\"docker\",\"CRISocket\":\"\",\"NetworkPlugin\":\"cni\",\"FeatureGates\":\"\",\"ServiceCIDR\":\"10.96.0.0/12\",\"ImageRepository\":\"\",\"LoadBalancerStartIP\":\"\",\"LoadBalancerEndIP\":\"\",\"CustomIngressCert\":\"\",\"RegistryAliases\":\"\",\"ExtraOptions\":null,\"ShouldLoadCachedImages\":true,\"EnableDefaultCNI\":false,\"CNI\":\"\"},\"Nodes\":[{\"Name\":\"\",\"IP\":\"\",\"Port\":8443,\"KubernetesVersion\":\"v1.31.2\",\"ContainerRuntime\":\"docker\",\"ControlPlane\":true,\"Worker\":true}],\"Addons\":null,\"CustomAddonImages\":null,\"CustomAddonRegistries\":null,\"VerifyComponents\":{\"apiserver\":true,\"apps_running\":true,\"default_sa\":true,\"extra\":true,\"kubelet\":true,\"node_ready\":true,\"system_pods\":true},\"StartHostTimeout\":360000000000,\"ScheduledStop\":null,\"ExposedPorts\":[],\"ListenAddress\":\"\",\"Network\":\"socket_vmnet\",\"Subnet\":\"\",\"MultiNodeRequested\":true,\"ExtraDisks\":0,\"CertExpiration\":94608000000000000,\"Mount\":false,\"MountString\":\"/Users:/minikube-host\",\"Mount9PVersion\":\"9p2000.L\",\"MountGID\":\"docker\",\"MountIP\":\"\",\"MountMSize\":262144,\"MountOptions\":[],\"MountPort\":0,\"MountType\":\"9p\",\"MountUID\":\"docker\",\"BinaryMirror\":\"\",\"DisableOptimizations\":false,\"DisableMetrics\":false,\"CustomQemuFirmwarePath\":\"\",\"SocketVMnetClientPath\":\"/opt/socket_vmnet/bin/socket_vmnet_client\",\"SocketVMnetPath\":\"/var/run/socket_vmnet\",\"StaticIP\":\"\",\"SSHAuthSock\":\"\",\"SSHAgentPID\":0,\"GPUs\":\"\",\"AutoPauseInterval\":60000000000},\"Active\":false,\"ActiveKubeContext\":false}]}"*. args: "out/minikube-darwin-arm64 profile list --output json"
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-310000 -n ha-310000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-310000 -n ha-310000: exit status 7 (34.969542ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "ha-310000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.09s)
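The "Degraded" assertion above reads the Status field from `minikube profile list --output json`, whose full payload is dumped in the failure message. A hedged sketch of that kind of check (not the actual ha_test.go code; the binary path matches this run, and only the fields the check needs are decoded):

	package main

	import (
		"encoding/json"
		"fmt"
		"os/exec"
	)

	// profileList mirrors just the fields read from `profile list --output json`.
	type profileList struct {
		Valid []struct {
			Name   string `json:"Name"`
			Status string `json:"Status"`
		} `json:"valid"`
	}

	func main() {
		out, err := exec.Command("out/minikube-darwin-arm64", "profile", "list", "--output", "json").Output()
		if err != nil {
			fmt.Println("profile list failed:", err)
			return
		}
		var pl profileList
		if err := json.Unmarshal(out, &pl); err != nil {
			fmt.Println("unexpected JSON:", err)
			return
		}
		for _, p := range pl.Valid {
			// The test wants "Degraded" here; this report shows "Starting".
			fmt.Printf("%s: %s\n", p.Name, p.Status)
		}
	}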

                                                
                                    
TestMultiControlPlane/serial/StopCluster (3.57s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/StopCluster
ha_test.go:533: (dbg) Run:  out/minikube-darwin-arm64 -p ha-310000 stop -v=7 --alsologtostderr
ha_test.go:533: (dbg) Done: out/minikube-darwin-arm64 -p ha-310000 stop -v=7 --alsologtostderr: (3.461120875s)
ha_test.go:539: (dbg) Run:  out/minikube-darwin-arm64 -p ha-310000 status -v=7 --alsologtostderr
ha_test.go:539: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p ha-310000 status -v=7 --alsologtostderr: exit status 7 (72.054583ms)

                                                
                                                
-- stdout --
	ha-310000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1204 15:28:14.011241    8515 out.go:345] Setting OutFile to fd 1 ...
	I1204 15:28:14.011495    8515 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1204 15:28:14.011500    8515 out.go:358] Setting ErrFile to fd 2...
	I1204 15:28:14.011502    8515 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1204 15:28:14.011679    8515 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/20045-6982/.minikube/bin
	I1204 15:28:14.011824    8515 out.go:352] Setting JSON to false
	I1204 15:28:14.011838    8515 mustload.go:65] Loading cluster: ha-310000
	I1204 15:28:14.011880    8515 notify.go:220] Checking for updates...
	I1204 15:28:14.012115    8515 config.go:182] Loaded profile config "ha-310000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.2
	I1204 15:28:14.012122    8515 status.go:174] checking status of ha-310000 ...
	I1204 15:28:14.012413    8515 status.go:371] ha-310000 host status = "Stopped" (err=<nil>)
	I1204 15:28:14.012417    8515 status.go:384] host is not running, skipping remaining checks
	I1204 15:28:14.012419    8515 status.go:176] ha-310000 status: &{Name:ha-310000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
ha_test.go:545: status says not two control-plane nodes are present: args "out/minikube-darwin-arm64 -p ha-310000 status -v=7 --alsologtostderr": ha-310000
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped

                                                
                                                
ha_test.go:551: status says not three kubelets are stopped: args "out/minikube-darwin-arm64 -p ha-310000 status -v=7 --alsologtostderr": ha-310000
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped

                                                
                                                
ha_test.go:554: status says not two apiservers are stopped: args "out/minikube-darwin-arm64 -p ha-310000 status -v=7 --alsologtostderr": ha-310000
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped

                                                
                                                
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-310000 -n ha-310000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-310000 -n ha-310000: exit status 7 (36.365041ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "ha-310000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/StopCluster (3.57s)
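The three assertions above count component lines in the status output: the HA cluster this test stops should report two control planes, three stopped kubelets, and two stopped apiservers, but only the single ha-310000 node ever came up. A minimal sketch of that counting against the stdout block above (an illustration, not the suite's code):

	package main

	import (
		"fmt"
		"strings"
	)

	func main() {
		// Single-node status block copied from the report above.
		status := "ha-310000\ntype: Control Plane\nhost: Stopped\nkubelet: Stopped\napiserver: Stopped\nkubeconfig: Stopped\n"

		// The test expects 3 stopped kubelets and 2 stopped apiservers.
		fmt.Println("stopped kubelets:", strings.Count(status, "kubelet: Stopped"))
		fmt.Println("stopped apiservers:", strings.Count(status, "apiserver: Stopped"))
	}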

                                                
                                    
TestMultiControlPlane/serial/RestartCluster (5.26s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/RestartCluster
ha_test.go:562: (dbg) Run:  out/minikube-darwin-arm64 start -p ha-310000 --wait=true -v=7 --alsologtostderr --driver=qemu2 
ha_test.go:562: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p ha-310000 --wait=true -v=7 --alsologtostderr --driver=qemu2 : exit status 80 (5.189260292s)

                                                
                                                
-- stdout --
	* [ha-310000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	  - MINIKUBE_LOCATION=20045
	  - KUBECONFIG=/Users/jenkins/minikube-integration/20045-6982/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/20045-6982/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting "ha-310000" primary control-plane node in "ha-310000" cluster
	* Restarting existing qemu2 VM for "ha-310000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "ha-310000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1204 15:28:14.082907    8519 out.go:345] Setting OutFile to fd 1 ...
	I1204 15:28:14.083064    8519 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1204 15:28:14.083067    8519 out.go:358] Setting ErrFile to fd 2...
	I1204 15:28:14.083069    8519 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1204 15:28:14.083216    8519 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/20045-6982/.minikube/bin
	I1204 15:28:14.084313    8519 out.go:352] Setting JSON to false
	I1204 15:28:14.102012    8519 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":5264,"bootTime":1733349630,"procs":543,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"15.0.1","kernelVersion":"24.0.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W1204 15:28:14.102083    8519 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I1204 15:28:14.106367    8519 out.go:177] * [ha-310000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	I1204 15:28:14.113348    8519 out.go:177]   - MINIKUBE_LOCATION=20045
	I1204 15:28:14.113415    8519 notify.go:220] Checking for updates...
	I1204 15:28:14.121327    8519 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/20045-6982/kubeconfig
	I1204 15:28:14.124262    8519 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I1204 15:28:14.127307    8519 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1204 15:28:14.130289    8519 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/20045-6982/.minikube
	I1204 15:28:14.131629    8519 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1204 15:28:14.134631    8519 config.go:182] Loaded profile config "ha-310000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.2
	I1204 15:28:14.134913    8519 driver.go:394] Setting default libvirt URI to qemu:///system
	I1204 15:28:14.138283    8519 out.go:177] * Using the qemu2 driver based on existing profile
	I1204 15:28:14.143297    8519 start.go:297] selected driver: qemu2
	I1204 15:28:14.143303    8519 start.go:901] validating driver "qemu2" against &{Name:ha-310000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19917/minikube-v1.34.0-1730913550-19917-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:ha-310000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1204 15:28:14.143361    8519 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1204 15:28:14.145764    8519 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1204 15:28:14.145787    8519 cni.go:84] Creating CNI manager for ""
	I1204 15:28:14.145818    8519 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I1204 15:28:14.145863    8519 start.go:340] cluster config:
	{Name:ha-310000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19917/minikube-v1.34.0-1730913550-19917-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:ha-310000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1204 15:28:14.150093    8519 iso.go:125] acquiring lock: {Name:mk798ab7885b095925d55c843b0600e7ea181045 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1204 15:28:14.158248    8519 out.go:177] * Starting "ha-310000" primary control-plane node in "ha-310000" cluster
	I1204 15:28:14.162307    8519 preload.go:131] Checking if preload exists for k8s version v1.31.2 and runtime docker
	I1204 15:28:14.162329    8519 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/20045-6982/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-docker-overlay2-arm64.tar.lz4
	I1204 15:28:14.162339    8519 cache.go:56] Caching tarball of preloaded images
	I1204 15:28:14.162402    8519 preload.go:172] Found /Users/jenkins/minikube-integration/20045-6982/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I1204 15:28:14.162408    8519 cache.go:59] Finished verifying existence of preloaded tar for v1.31.2 on docker
	I1204 15:28:14.162482    8519 profile.go:143] Saving config to /Users/jenkins/minikube-integration/20045-6982/.minikube/profiles/ha-310000/config.json ...
	I1204 15:28:14.163037    8519 start.go:360] acquireMachinesLock for ha-310000: {Name:mkab5eeb7049d26d12029fc21a1411c94c7c7493 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1204 15:28:14.163070    8519 start.go:364] duration metric: took 26.042µs to acquireMachinesLock for "ha-310000"
	I1204 15:28:14.163081    8519 start.go:96] Skipping create...Using existing machine configuration
	I1204 15:28:14.163085    8519 fix.go:54] fixHost starting: 
	I1204 15:28:14.163207    8519 fix.go:112] recreateIfNeeded on ha-310000: state=Stopped err=<nil>
	W1204 15:28:14.163214    8519 fix.go:138] unexpected machine state, will restart: <nil>
	I1204 15:28:14.171266    8519 out.go:177] * Restarting existing qemu2 VM for "ha-310000" ...
	I1204 15:28:14.175290    8519 qemu.go:418] Using hvf for hardware acceleration
	I1204 15:28:14.175325    8519 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/20045-6982/.minikube/machines/ha-310000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/20045-6982/.minikube/machines/ha-310000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/20045-6982/.minikube/machines/ha-310000/qemu.pid -device virtio-net-pci,netdev=net0,mac=42:43:12:47:f1:0b -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/20045-6982/.minikube/machines/ha-310000/disk.qcow2
	I1204 15:28:14.177648    8519 main.go:141] libmachine: STDOUT: 
	I1204 15:28:14.177667    8519 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1204 15:28:14.177697    8519 fix.go:56] duration metric: took 14.609875ms for fixHost
	I1204 15:28:14.177702    8519 start.go:83] releasing machines lock for "ha-310000", held for 14.626542ms
	W1204 15:28:14.177709    8519 start.go:714] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W1204 15:28:14.177746    8519 out.go:270] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1204 15:28:14.177751    8519 start.go:729] Will try again in 5 seconds ...
	I1204 15:28:19.180081    8519 start.go:360] acquireMachinesLock for ha-310000: {Name:mkab5eeb7049d26d12029fc21a1411c94c7c7493 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1204 15:28:19.180551    8519 start.go:364] duration metric: took 330.375µs to acquireMachinesLock for "ha-310000"
	I1204 15:28:19.180694    8519 start.go:96] Skipping create...Using existing machine configuration
	I1204 15:28:19.180717    8519 fix.go:54] fixHost starting: 
	I1204 15:28:19.181497    8519 fix.go:112] recreateIfNeeded on ha-310000: state=Stopped err=<nil>
	W1204 15:28:19.181524    8519 fix.go:138] unexpected machine state, will restart: <nil>
	I1204 15:28:19.186137    8519 out.go:177] * Restarting existing qemu2 VM for "ha-310000" ...
	I1204 15:28:19.192989    8519 qemu.go:418] Using hvf for hardware acceleration
	I1204 15:28:19.193257    8519 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/20045-6982/.minikube/machines/ha-310000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/20045-6982/.minikube/machines/ha-310000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/20045-6982/.minikube/machines/ha-310000/qemu.pid -device virtio-net-pci,netdev=net0,mac=42:43:12:47:f1:0b -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/20045-6982/.minikube/machines/ha-310000/disk.qcow2
	I1204 15:28:19.203491    8519 main.go:141] libmachine: STDOUT: 
	I1204 15:28:19.203573    8519 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1204 15:28:19.203680    8519 fix.go:56] duration metric: took 22.965084ms for fixHost
	I1204 15:28:19.203700    8519 start.go:83] releasing machines lock for "ha-310000", held for 23.124292ms
	W1204 15:28:19.203948    8519 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p ha-310000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p ha-310000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1204 15:28:19.211971    8519 out.go:201] 
	W1204 15:28:19.216082    8519 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W1204 15:28:19.216120    8519 out.go:270] * 
	* 
	W1204 15:28:19.218475    8519 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1204 15:28:19.225954    8519 out.go:201] 

                                                
                                                
** /stderr **
ha_test.go:564: failed to start cluster. args "out/minikube-darwin-arm64 start -p ha-310000 --wait=true -v=7 --alsologtostderr --driver=qemu2 " : exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-310000 -n ha-310000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-310000 -n ha-310000: exit status 7 (73.898792ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "ha-310000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/RestartCluster (5.26s)

                                                
                                    
TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.09s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DegradedAfterClusterRestart
ha_test.go:392: (dbg) Run:  out/minikube-darwin-arm64 profile list --output json
ha_test.go:415: expected profile "ha-310000" in json of 'profile list' to have "Degraded" status but have "Starting" status. got *"{\"invalid\":[],\"valid\":[{\"Name\":\"ha-310000\",\"Status\":\"Starting\",\"Config\":{\"Name\":\"ha-310000\",\"KeepContext\":false,\"EmbedCerts\":false,\"MinikubeISO\":\"https://storage.googleapis.com/minikube-builds/iso/19917/minikube-v1.34.0-1730913550-19917-arm64.iso\",\"KicBaseImage\":\"gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615\",\"Memory\":2200,\"CPUs\":2,\"DiskSize\":20000,\"Driver\":\"qemu2\",\"HyperkitVpnKitSock\":\"\",\"HyperkitVSockPorts\":[],\"DockerEnv\":null,\"ContainerVolumeMounts\":null,\"InsecureRegistry\":null,\"RegistryMirror\":[],\"HostOnlyCIDR\":\"192.168.59.1/24\",\"HypervVirtualSwitch\":\"\",\"HypervUseExternalSwitch\":false,\"HypervExternalAdapter\":\"\",\"KVMNetwork\":\"default\",\"KVMQemuURI\":\"qemu:///system\",\"KVMGPU\":false,\"KVMHidden\":false,\"KVMNUMACount\":1,\"APIServerPort\":8443,\"DockerOpt\":null,\"DisableDriverMounts\":false,\"NFSShare\":[],\"NFSSharesRoot\":\"/nfsshares\",\"UUID\":\"\",\"NoVTXCheck\":false,\"DNSProxy\":false,\"HostDNSResolver\":true,\"HostOnlyNicType\":\"virtio\",\"NatNicType\":\"virtio\",\"SSHIPAddress\":\"\",\"SSHUser\":\"root\",\"SSHKey\":\"\",\"SSHPort\":22,\"KubernetesConfig\":{\"KubernetesVersion\":\"v1.31.2\",\"ClusterName\":\"ha-310000\",\"Namespace\":\"default\",\"APIServerHAVIP\":\"\",\"APIServerName\":\"minikubeCA\",\"APIServerNames\":null,\"APIServerIPs\":null,\"DNSDomain\":\"cluster.local\",\"ContainerRuntime\":\"docker\",\"CRISocket\":\"\",\"NetworkPlugin\":\"cni\",\"FeatureGates\":\"\",\"ServiceCIDR\":\"10.96.0.0/12\",\"ImageRepository\":\"\",\"LoadBalancerStartIP\":\"\",\"LoadBalancerEndIP\":\"\",\"CustomIngressCert\":\"\",\"RegistryAliases\":\"\",\"ExtraOptions\":null,\"ShouldLoadCachedImages\":true,\"EnableDefaultCNI\":false,\"CNI\":\"\"},\"Nodes\":[{\"Name\":\"\",\"IP\":\"\",\"Port\":8443,\"KubernetesVersion\":\"v1.31.2\",\"ContainerRuntime\":\"docker\",\"ControlPlane\":true,\"Worker\":true}],\"Addons\":null,\"CustomAddonImages\":null,\"CustomAddonRegistries\":null,\"VerifyComponents\":{\"apiserver\":true,\"apps_running\":true,\"default_sa\":true,\"extra\":true,\"kubelet\":true,\"node_ready\":true,\"system_pods\":true},\"StartHostTimeout\":360000000000,\"ScheduledStop\":null,\"ExposedPorts\":[],\"ListenAddress\":\"\",\"Network\":\"socket_vmnet\",\"Subnet\":\"\",\"MultiNodeRequested\":true,\"ExtraDisks\":0,\"CertExpiration\":94608000000000000,\"Mount\":false,\"MountString\":\"/Users:/minikube-host\",\"Mount9PVersion\":\"9p2000.L\",\"MountGID\":\"docker\",\"MountIP\":\"\",\"MountMSize\":262144,\"MountOptions\":[],\"MountPort\":0,\"MountType\":\"9p\",\"MountUID\":\"docker\",\"BinaryMirror\":\"\",\"DisableOptimizations\":false,\"DisableMetrics\":false,\"CustomQemuFirmwarePath\":\"\",\"SocketVMnetClientPath\":\"/opt/socket_vmnet/bin/socket_vmnet_client\",\"SocketVMnetPath\":\"/var/run/socket_vmnet\",\"StaticIP\":\"\",\"SSHAuthSock\":\"\",\"SSHAgentPID\":0,\"GPUs\":\"\",\"AutoPauseInterval\":60000000000},\"Active\":false,\"ActiveKubeContext\":false}]}"*. args: "out/minikube-darwin-arm64 profile list --output json"
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-310000 -n ha-310000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-310000 -n ha-310000: exit status 7 (35.144292ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "ha-310000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.09s)

                                                
                                    
TestMultiControlPlane/serial/AddSecondaryNode (0.08s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/AddSecondaryNode
ha_test.go:607: (dbg) Run:  out/minikube-darwin-arm64 node add -p ha-310000 --control-plane -v=7 --alsologtostderr
ha_test.go:607: (dbg) Non-zero exit: out/minikube-darwin-arm64 node add -p ha-310000 --control-plane -v=7 --alsologtostderr: exit status 83 (47.587458ms)

                                                
                                                
-- stdout --
	* The control-plane node ha-310000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p ha-310000"

                                                
                                                
-- /stdout --
** stderr ** 
	I1204 15:28:19.437304    8538 out.go:345] Setting OutFile to fd 1 ...
	I1204 15:28:19.437476    8538 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1204 15:28:19.437480    8538 out.go:358] Setting ErrFile to fd 2...
	I1204 15:28:19.437482    8538 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1204 15:28:19.437599    8538 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/20045-6982/.minikube/bin
	I1204 15:28:19.437831    8538 mustload.go:65] Loading cluster: ha-310000
	I1204 15:28:19.438044    8538 config.go:182] Loaded profile config "ha-310000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.2
	I1204 15:28:19.442587    8538 out.go:177] * The control-plane node ha-310000 host is not running: state=Stopped
	I1204 15:28:19.446645    8538 out.go:177]   To start a cluster, run: "minikube start -p ha-310000"

                                                
                                                
** /stderr **
ha_test.go:609: failed to add control-plane node to current ha (multi-control plane) cluster. args "out/minikube-darwin-arm64 node add -p ha-310000 --control-plane -v=7 --alsologtostderr" : exit status 83
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-310000 -n ha-310000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-310000 -n ha-310000: exit status 7 (35.053625ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "ha-310000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/AddSecondaryNode (0.08s)

                                                
                                    
TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (0.09s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd
ha_test.go:281: (dbg) Run:  out/minikube-darwin-arm64 profile list --output json
ha_test.go:305: expected profile "ha-310000" in json of 'profile list' to include 4 nodes but have 1 nodes. got *"{\"invalid\":[],\"valid\":[{\"Name\":\"ha-310000\",\"Status\":\"Starting\",\"Config\":{\"Name\":\"ha-310000\",\"KeepContext\":false,\"EmbedCerts\":false,\"MinikubeISO\":\"https://storage.googleapis.com/minikube-builds/iso/19917/minikube-v1.34.0-1730913550-19917-arm64.iso\",\"KicBaseImage\":\"gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615\",\"Memory\":2200,\"CPUs\":2,\"DiskSize\":20000,\"Driver\":\"qemu2\",\"HyperkitVpnKitSock\":\"\",\"HyperkitVSockPorts\":[],\"DockerEnv\":null,\"ContainerVolumeMounts\":null,\"InsecureRegistry\":null,\"RegistryMirror\":[],\"HostOnlyCIDR\":\"192.168.59.1/24\",\"HypervVirtualSwitch\":\"\",\"HypervUseExternalSwitch\":false,\"HypervExternalAdapter\":\"\",\"KVMNetwork\":\"default\",\"KVMQemuURI\":\"qemu:///system\",\"KVMGPU\":false,\"KVMHidden\":false,\"KVMNUMACount\":1,\"APIServerPort\":8443,\"DockerOpt\":null,\"DisableDriverMounts\":false,\"NFSShare\":[],\"NFSSharesRoot\":\"/nfsshares\",\"UUID\":\"\",\"NoVTXCheck\":false,\"DNSProxy\":false,\"HostDNSResolver\":true,\"HostOnlyNicType\":\"virtio\",\"NatNicType\":\"virtio\",\"SSHIPAddress\":\"\",\"SSHUser\":\"root\",\"SSHKey\":\"\",\"SSHPort\":22,\"KubernetesConfig\":{\"KubernetesVersion\":\"v1.31.2\",\"ClusterName\":\"ha-310000\",\"Namespace\":\"default\",\"APIServerHAVIP\":\"\",\"APIServerName\":\"minikubeCA\",\"APIServerNames\":null,\"APIServerIPs\":null,\"DNSDomain\":\"cluster.local\",\"ContainerRuntime\":\"docker\",\"CRISocket\":\"\",\"NetworkPlugin\":\"cni\",\"FeatureGates\":\"\",\"ServiceCIDR\":\"10.96.0.0/12\",\"ImageRepository\":\"\",\"LoadBalancerStartIP\":\"\",\"LoadBalancerEndIP\":\"\",\"CustomIngressCert\":\"\",\"RegistryAliases\":\"\",\"ExtraOptions\":null,\"ShouldLoadCachedImages\":true,\"EnableDefaultCNI\":false,\"CNI\":\"\"},\"Nodes\":[{\"Name\":\"\",\"IP\":\"\",\"Port\":8443,\"KubernetesVersion\":\"v1.31.2\",\"ContainerRuntime\":\"docker\",\"ControlPlane\":true,\"Worker\":true}],\"Addons\":null,\"CustomAddonImages\":null,\"CustomAddonRegistries\":null,\"VerifyComponents\":{\"apiserver\":true,\"apps_running\":true,\"default_sa\":true,\"extra\":true,\"kubelet\":true,\"node_ready\":true,\"system_pods\":true},\"StartHostTimeout\":360000000000,\"ScheduledStop\":null,\"ExposedPorts\":[],\"ListenAddress\":\"\",\"Network\":\"socket_vmnet\",\"Subnet\":\"\",\"MultiNodeRequested\":true,\"ExtraDisks\":0,\"CertExpiration\":94608000000000000,\"Mount\":false,\"MountString\":\"/Users:/minikube-host\",\"Mount9PVersion\":\"9p2000.L\",\"MountGID\":\"docker\",\"MountIP\":\"\",\"MountMSize\":262144,\"MountOptions\":[],\"MountPort\":0,\"MountType\":\"9p\",\"MountUID\":\"docker\",\"BinaryMirror\":\"\",\"DisableOptimizations\":false,\"DisableMetrics\":false,\"CustomQemuFirmwarePath\":\"\",\"SocketVMnetClientPath\":\"/opt/socket_vmnet/bin/socket_vmnet_client\",\"SocketVMnetPath\":\"/var/run/socket_vmnet\",\"StaticIP\":\"\",\"SSHAuthSock\":\"\",\"SSHAgentPID\":0,\"GPUs\":\"\",\"AutoPauseInterval\":60000000000},\"Active\":false,\"ActiveKubeContext\":false}]}"*. args: "out/minikube-darwin-arm64 profile list --output json"
ha_test.go:309: expected profile "ha-310000" in json of 'profile list' to have "HAppy" status but have "Starting" status. got *"{\"invalid\":[],\"valid\":[{\"Name\":\"ha-310000\",\"Status\":\"Starting\",\"Config\":{\"Name\":\"ha-310000\",\"KeepContext\":false,\"EmbedCerts\":false,\"MinikubeISO\":\"https://storage.googleapis.com/minikube-builds/iso/19917/minikube-v1.34.0-1730913550-19917-arm64.iso\",\"KicBaseImage\":\"gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615\",\"Memory\":2200,\"CPUs\":2,\"DiskSize\":20000,\"Driver\":\"qemu2\",\"HyperkitVpnKitSock\":\"\",\"HyperkitVSockPorts\":[],\"DockerEnv\":null,\"ContainerVolumeMounts\":null,\"InsecureRegistry\":null,\"RegistryMirror\":[],\"HostOnlyCIDR\":\"192.168.59.1/24\",\"HypervVirtualSwitch\":\"\",\"HypervUseExternalSwitch\":false,\"HypervExternalAdapter\":\"\",\"KVMNetwork\":\"default\",\"KVMQemuURI\":\"qemu:///system\",\"KVMGPU\":false,\"KVMHidden\":false,\"KVMNUMACount\":1,\"APIServerPort\":8443,\"DockerOpt\":null,\"DisableDriverMounts\":false,\"NFSShare\":[],\"NFSSharesRoot\":\"/nfsshares\",\"UUID\":\"\",\"NoVTXCheck\":false,\"DNSProxy\":false,\"HostDNSResolver\":true,\"HostOnlyNicType\":\"virtio\",\"NatNicType\":\"virtio\",\"SSHIPAddress\":\"\",\"SSHUser\":\"root\",\"SSHKey\":\"\",\"SSHPort\":22,\"KubernetesConfig\":{\"KubernetesVersion\":\"v1.31.2\",\"ClusterName\":\"ha-310000\",\"Namespace\":\"default\",\"APIServerHAVIP\":\"\",\"APIServerName\":\"minikubeCA\",\"APIServerNames\":null,\"APIServerIPs\":null,\"DNSDomain\":\"cluster.local\",\"ContainerRuntime\":\"docker\",\"CRISocket\":\"\",\"NetworkPlugin\":\"cni\",\"FeatureGates\":\"\",\"ServiceCIDR\":\"10.96.0.0/12\",\"ImageRepository\":\"\",\"LoadBalancerStartIP\":\"\",\"LoadBalancerEndIP\":\"\",\"CustomIngressCert\":\"\",\"RegistryAliases\":\"\",\"ExtraOptions\":null,\"ShouldLoadCachedImages\":true,\"EnableDefaultCNI\":false,\"CNI\":\"\"},\"Nodes\":[{\"Name\":\"\",\"IP\":\"\",\"Port\":8443,\"KubernetesVersion\":\"v1.31.2\",\"ContainerRuntime\":\"docker\",\"ControlPlane\":true,\"Worker\":true}],\"Addons\":null,\"CustomAddonImages\":null,\"CustomAddonRegistries\":null,\"VerifyComponents\":{\"apiserver\":true,\"apps_running\":true,\"default_sa\":true,\"extra\":true,\"kubelet\":true,\"node_ready\":true,\"system_pods\":true},\"StartHostTimeout\":360000000000,\"ScheduledStop\":null,\"ExposedPorts\":[],\"ListenAddress\":\"\",\"Network\":\"socket_vmnet\",\"Subnet\":\"\",\"MultiNodeRequested\":true,\"ExtraDisks\":0,\"CertExpiration\":94608000000000000,\"Mount\":false,\"MountString\":\"/Users:/minikube-host\",\"Mount9PVersion\":\"9p2000.L\",\"MountGID\":\"docker\",\"MountIP\":\"\",\"MountMSize\":262144,\"MountOptions\":[],\"MountPort\":0,\"MountType\":\"9p\",\"MountUID\":\"docker\",\"BinaryMirror\":\"\",\"DisableOptimizations\":false,\"DisableMetrics\":false,\"CustomQemuFirmwarePath\":\"\",\"SocketVMnetClientPath\":\"/opt/socket_vmnet/bin/socket_vmnet_client\",\"SocketVMnetPath\":\"/var/run/socket_vmnet\",\"StaticIP\":\"\",\"SSHAuthSock\":\"\",\"SSHAgentPID\":0,\"GPUs\":\"\",\"AutoPauseInterval\":60000000000},\"Active\":false,\"ActiveKubeContext\":false}]}"*. args: "out/minikube-darwin-arm64 profile list --output json"
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-310000 -n ha-310000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-310000 -n ha-310000: exit status 7 (34.478709ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "ha-310000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (0.09s)
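
The check failing here, at ha_test.go:309, reduces to decoding the output of "profile list --output json" and comparing the Status field of the ha-310000 profile ("HAppy" is the expected healthy state; this run never gets past "Starting"). A minimal sketch of that assertion, using a trimmed-down struct of my own rather than the test's actual types:

package main

import (
	"encoding/json"
	"fmt"
	"os/exec"
)

// profileList mirrors just the JSON keys visible in the log output above
// ("valid", "Name", "Status"); the real profile config is far larger.
type profileList struct {
	Valid []struct {
		Name   string `json:"Name"`
		Status string `json:"Status"`
	} `json:"valid"`
}

func main() {
	out, err := exec.Command("out/minikube-darwin-arm64", "profile", "list", "--output", "json").Output()
	if err != nil {
		fmt.Println("profile list failed:", err)
		return
	}
	var pl profileList
	if err := json.Unmarshal(out, &pl); err != nil {
		fmt.Println("decoding profile list:", err)
		return
	}
	for _, p := range pl.Valid {
		if p.Name == "ha-310000" && p.Status != "HAppy" {
			fmt.Printf("expected %q to have HAppy status, got %q\n", p.Name, p.Status)
		}
	}
}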

TestImageBuild/serial/Setup (9.91s)

=== RUN   TestImageBuild/serial/Setup
image_test.go:69: (dbg) Run:  out/minikube-darwin-arm64 start -p image-457000 --driver=qemu2 
image_test.go:69: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p image-457000 --driver=qemu2 : exit status 80 (9.839502s)

-- stdout --
	* [image-457000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	  - MINIKUBE_LOCATION=20045
	  - KUBECONFIG=/Users/jenkins/minikube-integration/20045-6982/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/20045-6982/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "image-457000" primary control-plane node in "image-457000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "image-457000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p image-457000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
image_test.go:70: failed to start minikube with args: "out/minikube-darwin-arm64 start -p image-457000 --driver=qemu2 " : exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p image-457000 -n image-457000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p image-457000 -n image-457000: exit status 7 (72.553709ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "image-457000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestImageBuild/serial/Setup (9.91s)
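
Every start failure in this run bottoms out in the same line: the qemu2 driver cannot reach the socket_vmnet daemon at /var/run/socket_vmnet. A minimal sketch of that probe, assuming only the socket path shown as SocketVMnetPath in the config dumps above; dialing the unix socket reproduces the exact error when no daemon is listening, which points at the daemon on the Jenkins host rather than at minikube itself:

package main

import (
	"fmt"
	"net"
	"time"
)

func main() {
	const sock = "/var/run/socket_vmnet" // SocketVMnetPath from the cluster config
	conn, err := net.DialTimeout("unix", sock, 2*time.Second)
	if err != nil {
		// With nothing listening on the socket this prints something like
		// "dial unix /var/run/socket_vmnet: connect: connection refused",
		// matching the ERROR lines in every start attempt above.
		fmt.Println("socket_vmnet not reachable:", err)
		return
	}
	conn.Close()
	fmt.Println("socket_vmnet is accepting connections")
}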

TestJSONOutput/start/Command (9.89s)

=== RUN   TestJSONOutput/start/Command
json_output_test.go:63: (dbg) Run:  out/minikube-darwin-arm64 start -p json-output-118000 --output=json --user=testUser --memory=2200 --wait=true --driver=qemu2 
json_output_test.go:63: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p json-output-118000 --output=json --user=testUser --memory=2200 --wait=true --driver=qemu2 : exit status 80 (9.884788417s)

-- stdout --
	{"specversion":"1.0","id":"1a19e4c7-1e93-48f2-bc42-e4b650486e8b","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[json-output-118000] minikube v1.34.0 on Darwin 15.0.1 (arm64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"54eb9aad-cb2e-4627-811a-959441a14b1d","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=20045"}}
	{"specversion":"1.0","id":"9f94d44c-0bf1-45ef-b31c-df246cba83f6","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/Users/jenkins/minikube-integration/20045-6982/kubeconfig"}}
	{"specversion":"1.0","id":"986a5a25-ed3a-4918-ad80-a363373ae0e6","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-darwin-arm64"}}
	{"specversion":"1.0","id":"3d40949e-ee14-4a38-9dda-f6aa4e321589","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"064f9ee5-0816-4ade-92ce-e959deb4a315","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/Users/jenkins/minikube-integration/20045-6982/.minikube"}}
	{"specversion":"1.0","id":"f2513bba-39a0-43f7-b91e-d6b814e61ec7","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"199e670d-a232-4d48-8b32-30a8c25703b7","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"1","message":"Using the qemu2 driver based on user configuration","name":"Selecting Driver","totalsteps":"19"}}
	{"specversion":"1.0","id":"a1ec6308-7b17-45f8-9087-71b6cf960cf1","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"Automatically selected the socket_vmnet network"}}
	{"specversion":"1.0","id":"a45d218b-2af5-4059-a6d4-c98fa143fb70","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"3","message":"Starting \"json-output-118000\" primary control-plane node in \"json-output-118000\" cluster","name":"Starting Node","totalsteps":"19"}}
	{"specversion":"1.0","id":"a57dd7fb-92e5-430a-ac5d-e691daeca34c","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"9","message":"Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...","name":"Creating VM","totalsteps":"19"}}
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	{"specversion":"1.0","id":"d9b0cd3a-0a42-4825-a58a-0580ad01a488","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"9","message":"Deleting \"json-output-118000\" in qemu2 ...","name":"Creating VM","totalsteps":"19"}}
	{"specversion":"1.0","id":"10e27087-67bc-48c4-aac5-87b5e3a6102b","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"message":"StartHost failed, but will try again: creating host: create: creating: Failed to connect to \"/var/run/socket_vmnet\": Connection refused: exit status 1"}}
	{"specversion":"1.0","id":"109744b5-e89b-448f-8314-c320fc74d4f5","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"9","message":"Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...","name":"Creating VM","totalsteps":"19"}}
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	{"specversion":"1.0","id":"342ab93b-d40e-45bd-aa70-ea7ad7796cca","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"message":"Failed to start qemu2 VM. Running \"minikube delete -p json-output-118000\" may fix it: creating host: create: creating: Failed to connect to \"/var/run/socket_vmnet\": Connection refused: exit status 1"}}
	{"specversion":"1.0","id":"6590dd84-c081-4f86-8a9d-1d6d969d6e2d","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"80","issues":"","message":"error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to \"/var/run/socket_vmnet\": Connection refused: exit status 1","name":"GUEST_PROVISION","url":""}}
	{"specversion":"1.0","id":"0ae704af-1447-4fb3-8c30-7faa9d421210","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"message":"╭───────────────────────────────────────────────────────────────────────────────────────────╮\n│                                                                                           │\n│    If the above advice does not help, please let us know:                                 │\n│    https://github.com/kubernetes/minikube/issues/new/choose                               │\n│                                                                                           │\n│    Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │\n│
│\n╰───────────────────────────────────────────────────────────────────────────────────────────╯"}}

-- /stdout --
json_output_test.go:65: failed to clean up: args "out/minikube-darwin-arm64 start -p json-output-118000 --output=json --user=testUser --memory=2200 --wait=true --driver=qemu2 ": exit status 80
json_output_test.go:213: unable to marshal output: OUTPUT: 
json_output_test.go:70: converting to cloud events: invalid character 'O' looking for beginning of value
--- FAIL: TestJSONOutput/start/Command (9.89s)
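
The secondary failure at json_output_test.go:70 follows mechanically from the primary one: the test decodes stdout line by line as CloudEvents JSON, and the bare "OUTPUT:" / "ERROR:" lines that the failed VM launch interleaves with the events are not JSON, so the first of them aborts the decode with exactly the logged message. A minimal sketch of that parse step (the line-by-line decoding is my reading of the test, inferred from the error; the input lines are taken from the output above):

package main

import (
	"bufio"
	"encoding/json"
	"fmt"
	"strings"
)

func main() {
	// Two event lines with a raw qemu error interleaved, as in the stdout above.
	stdout := `{"specversion":"1.0","type":"io.k8s.sigs.minikube.step"}
OUTPUT: 
ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused`

	sc := bufio.NewScanner(strings.NewReader(stdout))
	for sc.Scan() {
		var ev map[string]any
		if err := json.Unmarshal(sc.Bytes(), &ev); err != nil {
			// Prints: converting to cloud events: invalid character 'O' looking for beginning of value
			fmt.Println("converting to cloud events:", err)
			return
		}
		fmt.Println("event type:", ev["type"])
	}
}

The unpause failure below is the same class of problem: with the host stopped, the command falls back to plain "*"-prefixed text, which trips the decoder with invalid character '*' instead.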

TestJSONOutput/pause/Command (0.09s)

=== RUN   TestJSONOutput/pause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-darwin-arm64 pause -p json-output-118000 --output=json --user=testUser
json_output_test.go:63: (dbg) Non-zero exit: out/minikube-darwin-arm64 pause -p json-output-118000 --output=json --user=testUser: exit status 83 (85.238291ms)

-- stdout --
	{"specversion":"1.0","id":"3d99ee46-8cf9-448a-9549-7cd4429083fc","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"The control-plane node json-output-118000 host is not running: state=Stopped"}}
	{"specversion":"1.0","id":"a737b102-9654-477f-9a91-a0e4949e34f9","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"To start a cluster, run: \"minikube start -p json-output-118000\""}}

-- /stdout --
json_output_test.go:65: failed to clean up: args "out/minikube-darwin-arm64 pause -p json-output-118000 --output=json --user=testUser": exit status 83
--- FAIL: TestJSONOutput/pause/Command (0.09s)

TestJSONOutput/unpause/Command (0.05s)

=== RUN   TestJSONOutput/unpause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-darwin-arm64 unpause -p json-output-118000 --output=json --user=testUser
json_output_test.go:63: (dbg) Non-zero exit: out/minikube-darwin-arm64 unpause -p json-output-118000 --output=json --user=testUser: exit status 83 (47.163583ms)

-- stdout --
	* The control-plane node json-output-118000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p json-output-118000"

-- /stdout --
json_output_test.go:65: failed to clean up: args "out/minikube-darwin-arm64 unpause -p json-output-118000 --output=json --user=testUser": exit status 83
json_output_test.go:213: unable to marshal output: * The control-plane node json-output-118000 host is not running: state=Stopped
json_output_test.go:70: converting to cloud events: invalid character '*' looking for beginning of value
--- FAIL: TestJSONOutput/unpause/Command (0.05s)

TestMinikubeProfile (10.28s)

=== RUN   TestMinikubeProfile
minikube_profile_test.go:44: (dbg) Run:  out/minikube-darwin-arm64 start -p first-518000 --driver=qemu2 
minikube_profile_test.go:44: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p first-518000 --driver=qemu2 : exit status 80 (9.968767417s)

-- stdout --
	* [first-518000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	  - MINIKUBE_LOCATION=20045
	  - KUBECONFIG=/Users/jenkins/minikube-integration/20045-6982/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/20045-6982/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "first-518000" primary control-plane node in "first-518000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "first-518000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p first-518000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
minikube_profile_test.go:46: test pre-condition failed. args "out/minikube-darwin-arm64 start -p first-518000 --driver=qemu2 ": exit status 80
panic.go:629: *** TestMinikubeProfile FAILED at 2024-12-04 15:28:53.699802 -0800 PST m=+478.729318626
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p second-520000 -n second-520000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p second-520000 -n second-520000: exit status 85 (83.088ms)

-- stdout --
	* Profile "second-520000" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p second-520000"

-- /stdout --
helpers_test.go:239: status error: exit status 85 (may be ok)
helpers_test.go:241: "second-520000" host is not running, skipping log retrieval (state="* Profile \"second-520000\" not found. Run \"minikube profile list\" to view all profiles.\n  To start a cluster, run: \"minikube start -p second-520000\"")
helpers_test.go:175: Cleaning up "second-520000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p second-520000
panic.go:629: *** TestMinikubeProfile FAILED at 2024-12-04 15:28:53.896988 -0800 PST m=+478.926503293
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p first-518000 -n first-518000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p first-518000 -n first-518000: exit status 7 (34.87425ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "first-518000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:175: Cleaning up "first-518000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p first-518000
--- FAIL: TestMinikubeProfile (10.28s)

TestMountStart/serial/StartWithMountFirst (9.99s)

=== RUN   TestMountStart/serial/StartWithMountFirst
mount_start_test.go:98: (dbg) Run:  out/minikube-darwin-arm64 start -p mount-start-1-720000 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=qemu2 
mount_start_test.go:98: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p mount-start-1-720000 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=qemu2 : exit status 80 (9.919146666s)

-- stdout --
	* [mount-start-1-720000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	  - MINIKUBE_LOCATION=20045
	  - KUBECONFIG=/Users/jenkins/minikube-integration/20045-6982/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/20045-6982/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting minikube without Kubernetes in cluster mount-start-1-720000
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "mount-start-1-720000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p mount-start-1-720000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
mount_start_test.go:100: failed to start minikube with args: "out/minikube-darwin-arm64 start -p mount-start-1-720000 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=qemu2 " : exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p mount-start-1-720000 -n mount-start-1-720000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p mount-start-1-720000 -n mount-start-1-720000: exit status 7 (73.170875ms)

-- stdout --
	Stopped

                                                
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "mount-start-1-720000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMountStart/serial/StartWithMountFirst (9.99s)

TestMultiNode/serial/FreshStart2Nodes (9.99s)

=== RUN   TestMultiNode/serial/FreshStart2Nodes
multinode_test.go:96: (dbg) Run:  out/minikube-darwin-arm64 start -p multinode-093000 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=qemu2 
multinode_test.go:96: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p multinode-093000 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=qemu2 : exit status 80 (9.910716292s)

-- stdout --
	* [multinode-093000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	  - MINIKUBE_LOCATION=20045
	  - KUBECONFIG=/Users/jenkins/minikube-integration/20045-6982/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/20045-6982/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "multinode-093000" primary control-plane node in "multinode-093000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "multinode-093000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I1204 15:29:04.241877    8735 out.go:345] Setting OutFile to fd 1 ...
	I1204 15:29:04.242047    8735 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1204 15:29:04.242050    8735 out.go:358] Setting ErrFile to fd 2...
	I1204 15:29:04.242053    8735 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1204 15:29:04.242183    8735 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/20045-6982/.minikube/bin
	I1204 15:29:04.243335    8735 out.go:352] Setting JSON to false
	I1204 15:29:04.261021    8735 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":5314,"bootTime":1733349630,"procs":545,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"15.0.1","kernelVersion":"24.0.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W1204 15:29:04.261100    8735 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I1204 15:29:04.267178    8735 out.go:177] * [multinode-093000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	I1204 15:29:04.275089    8735 out.go:177]   - MINIKUBE_LOCATION=20045
	I1204 15:29:04.275152    8735 notify.go:220] Checking for updates...
	I1204 15:29:04.283026    8735 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/20045-6982/kubeconfig
	I1204 15:29:04.286050    8735 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I1204 15:29:04.289110    8735 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1204 15:29:04.290534    8735 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/20045-6982/.minikube
	I1204 15:29:04.294099    8735 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1204 15:29:04.297273    8735 driver.go:394] Setting default libvirt URI to qemu:///system
	I1204 15:29:04.301909    8735 out.go:177] * Using the qemu2 driver based on user configuration
	I1204 15:29:04.309118    8735 start.go:297] selected driver: qemu2
	I1204 15:29:04.309125    8735 start.go:901] validating driver "qemu2" against <nil>
	I1204 15:29:04.309132    8735 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1204 15:29:04.311648    8735 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I1204 15:29:04.315933    8735 out.go:177] * Automatically selected the socket_vmnet network
	I1204 15:29:04.319174    8735 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1204 15:29:04.319196    8735 cni.go:84] Creating CNI manager for ""
	I1204 15:29:04.319224    8735 cni.go:136] multinode detected (0 nodes found), recommending kindnet
	I1204 15:29:04.319230    8735 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I1204 15:29:04.319265    8735 start.go:340] cluster config:
	{Name:multinode-093000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:multinode-093000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRu
ntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vm
net_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1204 15:29:04.324159    8735 iso.go:125] acquiring lock: {Name:mk798ab7885b095925d55c843b0600e7ea181045 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1204 15:29:04.332091    8735 out.go:177] * Starting "multinode-093000" primary control-plane node in "multinode-093000" cluster
	I1204 15:29:04.336090    8735 preload.go:131] Checking if preload exists for k8s version v1.31.2 and runtime docker
	I1204 15:29:04.336106    8735 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/20045-6982/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-docker-overlay2-arm64.tar.lz4
	I1204 15:29:04.336114    8735 cache.go:56] Caching tarball of preloaded images
	I1204 15:29:04.336186    8735 preload.go:172] Found /Users/jenkins/minikube-integration/20045-6982/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I1204 15:29:04.336192    8735 cache.go:59] Finished verifying existence of preloaded tar for v1.31.2 on docker
	I1204 15:29:04.336406    8735 profile.go:143] Saving config to /Users/jenkins/minikube-integration/20045-6982/.minikube/profiles/multinode-093000/config.json ...
	I1204 15:29:04.336418    8735 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/20045-6982/.minikube/profiles/multinode-093000/config.json: {Name:mk0f29aa2948d07b7427ab2525370f56eaf81ecd Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1204 15:29:04.336882    8735 start.go:360] acquireMachinesLock for multinode-093000: {Name:mkab5eeb7049d26d12029fc21a1411c94c7c7493 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1204 15:29:04.336934    8735 start.go:364] duration metric: took 46.5µs to acquireMachinesLock for "multinode-093000"
	I1204 15:29:04.336948    8735 start.go:93] Provisioning new machine with config: &{Name:multinode-093000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19917/minikube-v1.34.0-1730913550-19917-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{K
ubernetesVersion:v1.31.2 ClusterName:multinode-093000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[
] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1204 15:29:04.336985    8735 start.go:125] createHost starting for "" (driver="qemu2")
	I1204 15:29:04.342079    8735 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I1204 15:29:04.360225    8735 start.go:159] libmachine.API.Create for "multinode-093000" (driver="qemu2")
	I1204 15:29:04.360252    8735 client.go:168] LocalClient.Create starting
	I1204 15:29:04.360327    8735 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/20045-6982/.minikube/certs/ca.pem
	I1204 15:29:04.360367    8735 main.go:141] libmachine: Decoding PEM data...
	I1204 15:29:04.360382    8735 main.go:141] libmachine: Parsing certificate...
	I1204 15:29:04.360419    8735 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/20045-6982/.minikube/certs/cert.pem
	I1204 15:29:04.360452    8735 main.go:141] libmachine: Decoding PEM data...
	I1204 15:29:04.360462    8735 main.go:141] libmachine: Parsing certificate...
	I1204 15:29:04.360908    8735 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/20045-6982/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/20045-6982/.minikube/cache/iso/arm64/minikube-v1.34.0-1730913550-19917-arm64.iso...
	I1204 15:29:04.521428    8735 main.go:141] libmachine: Creating SSH key...
	I1204 15:29:04.686471    8735 main.go:141] libmachine: Creating Disk image...
	I1204 15:29:04.686479    8735 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I1204 15:29:04.686712    8735 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/20045-6982/.minikube/machines/multinode-093000/disk.qcow2.raw /Users/jenkins/minikube-integration/20045-6982/.minikube/machines/multinode-093000/disk.qcow2
	I1204 15:29:04.696980    8735 main.go:141] libmachine: STDOUT: 
	I1204 15:29:04.696995    8735 main.go:141] libmachine: STDERR: 
	I1204 15:29:04.697054    8735 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/20045-6982/.minikube/machines/multinode-093000/disk.qcow2 +20000M
	I1204 15:29:04.705521    8735 main.go:141] libmachine: STDOUT: Image resized.
	
	I1204 15:29:04.705537    8735 main.go:141] libmachine: STDERR: 
	I1204 15:29:04.705557    8735 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/20045-6982/.minikube/machines/multinode-093000/disk.qcow2.raw and /Users/jenkins/minikube-integration/20045-6982/.minikube/machines/multinode-093000/disk.qcow2
	I1204 15:29:04.705562    8735 main.go:141] libmachine: Starting QEMU VM...
	I1204 15:29:04.705575    8735 qemu.go:418] Using hvf for hardware acceleration
	I1204 15:29:04.705604    8735 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/20045-6982/.minikube/machines/multinode-093000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/20045-6982/.minikube/machines/multinode-093000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/20045-6982/.minikube/machines/multinode-093000/qemu.pid -device virtio-net-pci,netdev=net0,mac=2a:4b:00:fa:98:96 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/20045-6982/.minikube/machines/multinode-093000/disk.qcow2
	I1204 15:29:04.707426    8735 main.go:141] libmachine: STDOUT: 
	I1204 15:29:04.707443    8735 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1204 15:29:04.707464    8735 client.go:171] duration metric: took 347.203375ms to LocalClient.Create
	I1204 15:29:06.709699    8735 start.go:128] duration metric: took 2.372658834s to createHost
	I1204 15:29:06.709780    8735 start.go:83] releasing machines lock for "multinode-093000", held for 2.372813583s
	W1204 15:29:06.709966    8735 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1204 15:29:06.728595    8735 out.go:177] * Deleting "multinode-093000" in qemu2 ...
	W1204 15:29:06.757592    8735 out.go:270] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1204 15:29:06.757666    8735 start.go:729] Will try again in 5 seconds ...
	I1204 15:29:11.759925    8735 start.go:360] acquireMachinesLock for multinode-093000: {Name:mkab5eeb7049d26d12029fc21a1411c94c7c7493 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1204 15:29:11.760596    8735 start.go:364] duration metric: took 524.625µs to acquireMachinesLock for "multinode-093000"
	I1204 15:29:11.760739    8735 start.go:93] Provisioning new machine with config: &{Name:multinode-093000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19917/minikube-v1.34.0-1730913550-19917-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{K
ubernetesVersion:v1.31.2 ClusterName:multinode-093000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[
] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1204 15:29:11.761039    8735 start.go:125] createHost starting for "" (driver="qemu2")
	I1204 15:29:11.779193    8735 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I1204 15:29:11.829380    8735 start.go:159] libmachine.API.Create for "multinode-093000" (driver="qemu2")
	I1204 15:29:11.829443    8735 client.go:168] LocalClient.Create starting
	I1204 15:29:11.829579    8735 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/20045-6982/.minikube/certs/ca.pem
	I1204 15:29:11.829656    8735 main.go:141] libmachine: Decoding PEM data...
	I1204 15:29:11.829673    8735 main.go:141] libmachine: Parsing certificate...
	I1204 15:29:11.829736    8735 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/20045-6982/.minikube/certs/cert.pem
	I1204 15:29:11.829791    8735 main.go:141] libmachine: Decoding PEM data...
	I1204 15:29:11.829801    8735 main.go:141] libmachine: Parsing certificate...
	I1204 15:29:11.830736    8735 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/20045-6982/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/20045-6982/.minikube/cache/iso/arm64/minikube-v1.34.0-1730913550-19917-arm64.iso...
	I1204 15:29:12.003495    8735 main.go:141] libmachine: Creating SSH key...
	I1204 15:29:12.043519    8735 main.go:141] libmachine: Creating Disk image...
	I1204 15:29:12.043524    8735 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I1204 15:29:12.043707    8735 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/20045-6982/.minikube/machines/multinode-093000/disk.qcow2.raw /Users/jenkins/minikube-integration/20045-6982/.minikube/machines/multinode-093000/disk.qcow2
	I1204 15:29:12.053277    8735 main.go:141] libmachine: STDOUT: 
	I1204 15:29:12.053299    8735 main.go:141] libmachine: STDERR: 
	I1204 15:29:12.053355    8735 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/20045-6982/.minikube/machines/multinode-093000/disk.qcow2 +20000M
	I1204 15:29:12.061668    8735 main.go:141] libmachine: STDOUT: Image resized.
	
	I1204 15:29:12.061681    8735 main.go:141] libmachine: STDERR: 
	I1204 15:29:12.061722    8735 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/20045-6982/.minikube/machines/multinode-093000/disk.qcow2.raw and /Users/jenkins/minikube-integration/20045-6982/.minikube/machines/multinode-093000/disk.qcow2
	I1204 15:29:12.061742    8735 main.go:141] libmachine: Starting QEMU VM...
	I1204 15:29:12.061749    8735 qemu.go:418] Using hvf for hardware acceleration
	I1204 15:29:12.061775    8735 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/20045-6982/.minikube/machines/multinode-093000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/20045-6982/.minikube/machines/multinode-093000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/20045-6982/.minikube/machines/multinode-093000/qemu.pid -device virtio-net-pci,netdev=net0,mac=9e:b5:e3:08:6b:ff -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/20045-6982/.minikube/machines/multinode-093000/disk.qcow2
	I1204 15:29:12.063441    8735 main.go:141] libmachine: STDOUT: 
	I1204 15:29:12.063452    8735 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1204 15:29:12.063469    8735 client.go:171] duration metric: took 234.019084ms to LocalClient.Create
	I1204 15:29:14.065699    8735 start.go:128] duration metric: took 2.304591375s to createHost
	I1204 15:29:14.065777    8735 start.go:83] releasing machines lock for "multinode-093000", held for 2.305133625s
	W1204 15:29:14.066119    8735 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p multinode-093000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p multinode-093000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1204 15:29:14.082906    8735 out.go:201] 
	W1204 15:29:14.087895    8735 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W1204 15:29:14.087948    8735 out.go:270] * 
	* 
	W1204 15:29:14.090403    8735 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1204 15:29:14.106863    8735 out.go:201] 

** /stderr **
multinode_test.go:98: failed to start cluster. args "out/minikube-darwin-arm64 start -p multinode-093000 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=qemu2 " : exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-093000 -n multinode-093000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-093000 -n multinode-093000: exit status 7 (73.187125ms)

-- stdout --
	Stopped

                                                
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-093000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/FreshStart2Nodes (9.99s)
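
The verbose log above also shows how the VM network is wired: libmachine does not point qemu at the socket path directly but runs socket_vmnet_client, which connects to /var/run/socket_vmnet and hands the connected socket to qemu as inherited file descriptor 3, hence "-netdev socket,id=net0,fd=3" in the command line. A minimal sketch of that fd-passing pattern in Go (an illustration of the mechanism, not minikube's actual driver code):

package main

import (
	"fmt"
	"net"
	"os"
	"os/exec"
)

func main() {
	conn, err := net.Dial("unix", "/var/run/socket_vmnet")
	if err != nil {
		fmt.Println("dial:", err) // the step this whole run fails at
		return
	}
	defer conn.Close()

	// Dup the connected socket as an *os.File so the child can inherit it.
	f, err := conn.(*net.UnixConn).File()
	if err != nil {
		fmt.Println("file:", err)
		return
	}
	defer f.Close()

	cmd := exec.Command("qemu-system-aarch64", "-netdev", "socket,id=net0,fd=3")
	cmd.ExtraFiles = []*os.File{f} // ExtraFiles[0] becomes fd 3 in the child
	fmt.Println("would exec:", cmd.Args)
}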

TestMultiNode/serial/DeployApp2Nodes (102.09s)

=== RUN   TestMultiNode/serial/DeployApp2Nodes
multinode_test.go:493: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-093000 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml
multinode_test.go:493: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-093000 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml: exit status 1 (64.459917ms)

** stderr ** 
	error: cluster "multinode-093000" does not exist

** /stderr **
multinode_test.go:495: failed to create busybox deployment to multinode cluster
multinode_test.go:498: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-093000 -- rollout status deployment/busybox
multinode_test.go:498: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-093000 -- rollout status deployment/busybox: exit status 1 (61.516125ms)

** stderr ** 
	error: no server found for cluster "multinode-093000"

** /stderr **
multinode_test.go:500: failed to deploy busybox to multinode cluster
multinode_test.go:505: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-093000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:505: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-093000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (62.317792ms)

** stderr ** 
	error: no server found for cluster "multinode-093000"

** /stderr **
multinode_test.go:508: failed to retrieve Pod IPs (may be temporary): exit status 1
I1204 15:29:14.384301    7495 retry.go:31] will retry after 1.176558217s: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:505: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-093000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:505: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-093000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (108.138584ms)

** stderr ** 
	error: no server found for cluster "multinode-093000"

** /stderr **
multinode_test.go:508: failed to retrieve Pod IPs (may be temporary): exit status 1
I1204 15:29:15.670111    7495 retry.go:31] will retry after 1.739906817s: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:505: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-093000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:505: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-093000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (110.216875ms)

** stderr ** 
	error: no server found for cluster "multinode-093000"

** /stderr **
multinode_test.go:508: failed to retrieve Pod IPs (may be temporary): exit status 1
I1204 15:29:17.522679    7495 retry.go:31] will retry after 2.810816882s: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:505: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-093000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:505: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-093000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (107.938ms)

** stderr ** 
	error: no server found for cluster "multinode-093000"

                                                
                                                
** /stderr **
multinode_test.go:508: failed to retrieve Pod IPs (may be temporary): exit status 1
I1204 15:29:20.443963    7495 retry.go:31] will retry after 3.110276906s: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:505: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-093000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:505: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-093000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (108.248792ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "multinode-093000"

                                                
                                                
** /stderr **
multinode_test.go:508: failed to retrieve Pod IPs (may be temporary): exit status 1
I1204 15:29:23.664967    7495 retry.go:31] will retry after 5.415496277s: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:505: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-093000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:505: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-093000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (106.294208ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "multinode-093000"

                                                
                                                
** /stderr **
multinode_test.go:508: failed to retrieve Pod IPs (may be temporary): exit status 1
I1204 15:29:29.189304    7495 retry.go:31] will retry after 10.635193048s: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:505: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-093000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:505: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-093000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (107.826792ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "multinode-093000"

                                                
                                                
** /stderr **
multinode_test.go:508: failed to retrieve Pod IPs (may be temporary): exit status 1
I1204 15:29:39.934825    7495 retry.go:31] will retry after 9.056060782s: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:505: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-093000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:505: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-093000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (109.236458ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "multinode-093000"

                                                
                                                
** /stderr **
multinode_test.go:508: failed to retrieve Pod IPs (may be temporary): exit status 1
I1204 15:29:49.102668    7495 retry.go:31] will retry after 13.682327439s: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:505: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-093000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:505: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-093000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (102.914292ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "multinode-093000"

                                                
                                                
** /stderr **
multinode_test.go:508: failed to retrieve Pod IPs (may be temporary): exit status 1
I1204 15:30:02.890957    7495 retry.go:31] will retry after 31.408867046s: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:505: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-093000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:505: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-093000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (107.903959ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "multinode-093000"

                                                
                                                
** /stderr **
multinode_test.go:508: failed to retrieve Pod IPs (may be temporary): exit status 1
I1204 15:30:34.410542    7495 retry.go:31] will retry after 21.483106503s: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:505: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-093000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:505: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-093000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (108.163042ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "multinode-093000"

                                                
                                                
** /stderr **
multinode_test.go:508: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:524: failed to resolve pod IPs: failed to retrieve Pod IPs (may be temporary): exit status 1
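The retry.go:31 entries above come from the test-side retry helper, which waits progressively longer between attempts until a deadline passes. A minimal sketch of that retry-with-backoff pattern, assuming a simplified signature rather than minikube's actual helper API:

package main

import (
	"fmt"
	"time"
)

// retryExpo is a hypothetical stand-in for the helper behind the
// "will retry after ..." log lines: call f, and on failure wait an
// increasing delay before trying again, until maxTime is exhausted.
func retryExpo(f func() error, initial, maxTime time.Duration) error {
	deadline := time.Now().Add(maxTime)
	delay := initial
	var err error
	for time.Now().Before(deadline) {
		if err = f(); err == nil {
			return nil
		}
		fmt.Printf("will retry after %v: %v\n", delay, err)
		time.Sleep(delay)
		delay *= 2 // grow the wait roughly exponentially
	}
	return err
}

func main() {
	attempts := 0
	err := retryExpo(func() error {
		attempts++
		if attempts < 4 {
			return fmt.Errorf("failed to retrieve Pod IPs (may be temporary)")
		}
		return nil
	}, 100*time.Millisecond, 5*time.Second)
	fmt.Println("result:", err)
}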
multinode_test.go:528: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-093000 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:528: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-093000 -- get pods -o jsonpath='{.items[*].metadata.name}': exit status 1 (61.317125ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "multinode-093000"

                                                
                                                
** /stderr **
multinode_test.go:530: failed to get Pod names
multinode_test.go:536: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-093000 -- exec  -- nslookup kubernetes.io
multinode_test.go:536: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-093000 -- exec  -- nslookup kubernetes.io: exit status 1 (62.45775ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "multinode-093000"

                                                
                                                
** /stderr **
multinode_test.go:538: Pod  could not resolve 'kubernetes.io': exit status 1
multinode_test.go:546: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-093000 -- exec  -- nslookup kubernetes.default
multinode_test.go:546: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-093000 -- exec  -- nslookup kubernetes.default: exit status 1 (63.107834ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "multinode-093000"

                                                
                                                
** /stderr **
multinode_test.go:548: Pod  could not resolve 'kubernetes.default': exit status 1
multinode_test.go:554: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-093000 -- exec  -- nslookup kubernetes.default.svc.cluster.local
multinode_test.go:554: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-093000 -- exec  -- nslookup kubernetes.default.svc.cluster.local: exit status 1 (62.248417ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "multinode-093000"

                                                
                                                
** /stderr **
multinode_test.go:556: Pod  could not resolve local service (kubernetes.default.svc.cluster.local): exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-093000 -n multinode-093000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-093000 -n multinode-093000: exit status 7 (35.35325ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-093000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/DeployApp2Nodes (102.09s)

                                                
                                    
TestMultiNode/serial/PingHostFrom2Pods (0.1s)

                                                
                                                
=== RUN   TestMultiNode/serial/PingHostFrom2Pods
multinode_test.go:564: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-093000 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:564: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-093000 -- get pods -o jsonpath='{.items[*].metadata.name}': exit status 1 (62.829042ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "multinode-093000"

                                                
                                                
** /stderr **
multinode_test.go:566: failed to get Pod names: exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-093000 -n multinode-093000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-093000 -n multinode-093000: exit status 7 (34.799375ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-093000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/PingHostFrom2Pods (0.10s)

                                                
                                    
TestMultiNode/serial/AddNode (0.08s)

                                                
                                                
=== RUN   TestMultiNode/serial/AddNode
multinode_test.go:121: (dbg) Run:  out/minikube-darwin-arm64 node add -p multinode-093000 -v 3 --alsologtostderr
multinode_test.go:121: (dbg) Non-zero exit: out/minikube-darwin-arm64 node add -p multinode-093000 -v 3 --alsologtostderr: exit status 83 (48.354041ms)

                                                
                                                
-- stdout --
	* The control-plane node multinode-093000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p multinode-093000"

                                                
                                                
-- /stdout --
** stderr ** 
	I1204 15:30:56.421027    9202 out.go:345] Setting OutFile to fd 1 ...
	I1204 15:30:56.421220    9202 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1204 15:30:56.421223    9202 out.go:358] Setting ErrFile to fd 2...
	I1204 15:30:56.421225    9202 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1204 15:30:56.421352    9202 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/20045-6982/.minikube/bin
	I1204 15:30:56.421591    9202 mustload.go:65] Loading cluster: multinode-093000
	I1204 15:30:56.421809    9202 config.go:182] Loaded profile config "multinode-093000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.2
	I1204 15:30:56.426253    9202 out.go:177] * The control-plane node multinode-093000 host is not running: state=Stopped
	I1204 15:30:56.430201    9202 out.go:177]   To start a cluster, run: "minikube start -p multinode-093000"

                                                
                                                
** /stderr **
multinode_test.go:123: failed to add node to current cluster. args "out/minikube-darwin-arm64 node add -p multinode-093000 -v 3 --alsologtostderr" : exit status 83
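Every `(dbg) Run` / `(dbg) Non-zero exit` pair in this report is the harness launching the binary and inspecting its process exit code. A minimal sketch of that pattern with os/exec; the command mirrors the failing step above, and exit code 83 corresponds to the "host is not running" advice in its stdout:

package main

import (
	"errors"
	"fmt"
	"os/exec"
)

func main() {
	// Run the same command the test ran and capture everything it printed.
	cmd := exec.Command("out/minikube-darwin-arm64", "node", "add", "-p", "multinode-093000")
	out, err := cmd.CombinedOutput()
	fmt.Print(string(out))

	// A non-zero exit comes back as *exec.ExitError; report its code.
	var ee *exec.ExitError
	if errors.As(err, &ee) {
		fmt.Println("non-zero exit:", ee.ExitCode()) // 83 in the run above
	}
}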
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-093000 -n multinode-093000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-093000 -n multinode-093000: exit status 7 (35.568333ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-093000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/AddNode (0.08s)

                                                
                                    
TestMultiNode/serial/MultiNodeLabels (0.06s)

                                                
                                                
=== RUN   TestMultiNode/serial/MultiNodeLabels
multinode_test.go:221: (dbg) Run:  kubectl --context multinode-093000 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
multinode_test.go:221: (dbg) Non-zero exit: kubectl --context multinode-093000 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]": exit status 1 (26.587042ms)

                                                
                                                
** stderr ** 
	Error in configuration: context was not found for specified context: multinode-093000

                                                
                                                
** /stderr **
multinode_test.go:223: failed to 'kubectl get nodes' with args "kubectl --context multinode-093000 get nodes -o \"jsonpath=[{range .items[*]}{.metadata.labels},{end}]\"": exit status 1
multinode_test.go:230: failed to decode json from label list: args "kubectl --context multinode-093000 get nodes -o \"jsonpath=[{range .items[*]}{.metadata.labels},{end}]\"": unexpected end of JSON input
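The `unexpected end of JSON input` follows directly from the failed kubectl call: it printed nothing, and decoding an empty payload is an error in Go's encoding/json. A minimal reproduction:

package main

import (
	"encoding/json"
	"fmt"
)

func main() {
	// The label-list command produced no output, so the test effectively
	// asks encoding/json to decode zero bytes.
	var labels []map[string]string
	err := json.Unmarshal([]byte(""), &labels)
	fmt.Println(err) // unexpected end of JSON input
}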
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-093000 -n multinode-093000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-093000 -n multinode-093000: exit status 7 (34.987125ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-093000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/MultiNodeLabels (0.06s)

                                                
                                    
TestMultiNode/serial/ProfileList (0.09s)

                                                
                                                
=== RUN   TestMultiNode/serial/ProfileList
multinode_test.go:143: (dbg) Run:  out/minikube-darwin-arm64 profile list --output json
multinode_test.go:166: expected profile "multinode-093000" in json of 'profile list' to include 3 nodes but found 1 node. got *"{\"invalid\":[],\"valid\":[{\"Name\":\"multinode-093000\",\"Status\":\"Starting\",\"Config\":{\"Name\":\"multinode-093000\",\"KeepContext\":false,\"EmbedCerts\":false,\"MinikubeISO\":\"https://storage.googleapis.com/minikube-builds/iso/19917/minikube-v1.34.0-1730913550-19917-arm64.iso\",\"KicBaseImage\":\"gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615\",\"Memory\":2200,\"CPUs\":2,\"DiskSize\":20000,\"Driver\":\"qemu2\",\"HyperkitVpnKitSock\":\"\",\"HyperkitVSockPorts\":[],\"DockerEnv\":null,\"ContainerVolumeMounts\":null,\"InsecureRegistry\":null,\"RegistryMirror\":[],\"HostOnlyCIDR\":\"192.168.59.1/24\",\"HypervVirtualSwitch\":\"\",\"HypervUseExternalSwitch\":false,\"HypervExternalAdapter\":\"\",\"KVMNetwork\":\"default\",\"KVMQemuURI\":\"qemu:///system\",\"KVMGPU\":false,\"KVMHidden\":false,\"KVMNUMACount\":1,\"APIServerPort\":8443,\"DockerOpt\":null,\"DisableDriverMounts\":false,\"NFSShare\":[],\"NFSSharesRoot\":\"/nfsshares\",\"UUID\":\"\",\"NoVTXCheck\":false,\"DNSProxy\":false,\"HostDNSResolver\":true,\"HostOnlyNicType\":\"virtio\",\"NatNicType\":\"virtio\",\"SSHIPAddress\":\"\",\"SSHUser\":\"root\",\"SSHKey\":\"\",\"SSHPort\":22,\"KubernetesConfig\":{\"KubernetesVersion\":\"v1.31.2\",\"ClusterName\":\"multinode-093000\",\"Namespace\":\"default\",\"APIServerHAVIP\":\"\",\"APIServerName\":\"minikubeCA\",\"APIServerNames\":null,\"APIServerIPs\":null,\"DNSDomain\":\"cluster.local\",\"ContainerRuntime\":\"docker\",\"CRISocket\":\"\",\"NetworkPlugin\":\"cni\",\"FeatureGates\":\"\",\"ServiceCIDR\":\"10.96.0.0/12\",\"ImageRepository\":\"\",\"LoadBalancerStartIP\":\"\",\"LoadBalancerEndIP\":\"\",\"CustomIngressCert\":\"\",\"RegistryAliases\":\"\",\"ExtraOptions\":null,\"ShouldLoadCachedImages\":true,\"EnableDefaultCNI\":false,\"CNI\":\"\"},\"Nodes\":[{\"Name\":\"\",\"IP\":\"\",\"Port\":8443,\"KubernetesVersion\":\"v1.31.2\",\"ContainerRuntime\":\"docker\",\"ControlPlane\":true,\"Worker\":true}],\"Addons\":null,\"CustomAddonImages\":null,\"CustomAddonRegistries\":null,\"VerifyComponents\":{\"apiserver\":true,\"apps_running\":true,\"default_sa\":true,\"extra\":true,\"kubelet\":true,\"node_ready\":true,\"system_pods\":true},\"StartHostTimeout\":360000000000,\"ScheduledStop\":null,\"ExposedPorts\":[],\"ListenAddress\":\"\",\"Network\":\"socket_vmnet\",\"Subnet\":\"\",\"MultiNodeRequested\":true,\"ExtraDisks\":0,\"CertExpiration\":94608000000000000,\"Mount\":false,\"MountString\":\"/Users:/minikube-host\",\"Mount9PVersion\":\"9p2000.L\",\"MountGID\":\"docker\",\"MountIP\":\"\",\"MountMSize\":262144,\"MountOptions\":[],\"MountPort\":0,\"MountType\":\"9p\",\"MountUID\":\"docker\",\"BinaryMirror\":\"\",\"DisableOptimizations\":false,\"DisableMetrics\":false,\"CustomQemuFirmwarePath\":\"\",\"SocketVMnetClientPath\":\"/opt/socket_vmnet/bin/socket_vmnet_client\",\"SocketVMnetPath\":\"/var/run/socket_vmnet\",\"StaticIP\":\"\",\"SSHAuthSock\":\"\",\"SSHAgentPID\":0,\"GPUs\":\"\",\"AutoPauseInterval\":60000000000},\"Active\":false,\"ActiveKubeContext\":false}]}"*. args: "out/minikube-darwin-arm64 profile list --output json"
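The node-count mismatch above (3 expected, 1 in the profile) can be verified straight from that JSON. A minimal sketch that decodes only the fields involved; the struct mirrors just the subset of the profile schema visible in the log, not minikube's full config type:

package main

import (
	"encoding/json"
	"fmt"
	"os/exec"
)

// profileList mirrors only the fields this check needs.
type profileList struct {
	Valid []struct {
		Name   string `json:"Name"`
		Config struct {
			Nodes []json.RawMessage `json:"Nodes"`
		} `json:"Config"`
	} `json:"valid"`
}

func main() {
	out, err := exec.Command("out/minikube-darwin-arm64", "profile", "list", "--output", "json").Output()
	if err != nil {
		fmt.Println("profile list failed:", err)
		return
	}
	var pl profileList
	if err := json.Unmarshal(out, &pl); err != nil {
		fmt.Println("decode failed:", err)
		return
	}
	for _, p := range pl.Valid {
		// The run above reports 1 node here where the test expects 3.
		fmt.Printf("%s: %d node(s)\n", p.Name, len(p.Config.Nodes))
	}
}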
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-093000 -n multinode-093000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-093000 -n multinode-093000: exit status 7 (34.668542ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-093000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/ProfileList (0.09s)

                                                
                                    
TestMultiNode/serial/CopyFile (0.07s)

                                                
                                                
=== RUN   TestMultiNode/serial/CopyFile
multinode_test.go:184: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-093000 status --output json --alsologtostderr
multinode_test.go:184: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-093000 status --output json --alsologtostderr: exit status 7 (34.844458ms)

                                                
                                                
-- stdout --
	{"Name":"multinode-093000","Host":"Stopped","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Stopped","Worker":false}

                                                
                                                
-- /stdout --
** stderr ** 
	I1204 15:30:56.654360    9216 out.go:345] Setting OutFile to fd 1 ...
	I1204 15:30:56.654528    9216 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1204 15:30:56.654531    9216 out.go:358] Setting ErrFile to fd 2...
	I1204 15:30:56.654534    9216 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1204 15:30:56.654664    9216 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/20045-6982/.minikube/bin
	I1204 15:30:56.654774    9216 out.go:352] Setting JSON to true
	I1204 15:30:56.654787    9216 mustload.go:65] Loading cluster: multinode-093000
	I1204 15:30:56.654840    9216 notify.go:220] Checking for updates...
	I1204 15:30:56.655012    9216 config.go:182] Loaded profile config "multinode-093000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.2
	I1204 15:30:56.655022    9216 status.go:174] checking status of multinode-093000 ...
	I1204 15:30:56.655278    9216 status.go:371] multinode-093000 host status = "Stopped" (err=<nil>)
	I1204 15:30:56.655281    9216 status.go:384] host is not running, skipping remaining checks
	I1204 15:30:56.655283    9216 status.go:176] multinode-093000 status: &{Name:multinode-093000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
multinode_test.go:191: failed to decode json from status: args "out/minikube-darwin-arm64 -p multinode-093000 status --output json --alsologtostderr": json: cannot unmarshal object into Go value of type []cluster.Status
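That decode failure is an object-versus-slice mismatch: with a single node, `status --output json` emits one JSON object, while the test unmarshals into a slice (`[]cluster.Status`). A minimal reproduction with a stand-in struct:

package main

import (
	"encoding/json"
	"fmt"
)

// status is a stand-in for the subset of cluster.Status seen in the log.
type status struct {
	Name string
	Host string
}

func main() {
	// Single-node output: one object, not an array of objects.
	data := []byte(`{"Name":"multinode-093000","Host":"Stopped"}`)

	var many []status
	if err := json.Unmarshal(data, &many); err != nil {
		fmt.Println("as slice:", err) // cannot unmarshal object into Go value of type []main.status
	}

	var one status
	if err := json.Unmarshal(data, &one); err == nil {
		fmt.Printf("as single object: %+v\n", one)
	}
}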
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-093000 -n multinode-093000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-093000 -n multinode-093000: exit status 7 (35.096459ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-093000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/CopyFile (0.07s)

                                                
                                    
TestMultiNode/serial/StopNode (0.15s)

                                                
                                                
=== RUN   TestMultiNode/serial/StopNode
multinode_test.go:248: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-093000 node stop m03
multinode_test.go:248: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-093000 node stop m03: exit status 85 (49.441416ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_NODE_RETRIEVE: retrieving node: Could not find node m03
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                         │
	│    * If the above advice does not help, please let us know:                                                             │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
	│                                                                                                                         │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
	│    * Please also attach the following file to the GitHub issue:                                                         │
	│    * - /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/minikube_node_295f67d8757edd996fe5c1e7ccde72c355ccf4dc_0.log    │
	│                                                                                                                         │
	╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
multinode_test.go:250: node stop returned an error. args "out/minikube-darwin-arm64 -p multinode-093000 node stop m03": exit status 85
multinode_test.go:254: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-093000 status
multinode_test.go:254: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-093000 status: exit status 7 (35.569541ms)

                                                
                                                
-- stdout --
	multinode-093000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
multinode_test.go:261: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-093000 status --alsologtostderr
multinode_test.go:261: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-093000 status --alsologtostderr: exit status 7 (35.029083ms)

                                                
                                                
-- stdout --
	multinode-093000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1204 15:30:56.810398    9224 out.go:345] Setting OutFile to fd 1 ...
	I1204 15:30:56.810586    9224 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1204 15:30:56.810589    9224 out.go:358] Setting ErrFile to fd 2...
	I1204 15:30:56.810591    9224 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1204 15:30:56.810731    9224 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/20045-6982/.minikube/bin
	I1204 15:30:56.810870    9224 out.go:352] Setting JSON to false
	I1204 15:30:56.810881    9224 mustload.go:65] Loading cluster: multinode-093000
	I1204 15:30:56.810941    9224 notify.go:220] Checking for updates...
	I1204 15:30:56.811072    9224 config.go:182] Loaded profile config "multinode-093000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.2
	I1204 15:30:56.811081    9224 status.go:174] checking status of multinode-093000 ...
	I1204 15:30:56.811325    9224 status.go:371] multinode-093000 host status = "Stopped" (err=<nil>)
	I1204 15:30:56.811329    9224 status.go:384] host is not running, skipping remaining checks
	I1204 15:30:56.811331    9224 status.go:176] multinode-093000 status: &{Name:multinode-093000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
multinode_test.go:267: incorrect number of running kubelets: args "out/minikube-darwin-arm64 -p multinode-093000 status --alsologtostderr": multinode-093000
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped

                                                
                                                
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-093000 -n multinode-093000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-093000 -n multinode-093000: exit status 7 (34.473375ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-093000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/StopNode (0.15s)

                                                
                                    
TestMultiNode/serial/StartAfterStop (42.9s)

                                                
                                                
=== RUN   TestMultiNode/serial/StartAfterStop
multinode_test.go:282: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-093000 node start m03 -v=7 --alsologtostderr
multinode_test.go:282: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-093000 node start m03 -v=7 --alsologtostderr: exit status 85 (53.491ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1204 15:30:56.880340    9228 out.go:345] Setting OutFile to fd 1 ...
	I1204 15:30:56.880765    9228 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1204 15:30:56.880769    9228 out.go:358] Setting ErrFile to fd 2...
	I1204 15:30:56.880772    9228 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1204 15:30:56.880965    9228 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/20045-6982/.minikube/bin
	I1204 15:30:56.881206    9228 mustload.go:65] Loading cluster: multinode-093000
	I1204 15:30:56.881418    9228 config.go:182] Loaded profile config "multinode-093000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.2
	I1204 15:30:56.885593    9228 out.go:201] 
	W1204 15:30:56.889552    9228 out.go:270] X Exiting due to GUEST_NODE_RETRIEVE: retrieving node: Could not find node m03
	X Exiting due to GUEST_NODE_RETRIEVE: retrieving node: Could not find node m03
	W1204 15:30:56.889557    9228 out.go:270] * 
	* 
	W1204 15:30:56.891344    9228 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                         │
	│    * If the above advice does not help, please let us know:                                                             │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
	│                                                                                                                         │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
	│    * Please also attach the following file to the GitHub issue:                                                         │
	│    * - /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/minikube_node_1c3a1297795327375b61f3ff5a4ef34c9b2fc69b_0.log    │
	│                                                                                                                         │
	╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                         │
	│    * If the above advice does not help, please let us know:                                                             │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
	│                                                                                                                         │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
	│    * Please also attach the following file to the GitHub issue:                                                         │
	│    * - /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/minikube_node_1c3a1297795327375b61f3ff5a4ef34c9b2fc69b_0.log    │
	│                                                                                                                         │
	╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
	I1204 15:30:56.895557    9228 out.go:201] 

                                                
                                                
** /stderr **
multinode_test.go:284: I1204 15:30:56.880340    9228 out.go:345] Setting OutFile to fd 1 ...
I1204 15:30:56.880765    9228 out.go:392] TERM=,COLORTERM=, which probably does not support color
I1204 15:30:56.880769    9228 out.go:358] Setting ErrFile to fd 2...
I1204 15:30:56.880772    9228 out.go:392] TERM=,COLORTERM=, which probably does not support color
I1204 15:30:56.880965    9228 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/20045-6982/.minikube/bin
I1204 15:30:56.881206    9228 mustload.go:65] Loading cluster: multinode-093000
I1204 15:30:56.881418    9228 config.go:182] Loaded profile config "multinode-093000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.2
I1204 15:30:56.885593    9228 out.go:201] 
W1204 15:30:56.889552    9228 out.go:270] X Exiting due to GUEST_NODE_RETRIEVE: retrieving node: Could not find node m03
X Exiting due to GUEST_NODE_RETRIEVE: retrieving node: Could not find node m03
W1204 15:30:56.889557    9228 out.go:270] * 
* 
W1204 15:30:56.891344    9228 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
│                                                                                                                         │
│    * If the above advice does not help, please let us know:                                                             │
│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
│                                                                                                                         │
│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
│    * Please also attach the following file to the GitHub issue:                                                         │
│    * - /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/minikube_node_1c3a1297795327375b61f3ff5a4ef34c9b2fc69b_0.log    │
│                                                                                                                         │
╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
│                                                                                                                         │
│    * If the above advice does not help, please let us know:                                                             │
│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
│                                                                                                                         │
│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
│    * Please also attach the following file to the GitHub issue:                                                         │
│    * - /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/minikube_node_1c3a1297795327375b61f3ff5a4ef34c9b2fc69b_0.log    │
│                                                                                                                         │
╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
I1204 15:30:56.895557    9228 out.go:201] 

                                                
                                                
multinode_test.go:285: node start returned an error. args "out/minikube-darwin-arm64 -p multinode-093000 node start m03 -v=7 --alsologtostderr": exit status 85
multinode_test.go:290: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-093000 status -v=7 --alsologtostderr
multinode_test.go:290: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-093000 status -v=7 --alsologtostderr: exit status 7 (34.582375ms)

                                                
                                                
-- stdout --
	multinode-093000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1204 15:30:56.933428    9230 out.go:345] Setting OutFile to fd 1 ...
	I1204 15:30:56.933614    9230 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1204 15:30:56.933617    9230 out.go:358] Setting ErrFile to fd 2...
	I1204 15:30:56.933620    9230 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1204 15:30:56.933736    9230 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/20045-6982/.minikube/bin
	I1204 15:30:56.933867    9230 out.go:352] Setting JSON to false
	I1204 15:30:56.933876    9230 mustload.go:65] Loading cluster: multinode-093000
	I1204 15:30:56.933932    9230 notify.go:220] Checking for updates...
	I1204 15:30:56.934077    9230 config.go:182] Loaded profile config "multinode-093000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.2
	I1204 15:30:56.934084    9230 status.go:174] checking status of multinode-093000 ...
	I1204 15:30:56.934319    9230 status.go:371] multinode-093000 host status = "Stopped" (err=<nil>)
	I1204 15:30:56.934323    9230 status.go:384] host is not running, skipping remaining checks
	I1204 15:30:56.934325    9230 status.go:176] multinode-093000 status: &{Name:multinode-093000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
I1204 15:30:56.935255    7495 retry.go:31] will retry after 949.460762ms: exit status 7
multinode_test.go:290: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-093000 status -v=7 --alsologtostderr
multinode_test.go:290: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-093000 status -v=7 --alsologtostderr: exit status 7 (78.02275ms)

                                                
                                                
-- stdout --
	multinode-093000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1204 15:30:57.963088    9234 out.go:345] Setting OutFile to fd 1 ...
	I1204 15:30:57.963329    9234 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1204 15:30:57.963333    9234 out.go:358] Setting ErrFile to fd 2...
	I1204 15:30:57.963336    9234 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1204 15:30:57.963482    9234 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/20045-6982/.minikube/bin
	I1204 15:30:57.963637    9234 out.go:352] Setting JSON to false
	I1204 15:30:57.963650    9234 mustload.go:65] Loading cluster: multinode-093000
	I1204 15:30:57.963687    9234 notify.go:220] Checking for updates...
	I1204 15:30:57.963892    9234 config.go:182] Loaded profile config "multinode-093000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.2
	I1204 15:30:57.963901    9234 status.go:174] checking status of multinode-093000 ...
	I1204 15:30:57.964192    9234 status.go:371] multinode-093000 host status = "Stopped" (err=<nil>)
	I1204 15:30:57.964197    9234 status.go:384] host is not running, skipping remaining checks
	I1204 15:30:57.964199    9234 status.go:176] multinode-093000 status: &{Name:multinode-093000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
I1204 15:30:57.965251    7495 retry.go:31] will retry after 1.686603111s: exit status 7
multinode_test.go:290: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-093000 status -v=7 --alsologtostderr
multinode_test.go:290: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-093000 status -v=7 --alsologtostderr: exit status 7 (78.229291ms)

                                                
                                                
-- stdout --
	multinode-093000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1204 15:30:59.730485    9238 out.go:345] Setting OutFile to fd 1 ...
	I1204 15:30:59.730719    9238 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1204 15:30:59.730723    9238 out.go:358] Setting ErrFile to fd 2...
	I1204 15:30:59.730726    9238 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1204 15:30:59.730871    9238 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/20045-6982/.minikube/bin
	I1204 15:30:59.731023    9238 out.go:352] Setting JSON to false
	I1204 15:30:59.731037    9238 mustload.go:65] Loading cluster: multinode-093000
	I1204 15:30:59.731079    9238 notify.go:220] Checking for updates...
	I1204 15:30:59.731289    9238 config.go:182] Loaded profile config "multinode-093000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.2
	I1204 15:30:59.731297    9238 status.go:174] checking status of multinode-093000 ...
	I1204 15:30:59.731599    9238 status.go:371] multinode-093000 host status = "Stopped" (err=<nil>)
	I1204 15:30:59.731603    9238 status.go:384] host is not running, skipping remaining checks
	I1204 15:30:59.731606    9238 status.go:176] multinode-093000 status: &{Name:multinode-093000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
I1204 15:30:59.732612    7495 retry.go:31] will retry after 2.272671858s: exit status 7
multinode_test.go:290: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-093000 status -v=7 --alsologtostderr
multinode_test.go:290: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-093000 status -v=7 --alsologtostderr: exit status 7 (76.933667ms)

                                                
                                                
-- stdout --
	multinode-093000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1204 15:31:02.082547    9242 out.go:345] Setting OutFile to fd 1 ...
	I1204 15:31:02.082788    9242 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1204 15:31:02.082793    9242 out.go:358] Setting ErrFile to fd 2...
	I1204 15:31:02.082796    9242 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1204 15:31:02.082960    9242 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/20045-6982/.minikube/bin
	I1204 15:31:02.083121    9242 out.go:352] Setting JSON to false
	I1204 15:31:02.083136    9242 mustload.go:65] Loading cluster: multinode-093000
	I1204 15:31:02.083177    9242 notify.go:220] Checking for updates...
	I1204 15:31:02.083384    9242 config.go:182] Loaded profile config "multinode-093000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.2
	I1204 15:31:02.083394    9242 status.go:174] checking status of multinode-093000 ...
	I1204 15:31:02.083713    9242 status.go:371] multinode-093000 host status = "Stopped" (err=<nil>)
	I1204 15:31:02.083717    9242 status.go:384] host is not running, skipping remaining checks
	I1204 15:31:02.083720    9242 status.go:176] multinode-093000 status: &{Name:multinode-093000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
I1204 15:31:02.084720    7495 retry.go:31] will retry after 4.27350092s: exit status 7
multinode_test.go:290: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-093000 status -v=7 --alsologtostderr
multinode_test.go:290: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-093000 status -v=7 --alsologtostderr: exit status 7 (78.367791ms)

                                                
                                                
-- stdout --
	multinode-093000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1204 15:31:06.436869    9248 out.go:345] Setting OutFile to fd 1 ...
	I1204 15:31:06.437125    9248 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1204 15:31:06.437130    9248 out.go:358] Setting ErrFile to fd 2...
	I1204 15:31:06.437133    9248 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1204 15:31:06.437308    9248 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/20045-6982/.minikube/bin
	I1204 15:31:06.437483    9248 out.go:352] Setting JSON to false
	I1204 15:31:06.437498    9248 mustload.go:65] Loading cluster: multinode-093000
	I1204 15:31:06.437526    9248 notify.go:220] Checking for updates...
	I1204 15:31:06.437760    9248 config.go:182] Loaded profile config "multinode-093000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.2
	I1204 15:31:06.437768    9248 status.go:174] checking status of multinode-093000 ...
	I1204 15:31:06.438066    9248 status.go:371] multinode-093000 host status = "Stopped" (err=<nil>)
	I1204 15:31:06.438071    9248 status.go:384] host is not running, skipping remaining checks
	I1204 15:31:06.438073    9248 status.go:176] multinode-093000 status: &{Name:multinode-093000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
I1204 15:31:06.439141    7495 retry.go:31] will retry after 4.539674626s: exit status 7
multinode_test.go:290: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-093000 status -v=7 --alsologtostderr
multinode_test.go:290: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-093000 status -v=7 --alsologtostderr: exit status 7 (79.524791ms)

                                                
                                                
-- stdout --
	multinode-093000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1204 15:31:11.058544    9253 out.go:345] Setting OutFile to fd 1 ...
	I1204 15:31:11.058805    9253 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1204 15:31:11.058809    9253 out.go:358] Setting ErrFile to fd 2...
	I1204 15:31:11.058813    9253 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1204 15:31:11.059012    9253 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/20045-6982/.minikube/bin
	I1204 15:31:11.059188    9253 out.go:352] Setting JSON to false
	I1204 15:31:11.059201    9253 mustload.go:65] Loading cluster: multinode-093000
	I1204 15:31:11.059231    9253 notify.go:220] Checking for updates...
	I1204 15:31:11.059478    9253 config.go:182] Loaded profile config "multinode-093000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.2
	I1204 15:31:11.059486    9253 status.go:174] checking status of multinode-093000 ...
	I1204 15:31:11.059786    9253 status.go:371] multinode-093000 host status = "Stopped" (err=<nil>)
	I1204 15:31:11.059790    9253 status.go:384] host is not running, skipping remaining checks
	I1204 15:31:11.059793    9253 status.go:176] multinode-093000 status: &{Name:multinode-093000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
I1204 15:31:11.060845    7495 retry.go:31] will retry after 8.238765355s: exit status 7
multinode_test.go:290: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-093000 status -v=7 --alsologtostderr
multinode_test.go:290: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-093000 status -v=7 --alsologtostderr: exit status 7 (78.321792ms)

                                                
                                                
-- stdout --
	multinode-093000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1204 15:31:19.378255    9265 out.go:345] Setting OutFile to fd 1 ...
	I1204 15:31:19.378537    9265 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1204 15:31:19.378541    9265 out.go:358] Setting ErrFile to fd 2...
	I1204 15:31:19.378545    9265 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1204 15:31:19.378719    9265 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/20045-6982/.minikube/bin
	I1204 15:31:19.378869    9265 out.go:352] Setting JSON to false
	I1204 15:31:19.378884    9265 mustload.go:65] Loading cluster: multinode-093000
	I1204 15:31:19.378924    9265 notify.go:220] Checking for updates...
	I1204 15:31:19.379142    9265 config.go:182] Loaded profile config "multinode-093000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.2
	I1204 15:31:19.379152    9265 status.go:174] checking status of multinode-093000 ...
	I1204 15:31:19.379468    9265 status.go:371] multinode-093000 host status = "Stopped" (err=<nil>)
	I1204 15:31:19.379473    9265 status.go:384] host is not running, skipping remaining checks
	I1204 15:31:19.379475    9265 status.go:176] multinode-093000 status: &{Name:multinode-093000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
I1204 15:31:19.380483    7495 retry.go:31] will retry after 10.738789282s: exit status 7
multinode_test.go:290: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-093000 status -v=7 --alsologtostderr
multinode_test.go:290: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-093000 status -v=7 --alsologtostderr: exit status 7 (77.991ms)

                                                
                                                
-- stdout --
	multinode-093000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1204 15:31:30.197807    9273 out.go:345] Setting OutFile to fd 1 ...
	I1204 15:31:30.198042    9273 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1204 15:31:30.198046    9273 out.go:358] Setting ErrFile to fd 2...
	I1204 15:31:30.198049    9273 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1204 15:31:30.198196    9273 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/20045-6982/.minikube/bin
	I1204 15:31:30.198344    9273 out.go:352] Setting JSON to false
	I1204 15:31:30.198358    9273 mustload.go:65] Loading cluster: multinode-093000
	I1204 15:31:30.198389    9273 notify.go:220] Checking for updates...
	I1204 15:31:30.198606    9273 config.go:182] Loaded profile config "multinode-093000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.2
	I1204 15:31:30.198615    9273 status.go:174] checking status of multinode-093000 ...
	I1204 15:31:30.198910    9273 status.go:371] multinode-093000 host status = "Stopped" (err=<nil>)
	I1204 15:31:30.198914    9273 status.go:384] host is not running, skipping remaining checks
	I1204 15:31:30.198917    9273 status.go:176] multinode-093000 status: &{Name:multinode-093000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
I1204 15:31:30.199932    7495 retry.go:31] will retry after 9.433669874s: exit status 7
multinode_test.go:290: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-093000 status -v=7 --alsologtostderr
multinode_test.go:290: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-093000 status -v=7 --alsologtostderr: exit status 7 (78.552041ms)

                                                
                                                
-- stdout --
	multinode-093000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1204 15:31:39.712519    9283 out.go:345] Setting OutFile to fd 1 ...
	I1204 15:31:39.712766    9283 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1204 15:31:39.712771    9283 out.go:358] Setting ErrFile to fd 2...
	I1204 15:31:39.712774    9283 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1204 15:31:39.712946    9283 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/20045-6982/.minikube/bin
	I1204 15:31:39.713100    9283 out.go:352] Setting JSON to false
	I1204 15:31:39.713112    9283 mustload.go:65] Loading cluster: multinode-093000
	I1204 15:31:39.713150    9283 notify.go:220] Checking for updates...
	I1204 15:31:39.713369    9283 config.go:182] Loaded profile config "multinode-093000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.2
	I1204 15:31:39.713378    9283 status.go:174] checking status of multinode-093000 ...
	I1204 15:31:39.713674    9283 status.go:371] multinode-093000 host status = "Stopped" (err=<nil>)
	I1204 15:31:39.713678    9283 status.go:384] host is not running, skipping remaining checks
	I1204 15:31:39.713681    9283 status.go:176] multinode-093000 status: &{Name:multinode-093000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
multinode_test.go:294: failed to run minikube status. args "out/minikube-darwin-arm64 -p multinode-093000 status -v=7 --alsologtostderr" : exit status 7
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-093000 -n multinode-093000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-093000 -n multinode-093000: exit status 7 (36.56925ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-093000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/StartAfterStop (42.90s)
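Note: the retry.go:31 lines above show the test harness re-running `minikube status` with a randomized delay between attempts until the step's time budget (42.9s here) is exhausted. A minimal Go sketch of that poll-with-jitter pattern, for orientation only — the function name and parameters are illustrative, not minikube's actual retry code:

    package main

    import (
        "fmt"
        "math/rand"
        "time"
    )

    // retryWithJitter re-runs fn until it succeeds or attempts run out,
    // sleeping a randomized interval between tries, similar to the
    // "will retry after 10.738789282s" entries in the log above.
    func retryWithJitter(fn func() error, attempts int, base time.Duration) error {
        var err error
        for i := 0; i < attempts; i++ {
            if err = fn(); err == nil {
                return nil
            }
            wait := base + time.Duration(rand.Int63n(int64(base))) // jittered backoff
            fmt.Printf("will retry after %s: %v\n", wait, err)
            time.Sleep(wait)
        }
        return err
    }

    func main() {
        _ = retryWithJitter(func() error { return fmt.Errorf("exit status 7") }, 3, 5*time.Second)
    }

Each failed attempt prints a line of the same shape as the "will retry after ...: exit status 7" entries in the log.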

                                                
                                    
TestMultiNode/serial/RestartKeepsNodes (8.91s)

                                                
                                                
=== RUN   TestMultiNode/serial/RestartKeepsNodes
multinode_test.go:314: (dbg) Run:  out/minikube-darwin-arm64 node list -p multinode-093000
multinode_test.go:321: (dbg) Run:  out/minikube-darwin-arm64 stop -p multinode-093000
multinode_test.go:321: (dbg) Done: out/minikube-darwin-arm64 stop -p multinode-093000: (3.542332833s)
multinode_test.go:326: (dbg) Run:  out/minikube-darwin-arm64 start -p multinode-093000 --wait=true -v=8 --alsologtostderr
multinode_test.go:326: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p multinode-093000 --wait=true -v=8 --alsologtostderr: exit status 80 (5.225146334s)

                                                
                                                
-- stdout --
	* [multinode-093000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	  - MINIKUBE_LOCATION=20045
	  - KUBECONFIG=/Users/jenkins/minikube-integration/20045-6982/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/20045-6982/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting "multinode-093000" primary control-plane node in "multinode-093000" cluster
	* Restarting existing qemu2 VM for "multinode-093000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "multinode-093000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1204 15:31:43.402465    9309 out.go:345] Setting OutFile to fd 1 ...
	I1204 15:31:43.402640    9309 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1204 15:31:43.402645    9309 out.go:358] Setting ErrFile to fd 2...
	I1204 15:31:43.402647    9309 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1204 15:31:43.402817    9309 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/20045-6982/.minikube/bin
	I1204 15:31:43.404043    9309 out.go:352] Setting JSON to false
	I1204 15:31:43.423843    9309 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":5473,"bootTime":1733349630,"procs":545,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"15.0.1","kernelVersion":"24.0.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W1204 15:31:43.423905    9309 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I1204 15:31:43.429060    9309 out.go:177] * [multinode-093000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	I1204 15:31:43.435990    9309 out.go:177]   - MINIKUBE_LOCATION=20045
	I1204 15:31:43.436054    9309 notify.go:220] Checking for updates...
	I1204 15:31:43.440301    9309 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/20045-6982/kubeconfig
	I1204 15:31:43.443013    9309 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I1204 15:31:43.447002    9309 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1204 15:31:43.448375    9309 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/20045-6982/.minikube
	I1204 15:31:43.450962    9309 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1204 15:31:43.454270    9309 config.go:182] Loaded profile config "multinode-093000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.2
	I1204 15:31:43.454319    9309 driver.go:394] Setting default libvirt URI to qemu:///system
	I1204 15:31:43.456091    9309 out.go:177] * Using the qemu2 driver based on existing profile
	I1204 15:31:43.463046    9309 start.go:297] selected driver: qemu2
	I1204 15:31:43.463053    9309 start.go:901] validating driver "qemu2" against &{Name:multinode-093000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19917/minikube-v1.34.0-1730913550-19917-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:multinode-093000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1204 15:31:43.463107    9309 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1204 15:31:43.465636    9309 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1204 15:31:43.465657    9309 cni.go:84] Creating CNI manager for ""
	I1204 15:31:43.465681    9309 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I1204 15:31:43.465733    9309 start.go:340] cluster config:
	{Name:multinode-093000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19917/minikube-v1.34.0-1730913550-19917-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:multinode-093000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1204 15:31:43.469975    9309 iso.go:125] acquiring lock: {Name:mk798ab7885b095925d55c843b0600e7ea181045 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1204 15:31:43.478003    9309 out.go:177] * Starting "multinode-093000" primary control-plane node in "multinode-093000" cluster
	I1204 15:31:43.481847    9309 preload.go:131] Checking if preload exists for k8s version v1.31.2 and runtime docker
	I1204 15:31:43.481866    9309 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/20045-6982/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-docker-overlay2-arm64.tar.lz4
	I1204 15:31:43.481873    9309 cache.go:56] Caching tarball of preloaded images
	I1204 15:31:43.481945    9309 preload.go:172] Found /Users/jenkins/minikube-integration/20045-6982/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I1204 15:31:43.481951    9309 cache.go:59] Finished verifying existence of preloaded tar for v1.31.2 on docker
	I1204 15:31:43.482004    9309 profile.go:143] Saving config to /Users/jenkins/minikube-integration/20045-6982/.minikube/profiles/multinode-093000/config.json ...
	I1204 15:31:43.482546    9309 start.go:360] acquireMachinesLock for multinode-093000: {Name:mkab5eeb7049d26d12029fc21a1411c94c7c7493 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1204 15:31:43.482598    9309 start.go:364] duration metric: took 45.5µs to acquireMachinesLock for "multinode-093000"
	I1204 15:31:43.482608    9309 start.go:96] Skipping create...Using existing machine configuration
	I1204 15:31:43.482612    9309 fix.go:54] fixHost starting: 
	I1204 15:31:43.482736    9309 fix.go:112] recreateIfNeeded on multinode-093000: state=Stopped err=<nil>
	W1204 15:31:43.482747    9309 fix.go:138] unexpected machine state, will restart: <nil>
	I1204 15:31:43.490832    9309 out.go:177] * Restarting existing qemu2 VM for "multinode-093000" ...
	I1204 15:31:43.494969    9309 qemu.go:418] Using hvf for hardware acceleration
	I1204 15:31:43.495020    9309 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/20045-6982/.minikube/machines/multinode-093000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/20045-6982/.minikube/machines/multinode-093000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/20045-6982/.minikube/machines/multinode-093000/qemu.pid -device virtio-net-pci,netdev=net0,mac=9e:b5:e3:08:6b:ff -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/20045-6982/.minikube/machines/multinode-093000/disk.qcow2
	I1204 15:31:43.497369    9309 main.go:141] libmachine: STDOUT: 
	I1204 15:31:43.497394    9309 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1204 15:31:43.497425    9309 fix.go:56] duration metric: took 14.811209ms for fixHost
	I1204 15:31:43.497429    9309 start.go:83] releasing machines lock for "multinode-093000", held for 14.826458ms
	W1204 15:31:43.497437    9309 start.go:714] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W1204 15:31:43.497476    9309 out.go:270] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1204 15:31:43.497482    9309 start.go:729] Will try again in 5 seconds ...
	I1204 15:31:48.499813    9309 start.go:360] acquireMachinesLock for multinode-093000: {Name:mkab5eeb7049d26d12029fc21a1411c94c7c7493 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1204 15:31:48.500297    9309 start.go:364] duration metric: took 347.375µs to acquireMachinesLock for "multinode-093000"
	I1204 15:31:48.500454    9309 start.go:96] Skipping create...Using existing machine configuration
	I1204 15:31:48.500474    9309 fix.go:54] fixHost starting: 
	I1204 15:31:48.501233    9309 fix.go:112] recreateIfNeeded on multinode-093000: state=Stopped err=<nil>
	W1204 15:31:48.501259    9309 fix.go:138] unexpected machine state, will restart: <nil>
	I1204 15:31:48.505769    9309 out.go:177] * Restarting existing qemu2 VM for "multinode-093000" ...
	I1204 15:31:48.509861    9309 qemu.go:418] Using hvf for hardware acceleration
	I1204 15:31:48.510082    9309 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/20045-6982/.minikube/machines/multinode-093000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/20045-6982/.minikube/machines/multinode-093000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/20045-6982/.minikube/machines/multinode-093000/qemu.pid -device virtio-net-pci,netdev=net0,mac=9e:b5:e3:08:6b:ff -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/20045-6982/.minikube/machines/multinode-093000/disk.qcow2
	I1204 15:31:48.520128    9309 main.go:141] libmachine: STDOUT: 
	I1204 15:31:48.520177    9309 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1204 15:31:48.520259    9309 fix.go:56] duration metric: took 19.787208ms for fixHost
	I1204 15:31:48.520273    9309 start.go:83] releasing machines lock for "multinode-093000", held for 19.953958ms
	W1204 15:31:48.520439    9309 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p multinode-093000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p multinode-093000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1204 15:31:48.528734    9309 out.go:201] 
	W1204 15:31:48.530028    9309 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W1204 15:31:48.530049    9309 out.go:270] * 
	* 
	W1204 15:31:48.532557    9309 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1204 15:31:48.540738    9309 out.go:201] 

                                                
                                                
** /stderr **
multinode_test.go:328: failed to run minikube start. args "out/minikube-darwin-arm64 start -p multinode-093000 --wait=true -v=8 --alsologtostderr" : exit status 80
multinode_test.go:331: (dbg) Run:  out/minikube-darwin-arm64 node list -p multinode-093000
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-093000 -n multinode-093000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-093000 -n multinode-093000: exit status 7 (36.53125ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-093000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/RestartKeepsNodes (8.91s)
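Note: every restart attempt in this run fails at the same first step: socket_vmnet_client cannot reach the unix socket at /var/run/socket_vmnet, so qemu never receives its network file descriptor and the VM is never started. A quick way to confirm whether the daemon is up is to dial the socket directly; a minimal probe, assuming only the socket path shown in the log:

    package main

    import (
        "fmt"
        "net"
        "time"
    )

    func main() {
        // Dial the unix socket that socket_vmnet_client hands to qemu.
        // A "connection refused" here reproduces the ERROR lines above.
        conn, err := net.DialTimeout("unix", "/var/run/socket_vmnet", 2*time.Second)
        if err != nil {
            fmt.Println("socket_vmnet not reachable:", err)
            return
        }
        conn.Close()
        fmt.Println("socket_vmnet is accepting connections")
    }

If the probe fails, the likely fix is restarting the socket_vmnet daemon on the build host; `minikube delete -p multinode-093000`, as suggested in the output, would not address that root cause.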

                                                
                                    
TestMultiNode/serial/DeleteNode (0.12s)

                                                
                                                
=== RUN   TestMultiNode/serial/DeleteNode
multinode_test.go:416: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-093000 node delete m03
multinode_test.go:416: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-093000 node delete m03: exit status 83 (50.843792ms)

                                                
                                                
-- stdout --
	* The control-plane node multinode-093000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p multinode-093000"

                                                
                                                
-- /stdout --
multinode_test.go:418: node delete returned an error. args "out/minikube-darwin-arm64 -p multinode-093000 node delete m03": exit status 83
multinode_test.go:422: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-093000 status --alsologtostderr
multinode_test.go:422: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-093000 status --alsologtostderr: exit status 7 (35.566583ms)

                                                
                                                
-- stdout --
	multinode-093000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1204 15:31:48.749808    9331 out.go:345] Setting OutFile to fd 1 ...
	I1204 15:31:48.750003    9331 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1204 15:31:48.750006    9331 out.go:358] Setting ErrFile to fd 2...
	I1204 15:31:48.750008    9331 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1204 15:31:48.750143    9331 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/20045-6982/.minikube/bin
	I1204 15:31:48.750279    9331 out.go:352] Setting JSON to false
	I1204 15:31:48.750291    9331 mustload.go:65] Loading cluster: multinode-093000
	I1204 15:31:48.750362    9331 notify.go:220] Checking for updates...
	I1204 15:31:48.750495    9331 config.go:182] Loaded profile config "multinode-093000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.2
	I1204 15:31:48.750504    9331 status.go:174] checking status of multinode-093000 ...
	I1204 15:31:48.750753    9331 status.go:371] multinode-093000 host status = "Stopped" (err=<nil>)
	I1204 15:31:48.750757    9331 status.go:384] host is not running, skipping remaining checks
	I1204 15:31:48.750759    9331 status.go:176] multinode-093000 status: &{Name:multinode-093000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
multinode_test.go:424: failed to run minikube status. args "out/minikube-darwin-arm64 -p multinode-093000 status --alsologtostderr" : exit status 7
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-093000 -n multinode-093000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-093000 -n multinode-093000: exit status 7 (34.626125ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-093000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/DeleteNode (0.12s)

                                                
                                    
TestMultiNode/serial/StopMultiNode (3.31s)

                                                
                                                
=== RUN   TestMultiNode/serial/StopMultiNode
multinode_test.go:345: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-093000 stop
multinode_test.go:345: (dbg) Done: out/minikube-darwin-arm64 -p multinode-093000 stop: (3.161659958s)
multinode_test.go:351: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-093000 status
multinode_test.go:351: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-093000 status: exit status 7 (74.551625ms)

                                                
                                                
-- stdout --
	multinode-093000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
multinode_test.go:358: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-093000 status --alsologtostderr
multinode_test.go:358: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-093000 status --alsologtostderr: exit status 7 (36.931375ms)

                                                
                                                
-- stdout --
	multinode-093000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1204 15:31:52.058902    9358 out.go:345] Setting OutFile to fd 1 ...
	I1204 15:31:52.059096    9358 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1204 15:31:52.059099    9358 out.go:358] Setting ErrFile to fd 2...
	I1204 15:31:52.059102    9358 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1204 15:31:52.059224    9358 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/20045-6982/.minikube/bin
	I1204 15:31:52.059353    9358 out.go:352] Setting JSON to false
	I1204 15:31:52.059376    9358 mustload.go:65] Loading cluster: multinode-093000
	I1204 15:31:52.059419    9358 notify.go:220] Checking for updates...
	I1204 15:31:52.059594    9358 config.go:182] Loaded profile config "multinode-093000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.2
	I1204 15:31:52.059601    9358 status.go:174] checking status of multinode-093000 ...
	I1204 15:31:52.059836    9358 status.go:371] multinode-093000 host status = "Stopped" (err=<nil>)
	I1204 15:31:52.059839    9358 status.go:384] host is not running, skipping remaining checks
	I1204 15:31:52.059841    9358 status.go:176] multinode-093000 status: &{Name:multinode-093000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
multinode_test.go:364: incorrect number of stopped hosts: args "out/minikube-darwin-arm64 -p multinode-093000 status --alsologtostderr": multinode-093000
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped

                                                
                                                
multinode_test.go:368: incorrect number of stopped kubelets: args "out/minikube-darwin-arm64 -p multinode-093000 status --alsologtostderr": multinode-093000
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped

                                                
                                                
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-093000 -n multinode-093000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-093000 -n multinode-093000: exit status 7 (35.253125ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-093000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/StopMultiNode (3.31s)
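Note: the two assertions above (multinode_test.go:364 and :368) count "host: Stopped" and "kubelet: Stopped" lines in the status output and compare the count against the expected node count; because the worker node was never created, only one of each appears. Roughly the kind of check involved, as a sketch — the names and the expected count are illustrative, not the test's actual code:

    package main

    import (
        "fmt"
        "strings"
    )

    func main() {
        // statusOut stands in for the captured "minikube status" stdout above.
        statusOut := "multinode-093000\ntype: Control Plane\nhost: Stopped\nkubelet: Stopped\n"
        const expectedNodes = 2 // a control plane plus one worker

        if got := strings.Count(statusOut, "host: Stopped"); got != expectedNodes {
            fmt.Printf("incorrect number of stopped hosts: got %d, want %d\n", got, expectedNodes)
        }
    }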

                                                
                                    
TestMultiNode/serial/RestartMultiNode (5.27s)

                                                
                                                
=== RUN   TestMultiNode/serial/RestartMultiNode
multinode_test.go:376: (dbg) Run:  out/minikube-darwin-arm64 start -p multinode-093000 --wait=true -v=8 --alsologtostderr --driver=qemu2 
multinode_test.go:376: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p multinode-093000 --wait=true -v=8 --alsologtostderr --driver=qemu2 : exit status 80 (5.191178625s)

                                                
                                                
-- stdout --
	* [multinode-093000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	  - MINIKUBE_LOCATION=20045
	  - KUBECONFIG=/Users/jenkins/minikube-integration/20045-6982/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/20045-6982/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting "multinode-093000" primary control-plane node in "multinode-093000" cluster
	* Restarting existing qemu2 VM for "multinode-093000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "multinode-093000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1204 15:31:52.128227    9362 out.go:345] Setting OutFile to fd 1 ...
	I1204 15:31:52.128385    9362 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1204 15:31:52.128397    9362 out.go:358] Setting ErrFile to fd 2...
	I1204 15:31:52.128399    9362 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1204 15:31:52.128544    9362 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/20045-6982/.minikube/bin
	I1204 15:31:52.129654    9362 out.go:352] Setting JSON to false
	I1204 15:31:52.147313    9362 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":5482,"bootTime":1733349630,"procs":547,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"15.0.1","kernelVersion":"24.0.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W1204 15:31:52.147395    9362 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I1204 15:31:52.151673    9362 out.go:177] * [multinode-093000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	I1204 15:31:52.159520    9362 out.go:177]   - MINIKUBE_LOCATION=20045
	I1204 15:31:52.159555    9362 notify.go:220] Checking for updates...
	I1204 15:31:52.167583    9362 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/20045-6982/kubeconfig
	I1204 15:31:52.170607    9362 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I1204 15:31:52.173583    9362 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1204 15:31:52.176600    9362 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/20045-6982/.minikube
	I1204 15:31:52.179605    9362 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1204 15:31:52.182857    9362 config.go:182] Loaded profile config "multinode-093000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.2
	I1204 15:31:52.183129    9362 driver.go:394] Setting default libvirt URI to qemu:///system
	I1204 15:31:52.187546    9362 out.go:177] * Using the qemu2 driver based on existing profile
	I1204 15:31:52.194576    9362 start.go:297] selected driver: qemu2
	I1204 15:31:52.194583    9362 start.go:901] validating driver "qemu2" against &{Name:multinode-093000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19917/minikube-v1.34.0-1730913550-19917-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:multinode-093000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1204 15:31:52.194653    9362 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1204 15:31:52.197286    9362 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1204 15:31:52.197314    9362 cni.go:84] Creating CNI manager for ""
	I1204 15:31:52.197337    9362 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I1204 15:31:52.197383    9362 start.go:340] cluster config:
	{Name:multinode-093000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19917/minikube-v1.34.0-1730913550-19917-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:multinode-093000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1204 15:31:52.202021    9362 iso.go:125] acquiring lock: {Name:mk798ab7885b095925d55c843b0600e7ea181045 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1204 15:31:52.210535    9362 out.go:177] * Starting "multinode-093000" primary control-plane node in "multinode-093000" cluster
	I1204 15:31:52.214570    9362 preload.go:131] Checking if preload exists for k8s version v1.31.2 and runtime docker
	I1204 15:31:52.214584    9362 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/20045-6982/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-docker-overlay2-arm64.tar.lz4
	I1204 15:31:52.214597    9362 cache.go:56] Caching tarball of preloaded images
	I1204 15:31:52.214647    9362 preload.go:172] Found /Users/jenkins/minikube-integration/20045-6982/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I1204 15:31:52.214653    9362 cache.go:59] Finished verifying existence of preloaded tar for v1.31.2 on docker
	I1204 15:31:52.214703    9362 profile.go:143] Saving config to /Users/jenkins/minikube-integration/20045-6982/.minikube/profiles/multinode-093000/config.json ...
	I1204 15:31:52.215187    9362 start.go:360] acquireMachinesLock for multinode-093000: {Name:mkab5eeb7049d26d12029fc21a1411c94c7c7493 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1204 15:31:52.215220    9362 start.go:364] duration metric: took 25.791µs to acquireMachinesLock for "multinode-093000"
	I1204 15:31:52.215230    9362 start.go:96] Skipping create...Using existing machine configuration
	I1204 15:31:52.215234    9362 fix.go:54] fixHost starting: 
	I1204 15:31:52.215350    9362 fix.go:112] recreateIfNeeded on multinode-093000: state=Stopped err=<nil>
	W1204 15:31:52.215358    9362 fix.go:138] unexpected machine state, will restart: <nil>
	I1204 15:31:52.219579    9362 out.go:177] * Restarting existing qemu2 VM for "multinode-093000" ...
	I1204 15:31:52.227424    9362 qemu.go:418] Using hvf for hardware acceleration
	I1204 15:31:52.227458    9362 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/20045-6982/.minikube/machines/multinode-093000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/20045-6982/.minikube/machines/multinode-093000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/20045-6982/.minikube/machines/multinode-093000/qemu.pid -device virtio-net-pci,netdev=net0,mac=9e:b5:e3:08:6b:ff -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/20045-6982/.minikube/machines/multinode-093000/disk.qcow2
	I1204 15:31:52.229714    9362 main.go:141] libmachine: STDOUT: 
	I1204 15:31:52.229733    9362 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1204 15:31:52.229764    9362 fix.go:56] duration metric: took 14.527917ms for fixHost
	I1204 15:31:52.229770    9362 start.go:83] releasing machines lock for "multinode-093000", held for 14.544416ms
	W1204 15:31:52.229777    9362 start.go:714] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W1204 15:31:52.229829    9362 out.go:270] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1204 15:31:52.229834    9362 start.go:729] Will try again in 5 seconds ...
	I1204 15:31:57.232122    9362 start.go:360] acquireMachinesLock for multinode-093000: {Name:mkab5eeb7049d26d12029fc21a1411c94c7c7493 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1204 15:31:57.232533    9362 start.go:364] duration metric: took 303.666µs to acquireMachinesLock for "multinode-093000"
	I1204 15:31:57.232641    9362 start.go:96] Skipping create...Using existing machine configuration
	I1204 15:31:57.232656    9362 fix.go:54] fixHost starting: 
	I1204 15:31:57.233195    9362 fix.go:112] recreateIfNeeded on multinode-093000: state=Stopped err=<nil>
	W1204 15:31:57.233216    9362 fix.go:138] unexpected machine state, will restart: <nil>
	I1204 15:31:57.241754    9362 out.go:177] * Restarting existing qemu2 VM for "multinode-093000" ...
	I1204 15:31:57.244787    9362 qemu.go:418] Using hvf for hardware acceleration
	I1204 15:31:57.244909    9362 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/20045-6982/.minikube/machines/multinode-093000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/20045-6982/.minikube/machines/multinode-093000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/20045-6982/.minikube/machines/multinode-093000/qemu.pid -device virtio-net-pci,netdev=net0,mac=9e:b5:e3:08:6b:ff -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/20045-6982/.minikube/machines/multinode-093000/disk.qcow2
	I1204 15:31:57.250625    9362 main.go:141] libmachine: STDOUT: 
	I1204 15:31:57.250743    9362 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1204 15:31:57.250858    9362 fix.go:56] duration metric: took 18.197208ms for fixHost
	I1204 15:31:57.250887    9362 start.go:83] releasing machines lock for "multinode-093000", held for 18.3305ms
	W1204 15:31:57.251138    9362 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p multinode-093000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p multinode-093000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1204 15:31:57.258759    9362 out.go:201] 
	W1204 15:31:57.262890    9362 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W1204 15:31:57.262948    9362 out.go:270] * 
	* 
	W1204 15:31:57.265306    9362 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1204 15:31:57.273772    9362 out.go:201] 

                                                
                                                
** /stderr **
multinode_test.go:378: failed to start cluster. args "out/minikube-darwin-arm64 start -p multinode-093000 --wait=true -v=8 --alsologtostderr --driver=qemu2 " : exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-093000 -n multinode-093000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-093000 -n multinode-093000: exit status 7 (73.783083ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-093000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/RestartMultiNode (5.27s)

                                                
                                    
TestMultiNode/serial/ValidateNameConflict (20.31s)

                                                
                                                
=== RUN   TestMultiNode/serial/ValidateNameConflict
multinode_test.go:455: (dbg) Run:  out/minikube-darwin-arm64 node list -p multinode-093000
multinode_test.go:464: (dbg) Run:  out/minikube-darwin-arm64 start -p multinode-093000-m01 --driver=qemu2 
multinode_test.go:464: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p multinode-093000-m01 --driver=qemu2 : exit status 80 (9.992236667s)

                                                
                                                
-- stdout --
	* [multinode-093000-m01] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	  - MINIKUBE_LOCATION=20045
	  - KUBECONFIG=/Users/jenkins/minikube-integration/20045-6982/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/20045-6982/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "multinode-093000-m01" primary control-plane node in "multinode-093000-m01" cluster
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "multinode-093000-m01" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p multinode-093000-m01" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
multinode_test.go:472: (dbg) Run:  out/minikube-darwin-arm64 start -p multinode-093000-m02 --driver=qemu2 
multinode_test.go:472: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p multinode-093000-m02 --driver=qemu2 : exit status 80 (10.097316417s)

                                                
                                                
-- stdout --
	* [multinode-093000-m02] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	  - MINIKUBE_LOCATION=20045
	  - KUBECONFIG=/Users/jenkins/minikube-integration/20045-6982/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/20045-6982/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "multinode-093000-m02" primary control-plane node in "multinode-093000-m02" cluster
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "multinode-093000-m02" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p multinode-093000-m02" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
multinode_test.go:474: failed to start profile. args "out/minikube-darwin-arm64 start -p multinode-093000-m02 --driver=qemu2 " : exit status 80
multinode_test.go:479: (dbg) Run:  out/minikube-darwin-arm64 node add -p multinode-093000
multinode_test.go:479: (dbg) Non-zero exit: out/minikube-darwin-arm64 node add -p multinode-093000: exit status 83 (70.137958ms)

                                                
                                                
-- stdout --
	* The control-plane node multinode-093000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p multinode-093000"

                                                
                                                
-- /stdout --
multinode_test.go:484: (dbg) Run:  out/minikube-darwin-arm64 delete -p multinode-093000-m02
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-093000 -n multinode-093000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-093000 -n multinode-093000: exit status 7 (35.441708ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-093000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/ValidateNameConflict (20.31s)
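Note: this test exercises minikube's handling of profile names that collide with node-style names (secondary nodes are named <profile>-m02, <profile>-m03, ...), which is why it starts profiles named multinode-093000-m01 and -m02; here both attempts die on the same socket_vmnet error before any name validation is reached. A rough sketch of such a suffix-conflict check, illustrative only and not minikube's actual validation code:

    package main

    import (
        "fmt"
        "regexp"
    )

    // conflictsWith reports whether newProfile looks like a node name of an
    // existing profile, i.e. the existing name plus a "-mNN" suffix.
    func conflictsWith(newProfile, existing string) bool {
        re := regexp.MustCompile("^" + regexp.QuoteMeta(existing) + `-m\d+$`)
        return re.MatchString(newProfile)
    }

    func main() {
        fmt.Println(conflictsWith("multinode-093000-m01", "multinode-093000")) // true
        fmt.Println(conflictsWith("multinode-093000-m02", "multinode-093000")) // true
    }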

                                                
                                    
TestPreload (10.06s)

                                                
                                                
=== RUN   TestPreload
preload_test.go:44: (dbg) Run:  out/minikube-darwin-arm64 start -p test-preload-807000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=qemu2  --kubernetes-version=v1.24.4
preload_test.go:44: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p test-preload-807000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=qemu2  --kubernetes-version=v1.24.4: exit status 80 (9.902615833s)

                                                
                                                
-- stdout --
	* [test-preload-807000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	  - MINIKUBE_LOCATION=20045
	  - KUBECONFIG=/Users/jenkins/minikube-integration/20045-6982/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/20045-6982/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "test-preload-807000" primary control-plane node in "test-preload-807000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "test-preload-807000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I1204 15:32:17.834001    9434 out.go:345] Setting OutFile to fd 1 ...
	I1204 15:32:17.834199    9434 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1204 15:32:17.834202    9434 out.go:358] Setting ErrFile to fd 2...
	I1204 15:32:17.834204    9434 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1204 15:32:17.834340    9434 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/20045-6982/.minikube/bin
	I1204 15:32:17.835526    9434 out.go:352] Setting JSON to false
	I1204 15:32:17.853352    9434 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":5507,"bootTime":1733349630,"procs":542,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"15.0.1","kernelVersion":"24.0.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W1204 15:32:17.853422    9434 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I1204 15:32:17.859772    9434 out.go:177] * [test-preload-807000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	I1204 15:32:17.867749    9434 out.go:177]   - MINIKUBE_LOCATION=20045
	I1204 15:32:17.867811    9434 notify.go:220] Checking for updates...
	I1204 15:32:17.874648    9434 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/20045-6982/kubeconfig
	I1204 15:32:17.877669    9434 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I1204 15:32:17.882439    9434 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1204 15:32:17.886505    9434 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/20045-6982/.minikube
	I1204 15:32:17.889769    9434 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1204 15:32:17.893096    9434 config.go:182] Loaded profile config "multinode-093000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.2
	I1204 15:32:17.893159    9434 driver.go:394] Setting default libvirt URI to qemu:///system
	I1204 15:32:17.896635    9434 out.go:177] * Using the qemu2 driver based on user configuration
	I1204 15:32:17.903744    9434 start.go:297] selected driver: qemu2
	I1204 15:32:17.903751    9434 start.go:901] validating driver "qemu2" against <nil>
	I1204 15:32:17.903758    9434 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1204 15:32:17.906321    9434 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I1204 15:32:17.909657    9434 out.go:177] * Automatically selected the socket_vmnet network
	I1204 15:32:17.912777    9434 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1204 15:32:17.912800    9434 cni.go:84] Creating CNI manager for ""
	I1204 15:32:17.912833    9434 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I1204 15:32:17.912841    9434 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I1204 15:32:17.912877    9434 start.go:340] cluster config:
	{Name:test-preload-807000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.4 ClusterName:test-preload-807000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.24.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1204 15:32:17.917527    9434 iso.go:125] acquiring lock: {Name:mk798ab7885b095925d55c843b0600e7ea181045 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1204 15:32:17.925689    9434 out.go:177] * Starting "test-preload-807000" primary control-plane node in "test-preload-807000" cluster
	I1204 15:32:17.929721    9434 preload.go:131] Checking if preload exists for k8s version v1.24.4 and runtime docker
	I1204 15:32:17.929832    9434 profile.go:143] Saving config to /Users/jenkins/minikube-integration/20045-6982/.minikube/profiles/test-preload-807000/config.json ...
	I1204 15:32:17.929831    9434 cache.go:107] acquiring lock: {Name:mke9bfe86d065dcb91fa7a419ea8c05899d7cdd6 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1204 15:32:17.929836    9434 cache.go:107] acquiring lock: {Name:mk99ac45a3499398882c1acdea599394246df3d8 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1204 15:32:17.929850    9434 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/20045-6982/.minikube/profiles/test-preload-807000/config.json: {Name:mkf6e19572da16c343e6d4ec8c3b0725d10a4507 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1204 15:32:17.929849    9434 cache.go:107] acquiring lock: {Name:mk360448ed42d04116af351bd996dee1a68d5af7 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1204 15:32:17.930053    9434 cache.go:107] acquiring lock: {Name:mkcd83a72c660b0d91ce5478794345b5a8a35a8b Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1204 15:32:17.930050    9434 cache.go:107] acquiring lock: {Name:mk72d928851eb0d8bfa4a95e04bf3a5e3977ee92 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1204 15:32:17.930087    9434 cache.go:107] acquiring lock: {Name:mkdd6b9739f1ade39d34b706e76223dbadc4f07e Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1204 15:32:17.930122    9434 cache.go:107] acquiring lock: {Name:mk30769aff3e8c3f747b27218829898ef88eb53d Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1204 15:32:17.930128    9434 cache.go:107] acquiring lock: {Name:mkd4bccf52c006c6a330187d24a24866ad0b4841 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1204 15:32:17.930348    9434 image.go:135] retrieving image: registry.k8s.io/kube-scheduler:v1.24.4
	I1204 15:32:17.930424    9434 image.go:135] retrieving image: registry.k8s.io/kube-apiserver:v1.24.4
	I1204 15:32:17.930443    9434 image.go:135] retrieving image: registry.k8s.io/pause:3.7
	I1204 15:32:17.930420    9434 start.go:360] acquireMachinesLock for test-preload-807000: {Name:mkab5eeb7049d26d12029fc21a1411c94c7c7493 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1204 15:32:17.930522    9434 start.go:364] duration metric: took 63.833µs to acquireMachinesLock for "test-preload-807000"
	I1204 15:32:17.930641    9434 image.go:135] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1204 15:32:17.930683    9434 image.go:135] retrieving image: registry.k8s.io/kube-controller-manager:v1.24.4
	I1204 15:32:17.930688    9434 image.go:135] retrieving image: registry.k8s.io/kube-proxy:v1.24.4
	I1204 15:32:17.930540    9434 start.go:93] Provisioning new machine with config: &{Name:test-preload-807000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19917/minikube-v1.34.0-1730913550-19917-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.4 ClusterName:test-preload-807000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.24.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.24.4 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1204 15:32:17.930711    9434 image.go:135] retrieving image: registry.k8s.io/etcd:3.5.3-0
	I1204 15:32:17.930713    9434 start.go:125] createHost starting for "" (driver="qemu2")
	I1204 15:32:17.930746    9434 image.go:135] retrieving image: registry.k8s.io/coredns/coredns:v1.8.6
	I1204 15:32:17.937697    9434 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I1204 15:32:17.942230    9434 image.go:178] daemon lookup for registry.k8s.io/kube-proxy:v1.24.4: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.24.4
	I1204 15:32:17.942249    9434 image.go:178] daemon lookup for registry.k8s.io/kube-apiserver:v1.24.4: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.24.4
	I1204 15:32:17.942287    9434 image.go:178] daemon lookup for registry.k8s.io/kube-controller-manager:v1.24.4: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.24.4
	I1204 15:32:17.942355    9434 image.go:178] daemon lookup for registry.k8s.io/kube-scheduler:v1.24.4: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.24.4
	I1204 15:32:17.942587    9434 image.go:178] daemon lookup for registry.k8s.io/etcd:3.5.3-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.3-0
	I1204 15:32:17.942683    9434 image.go:178] daemon lookup for registry.k8s.io/coredns/coredns:v1.8.6: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.8.6
	I1204 15:32:17.943061    9434 image.go:178] daemon lookup for registry.k8s.io/pause:3.7: Error response from daemon: No such image: registry.k8s.io/pause:3.7
	I1204 15:32:17.943089    9434 image.go:178] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1204 15:32:17.956186    9434 start.go:159] libmachine.API.Create for "test-preload-807000" (driver="qemu2")
	I1204 15:32:17.956208    9434 client.go:168] LocalClient.Create starting
	I1204 15:32:17.956316    9434 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/20045-6982/.minikube/certs/ca.pem
	I1204 15:32:17.956354    9434 main.go:141] libmachine: Decoding PEM data...
	I1204 15:32:17.956366    9434 main.go:141] libmachine: Parsing certificate...
	I1204 15:32:17.956401    9434 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/20045-6982/.minikube/certs/cert.pem
	I1204 15:32:17.956432    9434 main.go:141] libmachine: Decoding PEM data...
	I1204 15:32:17.956441    9434 main.go:141] libmachine: Parsing certificate...
	I1204 15:32:17.956838    9434 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/20045-6982/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/20045-6982/.minikube/cache/iso/arm64/minikube-v1.34.0-1730913550-19917-arm64.iso...
	I1204 15:32:18.131146    9434 main.go:141] libmachine: Creating SSH key...
	I1204 15:32:18.242836    9434 main.go:141] libmachine: Creating Disk image...
	I1204 15:32:18.242852    9434 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I1204 15:32:18.243107    9434 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/20045-6982/.minikube/machines/test-preload-807000/disk.qcow2.raw /Users/jenkins/minikube-integration/20045-6982/.minikube/machines/test-preload-807000/disk.qcow2
	I1204 15:32:18.254060    9434 main.go:141] libmachine: STDOUT: 
	I1204 15:32:18.254107    9434 main.go:141] libmachine: STDERR: 
	I1204 15:32:18.254166    9434 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/20045-6982/.minikube/machines/test-preload-807000/disk.qcow2 +20000M
	I1204 15:32:18.263846    9434 main.go:141] libmachine: STDOUT: Image resized.
	
	I1204 15:32:18.263873    9434 main.go:141] libmachine: STDERR: 
	I1204 15:32:18.263902    9434 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/20045-6982/.minikube/machines/test-preload-807000/disk.qcow2.raw and /Users/jenkins/minikube-integration/20045-6982/.minikube/machines/test-preload-807000/disk.qcow2
	I1204 15:32:18.263907    9434 main.go:141] libmachine: Starting QEMU VM...
	I1204 15:32:18.263918    9434 qemu.go:418] Using hvf for hardware acceleration
	I1204 15:32:18.263966    9434 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/20045-6982/.minikube/machines/test-preload-807000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/20045-6982/.minikube/machines/test-preload-807000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/20045-6982/.minikube/machines/test-preload-807000/qemu.pid -device virtio-net-pci,netdev=net0,mac=6e:2b:2a:2a:52:f3 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/20045-6982/.minikube/machines/test-preload-807000/disk.qcow2
	I1204 15:32:18.266061    9434 main.go:141] libmachine: STDOUT: 
	I1204 15:32:18.266074    9434 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1204 15:32:18.266103    9434 client.go:171] duration metric: took 309.886666ms to LocalClient.Create
	I1204 15:32:18.371228    9434 cache.go:162] opening:  /Users/jenkins/minikube-integration/20045-6982/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.24.4
	I1204 15:32:18.424140    9434 cache.go:162] opening:  /Users/jenkins/minikube-integration/20045-6982/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.24.4
	I1204 15:32:18.424693    9434 cache.go:162] opening:  /Users/jenkins/minikube-integration/20045-6982/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.24.4
	I1204 15:32:18.468555    9434 cache.go:162] opening:  /Users/jenkins/minikube-integration/20045-6982/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.24.4
	I1204 15:32:18.585948    9434 cache.go:162] opening:  /Users/jenkins/minikube-integration/20045-6982/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.3-0
	I1204 15:32:18.670777    9434 cache.go:162] opening:  /Users/jenkins/minikube-integration/20045-6982/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7
	W1204 15:32:18.685274    9434 image.go:283] image registry.k8s.io/coredns/coredns:v1.8.6 arch mismatch: want arm64 got amd64. fixing
	I1204 15:32:18.685312    9434 cache.go:162] opening:  /Users/jenkins/minikube-integration/20045-6982/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6
	I1204 15:32:18.816610    9434 cache.go:157] /Users/jenkins/minikube-integration/20045-6982/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7 exists
	I1204 15:32:18.816658    9434 cache.go:96] cache image "registry.k8s.io/pause:3.7" -> "/Users/jenkins/minikube-integration/20045-6982/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7" took 886.655125ms
	I1204 15:32:18.816705    9434 cache.go:80] save to tar file registry.k8s.io/pause:3.7 -> /Users/jenkins/minikube-integration/20045-6982/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7 succeeded
	W1204 15:32:19.384922    9434 image.go:283] image gcr.io/k8s-minikube/storage-provisioner:v5 arch mismatch: want arm64 got amd64. fixing
	I1204 15:32:19.385040    9434 cache.go:162] opening:  /Users/jenkins/minikube-integration/20045-6982/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5
	I1204 15:32:19.914547    9434 cache.go:157] /Users/jenkins/minikube-integration/20045-6982/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 exists
	I1204 15:32:19.914602    9434 cache.go:96] cache image "gcr.io/k8s-minikube/storage-provisioner:v5" -> "/Users/jenkins/minikube-integration/20045-6982/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5" took 1.9847585s
	I1204 15:32:19.914651    9434 cache.go:80] save to tar file gcr.io/k8s-minikube/storage-provisioner:v5 -> /Users/jenkins/minikube-integration/20045-6982/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 succeeded
	I1204 15:32:20.165181    9434 cache.go:157] /Users/jenkins/minikube-integration/20045-6982/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6 exists
	I1204 15:32:20.165238    9434 cache.go:96] cache image "registry.k8s.io/coredns/coredns:v1.8.6" -> "/Users/jenkins/minikube-integration/20045-6982/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6" took 2.235093208s
	I1204 15:32:20.165267    9434 cache.go:80] save to tar file registry.k8s.io/coredns/coredns:v1.8.6 -> /Users/jenkins/minikube-integration/20045-6982/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6 succeeded
	I1204 15:32:20.266549    9434 start.go:128] duration metric: took 2.335756583s to createHost
	I1204 15:32:20.266650    9434 start.go:83] releasing machines lock for "test-preload-807000", held for 2.336092667s
	W1204 15:32:20.266708    9434 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1204 15:32:20.285333    9434 out.go:177] * Deleting "test-preload-807000" in qemu2 ...
	W1204 15:32:20.319038    9434 out.go:270] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1204 15:32:20.319072    9434 start.go:729] Will try again in 5 seconds ...
	I1204 15:32:21.788862    9434 cache.go:157] /Users/jenkins/minikube-integration/20045-6982/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.24.4 exists
	I1204 15:32:21.788920    9434 cache.go:96] cache image "registry.k8s.io/kube-controller-manager:v1.24.4" -> "/Users/jenkins/minikube-integration/20045-6982/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.24.4" took 3.858894042s
	I1204 15:32:21.788975    9434 cache.go:80] save to tar file registry.k8s.io/kube-controller-manager:v1.24.4 -> /Users/jenkins/minikube-integration/20045-6982/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.24.4 succeeded
	I1204 15:32:22.504772    9434 cache.go:157] /Users/jenkins/minikube-integration/20045-6982/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.24.4 exists
	I1204 15:32:22.504827    9434 cache.go:96] cache image "registry.k8s.io/kube-scheduler:v1.24.4" -> "/Users/jenkins/minikube-integration/20045-6982/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.24.4" took 4.574938875s
	I1204 15:32:22.504857    9434 cache.go:80] save to tar file registry.k8s.io/kube-scheduler:v1.24.4 -> /Users/jenkins/minikube-integration/20045-6982/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.24.4 succeeded
	I1204 15:32:23.173517    9434 cache.go:157] /Users/jenkins/minikube-integration/20045-6982/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.24.4 exists
	I1204 15:32:23.173569    9434 cache.go:96] cache image "registry.k8s.io/kube-proxy:v1.24.4" -> "/Users/jenkins/minikube-integration/20045-6982/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.24.4" took 5.243461292s
	I1204 15:32:23.173594    9434 cache.go:80] save to tar file registry.k8s.io/kube-proxy:v1.24.4 -> /Users/jenkins/minikube-integration/20045-6982/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.24.4 succeeded
	I1204 15:32:24.231127    9434 cache.go:157] /Users/jenkins/minikube-integration/20045-6982/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.24.4 exists
	I1204 15:32:24.231180    9434 cache.go:96] cache image "registry.k8s.io/kube-apiserver:v1.24.4" -> "/Users/jenkins/minikube-integration/20045-6982/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.24.4" took 6.301287084s
	I1204 15:32:24.231206    9434 cache.go:80] save to tar file registry.k8s.io/kube-apiserver:v1.24.4 -> /Users/jenkins/minikube-integration/20045-6982/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.24.4 succeeded
	I1204 15:32:25.319497    9434 start.go:360] acquireMachinesLock for test-preload-807000: {Name:mkab5eeb7049d26d12029fc21a1411c94c7c7493 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1204 15:32:25.320072    9434 start.go:364] duration metric: took 488.583µs to acquireMachinesLock for "test-preload-807000"
	I1204 15:32:25.320205    9434 start.go:93] Provisioning new machine with config: &{Name:test-preload-807000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19917/minikube-v1.34.0-1730913550-19917-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.4 ClusterName:test-preload-807000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.24.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.24.4 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1204 15:32:25.320416    9434 start.go:125] createHost starting for "" (driver="qemu2")
	I1204 15:32:25.327213    9434 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I1204 15:32:25.376180    9434 start.go:159] libmachine.API.Create for "test-preload-807000" (driver="qemu2")
	I1204 15:32:25.376247    9434 client.go:168] LocalClient.Create starting
	I1204 15:32:25.376455    9434 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/20045-6982/.minikube/certs/ca.pem
	I1204 15:32:25.376562    9434 main.go:141] libmachine: Decoding PEM data...
	I1204 15:32:25.376584    9434 main.go:141] libmachine: Parsing certificate...
	I1204 15:32:25.376666    9434 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/20045-6982/.minikube/certs/cert.pem
	I1204 15:32:25.376731    9434 main.go:141] libmachine: Decoding PEM data...
	I1204 15:32:25.376752    9434 main.go:141] libmachine: Parsing certificate...
	I1204 15:32:25.377371    9434 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/20045-6982/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/20045-6982/.minikube/cache/iso/arm64/minikube-v1.34.0-1730913550-19917-arm64.iso...
	I1204 15:32:25.546775    9434 main.go:141] libmachine: Creating SSH key...
	I1204 15:32:25.628348    9434 main.go:141] libmachine: Creating Disk image...
	I1204 15:32:25.628354    9434 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I1204 15:32:25.628544    9434 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/20045-6982/.minikube/machines/test-preload-807000/disk.qcow2.raw /Users/jenkins/minikube-integration/20045-6982/.minikube/machines/test-preload-807000/disk.qcow2
	I1204 15:32:25.638542    9434 main.go:141] libmachine: STDOUT: 
	I1204 15:32:25.638558    9434 main.go:141] libmachine: STDERR: 
	I1204 15:32:25.638631    9434 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/20045-6982/.minikube/machines/test-preload-807000/disk.qcow2 +20000M
	I1204 15:32:25.647398    9434 main.go:141] libmachine: STDOUT: Image resized.
	
	I1204 15:32:25.647414    9434 main.go:141] libmachine: STDERR: 
	I1204 15:32:25.647426    9434 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/20045-6982/.minikube/machines/test-preload-807000/disk.qcow2.raw and /Users/jenkins/minikube-integration/20045-6982/.minikube/machines/test-preload-807000/disk.qcow2
	I1204 15:32:25.647432    9434 main.go:141] libmachine: Starting QEMU VM...
	I1204 15:32:25.647438    9434 qemu.go:418] Using hvf for hardware acceleration
	I1204 15:32:25.647481    9434 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/20045-6982/.minikube/machines/test-preload-807000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/20045-6982/.minikube/machines/test-preload-807000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/20045-6982/.minikube/machines/test-preload-807000/qemu.pid -device virtio-net-pci,netdev=net0,mac=4a:d0:cc:d0:42:6d -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/20045-6982/.minikube/machines/test-preload-807000/disk.qcow2
	I1204 15:32:25.649550    9434 main.go:141] libmachine: STDOUT: 
	I1204 15:32:25.649569    9434 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1204 15:32:25.649583    9434 client.go:171] duration metric: took 273.315208ms to LocalClient.Create
	I1204 15:32:27.650245    9434 start.go:128] duration metric: took 2.329734834s to createHost
	I1204 15:32:27.650333    9434 start.go:83] releasing machines lock for "test-preload-807000", held for 2.330214917s
	W1204 15:32:27.650700    9434 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p test-preload-807000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p test-preload-807000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1204 15:32:27.668308    9434 out.go:201] 
	W1204 15:32:27.672345    9434 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W1204 15:32:27.672387    9434 out.go:270] * 
	* 
	W1204 15:32:27.675017    9434 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1204 15:32:27.688127    9434 out.go:201] 

** /stderr **
preload_test.go:46: out/minikube-darwin-arm64 start -p test-preload-807000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=qemu2  --kubernetes-version=v1.24.4 failed: exit status 80
panic.go:629: *** TestPreload FAILED at 2024-12-04 15:32:27.705915 -0800 PST m=+692.733456918
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p test-preload-807000 -n test-preload-807000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p test-preload-807000 -n test-preload-807000: exit status 7 (74.689375ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "test-preload-807000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:175: Cleaning up "test-preload-807000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p test-preload-807000
--- FAIL: TestPreload (10.06s)
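
Note: although the VM never started, the image-caching half of TestPreload did complete: the cache.go lines above record seven of the eight required images saved to tar (all but etcd:3.5.3-0, for which no "exists" line appears before the run ends). A sketch to spot-check the cache on disk, using the paths from the log above:

	# List the registry.k8s.io image tarballs that cache.go reported as saved
	# (path taken from the log above):
	ls /Users/jenkins/minikube-integration/20045-6982/.minikube/cache/images/arm64/registry.k8s.io/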

TestScheduledStopUnix (10.09s)

=== RUN   TestScheduledStopUnix
scheduled_stop_test.go:128: (dbg) Run:  out/minikube-darwin-arm64 start -p scheduled-stop-730000 --memory=2048 --driver=qemu2 
scheduled_stop_test.go:128: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p scheduled-stop-730000 --memory=2048 --driver=qemu2 : exit status 80 (9.924422125s)

-- stdout --
	* [scheduled-stop-730000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	  - MINIKUBE_LOCATION=20045
	  - KUBECONFIG=/Users/jenkins/minikube-integration/20045-6982/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/20045-6982/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "scheduled-stop-730000" primary control-plane node in "scheduled-stop-730000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "scheduled-stop-730000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p scheduled-stop-730000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
scheduled_stop_test.go:130: starting minikube: exit status 80

-- stdout --
	* [scheduled-stop-730000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	  - MINIKUBE_LOCATION=20045
	  - KUBECONFIG=/Users/jenkins/minikube-integration/20045-6982/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/20045-6982/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "scheduled-stop-730000" primary control-plane node in "scheduled-stop-730000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "scheduled-stop-730000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p scheduled-stop-730000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
panic.go:629: *** TestScheduledStopUnix FAILED at 2024-12-04 15:32:37.79114 -0800 PST m=+702.818588709
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p scheduled-stop-730000 -n scheduled-stop-730000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p scheduled-stop-730000 -n scheduled-stop-730000: exit status 7 (75.292583ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "scheduled-stop-730000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:175: Cleaning up "scheduled-stop-730000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p scheduled-stop-730000
--- FAIL: TestScheduledStopUnix (10.09s)

TestSkaffold (12.55s)

=== RUN   TestSkaffold
skaffold_test.go:59: (dbg) Run:  /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/skaffold.exe456421967 version
skaffold_test.go:59: (dbg) Done: /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/skaffold.exe456421967 version: (1.019776375s)
skaffold_test.go:63: skaffold version: v2.13.2
skaffold_test.go:66: (dbg) Run:  out/minikube-darwin-arm64 start -p skaffold-595000 --memory=2600 --driver=qemu2 
skaffold_test.go:66: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p skaffold-595000 --memory=2600 --driver=qemu2 : exit status 80 (9.881180708s)

-- stdout --
	* [skaffold-595000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	  - MINIKUBE_LOCATION=20045
	  - KUBECONFIG=/Users/jenkins/minikube-integration/20045-6982/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/20045-6982/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "skaffold-595000" primary control-plane node in "skaffold-595000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2600MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "skaffold-595000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2600MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p skaffold-595000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
skaffold_test.go:68: starting minikube: exit status 80

-- stdout --
	* [skaffold-595000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	  - MINIKUBE_LOCATION=20045
	  - KUBECONFIG=/Users/jenkins/minikube-integration/20045-6982/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/20045-6982/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "skaffold-595000" primary control-plane node in "skaffold-595000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2600MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "skaffold-595000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2600MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p skaffold-595000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
panic.go:629: *** TestSkaffold FAILED at 2024-12-04 15:32:50.34257 -0800 PST m=+715.369903793
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p skaffold-595000 -n skaffold-595000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p skaffold-595000 -n skaffold-595000: exit status 7 (69.689792ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "skaffold-595000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:175: Cleaning up "skaffold-595000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p skaffold-595000
--- FAIL: TestSkaffold (12.55s)

TestRunningBinaryUpgrade (590.27s)

=== RUN   TestRunningBinaryUpgrade
=== PAUSE TestRunningBinaryUpgrade

=== CONT  TestRunningBinaryUpgrade
version_upgrade_test.go:120: (dbg) Run:  /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/minikube-v1.26.0.1735120051 start -p running-upgrade-084000 --memory=2200 --vm-driver=qemu2 
version_upgrade_test.go:120: (dbg) Done: /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/minikube-v1.26.0.1735120051 start -p running-upgrade-084000 --memory=2200 --vm-driver=qemu2 : (53.383515125s)
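
Note: the legacy v1.26.0 binary starts cleanly here while every current-binary start above failed, because its profile predates socket_vmnet: in the cluster config dumped below, Network, SocketVMnetClientPath, and SocketVMnetPath are all empty and the node IP is 10.0.2.15, i.e. QEMU user-mode networking, so the refused /var/run/socket_vmnet socket is never touched. A sketch to confirm against the saved profile (path taken from the log below):

	# The legacy profile should show empty socket_vmnet fields (a sketch;
	# config.json path from the log below):
	grep -E '"(Network|SocketVMnetClientPath|SocketVMnetPath)"' \
	    /Users/jenkins/minikube-integration/20045-6982/.minikube/profiles/running-upgrade-084000/config.json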
version_upgrade_test.go:130: (dbg) Run:  out/minikube-darwin-arm64 start -p running-upgrade-084000 --memory=2200 --alsologtostderr -v=1 --driver=qemu2 
version_upgrade_test.go:130: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p running-upgrade-084000 --memory=2200 --alsologtostderr -v=1 --driver=qemu2 : exit status 80 (8m23.063853083s)

-- stdout --
	* [running-upgrade-084000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	  - MINIKUBE_LOCATION=20045
	  - KUBECONFIG=/Users/jenkins/minikube-integration/20045-6982/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/20045-6982/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Kubernetes 1.31.2 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.31.2
	* Using the qemu2 driver based on existing profile
	* Starting "running-upgrade-084000" primary control-plane node in "running-upgrade-084000" cluster
	* Updating the running qemu2 "running-upgrade-084000" VM ...
	* Preparing Kubernetes v1.24.1 on Docker 20.10.16 ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	  - Configuring RBAC rules ...
	* Configuring bridge CNI (Container Networking Interface) ...
	* Verifying Kubernetes components...
	  - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	* Enabled addons: storage-provisioner
	
	

-- /stdout --
** stderr ** 
	I1204 15:34:26.797893    9926 out.go:345] Setting OutFile to fd 1 ...
	I1204 15:34:26.798311    9926 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1204 15:34:26.798315    9926 out.go:358] Setting ErrFile to fd 2...
	I1204 15:34:26.798317    9926 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1204 15:34:26.798485    9926 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/20045-6982/.minikube/bin
	I1204 15:34:26.799691    9926 out.go:352] Setting JSON to false
	I1204 15:34:26.819155    9926 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":5636,"bootTime":1733349630,"procs":549,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"15.0.1","kernelVersion":"24.0.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W1204 15:34:26.819222    9926 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I1204 15:34:26.823923    9926 out.go:177] * [running-upgrade-084000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	I1204 15:34:26.830870    9926 out.go:177]   - MINIKUBE_LOCATION=20045
	I1204 15:34:26.830915    9926 notify.go:220] Checking for updates...
	I1204 15:34:26.838799    9926 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/20045-6982/kubeconfig
	I1204 15:34:26.842881    9926 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I1204 15:34:26.844013    9926 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1204 15:34:26.846904    9926 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/20045-6982/.minikube
	I1204 15:34:26.849864    9926 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1204 15:34:26.853201    9926 config.go:182] Loaded profile config "running-upgrade-084000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I1204 15:34:26.856762    9926 out.go:177] * Kubernetes 1.31.2 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.31.2
	I1204 15:34:26.859869    9926 driver.go:394] Setting default libvirt URI to qemu:///system
	I1204 15:34:26.863715    9926 out.go:177] * Using the qemu2 driver based on existing profile
	I1204 15:34:26.870839    9926 start.go:297] selected driver: qemu2
	I1204 15:34:26.870844    9926 start.go:901] validating driver "qemu2" against &{Name:running-upgrade-084000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.26.0-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:61592 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.1 ClusterName:running-upgrade-084000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:0s}
	I1204 15:34:26.870884    9926 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1204 15:34:26.873658    9926 cni.go:84] Creating CNI manager for ""
	I1204 15:34:26.873691    9926 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I1204 15:34:26.873716    9926 start.go:340] cluster config:
	{Name:running-upgrade-084000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.26.0-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:61592 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.1 ClusterName:running-upgrade-084000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:0s}
	I1204 15:34:26.873773    9926 iso.go:125] acquiring lock: {Name:mk798ab7885b095925d55c843b0600e7ea181045 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1204 15:34:26.882948    9926 out.go:177] * Starting "running-upgrade-084000" primary control-plane node in "running-upgrade-084000" cluster
	I1204 15:34:26.886870    9926 preload.go:131] Checking if preload exists for k8s version v1.24.1 and runtime docker
	I1204 15:34:26.886886    9926 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/20045-6982/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.1-docker-overlay2-arm64.tar.lz4
	I1204 15:34:26.886894    9926 cache.go:56] Caching tarball of preloaded images
	I1204 15:34:26.886969    9926 preload.go:172] Found /Users/jenkins/minikube-integration/20045-6982/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I1204 15:34:26.886974    9926 cache.go:59] Finished verifying existence of preloaded tar for v1.24.1 on docker
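
The preload file name in the cache lines above is fully derived: preload schema version (v18 here), Kubernetes version, container runtime, storage driver, and CPU architecture. A sketch of that derivation, assuming the path layout shown in the log (the helper name is illustrative, not minikube's preload.go API):

    package preload

    import (
        "fmt"
        "path/filepath"
    )

    // preloadPath rebuilds the cache file name seen in the log:
    // preloaded-images-k8s-v18-v1.24.1-docker-overlay2-arm64.tar.lz4
    func preloadPath(minikubeHome, k8sVersion, runtime, arch string) string {
        name := fmt.Sprintf("preloaded-images-k8s-v18-%s-%s-overlay2-%s.tar.lz4",
            k8sVersion, runtime, arch)
        return filepath.Join(minikubeHome, "cache", "preloaded-tarball", name)
    }
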
	I1204 15:34:26.887032    9926 profile.go:143] Saving config to /Users/jenkins/minikube-integration/20045-6982/.minikube/profiles/running-upgrade-084000/config.json ...
	I1204 15:34:26.887554    9926 start.go:360] acquireMachinesLock for running-upgrade-084000: {Name:mkab5eeb7049d26d12029fc21a1411c94c7c7493 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1204 15:34:26.887581    9926 start.go:364] duration metric: took 21.834µs to acquireMachinesLock for "running-upgrade-084000"
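
The acquireMachinesLock line advertises Delay:500ms Timeout:13m0s. A rough sketch of that acquire-with-retry contract, assuming a simple exclusive lock file; this is not minikube's actual lock implementation, only the retry shape the log parameters imply:

    package lock

    import (
        "fmt"
        "os"
        "time"
    )

    // acquire polls for an exclusive lock file every `delay` until
    // `timeout` elapses, mirroring the Delay/Timeout pair in the log.
    func acquire(path string, delay, timeout time.Duration) (release func(), err error) {
        deadline := time.Now().Add(timeout)
        for {
            f, err := os.OpenFile(path, os.O_CREATE|os.O_EXCL, 0o600)
            if err == nil {
                return func() { f.Close(); os.Remove(path) }, nil
            }
            if time.Now().After(deadline) {
                return nil, fmt.Errorf("timed out acquiring %s", path)
            }
            time.Sleep(delay)
        }
    }
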
	I1204 15:34:26.887589    9926 start.go:96] Skipping create...Using existing machine configuration
	I1204 15:34:26.887593    9926 fix.go:54] fixHost starting: 
	I1204 15:34:26.888217    9926 fix.go:112] recreateIfNeeded on running-upgrade-084000: state=Running err=<nil>
	W1204 15:34:26.888228    9926 fix.go:138] unexpected machine state, will restart: <nil>
	I1204 15:34:26.892863    9926 out.go:177] * Updating the running qemu2 "running-upgrade-084000" VM ...
	I1204 15:34:26.900818    9926 machine.go:93] provisionDockerMachine start ...
	I1204 15:34:26.900862    9926 main.go:141] libmachine: Using SSH client type: native
	I1204 15:34:26.900988    9926 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x104d12f60] 0x104d157a0 <nil>  [] 0s} localhost 61560 <nil> <nil>}
	I1204 15:34:26.900996    9926 main.go:141] libmachine: About to run SSH command:
	hostname
	I1204 15:34:26.963392    9926 main.go:141] libmachine: SSH cmd err, output: <nil>: running-upgrade-084000
	
	I1204 15:34:26.963409    9926 buildroot.go:166] provisioning hostname "running-upgrade-084000"
	I1204 15:34:26.963479    9926 main.go:141] libmachine: Using SSH client type: native
	I1204 15:34:26.963599    9926 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x104d12f60] 0x104d157a0 <nil>  [] 0s} localhost 61560 <nil> <nil>}
	I1204 15:34:26.963605    9926 main.go:141] libmachine: About to run SSH command:
	sudo hostname running-upgrade-084000 && echo "running-upgrade-084000" | sudo tee /etc/hostname
	I1204 15:34:27.028526    9926 main.go:141] libmachine: SSH cmd err, output: <nil>: running-upgrade-084000
	
	I1204 15:34:27.028583    9926 main.go:141] libmachine: Using SSH client type: native
	I1204 15:34:27.028691    9926 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x104d12f60] 0x104d157a0 <nil>  [] 0s} localhost 61560 <nil> <nil>}
	I1204 15:34:27.028700    9926 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\srunning-upgrade-084000' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 running-upgrade-084000/g' /etc/hosts;
				else 
					echo '127.0.1.1 running-upgrade-084000' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1204 15:34:27.088329    9926 main.go:141] libmachine: SSH cmd err, output: <nil>: 
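
The /etc/hosts script above is a fixed shell template with the machine name substituted in: rewrite an existing 127.0.1.1 entry if present, otherwise append one. A sketch of how such a command string could be assembled (the helper name is illustrative):

    package provision

    import "fmt"

    // hostsCmd builds the idempotent /etc/hosts update seen in the log:
    // only touch the file when the name is missing, prefer rewriting the
    // existing 127.0.1.1 line over appending a duplicate.
    func hostsCmd(name string) string {
        return fmt.Sprintf(`
            if ! grep -xq '.*\s%[1]s' /etc/hosts; then
                if grep -xq '127.0.1.1\s.*' /etc/hosts; then
                    sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 %[1]s/g' /etc/hosts;
                else
                    echo '127.0.1.1 %[1]s' | sudo tee -a /etc/hosts;
                fi
            fi`, name)
    }
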
	I1204 15:34:27.088340    9926 buildroot.go:172] set auth options {CertDir:/Users/jenkins/minikube-integration/20045-6982/.minikube CaCertPath:/Users/jenkins/minikube-integration/20045-6982/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/20045-6982/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/20045-6982/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/20045-6982/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/20045-6982/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/20045-6982/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/20045-6982/.minikube}
	I1204 15:34:27.088346    9926 buildroot.go:174] setting up certificates
	I1204 15:34:27.088350    9926 provision.go:84] configureAuth start
	I1204 15:34:27.088357    9926 provision.go:143] copyHostCerts
	I1204 15:34:27.088419    9926 exec_runner.go:144] found /Users/jenkins/minikube-integration/20045-6982/.minikube/key.pem, removing ...
	I1204 15:34:27.088439    9926 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/20045-6982/.minikube/key.pem
	I1204 15:34:27.088555    9926 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/20045-6982/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/20045-6982/.minikube/key.pem (1679 bytes)
	I1204 15:34:27.088746    9926 exec_runner.go:144] found /Users/jenkins/minikube-integration/20045-6982/.minikube/ca.pem, removing ...
	I1204 15:34:27.088749    9926 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/20045-6982/.minikube/ca.pem
	I1204 15:34:27.088798    9926 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/20045-6982/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/20045-6982/.minikube/ca.pem (1078 bytes)
	I1204 15:34:27.088909    9926 exec_runner.go:144] found /Users/jenkins/minikube-integration/20045-6982/.minikube/cert.pem, removing ...
	I1204 15:34:27.088912    9926 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/20045-6982/.minikube/cert.pem
	I1204 15:34:27.088950    9926 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/20045-6982/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/20045-6982/.minikube/cert.pem (1123 bytes)
	I1204 15:34:27.089061    9926 provision.go:117] generating server cert: /Users/jenkins/minikube-integration/20045-6982/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/20045-6982/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/20045-6982/.minikube/certs/ca-key.pem org=jenkins.running-upgrade-084000 san=[127.0.0.1 localhost minikube running-upgrade-084000]
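
The generating-server-cert line lists the SANs explicitly: 127.0.0.1, localhost, minikube, and the machine name, so both the host-side SSH tunnel and in-VM clients can verify the Docker TLS endpoint. A sketch of that signing step with Go's crypto/x509, assuming a CA pair like the ca.pem/ca-key.pem above; the function is illustrative, not minikube's provision code:

    package provision

    import (
        "crypto/rand"
        "crypto/rsa"
        "crypto/x509"
        "crypto/x509/pkix"
        "math/big"
        "net"
        "time"
    )

    // issueServerCert signs a server cert with the shared CA, embedding
    // the SANs from the log line above. The 26280h lifetime matches the
    // CertExpiration field in the cluster config earlier in this log.
    func issueServerCert(ca *x509.Certificate, caKey *rsa.PrivateKey) ([]byte, *rsa.PrivateKey, error) {
        key, err := rsa.GenerateKey(rand.Reader, 2048)
        if err != nil {
            return nil, nil, err
        }
        tmpl := &x509.Certificate{
            SerialNumber: big.NewInt(time.Now().UnixNano()),
            Subject:      pkix.Name{Organization: []string{"jenkins.running-upgrade-084000"}},
            NotBefore:    time.Now().Add(-time.Hour),
            NotAfter:     time.Now().Add(26280 * time.Hour),
            KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
            ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
            DNSNames:     []string{"localhost", "minikube", "running-upgrade-084000"},
            IPAddresses:  []net.IP{net.ParseIP("127.0.0.1")},
        }
        der, err := x509.CreateCertificate(rand.Reader, tmpl, ca, &key.PublicKey, caKey)
        return der, key, err
    }
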
	I1204 15:34:27.151646    9926 provision.go:177] copyRemoteCerts
	I1204 15:34:27.151695    9926 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1204 15:34:27.151703    9926 sshutil.go:53] new ssh client: &{IP:localhost Port:61560 SSHKeyPath:/Users/jenkins/minikube-integration/20045-6982/.minikube/machines/running-upgrade-084000/id_rsa Username:docker}
	I1204 15:34:27.183752    9926 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/20045-6982/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1204 15:34:27.190893    9926 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/20045-6982/.minikube/machines/server.pem --> /etc/docker/server.pem (1224 bytes)
	I1204 15:34:27.197804    9926 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/20045-6982/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1204 15:34:27.204659    9926 provision.go:87] duration metric: took 116.29925ms to configureAuth
	I1204 15:34:27.204668    9926 buildroot.go:189] setting minikube options for container-runtime
	I1204 15:34:27.204781    9926 config.go:182] Loaded profile config "running-upgrade-084000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I1204 15:34:27.204826    9926 main.go:141] libmachine: Using SSH client type: native
	I1204 15:34:27.204909    9926 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x104d12f60] 0x104d157a0 <nil>  [] 0s} localhost 61560 <nil> <nil>}
	I1204 15:34:27.204913    9926 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I1204 15:34:27.267684    9926 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I1204 15:34:27.267694    9926 buildroot.go:70] root file system type: tmpfs
	I1204 15:34:27.267744    9926 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I1204 15:34:27.267817    9926 main.go:141] libmachine: Using SSH client type: native
	I1204 15:34:27.267928    9926 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x104d12f60] 0x104d157a0 <nil>  [] 0s} localhost 61560 <nil> <nil>}
	I1204 15:34:27.267961    9926 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=qemu2 --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I1204 15:34:27.335977    9926 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=qemu2 --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I1204 15:34:27.336038    9926 main.go:141] libmachine: Using SSH client type: native
	I1204 15:34:27.336146    9926 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x104d12f60] 0x104d157a0 <nil>  [] 0s} localhost 61560 <nil> <nil>}
	I1204 15:34:27.336155    9926 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I1204 15:34:27.399044    9926 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1204 15:34:27.399055    9926 machine.go:96] duration metric: took 498.2265ms to provisionDockerMachine
	I1204 15:34:27.399060    9926 start.go:293] postStartSetup for "running-upgrade-084000" (driver="qemu2")
	I1204 15:34:27.399067    9926 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1204 15:34:27.399128    9926 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1204 15:34:27.399137    9926 sshutil.go:53] new ssh client: &{IP:localhost Port:61560 SSHKeyPath:/Users/jenkins/minikube-integration/20045-6982/.minikube/machines/running-upgrade-084000/id_rsa Username:docker}
	I1204 15:34:27.434623    9926 ssh_runner.go:195] Run: cat /etc/os-release
	I1204 15:34:27.436024    9926 info.go:137] Remote host: Buildroot 2021.02.12
	I1204 15:34:27.436031    9926 filesync.go:126] Scanning /Users/jenkins/minikube-integration/20045-6982/.minikube/addons for local assets ...
	I1204 15:34:27.436105    9926 filesync.go:126] Scanning /Users/jenkins/minikube-integration/20045-6982/.minikube/files for local assets ...
	I1204 15:34:27.436197    9926 filesync.go:149] local asset: /Users/jenkins/minikube-integration/20045-6982/.minikube/files/etc/ssl/certs/74952.pem -> 74952.pem in /etc/ssl/certs
	I1204 15:34:27.436300    9926 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1204 15:34:27.440088    9926 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/20045-6982/.minikube/files/etc/ssl/certs/74952.pem --> /etc/ssl/certs/74952.pem (1708 bytes)
	I1204 15:34:27.449642    9926 start.go:296] duration metric: took 50.572125ms for postStartSetup
	I1204 15:34:27.449659    9926 fix.go:56] duration metric: took 562.060542ms for fixHost
	I1204 15:34:27.449722    9926 main.go:141] libmachine: Using SSH client type: native
	I1204 15:34:27.449838    9926 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x104d12f60] 0x104d157a0 <nil>  [] 0s} localhost 61560 <nil> <nil>}
	I1204 15:34:27.449843    9926 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1204 15:34:27.511936    9926 main.go:141] libmachine: SSH cmd err, output: <nil>: 1733355267.352677222
	
	I1204 15:34:27.511945    9926 fix.go:216] guest clock: 1733355267.352677222
	I1204 15:34:27.511949    9926 fix.go:229] Guest: 2024-12-04 15:34:27.352677222 -0800 PST Remote: 2024-12-04 15:34:27.449661 -0800 PST m=+0.674109043 (delta=-96.983778ms)
	I1204 15:34:27.511967    9926 fix.go:200] guest clock delta is within tolerance: -96.983778ms
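
The fix.go lines above compute delta = guest - host (-96.983778ms here) from the guest's `date +%s.%N` output and skip a clock resync when the drift is within tolerance. A sketch of that check; the tolerance is passed in rather than assumed, since the real cutoff lives in minikube's fix.go:

    package fixclock

    import "time"

    // withinTolerance returns the signed drift and whether |drift| <= tol.
    func withinTolerance(guest, host time.Time, tol time.Duration) (time.Duration, bool) {
        delta := guest.Sub(host) // -96.983778ms in the run above
        abs := delta
        if abs < 0 {
            abs = -abs
        }
        return delta, abs <= tol
    }
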
	I1204 15:34:27.511973    9926 start.go:83] releasing machines lock for "running-upgrade-084000", held for 624.379917ms
	I1204 15:34:27.512058    9926 ssh_runner.go:195] Run: cat /version.json
	I1204 15:34:27.512067    9926 sshutil.go:53] new ssh client: &{IP:localhost Port:61560 SSHKeyPath:/Users/jenkins/minikube-integration/20045-6982/.minikube/machines/running-upgrade-084000/id_rsa Username:docker}
	I1204 15:34:27.512070    9926 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1204 15:34:27.512087    9926 sshutil.go:53] new ssh client: &{IP:localhost Port:61560 SSHKeyPath:/Users/jenkins/minikube-integration/20045-6982/.minikube/machines/running-upgrade-084000/id_rsa Username:docker}
	W1204 15:34:27.513539    9926 sshutil.go:64] dial failure (will retry): dial tcp [::1]:61560: connect: connection refused
	I1204 15:34:27.513562    9926 retry.go:31] will retry after 220.988968ms: dial tcp [::1]:61560: connect: connection refused
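
The dial failure above is retried rather than treated as fatal: retry.go schedules another attempt after ~221ms. A minimal sketch of that retry shape (the attempt count and fixed pause are illustrative, not minikube's actual schedule):

    package retry

    import (
        "net"
        "time"
    )

    // dialWithRetry reattempts a refused dial a bounded number of times,
    // pausing between attempts, and returns the last error on exhaustion.
    func dialWithRetry(addr string, attempts int, pause time.Duration) (net.Conn, error) {
        var lastErr error
        for i := 0; i < attempts; i++ {
            c, err := net.DialTimeout("tcp", addr, time.Second)
            if err == nil {
                return c, nil
            }
            lastErr = err
            time.Sleep(pause)
        }
        return nil, lastErr
    }
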
	W1204 15:34:27.545080    9926 start.go:419] Unable to open version.json: cat /version.json: Process exited with status 1
	stdout:
	
	stderr:
	cat: /version.json: No such file or directory
	I1204 15:34:27.545151    9926 ssh_runner.go:195] Run: systemctl --version
	I1204 15:34:27.547037    9926 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1204 15:34:27.548680    9926 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1204 15:34:27.548718    9926 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *bridge* -not -name *podman* -not -name *.mk_disabled -printf "%p, " -exec sh -c "sudo sed -i -r -e '/"dst": ".*:.*"/d' -e 's|^(.*)"dst": (.*)[,*]$|\1"dst": \2|g' -e '/"subnet": ".*:.*"/d' -e 's|^(.*)"subnet": ".*"(.*)[,*]$|\1"subnet": "10.244.0.0/16"\2|g' {}" ;
	I1204 15:34:27.551464    9926 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *podman* -not -name *.mk_disabled -printf "%p, " -exec sh -c "sudo sed -i -r -e 's|^(.*)"subnet": ".*"(.*)$|\1"subnet": "10.244.0.0/16"\2|g' -e 's|^(.*)"gateway": ".*"(.*)$|\1"gateway": "10.244.0.1"\2|g' {}" ;
	I1204 15:34:27.555654    9926 cni.go:308] configured [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1204 15:34:27.555662    9926 start.go:495] detecting cgroup driver to use...
	I1204 15:34:27.555777    9926 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1204 15:34:27.561526    9926 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.7"|' /etc/containerd/config.toml"
	I1204 15:34:27.564400    9926 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I1204 15:34:27.567651    9926 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I1204 15:34:27.567690    9926 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I1204 15:34:27.570912    9926 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1204 15:34:27.574581    9926 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I1204 15:34:27.577964    9926 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1204 15:34:27.580873    9926 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1204 15:34:27.583796    9926 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I1204 15:34:27.587143    9926 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I1204 15:34:27.590487    9926 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I1204 15:34:27.593718    9926 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1204 15:34:27.596380    9926 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1204 15:34:27.599399    9926 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1204 15:34:27.686897    9926 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I1204 15:34:27.698066    9926 start.go:495] detecting cgroup driver to use...
	I1204 15:34:27.698165    9926 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I1204 15:34:27.703604    9926 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1204 15:34:27.713192    9926 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1204 15:34:27.722593    9926 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1204 15:34:27.727416    9926 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1204 15:34:27.732435    9926 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1204 15:34:27.737778    9926 ssh_runner.go:195] Run: which cri-dockerd
	I1204 15:34:27.739364    9926 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I1204 15:34:27.742129    9926 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I1204 15:34:27.753265    9926 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I1204 15:34:27.833539    9926 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I1204 15:34:27.907703    9926 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I1204 15:34:27.907760    9926 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I1204 15:34:27.913292    9926 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1204 15:34:27.996361    9926 ssh_runner.go:195] Run: sudo systemctl restart docker
	I1204 15:34:30.856237    9926 ssh_runner.go:235] Completed: sudo systemctl restart docker: (2.859831208s)
	I1204 15:34:30.856318    9926 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I1204 15:34:30.863354    9926 ssh_runner.go:195] Run: sudo systemctl stop cri-docker.socket
	I1204 15:34:30.869733    9926 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I1204 15:34:30.874453    9926 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I1204 15:34:30.937488    9926 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I1204 15:34:31.004130    9926 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1204 15:34:31.074090    9926 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I1204 15:34:31.080315    9926 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I1204 15:34:31.084955    9926 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1204 15:34:31.151073    9926 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I1204 15:34:31.189533    9926 start.go:542] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I1204 15:34:31.189639    9926 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I1204 15:34:31.191730    9926 start.go:563] Will wait 60s for crictl version
	I1204 15:34:31.191786    9926 ssh_runner.go:195] Run: which crictl
	I1204 15:34:31.193338    9926 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1204 15:34:31.204771    9926 start.go:579] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  20.10.16
	RuntimeApiVersion:  1.41.0
	I1204 15:34:31.204852    9926 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I1204 15:34:31.217301    9926 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I1204 15:34:31.241684    9926 out.go:235] * Preparing Kubernetes v1.24.1 on Docker 20.10.16 ...
	I1204 15:34:31.241773    9926 ssh_runner.go:195] Run: grep 10.0.2.2	host.minikube.internal$ /etc/hosts
	I1204 15:34:31.243096    9926 kubeadm.go:883] updating cluster {Name:running-upgrade-084000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19917/minikube-v1.34.0-1730913550-19917-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:61592 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.1 ClusterName:running-upgrade-084000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:0s} ...
	I1204 15:34:31.243148    9926 preload.go:131] Checking if preload exists for k8s version v1.24.1 and runtime docker
	I1204 15:34:31.243195    9926 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I1204 15:34:31.253677    9926 docker.go:689] Got preloaded images: -- stdout --
	k8s.gcr.io/kube-apiserver:v1.24.1
	k8s.gcr.io/kube-proxy:v1.24.1
	k8s.gcr.io/kube-controller-manager:v1.24.1
	k8s.gcr.io/kube-scheduler:v1.24.1
	k8s.gcr.io/etcd:3.5.3-0
	k8s.gcr.io/pause:3.7
	k8s.gcr.io/coredns/coredns:v1.8.6
	k8s.gcr.io/pause:3.6
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I1204 15:34:31.253689    9926 docker.go:695] registry.k8s.io/kube-apiserver:v1.24.1 wasn't preloaded
	I1204 15:34:31.253754    9926 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I1204 15:34:31.256717    9926 ssh_runner.go:195] Run: which lz4
	I1204 15:34:31.257953    9926 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I1204 15:34:31.259134    9926 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I1204 15:34:31.259143    9926 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/20045-6982/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.1-docker-overlay2-arm64.tar.lz4 --> /preloaded.tar.lz4 (359514331 bytes)
	I1204 15:34:32.240705    9926 docker.go:653] duration metric: took 982.785209ms to copy over tarball
	I1204 15:34:32.240795    9926 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I1204 15:34:33.377952    9926 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (1.137133375s)
	I1204 15:34:33.377967    9926 ssh_runner.go:146] rm: /preloaded.tar.lz4
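
The tarball lines above form one sequence: stat /preloaded.tar.lz4 on the guest, scp it over when the check fails, untar it into /var through lz4, then delete it to reclaim disk. A sketch of that sequence behind a small runner interface (both names are illustrative stand-ins for minikube's ssh_runner):

    package preload

    // Runner abstracts the ssh_runner calls seen in the log.
    type Runner interface {
        Run(cmd string) error
        Copy(localPath, remotePath string) error
    }

    // ensurePreload probes for the tarball, copies it if absent,
    // extracts it with lz4, and removes the staged file.
    func ensurePreload(r Runner, localTarball string) error {
        if err := r.Run(`stat -c "%s %y" /preloaded.tar.lz4`); err != nil {
            if err := r.Copy(localTarball, "/preloaded.tar.lz4"); err != nil {
                return err
            }
        }
        if err := r.Run("sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4"); err != nil {
            return err
        }
        return r.Run("rm /preloaded.tar.lz4")
    }
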
	I1204 15:34:33.394061    9926 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I1204 15:34:33.397174    9926 ssh_runner.go:362] scp memory --> /var/lib/docker/image/overlay2/repositories.json (2523 bytes)
	I1204 15:34:33.402143    9926 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1204 15:34:33.465337    9926 ssh_runner.go:195] Run: sudo systemctl restart docker
	I1204 15:34:34.628656    9926 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1.163292583s)
	I1204 15:34:34.628771    9926 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I1204 15:34:34.642240    9926 docker.go:689] Got preloaded images: -- stdout --
	k8s.gcr.io/kube-apiserver:v1.24.1
	k8s.gcr.io/kube-proxy:v1.24.1
	k8s.gcr.io/kube-controller-manager:v1.24.1
	k8s.gcr.io/kube-scheduler:v1.24.1
	k8s.gcr.io/etcd:3.5.3-0
	k8s.gcr.io/pause:3.7
	k8s.gcr.io/coredns/coredns:v1.8.6
	<none>:<none>
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I1204 15:34:34.642251    9926 docker.go:695] registry.k8s.io/kube-apiserver:v1.24.1 wasn't preloaded
	I1204 15:34:34.642256    9926 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.24.1 registry.k8s.io/kube-controller-manager:v1.24.1 registry.k8s.io/kube-scheduler:v1.24.1 registry.k8s.io/kube-proxy:v1.24.1 registry.k8s.io/pause:3.7 registry.k8s.io/etcd:3.5.3-0 registry.k8s.io/coredns/coredns:v1.8.6 gcr.io/k8s-minikube/storage-provisioner:v5]
	I1204 15:34:34.646802    9926 image.go:135] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1204 15:34:34.649230    9926 image.go:135] retrieving image: registry.k8s.io/kube-scheduler:v1.24.1
	I1204 15:34:34.651531    9926 image.go:178] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1204 15:34:34.651828    9926 image.go:135] retrieving image: registry.k8s.io/kube-apiserver:v1.24.1
	I1204 15:34:34.653140    9926 image.go:178] daemon lookup for registry.k8s.io/kube-scheduler:v1.24.1: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.24.1
	I1204 15:34:34.653475    9926 image.go:135] retrieving image: registry.k8s.io/kube-controller-manager:v1.24.1
	I1204 15:34:34.654509    9926 image.go:178] daemon lookup for registry.k8s.io/kube-apiserver:v1.24.1: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.24.1
	I1204 15:34:34.654529    9926 image.go:135] retrieving image: registry.k8s.io/kube-proxy:v1.24.1
	I1204 15:34:34.655767    9926 image.go:135] retrieving image: registry.k8s.io/pause:3.7
	I1204 15:34:34.657543    9926 image.go:135] retrieving image: registry.k8s.io/coredns/coredns:v1.8.6
	I1204 15:34:34.657537    9926 image.go:178] daemon lookup for registry.k8s.io/kube-controller-manager:v1.24.1: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.24.1
	I1204 15:34:34.657660    9926 image.go:178] daemon lookup for registry.k8s.io/kube-proxy:v1.24.1: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.24.1
	I1204 15:34:34.658807    9926 image.go:178] daemon lookup for registry.k8s.io/pause:3.7: Error response from daemon: No such image: registry.k8s.io/pause:3.7
	I1204 15:34:34.658841    9926 image.go:135] retrieving image: registry.k8s.io/etcd:3.5.3-0
	I1204 15:34:34.660126    9926 image.go:178] daemon lookup for registry.k8s.io/coredns/coredns:v1.8.6: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.8.6
	I1204 15:34:34.660867    9926 image.go:178] daemon lookup for registry.k8s.io/etcd:3.5.3-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.3-0
	I1204 15:34:35.196874    9926 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.24.1
	I1204 15:34:35.208172    9926 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.24.1" needs transfer: "registry.k8s.io/kube-apiserver:v1.24.1" does not exist at hash "7c5896a75862a8bc252122185a929cec1393db2c525f6440137d4fbf46bbf6f9" in container runtime
	I1204 15:34:35.208202    9926 docker.go:337] Removing image: registry.k8s.io/kube-apiserver:v1.24.1
	I1204 15:34:35.208260    9926 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-apiserver:v1.24.1
	I1204 15:34:35.219794    9926 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/20045-6982/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.24.1
	I1204 15:34:35.226713    9926 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.24.1
	I1204 15:34:35.238342    9926 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.24.1" needs transfer: "registry.k8s.io/kube-scheduler:v1.24.1" does not exist at hash "000c19baf6bba51ff7ae5449f4c7a16d9190cef0263f58070dbf62cea9c4982f" in container runtime
	I1204 15:34:35.238377    9926 docker.go:337] Removing image: registry.k8s.io/kube-scheduler:v1.24.1
	I1204 15:34:35.238454    9926 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-scheduler:v1.24.1
	I1204 15:34:35.242741    9926 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.24.1
	I1204 15:34:35.250940    9926 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/20045-6982/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.24.1
	I1204 15:34:35.258676    9926 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.24.1" needs transfer: "registry.k8s.io/kube-controller-manager:v1.24.1" does not exist at hash "f61bbe9259d7caa580deb6c8e4bfd1780c7b5887efe3aa3adc7cc74f68a27c1b" in container runtime
	I1204 15:34:35.258699    9926 docker.go:337] Removing image: registry.k8s.io/kube-controller-manager:v1.24.1
	I1204 15:34:35.258774    9926 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-controller-manager:v1.24.1
	I1204 15:34:35.274990    9926 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/20045-6982/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.24.1
	I1204 15:34:35.297169    9926 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.24.1
	I1204 15:34:35.308099    9926 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.24.1" needs transfer: "registry.k8s.io/kube-proxy:v1.24.1" does not exist at hash "fcbd620bbac080b658602c597709b377cb2b3fec134a097a27f94cba9b2ed2fa" in container runtime
	I1204 15:34:35.308126    9926 docker.go:337] Removing image: registry.k8s.io/kube-proxy:v1.24.1
	I1204 15:34:35.308184    9926 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-proxy:v1.24.1
	I1204 15:34:35.318256    9926 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/20045-6982/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.24.1
	I1204 15:34:35.332206    9926 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/pause:3.7
	I1204 15:34:35.341862    9926 cache_images.go:116] "registry.k8s.io/pause:3.7" needs transfer: "registry.k8s.io/pause:3.7" does not exist at hash "e5a475a0380575fb5df454b2e32bdec93e1ec0094d8a61e895b41567cb884550" in container runtime
	I1204 15:34:35.341885    9926 docker.go:337] Removing image: registry.k8s.io/pause:3.7
	I1204 15:34:35.341947    9926 ssh_runner.go:195] Run: docker rmi registry.k8s.io/pause:3.7
	I1204 15:34:35.351548    9926 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/20045-6982/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7
	I1204 15:34:35.351699    9926 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/pause_3.7
	I1204 15:34:35.353468    9926 ssh_runner.go:352] existence check for /var/lib/minikube/images/pause_3.7: stat -c "%s %y" /var/lib/minikube/images/pause_3.7: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/pause_3.7': No such file or directory
	I1204 15:34:35.353482    9926 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/20045-6982/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7 --> /var/lib/minikube/images/pause_3.7 (268288 bytes)
	I1204 15:34:35.361906    9926 docker.go:304] Loading image: /var/lib/minikube/images/pause_3.7
	I1204 15:34:35.361918    9926 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/pause_3.7 | docker load"
	I1204 15:34:35.390545    9926 cache_images.go:321] Transferred and loaded /Users/jenkins/minikube-integration/20045-6982/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7 from cache
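
The "Loading image" step above streams the staged tarball into the daemon with `sudo cat ... | docker load`. A sketch of issuing that pipeline; running it locally here is only for illustration, since minikube executes it over SSH inside the guest:

    package images

    import (
        "fmt"
        "os/exec"
    )

    // loadFromFile pipes a cached image tarball into `docker load`,
    // matching the shell command in the log line above.
    func loadFromFile(tarball string) error {
        cmd := exec.Command("/bin/bash", "-c",
            fmt.Sprintf("sudo cat %s | docker load", tarball))
        return cmd.Run()
    }
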
	W1204 15:34:35.430930    9926 image.go:283] image registry.k8s.io/coredns/coredns:v1.8.6 arch mismatch: want arm64 got amd64. fixing
	I1204 15:34:35.431163    9926 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/coredns/coredns:v1.8.6
	I1204 15:34:35.447823    9926 cache_images.go:116] "registry.k8s.io/coredns/coredns:v1.8.6" needs transfer: "registry.k8s.io/coredns/coredns:v1.8.6" does not exist at hash "6af7f860a8197bfa3fdb7dec2061aa33870253e87a1e91c492d55b8a4fd38d14" in container runtime
	I1204 15:34:35.447848    9926 docker.go:337] Removing image: registry.k8s.io/coredns/coredns:v1.8.6
	I1204 15:34:35.447912    9926 ssh_runner.go:195] Run: docker rmi registry.k8s.io/coredns/coredns:v1.8.6
	I1204 15:34:35.458554    9926 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/20045-6982/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6
	I1204 15:34:35.458705    9926 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.8.6
	I1204 15:34:35.460291    9926 ssh_runner.go:352] existence check for /var/lib/minikube/images/coredns_v1.8.6: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.8.6: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/coredns_v1.8.6': No such file or directory
	I1204 15:34:35.460300    9926 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/20045-6982/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6 --> /var/lib/minikube/images/coredns_v1.8.6 (12318720 bytes)
	I1204 15:34:35.507728    9926 docker.go:304] Loading image: /var/lib/minikube/images/coredns_v1.8.6
	I1204 15:34:35.507742    9926 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/coredns_v1.8.6 | docker load"
	I1204 15:34:35.528066    9926 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/etcd:3.5.3-0
	I1204 15:34:35.559021    9926 cache_images.go:321] Transferred and loaded /Users/jenkins/minikube-integration/20045-6982/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6 from cache
	I1204 15:34:35.559059    9926 cache_images.go:116] "registry.k8s.io/etcd:3.5.3-0" needs transfer: "registry.k8s.io/etcd:3.5.3-0" does not exist at hash "a9a710bb96df080e6b9c720eb85dc5b832ff84abf77263548d74fedec6466a5a" in container runtime
	I1204 15:34:35.559077    9926 docker.go:337] Removing image: registry.k8s.io/etcd:3.5.3-0
	I1204 15:34:35.559148    9926 ssh_runner.go:195] Run: docker rmi registry.k8s.io/etcd:3.5.3-0
	I1204 15:34:35.573299    9926 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/20045-6982/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.3-0
	W1204 15:34:35.599760    9926 image.go:283] image gcr.io/k8s-minikube/storage-provisioner:v5 arch mismatch: want arm64 got amd64. fixing
	I1204 15:34:35.599872    9926 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I1204 15:34:35.610646    9926 cache_images.go:116] "gcr.io/k8s-minikube/storage-provisioner:v5" needs transfer: "gcr.io/k8s-minikube/storage-provisioner:v5" does not exist at hash "66749159455b3f08c8318fe0233122f54d0f5889f9c5fdfb73c3fd9d99895b51" in container runtime
	I1204 15:34:35.610669    9926 docker.go:337] Removing image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1204 15:34:35.610736    9926 ssh_runner.go:195] Run: docker rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I1204 15:34:36.410561    9926 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/20045-6982/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5
	I1204 15:34:36.411153    9926 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5
	I1204 15:34:36.416149    9926 ssh_runner.go:352] existence check for /var/lib/minikube/images/storage-provisioner_v5: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/storage-provisioner_v5': No such file or directory
	I1204 15:34:36.416192    9926 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/20045-6982/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 --> /var/lib/minikube/images/storage-provisioner_v5 (8035840 bytes)
	I1204 15:34:36.475679    9926 docker.go:304] Loading image: /var/lib/minikube/images/storage-provisioner_v5
	I1204 15:34:36.475694    9926 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/storage-provisioner_v5 | docker load"
	I1204 15:34:36.713846    9926 cache_images.go:321] Transferred and loaded /Users/jenkins/minikube-integration/20045-6982/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 from cache
	I1204 15:34:36.713891    9926 cache_images.go:92] duration metric: took 2.071600459s to LoadCachedImages
	W1204 15:34:36.713938    9926 out.go:270] X Unable to load cached images: LoadCachedImages: stat /Users/jenkins/minikube-integration/20045-6982/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.24.1: no such file or directory
	X Unable to load cached images: LoadCachedImages: stat /Users/jenkins/minikube-integration/20045-6982/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.24.1: no such file or directory
	I1204 15:34:36.713943    9926 kubeadm.go:934] updating node { 10.0.2.15 8443 v1.24.1 docker true true} ...
	I1204 15:34:36.714000    9926 kubeadm.go:946] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.24.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/cri-dockerd.sock --hostname-override=running-upgrade-084000 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=10.0.2.15
	
	[Install]
	 config:
	{KubernetesVersion:v1.24.1 ClusterName:running-upgrade-084000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1204 15:34:36.714075    9926 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I1204 15:34:36.729649    9926 cni.go:84] Creating CNI manager for ""
	I1204 15:34:36.729660    9926 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I1204 15:34:36.729672    9926 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I1204 15:34:36.729680    9926 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:10.0.2.15 APIServerPort:8443 KubernetesVersion:v1.24.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:running-upgrade-084000 NodeName:running-upgrade-084000 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "10.0.2.15"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:10.0.2.15 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1204 15:34:36.729755    9926 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 10.0.2.15
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "running-upgrade-084000"
	  kubeletExtraArgs:
	    node-ip: 10.0.2.15
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "10.0.2.15"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.24.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1204 15:34:36.729840    9926 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.24.1
	I1204 15:34:36.732582    9926 binaries.go:44] Found k8s binaries, skipping transfer
	I1204 15:34:36.732620    9926 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1204 15:34:36.735548    9926 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (380 bytes)
	I1204 15:34:36.740855    9926 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1204 15:34:36.745717    9926 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2096 bytes)
	I1204 15:34:36.751340    9926 ssh_runner.go:195] Run: grep 10.0.2.15	control-plane.minikube.internal$ /etc/hosts
	I1204 15:34:36.752687    9926 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1204 15:34:36.808676    9926 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1204 15:34:36.813515    9926 certs.go:68] Setting up /Users/jenkins/minikube-integration/20045-6982/.minikube/profiles/running-upgrade-084000 for IP: 10.0.2.15
	I1204 15:34:36.813522    9926 certs.go:194] generating shared ca certs ...
	I1204 15:34:36.813531    9926 certs.go:226] acquiring lock for ca certs: {Name:mkc3a39b491c90031583eb49eb548c7e4c1f6091 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1204 15:34:36.813793    9926 certs.go:235] skipping valid "minikubeCA" ca cert: /Users/jenkins/minikube-integration/20045-6982/.minikube/ca.key
	I1204 15:34:36.813856    9926 certs.go:235] skipping valid "proxyClientCA" ca cert: /Users/jenkins/minikube-integration/20045-6982/.minikube/proxy-client-ca.key
	I1204 15:34:36.813861    9926 certs.go:256] generating profile certs ...
	I1204 15:34:36.813946    9926 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /Users/jenkins/minikube-integration/20045-6982/.minikube/profiles/running-upgrade-084000/client.key
	I1204 15:34:36.813959    9926 certs.go:363] generating signed profile cert for "minikube": /Users/jenkins/minikube-integration/20045-6982/.minikube/profiles/running-upgrade-084000/apiserver.key.fbb90fd9
	I1204 15:34:36.813968    9926 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/20045-6982/.minikube/profiles/running-upgrade-084000/apiserver.crt.fbb90fd9 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 10.0.2.15]
	I1204 15:34:36.910146    9926 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/20045-6982/.minikube/profiles/running-upgrade-084000/apiserver.crt.fbb90fd9 ...
	I1204 15:34:36.910152    9926 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/20045-6982/.minikube/profiles/running-upgrade-084000/apiserver.crt.fbb90fd9: {Name:mk9ecac41aef357ff3d7612dd5242a3ea4aa5477 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1204 15:34:36.910405    9926 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/20045-6982/.minikube/profiles/running-upgrade-084000/apiserver.key.fbb90fd9 ...
	I1204 15:34:36.910409    9926 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/20045-6982/.minikube/profiles/running-upgrade-084000/apiserver.key.fbb90fd9: {Name:mke6d8fcaf155b895874026f0c5be07ec422b480 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1204 15:34:36.910558    9926 certs.go:381] copying /Users/jenkins/minikube-integration/20045-6982/.minikube/profiles/running-upgrade-084000/apiserver.crt.fbb90fd9 -> /Users/jenkins/minikube-integration/20045-6982/.minikube/profiles/running-upgrade-084000/apiserver.crt
	I1204 15:34:36.910681    9926 certs.go:385] copying /Users/jenkins/minikube-integration/20045-6982/.minikube/profiles/running-upgrade-084000/apiserver.key.fbb90fd9 -> /Users/jenkins/minikube-integration/20045-6982/.minikube/profiles/running-upgrade-084000/apiserver.key
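
Among the SANs used for the apiserver cert above, 10.96.0.1 is not arbitrary: it is the first host address of the profile's ServiceCIDR (10.96.0.0/12), i.e. the ClusterIP that the built-in `kubernetes` Service will receive, so in-cluster clients can verify the apiserver by that address. A sketch of that derivation (IPv4-only):

    package certs

    import (
        "fmt"
        "net"
    )

    // firstServiceIP returns the network address + 1 of a service CIDR,
    // e.g. "10.96.0.0/12" -> 10.96.0.1.
    func firstServiceIP(cidr string) (net.IP, error) {
        _, ipnet, err := net.ParseCIDR(cidr)
        if err != nil {
            return nil, err
        }
        ip := ipnet.IP.To4()
        if ip == nil {
            return nil, fmt.Errorf("not IPv4: %s", cidr)
        }
        return net.IPv4(ip[0], ip[1], ip[2], ip[3]+1), nil
    }
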
	I1204 15:34:36.910835    9926 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /Users/jenkins/minikube-integration/20045-6982/.minikube/profiles/running-upgrade-084000/proxy-client.key
	I1204 15:34:36.910968    9926 certs.go:484] found cert: /Users/jenkins/minikube-integration/20045-6982/.minikube/certs/7495.pem (1338 bytes)
	W1204 15:34:36.911002    9926 certs.go:480] ignoring /Users/jenkins/minikube-integration/20045-6982/.minikube/certs/7495_empty.pem, impossibly tiny 0 bytes
	I1204 15:34:36.911007    9926 certs.go:484] found cert: /Users/jenkins/minikube-integration/20045-6982/.minikube/certs/ca-key.pem (1675 bytes)
	I1204 15:34:36.911039    9926 certs.go:484] found cert: /Users/jenkins/minikube-integration/20045-6982/.minikube/certs/ca.pem (1078 bytes)
	I1204 15:34:36.911069    9926 certs.go:484] found cert: /Users/jenkins/minikube-integration/20045-6982/.minikube/certs/cert.pem (1123 bytes)
	I1204 15:34:36.911101    9926 certs.go:484] found cert: /Users/jenkins/minikube-integration/20045-6982/.minikube/certs/key.pem (1679 bytes)
	I1204 15:34:36.911172    9926 certs.go:484] found cert: /Users/jenkins/minikube-integration/20045-6982/.minikube/files/etc/ssl/certs/74952.pem (1708 bytes)
	I1204 15:34:36.911581    9926 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/20045-6982/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1204 15:34:36.919401    9926 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/20045-6982/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1204 15:34:36.927138    9926 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/20045-6982/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1204 15:34:36.934868    9926 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/20045-6982/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1204 15:34:36.942522    9926 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/20045-6982/.minikube/profiles/running-upgrade-084000/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I1204 15:34:36.949698    9926 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/20045-6982/.minikube/profiles/running-upgrade-084000/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1204 15:34:36.956938    9926 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/20045-6982/.minikube/profiles/running-upgrade-084000/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1204 15:34:36.963947    9926 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/20045-6982/.minikube/profiles/running-upgrade-084000/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1204 15:34:36.971306    9926 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/20045-6982/.minikube/files/etc/ssl/certs/74952.pem --> /usr/share/ca-certificates/74952.pem (1708 bytes)
	I1204 15:34:36.978532    9926 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/20045-6982/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1204 15:34:36.985414    9926 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/20045-6982/.minikube/certs/7495.pem --> /usr/share/ca-certificates/7495.pem (1338 bytes)
	I1204 15:34:36.992347    9926 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1204 15:34:36.997505    9926 ssh_runner.go:195] Run: openssl version
	I1204 15:34:36.999359    9926 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/7495.pem && ln -fs /usr/share/ca-certificates/7495.pem /etc/ssl/certs/7495.pem"
	I1204 15:34:37.002865    9926 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/7495.pem
	I1204 15:34:37.004432    9926 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec  4 23:22 /usr/share/ca-certificates/7495.pem
	I1204 15:34:37.004461    9926 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/7495.pem
	I1204 15:34:37.006492    9926 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/7495.pem /etc/ssl/certs/51391683.0"
	I1204 15:34:37.010489    9926 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/74952.pem && ln -fs /usr/share/ca-certificates/74952.pem /etc/ssl/certs/74952.pem"
	I1204 15:34:37.013607    9926 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/74952.pem
	I1204 15:34:37.015121    9926 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec  4 23:22 /usr/share/ca-certificates/74952.pem
	I1204 15:34:37.015147    9926 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/74952.pem
	I1204 15:34:37.017629    9926 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/74952.pem /etc/ssl/certs/3ec20f2e.0"
	I1204 15:34:37.020423    9926 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1204 15:34:37.023795    9926 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1204 15:34:37.025468    9926 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec  4 23:34 /usr/share/ca-certificates/minikubeCA.pem
	I1204 15:34:37.025489    9926 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1204 15:34:37.027495    9926 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
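
The three openssl x509 -hash / ln -fs pairs above follow OpenSSL's CA-store lookup convention: each trusted PEM gets a symlink named <subject-hash>.0 (51391683.0, 3ec20f2e.0 and b5213941.0 in this run) so issuers can be resolved by hash at verification time. A minimal Go sketch of one such installation step, run locally instead of over ssh_runner (requires root for /etc/ssl/certs; the path is copied from the log):

package main

import (
	"fmt"
	"os"
	"os/exec"
	"strings"
)

// installCA mirrors the test's two shell steps: compute the subject hash,
// then force-link /etc/ssl/certs/<hash>.0 at the PEM file.
func installCA(pem string) error {
	// "openssl x509 -hash -noout" prints the subject-name hash, e.g. "b5213941".
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pem).Output()
	if err != nil {
		return fmt.Errorf("hashing %s: %w", pem, err)
	}
	link := fmt.Sprintf("/etc/ssl/certs/%s.0", strings.TrimSpace(string(out)))
	os.Remove(link) // ln -fs semantics: replace any stale link
	return os.Symlink(pem, link)
}

func main() {
	if err := installCA("/usr/share/ca-certificates/minikubeCA.pem"); err != nil {
		fmt.Fprintln(os.Stderr, err)
	}
}
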
	I1204 15:34:37.030861    9926 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1204 15:34:37.032539    9926 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1204 15:34:37.034757    9926 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1204 15:34:37.036806    9926 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1204 15:34:37.038643    9926 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1204 15:34:37.040778    9926 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1204 15:34:37.042593    9926 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
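
Each openssl x509 ... -checkend 86400 call above asks whether the certificate remains valid for at least another 86400 seconds (24 hours); a non-zero exit would mark it for regeneration. A rough Go equivalent of that check using crypto/x509 (path copied from the log, error handling kept minimal):

package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

// checkEnd reports whether the certificate at path stays valid for at
// least the next d, the same question -checkend answers via exit code.
func checkEnd(path string, d time.Duration) (bool, error) {
	data, err := os.ReadFile(path)
	if err != nil {
		return false, err
	}
	block, _ := pem.Decode(data)
	if block == nil {
		return false, fmt.Errorf("%s: no PEM block found", path)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return false, err
	}
	return time.Now().Add(d).Before(cert.NotAfter), nil
}

func main() {
	ok, err := checkEnd("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 24*time.Hour)
	fmt.Println("valid for 24h:", ok, err)
}
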
	I1204 15:34:37.044463    9926 kubeadm.go:392] StartCluster: {Name:running-upgrade-084000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19917/minikube-v1.34.0-1730913550-19917-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:61592 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.1 ClusterName:running-upgrade-084000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:0s}
	I1204 15:34:37.044532    9926 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I1204 15:34:37.055416    9926 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1204 15:34:37.059279    9926 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I1204 15:34:37.059287    9926 kubeadm.go:593] restartPrimaryControlPlane start ...
	I1204 15:34:37.059321    9926 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1204 15:34:37.063598    9926 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1204 15:34:37.063633    9926 kubeconfig.go:47] verify endpoint returned: get endpoint: "running-upgrade-084000" does not appear in /Users/jenkins/minikube-integration/20045-6982/kubeconfig
	I1204 15:34:37.063647    9926 kubeconfig.go:62] /Users/jenkins/minikube-integration/20045-6982/kubeconfig needs updating (will repair): [kubeconfig missing "running-upgrade-084000" cluster setting kubeconfig missing "running-upgrade-084000" context setting]
	I1204 15:34:37.063813    9926 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/20045-6982/kubeconfig: {Name:mk101d59bd39dad79cc42c692d70ed55e90c94da Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1204 15:34:37.064539    9926 kapi.go:59] client config for running-upgrade-084000: &rest.Config{Host:"https://10.0.2.15:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/20045-6982/.minikube/profiles/running-upgrade-084000/client.crt", KeyFile:"/Users/jenkins/minikube-integration/20045-6982/.minikube/profiles/running-upgrade-084000/client.key", CAFile:"/Users/jenkins/minikube-integration/20045-6982/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x10676f6e0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1204 15:34:37.065479    9926 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1204 15:34:37.068342    9926 kubeadm.go:640] detected kubeadm config drift (will reconfigure cluster from new /var/tmp/minikube/kubeadm.yaml):
	-- stdout --
	--- /var/tmp/minikube/kubeadm.yaml
	+++ /var/tmp/minikube/kubeadm.yaml.new
	@@ -11,7 +11,7 @@
	       - signing
	       - authentication
	 nodeRegistration:
	-  criSocket: /var/run/cri-dockerd.sock
	+  criSocket: unix:///var/run/cri-dockerd.sock
	   name: "running-upgrade-084000"
	   kubeletExtraArgs:
	     node-ip: 10.0.2.15
	@@ -49,7 +49,9 @@
	 authentication:
	   x509:
	     clientCAFile: /var/lib/minikube/certs/ca.crt
	-cgroupDriver: systemd
	+cgroupDriver: cgroupfs
	+hairpinMode: hairpin-veth
	+runtimeRequestTimeout: 15m
	 clusterDomain: "cluster.local"
	 # disable disk resource management by default
	 imageGCHighThresholdPercent: 100
	
	-- /stdout --
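
The drift decision above is driven by a plain diff between the kubeadm.yaml already on the node and the freshly rendered kubeadm.yaml.new; any difference (here the criSocket URI scheme and the cgroup driver) forces a reconfigure. A sketch of that check, assuming a byte-for-byte comparison is an acceptable stand-in for diff -u:

package main

import (
	"bytes"
	"fmt"
	"os"
)

// configDrifted reports whether the deployed config differs from the
// newly rendered one; any mismatch triggers cluster reconfiguration.
func configDrifted(current, rendered string) (bool, error) {
	a, err := os.ReadFile(current)
	if err != nil {
		return false, err
	}
	b, err := os.ReadFile(rendered)
	if err != nil {
		return false, err
	}
	return !bytes.Equal(a, b), nil
}

func main() {
	drifted, err := configDrifted("/var/tmp/minikube/kubeadm.yaml", "/var/tmp/minikube/kubeadm.yaml.new")
	fmt.Println("drift detected:", drifted, err) // true in this run: criSocket and cgroupDriver differ
}
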
	I1204 15:34:37.068348    9926 kubeadm.go:1160] stopping kube-system containers ...
	I1204 15:34:37.068403    9926 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I1204 15:34:37.079365    9926 docker.go:483] Stopping containers: [9338a54fe287 d5e44f310972 45cacbed9679 dea543a983b2 e94aa66fc745 ca0c907ad43c 4b7baf9f2676 05c729291254 e4f90e1b9024 f18e443b8788 689cda8b11e3 4dd16fef9b36 d775246b71f1 137b9a41a534]
	I1204 15:34:37.079432    9926 ssh_runner.go:195] Run: docker stop 9338a54fe287 d5e44f310972 45cacbed9679 dea543a983b2 e94aa66fc745 ca0c907ad43c 4b7baf9f2676 05c729291254 e4f90e1b9024 f18e443b8788 689cda8b11e3 4dd16fef9b36 d775246b71f1 137b9a41a534
	I1204 15:34:37.091186    9926 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I1204 15:34:37.183527    9926 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1204 15:34:37.187819    9926 kubeadm.go:157] found existing configuration files:
	-rw------- 1 root root 5639 Dec  4 23:34 /etc/kubernetes/admin.conf
	-rw------- 1 root root 5653 Dec  4 23:34 /etc/kubernetes/controller-manager.conf
	-rw------- 1 root root 2027 Dec  4 23:34 /etc/kubernetes/kubelet.conf
	-rw------- 1 root root 5597 Dec  4 23:34 /etc/kubernetes/scheduler.conf
	
	I1204 15:34:37.187873    9926 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:61592 /etc/kubernetes/admin.conf
	I1204 15:34:37.191350    9926 kubeadm.go:163] "https://control-plane.minikube.internal:61592" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:61592 /etc/kubernetes/admin.conf: Process exited with status 1
	stdout:
	
	stderr:
	I1204 15:34:37.191390    9926 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1204 15:34:37.194727    9926 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:61592 /etc/kubernetes/kubelet.conf
	I1204 15:34:37.197804    9926 kubeadm.go:163] "https://control-plane.minikube.internal:61592" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:61592 /etc/kubernetes/kubelet.conf: Process exited with status 1
	stdout:
	
	stderr:
	I1204 15:34:37.197831    9926 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1204 15:34:37.200621    9926 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:61592 /etc/kubernetes/controller-manager.conf
	I1204 15:34:37.203608    9926 kubeadm.go:163] "https://control-plane.minikube.internal:61592" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:61592 /etc/kubernetes/controller-manager.conf: Process exited with status 1
	stdout:
	
	stderr:
	I1204 15:34:37.203630    9926 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1204 15:34:37.206557    9926 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:61592 /etc/kubernetes/scheduler.conf
	I1204 15:34:37.209060    9926 kubeadm.go:163] "https://control-plane.minikube.internal:61592" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:61592 /etc/kubernetes/scheduler.conf: Process exited with status 1
	stdout:
	
	stderr:
	I1204 15:34:37.209087    9926 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
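
The four grep/rm pairs above check each file under /etc/kubernetes for the expected control-plane endpoint and delete any file that does not mention it, so the kubeconfig init phase below can regenerate them from scratch. A compact sketch of that pruning pass (endpoint and paths copied from the log; the helper name is invented for illustration):

package main

import (
	"bytes"
	"os"
)

// pruneStaleKubeconfigs removes any kubeconfig that does not reference the
// expected endpoint, forcing "kubeadm init phase kubeconfig" to rewrite it.
func pruneStaleKubeconfigs(endpoint string, files []string) {
	for _, f := range files {
		data, err := os.ReadFile(f)
		if err != nil || !bytes.Contains(data, []byte(endpoint)) {
			os.Remove(f) // stale or unreadable: regenerate
		}
	}
}

func main() {
	pruneStaleKubeconfigs("https://control-plane.minikube.internal:61592", []string{
		"/etc/kubernetes/admin.conf",
		"/etc/kubernetes/kubelet.conf",
		"/etc/kubernetes/controller-manager.conf",
		"/etc/kubernetes/scheduler.conf",
	})
}
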
	I1204 15:34:37.211670    9926 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1204 15:34:37.214795    9926 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I1204 15:34:37.237356    9926 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I1204 15:34:37.794493    9926 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I1204 15:34:38.087923    9926 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I1204 15:34:38.114783    9926 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
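
Rather than a full kubeadm init, the restart replays the individual init phases (certs, kubeconfig, kubelet-start, control-plane, etcd) against the same rendered config. A sketch of that sequence, with the sudo wrapper and the pinned PATH from the log elided (phase names copied from the commands above):

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	phases := [][]string{
		{"certs", "all"},
		{"kubeconfig", "all"},
		{"kubelet-start"},
		{"control-plane", "all"},
		{"etcd", "local"},
	}
	for _, p := range phases {
		args := append([]string{"init", "phase"}, p...)
		args = append(args, "--config", "/var/tmp/minikube/kubeadm.yaml")
		// Run phases in order; a failure aborts the restart attempt.
		if out, err := exec.Command("kubeadm", args...).CombinedOutput(); err != nil {
			fmt.Printf("phase %v failed: %v\n%s", p, err, out)
			return
		}
	}
}
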
	I1204 15:34:38.139960    9926 api_server.go:52] waiting for apiserver process to appear ...
	I1204 15:34:38.140062    9926 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1204 15:34:38.642206    9926 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1204 15:34:39.142123    9926 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1204 15:34:39.642217    9926 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1204 15:34:39.649443    9926 api_server.go:72] duration metric: took 1.509470417s to wait for apiserver process to appear ...
	I1204 15:34:39.649463    9926 api_server.go:88] waiting for apiserver healthz status ...
	I1204 15:34:39.649509    9926 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1204 15:34:44.651238    9926 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1204 15:34:44.651340    9926 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1204 15:34:49.652119    9926 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1204 15:34:49.652220    9926 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1204 15:34:54.653208    9926 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1204 15:34:54.653268    9926 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1204 15:34:59.654066    9926 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1204 15:34:59.654160    9926 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1204 15:35:04.655701    9926 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1204 15:35:04.655837    9926 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1204 15:35:09.657708    9926 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1204 15:35:09.657811    9926 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1204 15:35:14.660165    9926 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1204 15:35:14.660257    9926 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1204 15:35:19.663066    9926 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1204 15:35:19.663154    9926 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1204 15:35:24.665841    9926 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1204 15:35:24.665910    9926 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1204 15:35:29.668555    9926 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1204 15:35:29.668627    9926 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1204 15:35:34.669923    9926 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1204 15:35:34.670018    9926 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1204 15:35:39.672705    9926 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
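
Between 15:34:39 and 15:35:39 the loop above probes https://10.0.2.15:8443/healthz roughly every five seconds and every attempt times out: the restarted apiserver never answers within this section. A minimal sketch of such a probe loop (the timeout is inferred from the log spacing; InsecureSkipVerify is a simplification for brevity, whereas the client config logged earlier pins a CAFile instead):

package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

// waitForHealthz polls the apiserver health endpoint until it returns
// 200 "ok" or the attempt budget is exhausted.
func waitForHealthz(url string, attempts int) error {
	client := &http.Client{
		Timeout:   5 * time.Second, // matches the ~5s gap between probes in the log
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	for i := 0; i < attempts; i++ {
		resp, err := client.Get(url)
		if err == nil {
			body, _ := io.ReadAll(resp.Body)
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK && string(body) == "ok" {
				return nil
			}
		}
	}
	return fmt.Errorf("apiserver never became healthy at %s", url)
}

func main() {
	fmt.Println(waitForHealthz("https://10.0.2.15:8443/healthz", 12))
}
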
	I1204 15:35:39.673224    9926 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1204 15:35:39.710873    9926 logs.go:282] 2 containers: [9bb28cca02e8 e4f90e1b9024]
	I1204 15:35:39.711040    9926 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1204 15:35:39.734365    9926 logs.go:282] 2 containers: [a32a7806b496 ca0c907ad43c]
	I1204 15:35:39.734499    9926 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1204 15:35:39.751589    9926 logs.go:282] 1 containers: [08e34a589c88]
	I1204 15:35:39.751684    9926 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1204 15:35:39.763846    9926 logs.go:282] 2 containers: [7fbf81020cc8 f18e443b8788]
	I1204 15:35:39.763934    9926 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1204 15:35:39.774733    9926 logs.go:282] 1 containers: [5620f978e468]
	I1204 15:35:39.774828    9926 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1204 15:35:39.785998    9926 logs.go:282] 2 containers: [d8b20ca47793 e94aa66fc745]
	I1204 15:35:39.786068    9926 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1204 15:35:39.801281    9926 logs.go:282] 0 containers: []
	W1204 15:35:39.801290    9926 logs.go:284] No container was found matching "kindnet"
	I1204 15:35:39.801351    9926 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1204 15:35:39.812579    9926 logs.go:282] 0 containers: []
	W1204 15:35:39.812590    9926 logs.go:284] No container was found matching "storage-provisioner"
	I1204 15:35:39.812597    9926 logs.go:123] Gathering logs for describe nodes ...
	I1204 15:35:39.812603    9926 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1204 15:35:39.885779    9926 logs.go:123] Gathering logs for etcd [a32a7806b496] ...
	I1204 15:35:39.885790    9926 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a32a7806b496"
	I1204 15:35:39.900146    9926 logs.go:123] Gathering logs for kube-proxy [5620f978e468] ...
	I1204 15:35:39.900160    9926 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5620f978e468"
	I1204 15:35:39.912044    9926 logs.go:123] Gathering logs for kube-controller-manager [d8b20ca47793] ...
	I1204 15:35:39.912059    9926 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d8b20ca47793"
	I1204 15:35:39.929242    9926 logs.go:123] Gathering logs for kube-scheduler [f18e443b8788] ...
	I1204 15:35:39.929252    9926 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f18e443b8788"
	I1204 15:35:39.945630    9926 logs.go:123] Gathering logs for kube-apiserver [9bb28cca02e8] ...
	I1204 15:35:39.945644    9926 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9bb28cca02e8"
	I1204 15:35:39.959994    9926 logs.go:123] Gathering logs for kube-apiserver [e4f90e1b9024] ...
	I1204 15:35:39.960005    9926 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e4f90e1b9024"
	I1204 15:35:39.980251    9926 logs.go:123] Gathering logs for etcd [ca0c907ad43c] ...
	I1204 15:35:39.980263    9926 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ca0c907ad43c"
	I1204 15:35:40.003043    9926 logs.go:123] Gathering logs for coredns [08e34a589c88] ...
	I1204 15:35:40.003057    9926 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 08e34a589c88"
	I1204 15:35:40.014555    9926 logs.go:123] Gathering logs for kube-scheduler [7fbf81020cc8] ...
	I1204 15:35:40.014566    9926 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7fbf81020cc8"
	I1204 15:35:40.030852    9926 logs.go:123] Gathering logs for kube-controller-manager [e94aa66fc745] ...
	I1204 15:35:40.030864    9926 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e94aa66fc745"
	I1204 15:35:40.042593    9926 logs.go:123] Gathering logs for container status ...
	I1204 15:35:40.042606    9926 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1204 15:35:40.054810    9926 logs.go:123] Gathering logs for kubelet ...
	I1204 15:35:40.054822    9926 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1204 15:35:40.093052    9926 logs.go:123] Gathering logs for dmesg ...
	I1204 15:35:40.093062    9926 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1204 15:35:40.097288    9926 logs.go:123] Gathering logs for Docker ...
	I1204 15:35:40.097298    9926 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
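
Once a probe window expires, the harness switches to evidence gathering: docker ps -a enumerates each control-plane component (two instances apiece for apiserver, etcd, scheduler and controller-manager, i.e. the pre- and post-restart containers), then 400 lines are tailed from each, alongside the kubelet, dmesg and Docker journals. The same block repeats after every failed healthz window below. A condensed Go sketch of the per-container part of that loop (command strings copied from the log; the surrounding orchestration is an assumption):

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// containerIDs lists all containers, running or exited, whose name matches
// the k8s_<component> prefix, the same filter the log uses.
func containerIDs(component string) ([]string, error) {
	out, err := exec.Command("docker", "ps", "-a",
		"--filter=name=k8s_"+component, "--format={{.ID}}").Output()
	if err != nil {
		return nil, err
	}
	return strings.Fields(string(out)), nil
}

func main() {
	for _, c := range []string{"kube-apiserver", "etcd", "coredns",
		"kube-scheduler", "kube-proxy", "kube-controller-manager"} {
		ids, err := containerIDs(c)
		if err != nil {
			continue
		}
		for _, id := range ids {
			logs, _ := exec.Command("docker", "logs", "--tail", "400", id).CombinedOutput()
			fmt.Printf("== %s [%s] ==\n%s", c, id, logs)
		}
	}
}
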
	I1204 15:35:42.624055    9926 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1204 15:35:47.627020    9926 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1204 15:35:47.627533    9926 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1204 15:35:47.666810    9926 logs.go:282] 2 containers: [9bb28cca02e8 e4f90e1b9024]
	I1204 15:35:47.666964    9926 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1204 15:35:47.688065    9926 logs.go:282] 2 containers: [a32a7806b496 ca0c907ad43c]
	I1204 15:35:47.688197    9926 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1204 15:35:47.704864    9926 logs.go:282] 1 containers: [08e34a589c88]
	I1204 15:35:47.704949    9926 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1204 15:35:47.717294    9926 logs.go:282] 2 containers: [7fbf81020cc8 f18e443b8788]
	I1204 15:35:47.717382    9926 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1204 15:35:47.728993    9926 logs.go:282] 1 containers: [5620f978e468]
	I1204 15:35:47.729065    9926 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1204 15:35:47.740591    9926 logs.go:282] 2 containers: [d8b20ca47793 e94aa66fc745]
	I1204 15:35:47.740668    9926 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1204 15:35:47.751011    9926 logs.go:282] 0 containers: []
	W1204 15:35:47.751021    9926 logs.go:284] No container was found matching "kindnet"
	I1204 15:35:47.751086    9926 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1204 15:35:47.761075    9926 logs.go:282] 0 containers: []
	W1204 15:35:47.761089    9926 logs.go:284] No container was found matching "storage-provisioner"
	I1204 15:35:47.761098    9926 logs.go:123] Gathering logs for dmesg ...
	I1204 15:35:47.761104    9926 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1204 15:35:47.765578    9926 logs.go:123] Gathering logs for etcd [a32a7806b496] ...
	I1204 15:35:47.765587    9926 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a32a7806b496"
	I1204 15:35:47.780287    9926 logs.go:123] Gathering logs for kube-proxy [5620f978e468] ...
	I1204 15:35:47.780300    9926 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5620f978e468"
	I1204 15:35:47.794979    9926 logs.go:123] Gathering logs for kube-controller-manager [d8b20ca47793] ...
	I1204 15:35:47.794993    9926 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d8b20ca47793"
	I1204 15:35:47.813809    9926 logs.go:123] Gathering logs for kubelet ...
	I1204 15:35:47.813818    9926 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1204 15:35:47.850409    9926 logs.go:123] Gathering logs for describe nodes ...
	I1204 15:35:47.850421    9926 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1204 15:35:47.887031    9926 logs.go:123] Gathering logs for kube-scheduler [7fbf81020cc8] ...
	I1204 15:35:47.887046    9926 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7fbf81020cc8"
	I1204 15:35:47.903730    9926 logs.go:123] Gathering logs for kube-controller-manager [e94aa66fc745] ...
	I1204 15:35:47.903741    9926 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e94aa66fc745"
	I1204 15:35:47.915144    9926 logs.go:123] Gathering logs for Docker ...
	I1204 15:35:47.915154    9926 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1204 15:35:47.940226    9926 logs.go:123] Gathering logs for kube-scheduler [f18e443b8788] ...
	I1204 15:35:47.940237    9926 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f18e443b8788"
	I1204 15:35:47.955050    9926 logs.go:123] Gathering logs for container status ...
	I1204 15:35:47.955060    9926 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1204 15:35:47.970798    9926 logs.go:123] Gathering logs for kube-apiserver [9bb28cca02e8] ...
	I1204 15:35:47.970811    9926 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9bb28cca02e8"
	I1204 15:35:47.986211    9926 logs.go:123] Gathering logs for kube-apiserver [e4f90e1b9024] ...
	I1204 15:35:47.986222    9926 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e4f90e1b9024"
	I1204 15:35:48.005078    9926 logs.go:123] Gathering logs for etcd [ca0c907ad43c] ...
	I1204 15:35:48.005089    9926 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ca0c907ad43c"
	I1204 15:35:48.021832    9926 logs.go:123] Gathering logs for coredns [08e34a589c88] ...
	I1204 15:35:48.021844    9926 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 08e34a589c88"
	I1204 15:35:50.535590    9926 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1204 15:35:55.538589    9926 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1204 15:35:55.539154    9926 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1204 15:35:55.579161    9926 logs.go:282] 2 containers: [9bb28cca02e8 e4f90e1b9024]
	I1204 15:35:55.579316    9926 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1204 15:35:55.601359    9926 logs.go:282] 2 containers: [a32a7806b496 ca0c907ad43c]
	I1204 15:35:55.601484    9926 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1204 15:35:55.616909    9926 logs.go:282] 1 containers: [08e34a589c88]
	I1204 15:35:55.617003    9926 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1204 15:35:55.629588    9926 logs.go:282] 2 containers: [7fbf81020cc8 f18e443b8788]
	I1204 15:35:55.629663    9926 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1204 15:35:55.640762    9926 logs.go:282] 1 containers: [5620f978e468]
	I1204 15:35:55.640839    9926 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1204 15:35:55.651351    9926 logs.go:282] 2 containers: [d8b20ca47793 e94aa66fc745]
	I1204 15:35:55.651435    9926 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1204 15:35:55.661589    9926 logs.go:282] 0 containers: []
	W1204 15:35:55.661600    9926 logs.go:284] No container was found matching "kindnet"
	I1204 15:35:55.661660    9926 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1204 15:35:55.671626    9926 logs.go:282] 0 containers: []
	W1204 15:35:55.671639    9926 logs.go:284] No container was found matching "storage-provisioner"
	I1204 15:35:55.671647    9926 logs.go:123] Gathering logs for kube-controller-manager [d8b20ca47793] ...
	I1204 15:35:55.671652    9926 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d8b20ca47793"
	I1204 15:35:55.688467    9926 logs.go:123] Gathering logs for Docker ...
	I1204 15:35:55.688476    9926 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1204 15:35:55.714194    9926 logs.go:123] Gathering logs for kube-apiserver [9bb28cca02e8] ...
	I1204 15:35:55.714207    9926 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9bb28cca02e8"
	I1204 15:35:55.728913    9926 logs.go:123] Gathering logs for kube-apiserver [e4f90e1b9024] ...
	I1204 15:35:55.728924    9926 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e4f90e1b9024"
	I1204 15:35:55.750879    9926 logs.go:123] Gathering logs for etcd [ca0c907ad43c] ...
	I1204 15:35:55.750891    9926 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ca0c907ad43c"
	I1204 15:35:55.769019    9926 logs.go:123] Gathering logs for kube-scheduler [f18e443b8788] ...
	I1204 15:35:55.769031    9926 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f18e443b8788"
	I1204 15:35:55.784480    9926 logs.go:123] Gathering logs for describe nodes ...
	I1204 15:35:55.784492    9926 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1204 15:35:55.821310    9926 logs.go:123] Gathering logs for dmesg ...
	I1204 15:35:55.821322    9926 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1204 15:35:55.825991    9926 logs.go:123] Gathering logs for etcd [a32a7806b496] ...
	I1204 15:35:55.825999    9926 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a32a7806b496"
	I1204 15:35:55.839763    9926 logs.go:123] Gathering logs for coredns [08e34a589c88] ...
	I1204 15:35:55.839773    9926 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 08e34a589c88"
	I1204 15:35:55.850686    9926 logs.go:123] Gathering logs for kube-scheduler [7fbf81020cc8] ...
	I1204 15:35:55.850697    9926 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7fbf81020cc8"
	I1204 15:35:55.866937    9926 logs.go:123] Gathering logs for kube-proxy [5620f978e468] ...
	I1204 15:35:55.866949    9926 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5620f978e468"
	I1204 15:35:55.878893    9926 logs.go:123] Gathering logs for container status ...
	I1204 15:35:55.878907    9926 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1204 15:35:55.892672    9926 logs.go:123] Gathering logs for kubelet ...
	I1204 15:35:55.892686    9926 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1204 15:35:55.929869    9926 logs.go:123] Gathering logs for kube-controller-manager [e94aa66fc745] ...
	I1204 15:35:55.929879    9926 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e94aa66fc745"
	I1204 15:35:58.443520    9926 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1204 15:36:03.446048    9926 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1204 15:36:03.446653    9926 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1204 15:36:03.486501    9926 logs.go:282] 2 containers: [9bb28cca02e8 e4f90e1b9024]
	I1204 15:36:03.486655    9926 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1204 15:36:03.506793    9926 logs.go:282] 2 containers: [a32a7806b496 ca0c907ad43c]
	I1204 15:36:03.506921    9926 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1204 15:36:03.521919    9926 logs.go:282] 1 containers: [08e34a589c88]
	I1204 15:36:03.522014    9926 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1204 15:36:03.534263    9926 logs.go:282] 2 containers: [7fbf81020cc8 f18e443b8788]
	I1204 15:36:03.534340    9926 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1204 15:36:03.545173    9926 logs.go:282] 1 containers: [5620f978e468]
	I1204 15:36:03.545248    9926 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1204 15:36:03.555641    9926 logs.go:282] 2 containers: [d8b20ca47793 e94aa66fc745]
	I1204 15:36:03.555722    9926 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1204 15:36:03.565522    9926 logs.go:282] 0 containers: []
	W1204 15:36:03.565535    9926 logs.go:284] No container was found matching "kindnet"
	I1204 15:36:03.565605    9926 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1204 15:36:03.579657    9926 logs.go:282] 0 containers: []
	W1204 15:36:03.579670    9926 logs.go:284] No container was found matching "storage-provisioner"
	I1204 15:36:03.579680    9926 logs.go:123] Gathering logs for kube-controller-manager [d8b20ca47793] ...
	I1204 15:36:03.579685    9926 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d8b20ca47793"
	I1204 15:36:03.597440    9926 logs.go:123] Gathering logs for kube-controller-manager [e94aa66fc745] ...
	I1204 15:36:03.597451    9926 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e94aa66fc745"
	I1204 15:36:03.609225    9926 logs.go:123] Gathering logs for kubelet ...
	I1204 15:36:03.609237    9926 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1204 15:36:03.646103    9926 logs.go:123] Gathering logs for kube-apiserver [e4f90e1b9024] ...
	I1204 15:36:03.646113    9926 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e4f90e1b9024"
	I1204 15:36:03.665084    9926 logs.go:123] Gathering logs for etcd [ca0c907ad43c] ...
	I1204 15:36:03.665094    9926 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ca0c907ad43c"
	I1204 15:36:03.690851    9926 logs.go:123] Gathering logs for kube-scheduler [f18e443b8788] ...
	I1204 15:36:03.690864    9926 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f18e443b8788"
	I1204 15:36:03.706371    9926 logs.go:123] Gathering logs for kube-apiserver [9bb28cca02e8] ...
	I1204 15:36:03.706384    9926 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9bb28cca02e8"
	I1204 15:36:03.720322    9926 logs.go:123] Gathering logs for kube-scheduler [7fbf81020cc8] ...
	I1204 15:36:03.720334    9926 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7fbf81020cc8"
	I1204 15:36:03.736312    9926 logs.go:123] Gathering logs for container status ...
	I1204 15:36:03.736323    9926 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1204 15:36:03.748936    9926 logs.go:123] Gathering logs for describe nodes ...
	I1204 15:36:03.748950    9926 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1204 15:36:03.784867    9926 logs.go:123] Gathering logs for etcd [a32a7806b496] ...
	I1204 15:36:03.784879    9926 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a32a7806b496"
	I1204 15:36:03.798818    9926 logs.go:123] Gathering logs for coredns [08e34a589c88] ...
	I1204 15:36:03.798831    9926 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 08e34a589c88"
	I1204 15:36:03.810558    9926 logs.go:123] Gathering logs for dmesg ...
	I1204 15:36:03.810568    9926 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1204 15:36:03.814881    9926 logs.go:123] Gathering logs for kube-proxy [5620f978e468] ...
	I1204 15:36:03.814888    9926 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5620f978e468"
	I1204 15:36:03.828905    9926 logs.go:123] Gathering logs for Docker ...
	I1204 15:36:03.828918    9926 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1204 15:36:06.357249    9926 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1204 15:36:11.360132    9926 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1204 15:36:11.360646    9926 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1204 15:36:11.400919    9926 logs.go:282] 2 containers: [9bb28cca02e8 e4f90e1b9024]
	I1204 15:36:11.401055    9926 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1204 15:36:11.423564    9926 logs.go:282] 2 containers: [a32a7806b496 ca0c907ad43c]
	I1204 15:36:11.423685    9926 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1204 15:36:11.438968    9926 logs.go:282] 1 containers: [08e34a589c88]
	I1204 15:36:11.439052    9926 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1204 15:36:11.451048    9926 logs.go:282] 2 containers: [7fbf81020cc8 f18e443b8788]
	I1204 15:36:11.451129    9926 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1204 15:36:11.462249    9926 logs.go:282] 1 containers: [5620f978e468]
	I1204 15:36:11.462326    9926 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1204 15:36:11.472847    9926 logs.go:282] 2 containers: [d8b20ca47793 e94aa66fc745]
	I1204 15:36:11.472918    9926 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1204 15:36:11.483270    9926 logs.go:282] 0 containers: []
	W1204 15:36:11.483280    9926 logs.go:284] No container was found matching "kindnet"
	I1204 15:36:11.483335    9926 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1204 15:36:11.494299    9926 logs.go:282] 0 containers: []
	W1204 15:36:11.494313    9926 logs.go:284] No container was found matching "storage-provisioner"
	I1204 15:36:11.494320    9926 logs.go:123] Gathering logs for dmesg ...
	I1204 15:36:11.494325    9926 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1204 15:36:11.498941    9926 logs.go:123] Gathering logs for coredns [08e34a589c88] ...
	I1204 15:36:11.498951    9926 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 08e34a589c88"
	I1204 15:36:11.514217    9926 logs.go:123] Gathering logs for kube-scheduler [f18e443b8788] ...
	I1204 15:36:11.514228    9926 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f18e443b8788"
	I1204 15:36:11.530135    9926 logs.go:123] Gathering logs for kubelet ...
	I1204 15:36:11.530147    9926 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1204 15:36:11.567083    9926 logs.go:123] Gathering logs for etcd [ca0c907ad43c] ...
	I1204 15:36:11.567101    9926 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ca0c907ad43c"
	I1204 15:36:11.584437    9926 logs.go:123] Gathering logs for kube-controller-manager [d8b20ca47793] ...
	I1204 15:36:11.584452    9926 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d8b20ca47793"
	I1204 15:36:11.601638    9926 logs.go:123] Gathering logs for kube-controller-manager [e94aa66fc745] ...
	I1204 15:36:11.601649    9926 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e94aa66fc745"
	I1204 15:36:11.616869    9926 logs.go:123] Gathering logs for Docker ...
	I1204 15:36:11.616880    9926 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1204 15:36:11.641534    9926 logs.go:123] Gathering logs for kube-apiserver [9bb28cca02e8] ...
	I1204 15:36:11.641545    9926 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9bb28cca02e8"
	I1204 15:36:11.655557    9926 logs.go:123] Gathering logs for etcd [a32a7806b496] ...
	I1204 15:36:11.655570    9926 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a32a7806b496"
	I1204 15:36:11.669093    9926 logs.go:123] Gathering logs for kube-scheduler [7fbf81020cc8] ...
	I1204 15:36:11.669107    9926 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7fbf81020cc8"
	I1204 15:36:11.692539    9926 logs.go:123] Gathering logs for container status ...
	I1204 15:36:11.692550    9926 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1204 15:36:11.711330    9926 logs.go:123] Gathering logs for kube-apiserver [e4f90e1b9024] ...
	I1204 15:36:11.711344    9926 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e4f90e1b9024"
	I1204 15:36:11.736109    9926 logs.go:123] Gathering logs for kube-proxy [5620f978e468] ...
	I1204 15:36:11.736122    9926 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5620f978e468"
	I1204 15:36:11.756799    9926 logs.go:123] Gathering logs for describe nodes ...
	I1204 15:36:11.756810    9926 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1204 15:36:14.293266    9926 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1204 15:36:19.296222    9926 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1204 15:36:19.296486    9926 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1204 15:36:19.333091    9926 logs.go:282] 2 containers: [9bb28cca02e8 e4f90e1b9024]
	I1204 15:36:19.333226    9926 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1204 15:36:19.355560    9926 logs.go:282] 2 containers: [a32a7806b496 ca0c907ad43c]
	I1204 15:36:19.355683    9926 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1204 15:36:19.370793    9926 logs.go:282] 1 containers: [08e34a589c88]
	I1204 15:36:19.370864    9926 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1204 15:36:19.383006    9926 logs.go:282] 2 containers: [7fbf81020cc8 f18e443b8788]
	I1204 15:36:19.383084    9926 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1204 15:36:19.394254    9926 logs.go:282] 1 containers: [5620f978e468]
	I1204 15:36:19.394321    9926 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1204 15:36:19.405247    9926 logs.go:282] 2 containers: [d8b20ca47793 e94aa66fc745]
	I1204 15:36:19.405313    9926 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1204 15:36:19.415226    9926 logs.go:282] 0 containers: []
	W1204 15:36:19.415237    9926 logs.go:284] No container was found matching "kindnet"
	I1204 15:36:19.415303    9926 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1204 15:36:19.425441    9926 logs.go:282] 0 containers: []
	W1204 15:36:19.425451    9926 logs.go:284] No container was found matching "storage-provisioner"
	I1204 15:36:19.425458    9926 logs.go:123] Gathering logs for container status ...
	I1204 15:36:19.425464    9926 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1204 15:36:19.437042    9926 logs.go:123] Gathering logs for kube-controller-manager [d8b20ca47793] ...
	I1204 15:36:19.437055    9926 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d8b20ca47793"
	I1204 15:36:19.458370    9926 logs.go:123] Gathering logs for kube-apiserver [e4f90e1b9024] ...
	I1204 15:36:19.458381    9926 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e4f90e1b9024"
	I1204 15:36:19.477459    9926 logs.go:123] Gathering logs for etcd [a32a7806b496] ...
	I1204 15:36:19.477471    9926 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a32a7806b496"
	I1204 15:36:19.491298    9926 logs.go:123] Gathering logs for kube-proxy [5620f978e468] ...
	I1204 15:36:19.491311    9926 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5620f978e468"
	I1204 15:36:19.502551    9926 logs.go:123] Gathering logs for kube-controller-manager [e94aa66fc745] ...
	I1204 15:36:19.502562    9926 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e94aa66fc745"
	I1204 15:36:19.515937    9926 logs.go:123] Gathering logs for Docker ...
	I1204 15:36:19.515950    9926 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1204 15:36:19.541985    9926 logs.go:123] Gathering logs for kubelet ...
	I1204 15:36:19.541995    9926 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1204 15:36:19.578856    9926 logs.go:123] Gathering logs for describe nodes ...
	I1204 15:36:19.578864    9926 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1204 15:36:19.613452    9926 logs.go:123] Gathering logs for kube-scheduler [7fbf81020cc8] ...
	I1204 15:36:19.613465    9926 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7fbf81020cc8"
	I1204 15:36:19.630013    9926 logs.go:123] Gathering logs for coredns [08e34a589c88] ...
	I1204 15:36:19.630038    9926 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 08e34a589c88"
	I1204 15:36:19.641047    9926 logs.go:123] Gathering logs for kube-scheduler [f18e443b8788] ...
	I1204 15:36:19.641059    9926 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f18e443b8788"
	I1204 15:36:19.663473    9926 logs.go:123] Gathering logs for dmesg ...
	I1204 15:36:19.663486    9926 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1204 15:36:19.667762    9926 logs.go:123] Gathering logs for kube-apiserver [9bb28cca02e8] ...
	I1204 15:36:19.667771    9926 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9bb28cca02e8"
	I1204 15:36:19.681714    9926 logs.go:123] Gathering logs for etcd [ca0c907ad43c] ...
	I1204 15:36:19.681726    9926 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ca0c907ad43c"
	I1204 15:36:22.200737    9926 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1204 15:36:27.203435    9926 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1204 15:36:27.203911    9926 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1204 15:36:27.242974    9926 logs.go:282] 2 containers: [9bb28cca02e8 e4f90e1b9024]
	I1204 15:36:27.243128    9926 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1204 15:36:27.264128    9926 logs.go:282] 2 containers: [a32a7806b496 ca0c907ad43c]
	I1204 15:36:27.264261    9926 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1204 15:36:27.293435    9926 logs.go:282] 1 containers: [08e34a589c88]
	I1204 15:36:27.293518    9926 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1204 15:36:27.305081    9926 logs.go:282] 2 containers: [7fbf81020cc8 f18e443b8788]
	I1204 15:36:27.305166    9926 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1204 15:36:27.315699    9926 logs.go:282] 1 containers: [5620f978e468]
	I1204 15:36:27.315782    9926 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1204 15:36:27.326214    9926 logs.go:282] 2 containers: [d8b20ca47793 e94aa66fc745]
	I1204 15:36:27.326295    9926 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1204 15:36:27.336654    9926 logs.go:282] 0 containers: []
	W1204 15:36:27.336665    9926 logs.go:284] No container was found matching "kindnet"
	I1204 15:36:27.336732    9926 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1204 15:36:27.346636    9926 logs.go:282] 0 containers: []
	W1204 15:36:27.346648    9926 logs.go:284] No container was found matching "storage-provisioner"
	I1204 15:36:27.346656    9926 logs.go:123] Gathering logs for describe nodes ...
	I1204 15:36:27.346662    9926 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1204 15:36:27.383656    9926 logs.go:123] Gathering logs for kube-apiserver [e4f90e1b9024] ...
	I1204 15:36:27.383668    9926 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e4f90e1b9024"
	I1204 15:36:27.403172    9926 logs.go:123] Gathering logs for coredns [08e34a589c88] ...
	I1204 15:36:27.403183    9926 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 08e34a589c88"
	I1204 15:36:27.414735    9926 logs.go:123] Gathering logs for kube-scheduler [f18e443b8788] ...
	I1204 15:36:27.414746    9926 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f18e443b8788"
	I1204 15:36:27.430221    9926 logs.go:123] Gathering logs for kube-controller-manager [d8b20ca47793] ...
	I1204 15:36:27.430232    9926 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d8b20ca47793"
	I1204 15:36:27.447254    9926 logs.go:123] Gathering logs for container status ...
	I1204 15:36:27.447265    9926 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1204 15:36:27.459637    9926 logs.go:123] Gathering logs for dmesg ...
	I1204 15:36:27.459651    9926 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1204 15:36:27.464158    9926 logs.go:123] Gathering logs for kube-apiserver [9bb28cca02e8] ...
	I1204 15:36:27.464164    9926 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9bb28cca02e8"
	I1204 15:36:27.478742    9926 logs.go:123] Gathering logs for etcd [a32a7806b496] ...
	I1204 15:36:27.478752    9926 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a32a7806b496"
	I1204 15:36:27.493245    9926 logs.go:123] Gathering logs for etcd [ca0c907ad43c] ...
	I1204 15:36:27.493254    9926 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ca0c907ad43c"
	I1204 15:36:27.510902    9926 logs.go:123] Gathering logs for kube-controller-manager [e94aa66fc745] ...
	I1204 15:36:27.510913    9926 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e94aa66fc745"
	I1204 15:36:27.524508    9926 logs.go:123] Gathering logs for Docker ...
	I1204 15:36:27.524520    9926 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1204 15:36:27.549632    9926 logs.go:123] Gathering logs for kubelet ...
	I1204 15:36:27.549642    9926 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1204 15:36:27.586319    9926 logs.go:123] Gathering logs for kube-scheduler [7fbf81020cc8] ...
	I1204 15:36:27.586326    9926 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7fbf81020cc8"
	I1204 15:36:27.603000    9926 logs.go:123] Gathering logs for kube-proxy [5620f978e468] ...
	I1204 15:36:27.603012    9926 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5620f978e468"
	I1204 15:36:30.117235    9926 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1204 15:36:35.120226    9926 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1204 15:36:35.120767    9926 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1204 15:36:35.161515    9926 logs.go:282] 2 containers: [9bb28cca02e8 e4f90e1b9024]
	I1204 15:36:35.161658    9926 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1204 15:36:35.183578    9926 logs.go:282] 2 containers: [a32a7806b496 ca0c907ad43c]
	I1204 15:36:35.183706    9926 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1204 15:36:35.199324    9926 logs.go:282] 1 containers: [08e34a589c88]
	I1204 15:36:35.199405    9926 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1204 15:36:35.212070    9926 logs.go:282] 2 containers: [7fbf81020cc8 f18e443b8788]
	I1204 15:36:35.212172    9926 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1204 15:36:35.223108    9926 logs.go:282] 1 containers: [5620f978e468]
	I1204 15:36:35.223183    9926 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1204 15:36:35.233729    9926 logs.go:282] 2 containers: [d8b20ca47793 e94aa66fc745]
	I1204 15:36:35.233805    9926 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1204 15:36:35.244132    9926 logs.go:282] 0 containers: []
	W1204 15:36:35.244146    9926 logs.go:284] No container was found matching "kindnet"
	I1204 15:36:35.244215    9926 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1204 15:36:35.254551    9926 logs.go:282] 0 containers: []
	W1204 15:36:35.254564    9926 logs.go:284] No container was found matching "storage-provisioner"
	I1204 15:36:35.254573    9926 logs.go:123] Gathering logs for kube-controller-manager [e94aa66fc745] ...
	I1204 15:36:35.254579    9926 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e94aa66fc745"
	I1204 15:36:35.266435    9926 logs.go:123] Gathering logs for Docker ...
	I1204 15:36:35.266446    9926 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1204 15:36:35.292563    9926 logs.go:123] Gathering logs for kube-apiserver [e4f90e1b9024] ...
	I1204 15:36:35.292574    9926 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e4f90e1b9024"
	I1204 15:36:35.311203    9926 logs.go:123] Gathering logs for etcd [ca0c907ad43c] ...
	I1204 15:36:35.311213    9926 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ca0c907ad43c"
	I1204 15:36:35.332100    9926 logs.go:123] Gathering logs for kube-scheduler [f18e443b8788] ...
	I1204 15:36:35.332112    9926 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f18e443b8788"
	I1204 15:36:35.350985    9926 logs.go:123] Gathering logs for kube-proxy [5620f978e468] ...
	I1204 15:36:35.350997    9926 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5620f978e468"
	I1204 15:36:35.368577    9926 logs.go:123] Gathering logs for kube-controller-manager [d8b20ca47793] ...
	I1204 15:36:35.368588    9926 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d8b20ca47793"
	I1204 15:36:35.386129    9926 logs.go:123] Gathering logs for kube-apiserver [9bb28cca02e8] ...
	I1204 15:36:35.386142    9926 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9bb28cca02e8"
	I1204 15:36:35.400922    9926 logs.go:123] Gathering logs for kubelet ...
	I1204 15:36:35.400932    9926 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1204 15:36:35.438777    9926 logs.go:123] Gathering logs for coredns [08e34a589c88] ...
	I1204 15:36:35.438787    9926 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 08e34a589c88"
	I1204 15:36:35.450017    9926 logs.go:123] Gathering logs for dmesg ...
	I1204 15:36:35.450028    9926 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1204 15:36:35.454816    9926 logs.go:123] Gathering logs for describe nodes ...
	I1204 15:36:35.454826    9926 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1204 15:36:35.489250    9926 logs.go:123] Gathering logs for etcd [a32a7806b496] ...
	I1204 15:36:35.489261    9926 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a32a7806b496"
	I1204 15:36:35.503483    9926 logs.go:123] Gathering logs for kube-scheduler [7fbf81020cc8] ...
	I1204 15:36:35.503496    9926 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7fbf81020cc8"
	I1204 15:36:35.519490    9926 logs.go:123] Gathering logs for container status ...
	I1204 15:36:35.519500    9926 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
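(Editor's note: the cycle above repeats for the rest of this test. Each /healthz probe against 10.0.2.15:8443 times out after roughly five seconds, after which minikube enumerates the control-plane containers and re-collects their logs before probing again. Below is a minimal Go sketch of that polling pattern, assumed and simplified; it is not minikube's actual api_server.go code, and the TLS handling and overall deadline are placeholders.)

package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
	"time"
)

func main() {
	client := &http.Client{
		// The ~5 s gap between each "Checking apiserver healthz" line and its
		// "stopped: ... Client.Timeout exceeded" line suggests a 5 s client timeout.
		Timeout: 5 * time.Second,
		Transport: &http.Transport{
			// Hypothetical: the sketch skips verification; a real check would
			// trust the cluster CA instead.
			TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
		},
	}
	deadline := time.Now().Add(4 * time.Minute) // overall budget; assumed, not shown in the log
	for time.Now().Before(deadline) {
		resp, err := client.Get("https://10.0.2.15:8443/healthz")
		if err != nil {
			fmt.Println("healthz not ready:", err)    // e.g. Client.Timeout exceeded
			time.Sleep(2500 * time.Millisecond)       // the log shows ~2.5 s between cycles
			continue
		}
		resp.Body.Close()
		if resp.StatusCode == http.StatusOK {
			fmt.Println("apiserver healthy")
			return
		}
	}
	fmt.Println("gave up waiting for apiserver healthz")
}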
	I1204 15:36:38.033165    9926 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1204 15:36:43.035749    9926 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1204 15:36:43.036592    9926 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1204 15:36:43.079131    9926 logs.go:282] 2 containers: [9bb28cca02e8 e4f90e1b9024]
	I1204 15:36:43.079273    9926 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1204 15:36:43.101748    9926 logs.go:282] 2 containers: [a32a7806b496 ca0c907ad43c]
	I1204 15:36:43.101884    9926 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1204 15:36:43.117394    9926 logs.go:282] 1 containers: [08e34a589c88]
	I1204 15:36:43.117477    9926 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1204 15:36:43.129679    9926 logs.go:282] 2 containers: [7fbf81020cc8 f18e443b8788]
	I1204 15:36:43.129757    9926 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1204 15:36:43.140497    9926 logs.go:282] 1 containers: [5620f978e468]
	I1204 15:36:43.140576    9926 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1204 15:36:43.155806    9926 logs.go:282] 2 containers: [d8b20ca47793 e94aa66fc745]
	I1204 15:36:43.155888    9926 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1204 15:36:43.168231    9926 logs.go:282] 0 containers: []
	W1204 15:36:43.168244    9926 logs.go:284] No container was found matching "kindnet"
	I1204 15:36:43.168314    9926 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1204 15:36:43.178523    9926 logs.go:282] 0 containers: []
	W1204 15:36:43.178534    9926 logs.go:284] No container was found matching "storage-provisioner"
	I1204 15:36:43.178542    9926 logs.go:123] Gathering logs for kubelet ...
	I1204 15:36:43.178547    9926 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1204 15:36:43.214005    9926 logs.go:123] Gathering logs for etcd [ca0c907ad43c] ...
	I1204 15:36:43.214012    9926 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ca0c907ad43c"
	I1204 15:36:43.235267    9926 logs.go:123] Gathering logs for dmesg ...
	I1204 15:36:43.235281    9926 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1204 15:36:43.239859    9926 logs.go:123] Gathering logs for coredns [08e34a589c88] ...
	I1204 15:36:43.239868    9926 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 08e34a589c88"
	I1204 15:36:43.251951    9926 logs.go:123] Gathering logs for kube-controller-manager [e94aa66fc745] ...
	I1204 15:36:43.251963    9926 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e94aa66fc745"
	I1204 15:36:43.263562    9926 logs.go:123] Gathering logs for container status ...
	I1204 15:36:43.263573    9926 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1204 15:36:43.275242    9926 logs.go:123] Gathering logs for kube-scheduler [f18e443b8788] ...
	I1204 15:36:43.275255    9926 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f18e443b8788"
	I1204 15:36:43.295747    9926 logs.go:123] Gathering logs for kube-proxy [5620f978e468] ...
	I1204 15:36:43.295758    9926 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5620f978e468"
	I1204 15:36:43.307834    9926 logs.go:123] Gathering logs for describe nodes ...
	I1204 15:36:43.307848    9926 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1204 15:36:43.342167    9926 logs.go:123] Gathering logs for kube-apiserver [9bb28cca02e8] ...
	I1204 15:36:43.342181    9926 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9bb28cca02e8"
	I1204 15:36:43.356553    9926 logs.go:123] Gathering logs for kube-apiserver [e4f90e1b9024] ...
	I1204 15:36:43.356567    9926 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e4f90e1b9024"
	I1204 15:36:43.377631    9926 logs.go:123] Gathering logs for etcd [a32a7806b496] ...
	I1204 15:36:43.377640    9926 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a32a7806b496"
	I1204 15:36:43.391903    9926 logs.go:123] Gathering logs for kube-scheduler [7fbf81020cc8] ...
	I1204 15:36:43.391912    9926 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7fbf81020cc8"
	I1204 15:36:43.408540    9926 logs.go:123] Gathering logs for kube-controller-manager [d8b20ca47793] ...
	I1204 15:36:43.408555    9926 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d8b20ca47793"
	I1204 15:36:43.427401    9926 logs.go:123] Gathering logs for Docker ...
	I1204 15:36:43.427411    9926 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1204 15:36:45.954326    9926 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1204 15:36:50.957095    9926 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1204 15:36:50.957316    9926 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1204 15:36:50.989923    9926 logs.go:282] 2 containers: [9bb28cca02e8 e4f90e1b9024]
	I1204 15:36:50.990052    9926 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1204 15:36:51.011788    9926 logs.go:282] 2 containers: [a32a7806b496 ca0c907ad43c]
	I1204 15:36:51.011909    9926 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1204 15:36:51.027795    9926 logs.go:282] 1 containers: [08e34a589c88]
	I1204 15:36:51.027884    9926 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1204 15:36:51.046403    9926 logs.go:282] 2 containers: [7fbf81020cc8 f18e443b8788]
	I1204 15:36:51.046484    9926 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1204 15:36:51.059488    9926 logs.go:282] 1 containers: [5620f978e468]
	I1204 15:36:51.059558    9926 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1204 15:36:51.076940    9926 logs.go:282] 2 containers: [d8b20ca47793 e94aa66fc745]
	I1204 15:36:51.077021    9926 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1204 15:36:51.087124    9926 logs.go:282] 0 containers: []
	W1204 15:36:51.087136    9926 logs.go:284] No container was found matching "kindnet"
	I1204 15:36:51.087195    9926 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1204 15:36:51.097597    9926 logs.go:282] 0 containers: []
	W1204 15:36:51.097609    9926 logs.go:284] No container was found matching "storage-provisioner"
	I1204 15:36:51.097619    9926 logs.go:123] Gathering logs for coredns [08e34a589c88] ...
	I1204 15:36:51.097625    9926 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 08e34a589c88"
	I1204 15:36:51.108687    9926 logs.go:123] Gathering logs for describe nodes ...
	I1204 15:36:51.108699    9926 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1204 15:36:51.143951    9926 logs.go:123] Gathering logs for kube-apiserver [9bb28cca02e8] ...
	I1204 15:36:51.143962    9926 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9bb28cca02e8"
	I1204 15:36:51.158107    9926 logs.go:123] Gathering logs for etcd [ca0c907ad43c] ...
	I1204 15:36:51.158118    9926 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ca0c907ad43c"
	I1204 15:36:51.176241    9926 logs.go:123] Gathering logs for kube-proxy [5620f978e468] ...
	I1204 15:36:51.176261    9926 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5620f978e468"
	I1204 15:36:51.191284    9926 logs.go:123] Gathering logs for kubelet ...
	I1204 15:36:51.191299    9926 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1204 15:36:51.230897    9926 logs.go:123] Gathering logs for dmesg ...
	I1204 15:36:51.230936    9926 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1204 15:36:51.236364    9926 logs.go:123] Gathering logs for kube-scheduler [7fbf81020cc8] ...
	I1204 15:36:51.236378    9926 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7fbf81020cc8"
	I1204 15:36:51.254762    9926 logs.go:123] Gathering logs for kube-scheduler [f18e443b8788] ...
	I1204 15:36:51.254783    9926 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f18e443b8788"
	I1204 15:36:51.272520    9926 logs.go:123] Gathering logs for kube-apiserver [e4f90e1b9024] ...
	I1204 15:36:51.272535    9926 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e4f90e1b9024"
	I1204 15:36:51.294579    9926 logs.go:123] Gathering logs for etcd [a32a7806b496] ...
	I1204 15:36:51.294605    9926 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a32a7806b496"
	I1204 15:36:51.310974    9926 logs.go:123] Gathering logs for kube-controller-manager [d8b20ca47793] ...
	I1204 15:36:51.310985    9926 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d8b20ca47793"
	I1204 15:36:51.329685    9926 logs.go:123] Gathering logs for kube-controller-manager [e94aa66fc745] ...
	I1204 15:36:51.329696    9926 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e94aa66fc745"
	I1204 15:36:51.341178    9926 logs.go:123] Gathering logs for Docker ...
	I1204 15:36:51.341188    9926 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1204 15:36:51.367482    9926 logs.go:123] Gathering logs for container status ...
	I1204 15:36:51.367490    9926 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
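(Editor's note: the container enumeration at the top of each cycle is a series of docker ps -a --filter=name=k8s_<component> --format={{.ID}} calls, one per control-plane component. A hedged local sketch of the same enumeration in Go follows; the real runs go through ssh_runner over SSH into the guest, which this simplified version omits.)

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// containerIDs returns the IDs of all containers (running or exited) whose
// name matches the k8s_<component> prefix that dockershim/cri-dockerd gives
// kubeadm-managed pods.
func containerIDs(component string) ([]string, error) {
	out, err := exec.Command("docker", "ps", "-a",
		"--filter", "name=k8s_"+component,
		"--format", "{{.ID}}").Output()
	if err != nil {
		return nil, err
	}
	return strings.Fields(string(out)), nil
}

func main() {
	for _, c := range []string{"kube-apiserver", "etcd", "coredns"} {
		ids, err := containerIDs(c)
		if err != nil {
			fmt.Println(c, "error:", err)
			continue
		}
		fmt.Printf("%d containers: %v\n", len(ids), ids) // mirrors the logs.go:282 lines above
	}
}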
	I1204 15:36:53.882730    9926 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1204 15:36:58.885129    9926 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1204 15:36:58.885355    9926 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1204 15:36:58.898027    9926 logs.go:282] 2 containers: [9bb28cca02e8 e4f90e1b9024]
	I1204 15:36:58.898107    9926 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1204 15:36:58.909557    9926 logs.go:282] 2 containers: [a32a7806b496 ca0c907ad43c]
	I1204 15:36:58.909641    9926 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1204 15:36:58.920557    9926 logs.go:282] 1 containers: [08e34a589c88]
	I1204 15:36:58.920637    9926 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1204 15:36:58.931386    9926 logs.go:282] 2 containers: [7fbf81020cc8 f18e443b8788]
	I1204 15:36:58.931464    9926 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1204 15:36:58.941835    9926 logs.go:282] 1 containers: [5620f978e468]
	I1204 15:36:58.941915    9926 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1204 15:36:58.952576    9926 logs.go:282] 2 containers: [d8b20ca47793 e94aa66fc745]
	I1204 15:36:58.952654    9926 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1204 15:36:58.962803    9926 logs.go:282] 0 containers: []
	W1204 15:36:58.962814    9926 logs.go:284] No container was found matching "kindnet"
	I1204 15:36:58.962876    9926 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1204 15:36:58.973176    9926 logs.go:282] 0 containers: []
	W1204 15:36:58.973190    9926 logs.go:284] No container was found matching "storage-provisioner"
	I1204 15:36:58.973198    9926 logs.go:123] Gathering logs for kube-scheduler [f18e443b8788] ...
	I1204 15:36:58.973203    9926 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f18e443b8788"
	I1204 15:36:58.988795    9926 logs.go:123] Gathering logs for kube-controller-manager [e94aa66fc745] ...
	I1204 15:36:58.988805    9926 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e94aa66fc745"
	I1204 15:36:59.000290    9926 logs.go:123] Gathering logs for kube-scheduler [7fbf81020cc8] ...
	I1204 15:36:59.000305    9926 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7fbf81020cc8"
	I1204 15:36:59.017149    9926 logs.go:123] Gathering logs for describe nodes ...
	I1204 15:36:59.017159    9926 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1204 15:36:59.054979    9926 logs.go:123] Gathering logs for etcd [a32a7806b496] ...
	I1204 15:36:59.054990    9926 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a32a7806b496"
	I1204 15:36:59.069816    9926 logs.go:123] Gathering logs for coredns [08e34a589c88] ...
	I1204 15:36:59.069827    9926 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 08e34a589c88"
	I1204 15:36:59.081153    9926 logs.go:123] Gathering logs for kubelet ...
	I1204 15:36:59.081164    9926 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1204 15:36:59.117318    9926 logs.go:123] Gathering logs for kube-proxy [5620f978e468] ...
	I1204 15:36:59.117326    9926 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5620f978e468"
	I1204 15:36:59.128967    9926 logs.go:123] Gathering logs for Docker ...
	I1204 15:36:59.128976    9926 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1204 15:36:59.153097    9926 logs.go:123] Gathering logs for container status ...
	I1204 15:36:59.153104    9926 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1204 15:36:59.164357    9926 logs.go:123] Gathering logs for dmesg ...
	I1204 15:36:59.164367    9926 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1204 15:36:59.168736    9926 logs.go:123] Gathering logs for kube-apiserver [e4f90e1b9024] ...
	I1204 15:36:59.168744    9926 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e4f90e1b9024"
	I1204 15:36:59.188562    9926 logs.go:123] Gathering logs for etcd [ca0c907ad43c] ...
	I1204 15:36:59.188572    9926 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ca0c907ad43c"
	I1204 15:36:59.210565    9926 logs.go:123] Gathering logs for kube-controller-manager [d8b20ca47793] ...
	I1204 15:36:59.210576    9926 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d8b20ca47793"
	I1204 15:36:59.228544    9926 logs.go:123] Gathering logs for kube-apiserver [9bb28cca02e8] ...
	I1204 15:36:59.228555    9926 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9bb28cca02e8"
	I1204 15:37:01.748317    9926 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1204 15:37:06.750589    9926 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1204 15:37:06.750715    9926 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1204 15:37:06.762815    9926 logs.go:282] 2 containers: [9bb28cca02e8 e4f90e1b9024]
	I1204 15:37:06.762906    9926 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1204 15:37:06.774935    9926 logs.go:282] 2 containers: [a32a7806b496 ca0c907ad43c]
	I1204 15:37:06.775020    9926 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1204 15:37:06.786607    9926 logs.go:282] 1 containers: [08e34a589c88]
	I1204 15:37:06.786696    9926 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1204 15:37:06.798810    9926 logs.go:282] 2 containers: [7fbf81020cc8 f18e443b8788]
	I1204 15:37:06.798890    9926 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1204 15:37:06.810576    9926 logs.go:282] 1 containers: [5620f978e468]
	I1204 15:37:06.810656    9926 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1204 15:37:06.821192    9926 logs.go:282] 2 containers: [d8b20ca47793 e94aa66fc745]
	I1204 15:37:06.821269    9926 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1204 15:37:06.831318    9926 logs.go:282] 0 containers: []
	W1204 15:37:06.831330    9926 logs.go:284] No container was found matching "kindnet"
	I1204 15:37:06.831404    9926 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1204 15:37:06.842027    9926 logs.go:282] 0 containers: []
	W1204 15:37:06.842038    9926 logs.go:284] No container was found matching "storage-provisioner"
	I1204 15:37:06.842048    9926 logs.go:123] Gathering logs for etcd [a32a7806b496] ...
	I1204 15:37:06.842054    9926 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a32a7806b496"
	I1204 15:37:06.856025    9926 logs.go:123] Gathering logs for coredns [08e34a589c88] ...
	I1204 15:37:06.856037    9926 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 08e34a589c88"
	I1204 15:37:06.867113    9926 logs.go:123] Gathering logs for kube-proxy [5620f978e468] ...
	I1204 15:37:06.867124    9926 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5620f978e468"
	I1204 15:37:06.878747    9926 logs.go:123] Gathering logs for dmesg ...
	I1204 15:37:06.878760    9926 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1204 15:37:06.883605    9926 logs.go:123] Gathering logs for etcd [ca0c907ad43c] ...
	I1204 15:37:06.883614    9926 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ca0c907ad43c"
	I1204 15:37:06.901147    9926 logs.go:123] Gathering logs for kube-controller-manager [d8b20ca47793] ...
	I1204 15:37:06.901160    9926 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d8b20ca47793"
	I1204 15:37:06.918664    9926 logs.go:123] Gathering logs for kube-controller-manager [e94aa66fc745] ...
	I1204 15:37:06.918675    9926 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e94aa66fc745"
	I1204 15:37:06.930409    9926 logs.go:123] Gathering logs for Docker ...
	I1204 15:37:06.930423    9926 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1204 15:37:06.954455    9926 logs.go:123] Gathering logs for kubelet ...
	I1204 15:37:06.954465    9926 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1204 15:37:06.989138    9926 logs.go:123] Gathering logs for describe nodes ...
	I1204 15:37:06.989147    9926 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1204 15:37:07.024503    9926 logs.go:123] Gathering logs for kube-apiserver [9bb28cca02e8] ...
	I1204 15:37:07.024513    9926 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9bb28cca02e8"
	I1204 15:37:07.040118    9926 logs.go:123] Gathering logs for kube-apiserver [e4f90e1b9024] ...
	I1204 15:37:07.040131    9926 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e4f90e1b9024"
	I1204 15:37:07.058264    9926 logs.go:123] Gathering logs for kube-scheduler [7fbf81020cc8] ...
	I1204 15:37:07.058275    9926 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7fbf81020cc8"
	I1204 15:37:07.075902    9926 logs.go:123] Gathering logs for kube-scheduler [f18e443b8788] ...
	I1204 15:37:07.075914    9926 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f18e443b8788"
	I1204 15:37:07.096786    9926 logs.go:123] Gathering logs for container status ...
	I1204 15:37:07.096798    9926 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1204 15:37:09.611232    9926 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1204 15:37:14.612712    9926 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1204 15:37:14.612892    9926 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1204 15:37:14.625344    9926 logs.go:282] 2 containers: [9bb28cca02e8 e4f90e1b9024]
	I1204 15:37:14.625424    9926 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1204 15:37:14.635680    9926 logs.go:282] 2 containers: [a32a7806b496 ca0c907ad43c]
	I1204 15:37:14.635760    9926 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1204 15:37:14.650826    9926 logs.go:282] 1 containers: [08e34a589c88]
	I1204 15:37:14.650912    9926 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1204 15:37:14.661111    9926 logs.go:282] 2 containers: [7fbf81020cc8 f18e443b8788]
	I1204 15:37:14.661190    9926 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1204 15:37:14.671525    9926 logs.go:282] 1 containers: [5620f978e468]
	I1204 15:37:14.671601    9926 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1204 15:37:14.682654    9926 logs.go:282] 2 containers: [d8b20ca47793 e94aa66fc745]
	I1204 15:37:14.682731    9926 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1204 15:37:14.693158    9926 logs.go:282] 0 containers: []
	W1204 15:37:14.693169    9926 logs.go:284] No container was found matching "kindnet"
	I1204 15:37:14.693236    9926 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1204 15:37:14.703584    9926 logs.go:282] 0 containers: []
	W1204 15:37:14.703594    9926 logs.go:284] No container was found matching "storage-provisioner"
	I1204 15:37:14.703602    9926 logs.go:123] Gathering logs for kube-scheduler [f18e443b8788] ...
	I1204 15:37:14.703607    9926 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f18e443b8788"
	I1204 15:37:14.721985    9926 logs.go:123] Gathering logs for kube-proxy [5620f978e468] ...
	I1204 15:37:14.721997    9926 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5620f978e468"
	I1204 15:37:14.733726    9926 logs.go:123] Gathering logs for Docker ...
	I1204 15:37:14.733739    9926 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1204 15:37:14.759814    9926 logs.go:123] Gathering logs for container status ...
	I1204 15:37:14.759824    9926 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1204 15:37:14.771106    9926 logs.go:123] Gathering logs for kube-apiserver [9bb28cca02e8] ...
	I1204 15:37:14.771116    9926 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9bb28cca02e8"
	I1204 15:37:14.785028    9926 logs.go:123] Gathering logs for etcd [ca0c907ad43c] ...
	I1204 15:37:14.785041    9926 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ca0c907ad43c"
	I1204 15:37:14.802776    9926 logs.go:123] Gathering logs for coredns [08e34a589c88] ...
	I1204 15:37:14.802789    9926 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 08e34a589c88"
	I1204 15:37:14.816737    9926 logs.go:123] Gathering logs for kubelet ...
	I1204 15:37:14.816750    9926 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1204 15:37:14.853871    9926 logs.go:123] Gathering logs for describe nodes ...
	I1204 15:37:14.853880    9926 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1204 15:37:14.889590    9926 logs.go:123] Gathering logs for kube-apiserver [e4f90e1b9024] ...
	I1204 15:37:14.889600    9926 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e4f90e1b9024"
	I1204 15:37:14.908469    9926 logs.go:123] Gathering logs for kube-controller-manager [d8b20ca47793] ...
	I1204 15:37:14.908480    9926 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d8b20ca47793"
	I1204 15:37:14.925351    9926 logs.go:123] Gathering logs for dmesg ...
	I1204 15:37:14.925364    9926 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1204 15:37:14.929743    9926 logs.go:123] Gathering logs for kube-scheduler [7fbf81020cc8] ...
	I1204 15:37:14.929749    9926 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7fbf81020cc8"
	I1204 15:37:14.950508    9926 logs.go:123] Gathering logs for kube-controller-manager [e94aa66fc745] ...
	I1204 15:37:14.950519    9926 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e94aa66fc745"
	I1204 15:37:14.961991    9926 logs.go:123] Gathering logs for etcd [a32a7806b496] ...
	I1204 15:37:14.962004    9926 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a32a7806b496"
	I1204 15:37:17.479771    9926 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1204 15:37:22.482087    9926 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1204 15:37:22.482343    9926 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1204 15:37:22.503567    9926 logs.go:282] 2 containers: [9bb28cca02e8 e4f90e1b9024]
	I1204 15:37:22.503689    9926 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1204 15:37:22.518736    9926 logs.go:282] 2 containers: [a32a7806b496 ca0c907ad43c]
	I1204 15:37:22.518826    9926 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1204 15:37:22.531731    9926 logs.go:282] 1 containers: [08e34a589c88]
	I1204 15:37:22.531816    9926 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1204 15:37:22.543260    9926 logs.go:282] 2 containers: [7fbf81020cc8 f18e443b8788]
	I1204 15:37:22.543339    9926 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1204 15:37:22.554083    9926 logs.go:282] 1 containers: [5620f978e468]
	I1204 15:37:22.554163    9926 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1204 15:37:22.569846    9926 logs.go:282] 2 containers: [d8b20ca47793 e94aa66fc745]
	I1204 15:37:22.569915    9926 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1204 15:37:22.579875    9926 logs.go:282] 0 containers: []
	W1204 15:37:22.579888    9926 logs.go:284] No container was found matching "kindnet"
	I1204 15:37:22.579953    9926 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1204 15:37:22.589473    9926 logs.go:282] 0 containers: []
	W1204 15:37:22.589488    9926 logs.go:284] No container was found matching "storage-provisioner"
	I1204 15:37:22.589496    9926 logs.go:123] Gathering logs for describe nodes ...
	I1204 15:37:22.589501    9926 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1204 15:37:22.625415    9926 logs.go:123] Gathering logs for coredns [08e34a589c88] ...
	I1204 15:37:22.625427    9926 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 08e34a589c88"
	I1204 15:37:22.636570    9926 logs.go:123] Gathering logs for dmesg ...
	I1204 15:37:22.636586    9926 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1204 15:37:22.640806    9926 logs.go:123] Gathering logs for etcd [a32a7806b496] ...
	I1204 15:37:22.640816    9926 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a32a7806b496"
	I1204 15:37:22.667186    9926 logs.go:123] Gathering logs for kube-controller-manager [e94aa66fc745] ...
	I1204 15:37:22.667198    9926 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e94aa66fc745"
	I1204 15:37:22.679102    9926 logs.go:123] Gathering logs for kubelet ...
	I1204 15:37:22.679116    9926 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1204 15:37:22.717299    9926 logs.go:123] Gathering logs for etcd [ca0c907ad43c] ...
	I1204 15:37:22.717322    9926 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ca0c907ad43c"
	I1204 15:37:22.734462    9926 logs.go:123] Gathering logs for kube-scheduler [f18e443b8788] ...
	I1204 15:37:22.734475    9926 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f18e443b8788"
	I1204 15:37:22.756833    9926 logs.go:123] Gathering logs for kube-proxy [5620f978e468] ...
	I1204 15:37:22.756847    9926 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5620f978e468"
	I1204 15:37:22.769466    9926 logs.go:123] Gathering logs for container status ...
	I1204 15:37:22.769481    9926 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1204 15:37:22.781876    9926 logs.go:123] Gathering logs for kube-apiserver [9bb28cca02e8] ...
	I1204 15:37:22.781889    9926 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9bb28cca02e8"
	I1204 15:37:22.796567    9926 logs.go:123] Gathering logs for kube-apiserver [e4f90e1b9024] ...
	I1204 15:37:22.796578    9926 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e4f90e1b9024"
	I1204 15:37:22.818011    9926 logs.go:123] Gathering logs for kube-scheduler [7fbf81020cc8] ...
	I1204 15:37:22.818025    9926 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7fbf81020cc8"
	I1204 15:37:22.834437    9926 logs.go:123] Gathering logs for kube-controller-manager [d8b20ca47793] ...
	I1204 15:37:22.834452    9926 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d8b20ca47793"
	I1204 15:37:22.853126    9926 logs.go:123] Gathering logs for Docker ...
	I1204 15:37:22.853136    9926 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1204 15:37:25.380900    9926 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1204 15:37:30.383291    9926 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1204 15:37:30.383845    9926 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1204 15:37:30.424553    9926 logs.go:282] 2 containers: [9bb28cca02e8 e4f90e1b9024]
	I1204 15:37:30.424708    9926 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1204 15:37:30.446386    9926 logs.go:282] 2 containers: [a32a7806b496 ca0c907ad43c]
	I1204 15:37:30.446515    9926 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1204 15:37:30.461656    9926 logs.go:282] 1 containers: [08e34a589c88]
	I1204 15:37:30.461744    9926 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1204 15:37:30.474312    9926 logs.go:282] 2 containers: [7fbf81020cc8 f18e443b8788]
	I1204 15:37:30.474396    9926 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1204 15:37:30.485702    9926 logs.go:282] 1 containers: [5620f978e468]
	I1204 15:37:30.485786    9926 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1204 15:37:30.501008    9926 logs.go:282] 2 containers: [d8b20ca47793 e94aa66fc745]
	I1204 15:37:30.501092    9926 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1204 15:37:30.514838    9926 logs.go:282] 0 containers: []
	W1204 15:37:30.514852    9926 logs.go:284] No container was found matching "kindnet"
	I1204 15:37:30.514920    9926 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1204 15:37:30.525018    9926 logs.go:282] 0 containers: []
	W1204 15:37:30.525027    9926 logs.go:284] No container was found matching "storage-provisioner"
	I1204 15:37:30.525035    9926 logs.go:123] Gathering logs for dmesg ...
	I1204 15:37:30.525055    9926 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1204 15:37:30.529388    9926 logs.go:123] Gathering logs for kube-apiserver [9bb28cca02e8] ...
	I1204 15:37:30.529394    9926 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9bb28cca02e8"
	I1204 15:37:30.547034    9926 logs.go:123] Gathering logs for kubelet ...
	I1204 15:37:30.547045    9926 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1204 15:37:30.584527    9926 logs.go:123] Gathering logs for describe nodes ...
	I1204 15:37:30.584539    9926 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1204 15:37:30.619548    9926 logs.go:123] Gathering logs for etcd [a32a7806b496] ...
	I1204 15:37:30.619560    9926 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a32a7806b496"
	I1204 15:37:30.633509    9926 logs.go:123] Gathering logs for etcd [ca0c907ad43c] ...
	I1204 15:37:30.633523    9926 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ca0c907ad43c"
	I1204 15:37:30.651073    9926 logs.go:123] Gathering logs for coredns [08e34a589c88] ...
	I1204 15:37:30.651086    9926 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 08e34a589c88"
	I1204 15:37:30.662721    9926 logs.go:123] Gathering logs for Docker ...
	I1204 15:37:30.662732    9926 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1204 15:37:30.687636    9926 logs.go:123] Gathering logs for kube-proxy [5620f978e468] ...
	I1204 15:37:30.687646    9926 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5620f978e468"
	I1204 15:37:30.699760    9926 logs.go:123] Gathering logs for kube-controller-manager [d8b20ca47793] ...
	I1204 15:37:30.699772    9926 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d8b20ca47793"
	I1204 15:37:30.717378    9926 logs.go:123] Gathering logs for kube-apiserver [e4f90e1b9024] ...
	I1204 15:37:30.717388    9926 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e4f90e1b9024"
	I1204 15:37:30.736644    9926 logs.go:123] Gathering logs for kube-scheduler [7fbf81020cc8] ...
	I1204 15:37:30.736655    9926 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7fbf81020cc8"
	I1204 15:37:30.757652    9926 logs.go:123] Gathering logs for kube-scheduler [f18e443b8788] ...
	I1204 15:37:30.757666    9926 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f18e443b8788"
	I1204 15:37:30.773215    9926 logs.go:123] Gathering logs for kube-controller-manager [e94aa66fc745] ...
	I1204 15:37:30.773225    9926 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e94aa66fc745"
	I1204 15:37:30.789484    9926 logs.go:123] Gathering logs for container status ...
	I1204 15:37:30.789493    9926 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1204 15:37:33.304721    9926 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1204 15:37:38.307304    9926 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1204 15:37:38.307581    9926 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1204 15:37:38.329002    9926 logs.go:282] 2 containers: [9bb28cca02e8 e4f90e1b9024]
	I1204 15:37:38.329121    9926 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1204 15:37:38.346014    9926 logs.go:282] 2 containers: [a32a7806b496 ca0c907ad43c]
	I1204 15:37:38.346101    9926 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1204 15:37:38.360685    9926 logs.go:282] 1 containers: [08e34a589c88]
	I1204 15:37:38.360772    9926 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1204 15:37:38.371177    9926 logs.go:282] 2 containers: [7fbf81020cc8 f18e443b8788]
	I1204 15:37:38.371251    9926 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1204 15:37:38.381918    9926 logs.go:282] 1 containers: [5620f978e468]
	I1204 15:37:38.382003    9926 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1204 15:37:38.393741    9926 logs.go:282] 2 containers: [d8b20ca47793 e94aa66fc745]
	I1204 15:37:38.393824    9926 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1204 15:37:38.411894    9926 logs.go:282] 0 containers: []
	W1204 15:37:38.411908    9926 logs.go:284] No container was found matching "kindnet"
	I1204 15:37:38.411972    9926 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1204 15:37:38.422085    9926 logs.go:282] 0 containers: []
	W1204 15:37:38.422098    9926 logs.go:284] No container was found matching "storage-provisioner"
	I1204 15:37:38.422106    9926 logs.go:123] Gathering logs for kube-scheduler [f18e443b8788] ...
	I1204 15:37:38.422112    9926 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f18e443b8788"
	I1204 15:37:38.438044    9926 logs.go:123] Gathering logs for kube-apiserver [9bb28cca02e8] ...
	I1204 15:37:38.438054    9926 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9bb28cca02e8"
	I1204 15:37:38.457067    9926 logs.go:123] Gathering logs for etcd [ca0c907ad43c] ...
	I1204 15:37:38.457081    9926 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ca0c907ad43c"
	I1204 15:37:38.475103    9926 logs.go:123] Gathering logs for kube-scheduler [7fbf81020cc8] ...
	I1204 15:37:38.475115    9926 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7fbf81020cc8"
	I1204 15:37:38.493773    9926 logs.go:123] Gathering logs for kube-controller-manager [d8b20ca47793] ...
	I1204 15:37:38.493786    9926 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d8b20ca47793"
	I1204 15:37:38.510889    9926 logs.go:123] Gathering logs for container status ...
	I1204 15:37:38.510901    9926 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1204 15:37:38.522516    9926 logs.go:123] Gathering logs for etcd [a32a7806b496] ...
	I1204 15:37:38.522527    9926 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a32a7806b496"
	I1204 15:37:38.537058    9926 logs.go:123] Gathering logs for describe nodes ...
	I1204 15:37:38.537068    9926 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1204 15:37:38.570339    9926 logs.go:123] Gathering logs for kube-apiserver [e4f90e1b9024] ...
	I1204 15:37:38.570354    9926 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e4f90e1b9024"
	I1204 15:37:38.589143    9926 logs.go:123] Gathering logs for coredns [08e34a589c88] ...
	I1204 15:37:38.589154    9926 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 08e34a589c88"
	I1204 15:37:38.599919    9926 logs.go:123] Gathering logs for kube-proxy [5620f978e468] ...
	I1204 15:37:38.599931    9926 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5620f978e468"
	I1204 15:37:38.611638    9926 logs.go:123] Gathering logs for dmesg ...
	I1204 15:37:38.611651    9926 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1204 15:37:38.615944    9926 logs.go:123] Gathering logs for kube-controller-manager [e94aa66fc745] ...
	I1204 15:37:38.615954    9926 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e94aa66fc745"
	I1204 15:37:38.627184    9926 logs.go:123] Gathering logs for Docker ...
	I1204 15:37:38.627197    9926 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1204 15:37:38.652253    9926 logs.go:123] Gathering logs for kubelet ...
	I1204 15:37:38.652262    9926 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1204 15:37:41.191015    9926 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1204 15:37:46.193553    9926 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1204 15:37:46.194149    9926 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1204 15:37:46.241037    9926 logs.go:282] 2 containers: [9bb28cca02e8 e4f90e1b9024]
	I1204 15:37:46.241248    9926 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1204 15:37:46.260346    9926 logs.go:282] 2 containers: [a32a7806b496 ca0c907ad43c]
	I1204 15:37:46.260450    9926 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1204 15:37:46.274343    9926 logs.go:282] 1 containers: [08e34a589c88]
	I1204 15:37:46.274440    9926 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1204 15:37:46.286789    9926 logs.go:282] 2 containers: [7fbf81020cc8 f18e443b8788]
	I1204 15:37:46.286866    9926 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1204 15:37:46.302165    9926 logs.go:282] 1 containers: [5620f978e468]
	I1204 15:37:46.302244    9926 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1204 15:37:46.313316    9926 logs.go:282] 2 containers: [d8b20ca47793 e94aa66fc745]
	I1204 15:37:46.313394    9926 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1204 15:37:46.323858    9926 logs.go:282] 0 containers: []
	W1204 15:37:46.323870    9926 logs.go:284] No container was found matching "kindnet"
	I1204 15:37:46.323944    9926 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1204 15:37:46.334465    9926 logs.go:282] 0 containers: []
	W1204 15:37:46.334476    9926 logs.go:284] No container was found matching "storage-provisioner"
	I1204 15:37:46.334487    9926 logs.go:123] Gathering logs for dmesg ...
	I1204 15:37:46.334493    9926 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1204 15:37:46.339328    9926 logs.go:123] Gathering logs for describe nodes ...
	I1204 15:37:46.339338    9926 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1204 15:37:46.377269    9926 logs.go:123] Gathering logs for kube-apiserver [9bb28cca02e8] ...
	I1204 15:37:46.377280    9926 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9bb28cca02e8"
	I1204 15:37:46.395936    9926 logs.go:123] Gathering logs for etcd [ca0c907ad43c] ...
	I1204 15:37:46.395950    9926 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ca0c907ad43c"
	I1204 15:37:46.419661    9926 logs.go:123] Gathering logs for kube-controller-manager [d8b20ca47793] ...
	I1204 15:37:46.419678    9926 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d8b20ca47793"
	I1204 15:37:46.437420    9926 logs.go:123] Gathering logs for kube-apiserver [e4f90e1b9024] ...
	I1204 15:37:46.437431    9926 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e4f90e1b9024"
	I1204 15:37:46.457080    9926 logs.go:123] Gathering logs for kube-scheduler [7fbf81020cc8] ...
	I1204 15:37:46.457093    9926 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7fbf81020cc8"
	I1204 15:37:46.473121    9926 logs.go:123] Gathering logs for kube-scheduler [f18e443b8788] ...
	I1204 15:37:46.473134    9926 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f18e443b8788"
	I1204 15:37:46.496366    9926 logs.go:123] Gathering logs for Docker ...
	I1204 15:37:46.496378    9926 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1204 15:37:46.519649    9926 logs.go:123] Gathering logs for container status ...
	I1204 15:37:46.519659    9926 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1204 15:37:46.533996    9926 logs.go:123] Gathering logs for kube-proxy [5620f978e468] ...
	I1204 15:37:46.534009    9926 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5620f978e468"
	I1204 15:37:46.545758    9926 logs.go:123] Gathering logs for kubelet ...
	I1204 15:37:46.545772    9926 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1204 15:37:46.582953    9926 logs.go:123] Gathering logs for etcd [a32a7806b496] ...
	I1204 15:37:46.582963    9926 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a32a7806b496"
	I1204 15:37:46.596804    9926 logs.go:123] Gathering logs for coredns [08e34a589c88] ...
	I1204 15:37:46.596816    9926 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 08e34a589c88"
	I1204 15:37:46.607939    9926 logs.go:123] Gathering logs for kube-controller-manager [e94aa66fc745] ...
	I1204 15:37:46.607952    9926 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e94aa66fc745"
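(Editor's note: each "Gathering logs for ..." pair above shells out through /bin/bash -c, using docker logs --tail 400 for containers and journalctl -n 400 for the kubelet and Docker units. A simplified sketch of that collection step follows; the target list is illustrative only, built from IDs that appear in this log.)

package main

import (
	"fmt"
	"os/exec"
)

// gather runs one collection command through bash and returns its combined output.
func gather(cmd string) (string, error) {
	out, err := exec.Command("/bin/bash", "-c", cmd).CombinedOutput()
	return string(out), err
}

func main() {
	targets := []struct{ name, cmd string }{
		{"kubelet", "sudo journalctl -u kubelet -n 400"},
		{"Docker", "sudo journalctl -u docker -u cri-docker -n 400"},
		{"etcd [a32a7806b496]", "docker logs --tail 400 a32a7806b496"},
	}
	for _, t := range targets {
		fmt.Printf("Gathering logs for %s ...\n", t.name)
		out, err := gather(t.cmd)
		if err != nil {
			fmt.Println("error:", err)
			continue
		}
		fmt.Printf("collected %d bytes\n", len(out))
	}
}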
	I1204 15:37:49.124544    9926 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1204 15:37:54.125458    9926 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1204 15:37:54.125603    9926 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1204 15:37:54.140032    9926 logs.go:282] 2 containers: [9bb28cca02e8 e4f90e1b9024]
	I1204 15:37:54.140124    9926 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1204 15:37:54.151880    9926 logs.go:282] 2 containers: [a32a7806b496 ca0c907ad43c]
	I1204 15:37:54.151966    9926 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1204 15:37:54.163054    9926 logs.go:282] 1 containers: [08e34a589c88]
	I1204 15:37:54.163136    9926 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1204 15:37:54.176613    9926 logs.go:282] 2 containers: [7fbf81020cc8 f18e443b8788]
	I1204 15:37:54.176719    9926 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1204 15:37:54.188193    9926 logs.go:282] 1 containers: [5620f978e468]
	I1204 15:37:54.188333    9926 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1204 15:37:54.200012    9926 logs.go:282] 2 containers: [d8b20ca47793 e94aa66fc745]
	I1204 15:37:54.200088    9926 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1204 15:37:54.210944    9926 logs.go:282] 0 containers: []
	W1204 15:37:54.210954    9926 logs.go:284] No container was found matching "kindnet"
	I1204 15:37:54.211025    9926 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1204 15:37:54.221452    9926 logs.go:282] 0 containers: []
	W1204 15:37:54.221463    9926 logs.go:284] No container was found matching "storage-provisioner"
	I1204 15:37:54.221473    9926 logs.go:123] Gathering logs for kube-apiserver [e4f90e1b9024] ...
	I1204 15:37:54.221479    9926 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e4f90e1b9024"
	I1204 15:37:54.241005    9926 logs.go:123] Gathering logs for kube-scheduler [f18e443b8788] ...
	I1204 15:37:54.241016    9926 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f18e443b8788"
	I1204 15:37:54.257078    9926 logs.go:123] Gathering logs for kube-controller-manager [d8b20ca47793] ...
	I1204 15:37:54.257090    9926 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d8b20ca47793"
	I1204 15:37:54.275524    9926 logs.go:123] Gathering logs for kubelet ...
	I1204 15:37:54.275538    9926 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1204 15:37:54.314059    9926 logs.go:123] Gathering logs for etcd [a32a7806b496] ...
	I1204 15:37:54.314074    9926 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a32a7806b496"
	I1204 15:37:54.329137    9926 logs.go:123] Gathering logs for etcd [ca0c907ad43c] ...
	I1204 15:37:54.329150    9926 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ca0c907ad43c"
	I1204 15:37:54.348198    9926 logs.go:123] Gathering logs for kube-controller-manager [e94aa66fc745] ...
	I1204 15:37:54.348211    9926 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e94aa66fc745"
	I1204 15:37:54.360228    9926 logs.go:123] Gathering logs for container status ...
	I1204 15:37:54.360239    9926 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1204 15:37:54.373685    9926 logs.go:123] Gathering logs for dmesg ...
	I1204 15:37:54.373698    9926 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1204 15:37:54.379139    9926 logs.go:123] Gathering logs for kube-apiserver [9bb28cca02e8] ...
	I1204 15:37:54.379151    9926 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9bb28cca02e8"
	I1204 15:37:54.393517    9926 logs.go:123] Gathering logs for kube-scheduler [7fbf81020cc8] ...
	I1204 15:37:54.393532    9926 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7fbf81020cc8"
	I1204 15:37:54.410574    9926 logs.go:123] Gathering logs for kube-proxy [5620f978e468] ...
	I1204 15:37:54.410585    9926 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5620f978e468"
	I1204 15:37:54.422859    9926 logs.go:123] Gathering logs for Docker ...
	I1204 15:37:54.422874    9926 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1204 15:37:54.447647    9926 logs.go:123] Gathering logs for describe nodes ...
	I1204 15:37:54.447655    9926 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1204 15:37:54.483378    9926 logs.go:123] Gathering logs for coredns [08e34a589c88] ...
	I1204 15:37:54.483393    9926 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 08e34a589c88"
	I1204 15:37:56.997328    9926 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1204 15:38:02.000077    9926 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1204 15:38:02.000297    9926 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1204 15:38:02.017650    9926 logs.go:282] 2 containers: [9bb28cca02e8 e4f90e1b9024]
	I1204 15:38:02.017765    9926 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1204 15:38:02.032023    9926 logs.go:282] 2 containers: [a32a7806b496 ca0c907ad43c]
	I1204 15:38:02.032118    9926 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1204 15:38:02.044320    9926 logs.go:282] 1 containers: [08e34a589c88]
	I1204 15:38:02.044398    9926 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1204 15:38:02.056787    9926 logs.go:282] 2 containers: [7fbf81020cc8 f18e443b8788]
	I1204 15:38:02.056876    9926 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1204 15:38:02.067087    9926 logs.go:282] 1 containers: [5620f978e468]
	I1204 15:38:02.067164    9926 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1204 15:38:02.077548    9926 logs.go:282] 2 containers: [d8b20ca47793 e94aa66fc745]
	I1204 15:38:02.077630    9926 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1204 15:38:02.093180    9926 logs.go:282] 0 containers: []
	W1204 15:38:02.093190    9926 logs.go:284] No container was found matching "kindnet"
	I1204 15:38:02.093268    9926 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1204 15:38:02.107905    9926 logs.go:282] 0 containers: []
	W1204 15:38:02.107917    9926 logs.go:284] No container was found matching "storage-provisioner"
	I1204 15:38:02.107928    9926 logs.go:123] Gathering logs for etcd [a32a7806b496] ...
	I1204 15:38:02.107934    9926 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a32a7806b496"
	I1204 15:38:02.122013    9926 logs.go:123] Gathering logs for coredns [08e34a589c88] ...
	I1204 15:38:02.122025    9926 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 08e34a589c88"
	I1204 15:38:02.137661    9926 logs.go:123] Gathering logs for kubelet ...
	I1204 15:38:02.137674    9926 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1204 15:38:02.178811    9926 logs.go:123] Gathering logs for kube-apiserver [9bb28cca02e8] ...
	I1204 15:38:02.178823    9926 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9bb28cca02e8"
	I1204 15:38:02.196874    9926 logs.go:123] Gathering logs for kube-controller-manager [d8b20ca47793] ...
	I1204 15:38:02.196887    9926 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d8b20ca47793"
	I1204 15:38:02.214612    9926 logs.go:123] Gathering logs for container status ...
	I1204 15:38:02.214626    9926 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1204 15:38:02.226821    9926 logs.go:123] Gathering logs for Docker ...
	I1204 15:38:02.226831    9926 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1204 15:38:02.252866    9926 logs.go:123] Gathering logs for dmesg ...
	I1204 15:38:02.252877    9926 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1204 15:38:02.257883    9926 logs.go:123] Gathering logs for kube-apiserver [e4f90e1b9024] ...
	I1204 15:38:02.257893    9926 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e4f90e1b9024"
	I1204 15:38:02.277772    9926 logs.go:123] Gathering logs for kube-scheduler [f18e443b8788] ...
	I1204 15:38:02.277786    9926 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f18e443b8788"
	I1204 15:38:02.293260    9926 logs.go:123] Gathering logs for kube-proxy [5620f978e468] ...
	I1204 15:38:02.293272    9926 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5620f978e468"
	I1204 15:38:02.305349    9926 logs.go:123] Gathering logs for kube-controller-manager [e94aa66fc745] ...
	I1204 15:38:02.305362    9926 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e94aa66fc745"
	I1204 15:38:02.316972    9926 logs.go:123] Gathering logs for describe nodes ...
	I1204 15:38:02.316986    9926 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1204 15:38:02.353527    9926 logs.go:123] Gathering logs for etcd [ca0c907ad43c] ...
	I1204 15:38:02.353538    9926 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ca0c907ad43c"
	I1204 15:38:02.372029    9926 logs.go:123] Gathering logs for kube-scheduler [7fbf81020cc8] ...
	I1204 15:38:02.372038    9926 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7fbf81020cc8"
	I1204 15:38:04.890685    9926 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1204 15:38:09.892897    9926 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
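The ~5-second gap between each "Checking apiserver healthz" line and the "stopped: ... Client.Timeout exceeded" line that follows it is a single bounded probe: an HTTPS GET against /healthz that gives up when the client timeout expires, after which minikube falls back to gathering logs. A minimal Go sketch of such a probe, assuming a 5-second per-attempt bound and skipped certificate verification (both illustrative choices, not minikube's actual api_server.go):

    // healthz_probe.go: hedged sketch of a bounded apiserver /healthz probe,
    // matching the ~5s cadence of the log lines above. The timeout and the
    // TLS handling are assumptions for illustration only.
    package main

    import (
    	"crypto/tls"
    	"fmt"
    	"net/http"
    	"time"
    )

    func checkHealthz(url string) error {
    	client := &http.Client{
    		Timeout: 5 * time.Second, // assumed per-attempt bound
    		Transport: &http.Transport{
    			// The apiserver cert is not trusted by the host in this sketch;
    			// a real client would pin the cluster CA instead.
    			TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
    		},
    	}
    	resp, err := client.Get(url)
    	if err != nil {
    		return err // e.g. context deadline exceeded (Client.Timeout ...)
    	}
    	defer resp.Body.Close()
    	if resp.StatusCode != http.StatusOK {
    		return fmt.Errorf("healthz returned %s", resp.Status)
    	}
    	return nil
    }

    func main() {
    	if err := checkHealthz("https://10.0.2.15:8443/healthz"); err != nil {
    		fmt.Println("stopped:", err)
    	}
    }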
	I1204 15:38:09.892995    9926 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1204 15:38:09.904983    9926 logs.go:282] 2 containers: [9bb28cca02e8 e4f90e1b9024]
	I1204 15:38:09.905071    9926 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1204 15:38:09.917311    9926 logs.go:282] 2 containers: [a32a7806b496 ca0c907ad43c]
	I1204 15:38:09.917385    9926 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1204 15:38:09.928895    9926 logs.go:282] 1 containers: [08e34a589c88]
	I1204 15:38:09.928981    9926 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1204 15:38:09.944510    9926 logs.go:282] 2 containers: [7fbf81020cc8 f18e443b8788]
	I1204 15:38:09.944596    9926 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1204 15:38:09.956160    9926 logs.go:282] 1 containers: [5620f978e468]
	I1204 15:38:09.956238    9926 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1204 15:38:09.967318    9926 logs.go:282] 2 containers: [d8b20ca47793 e94aa66fc745]
	I1204 15:38:09.967401    9926 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1204 15:38:09.979867    9926 logs.go:282] 0 containers: []
	W1204 15:38:09.979879    9926 logs.go:284] No container was found matching "kindnet"
	I1204 15:38:09.979950    9926 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1204 15:38:09.991850    9926 logs.go:282] 0 containers: []
	W1204 15:38:09.991864    9926 logs.go:284] No container was found matching "storage-provisioner"
	I1204 15:38:09.991873    9926 logs.go:123] Gathering logs for coredns [08e34a589c88] ...
	I1204 15:38:09.991882    9926 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 08e34a589c88"
	I1204 15:38:10.009156    9926 logs.go:123] Gathering logs for etcd [ca0c907ad43c] ...
	I1204 15:38:10.009169    9926 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ca0c907ad43c"
	I1204 15:38:10.029959    9926 logs.go:123] Gathering logs for kube-scheduler [f18e443b8788] ...
	I1204 15:38:10.029973    9926 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f18e443b8788"
	I1204 15:38:10.047100    9926 logs.go:123] Gathering logs for kube-controller-manager [d8b20ca47793] ...
	I1204 15:38:10.047117    9926 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d8b20ca47793"
	I1204 15:38:10.069421    9926 logs.go:123] Gathering logs for Docker ...
	I1204 15:38:10.069449    9926 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1204 15:38:10.095730    9926 logs.go:123] Gathering logs for kube-scheduler [7fbf81020cc8] ...
	I1204 15:38:10.095742    9926 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7fbf81020cc8"
	I1204 15:38:10.122778    9926 logs.go:123] Gathering logs for describe nodes ...
	I1204 15:38:10.122790    9926 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1204 15:38:10.160729    9926 logs.go:123] Gathering logs for kube-apiserver [9bb28cca02e8] ...
	I1204 15:38:10.160744    9926 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9bb28cca02e8"
	I1204 15:38:10.175804    9926 logs.go:123] Gathering logs for kube-apiserver [e4f90e1b9024] ...
	I1204 15:38:10.175818    9926 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e4f90e1b9024"
	I1204 15:38:10.196162    9926 logs.go:123] Gathering logs for etcd [a32a7806b496] ...
	I1204 15:38:10.196180    9926 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a32a7806b496"
	I1204 15:38:10.215182    9926 logs.go:123] Gathering logs for kube-controller-manager [e94aa66fc745] ...
	I1204 15:38:10.215195    9926 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e94aa66fc745"
	I1204 15:38:10.227770    9926 logs.go:123] Gathering logs for container status ...
	I1204 15:38:10.227782    9926 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1204 15:38:10.240311    9926 logs.go:123] Gathering logs for dmesg ...
	I1204 15:38:10.240324    9926 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1204 15:38:10.245024    9926 logs.go:123] Gathering logs for kube-proxy [5620f978e468] ...
	I1204 15:38:10.245032    9926 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5620f978e468"
	I1204 15:38:10.257140    9926 logs.go:123] Gathering logs for kubelet ...
	I1204 15:38:10.257154    9926 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1204 15:38:12.798558    9926 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1204 15:38:17.800841    9926 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1204 15:38:17.801009    9926 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1204 15:38:17.812441    9926 logs.go:282] 2 containers: [9bb28cca02e8 e4f90e1b9024]
	I1204 15:38:17.812524    9926 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1204 15:38:17.823432    9926 logs.go:282] 2 containers: [a32a7806b496 ca0c907ad43c]
	I1204 15:38:17.823515    9926 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1204 15:38:17.837025    9926 logs.go:282] 1 containers: [08e34a589c88]
	I1204 15:38:17.837102    9926 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1204 15:38:17.847625    9926 logs.go:282] 2 containers: [7fbf81020cc8 f18e443b8788]
	I1204 15:38:17.847731    9926 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1204 15:38:17.858797    9926 logs.go:282] 1 containers: [5620f978e468]
	I1204 15:38:17.858878    9926 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1204 15:38:17.869216    9926 logs.go:282] 2 containers: [d8b20ca47793 e94aa66fc745]
	I1204 15:38:17.869293    9926 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1204 15:38:17.879761    9926 logs.go:282] 0 containers: []
	W1204 15:38:17.879774    9926 logs.go:284] No container was found matching "kindnet"
	I1204 15:38:17.879837    9926 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1204 15:38:17.889944    9926 logs.go:282] 0 containers: []
	W1204 15:38:17.889955    9926 logs.go:284] No container was found matching "storage-provisioner"
	I1204 15:38:17.889964    9926 logs.go:123] Gathering logs for Docker ...
	I1204 15:38:17.889969    9926 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1204 15:38:17.914724    9926 logs.go:123] Gathering logs for kube-apiserver [9bb28cca02e8] ...
	I1204 15:38:17.914737    9926 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9bb28cca02e8"
	I1204 15:38:17.929386    9926 logs.go:123] Gathering logs for etcd [ca0c907ad43c] ...
	I1204 15:38:17.929399    9926 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ca0c907ad43c"
	I1204 15:38:17.947113    9926 logs.go:123] Gathering logs for coredns [08e34a589c88] ...
	I1204 15:38:17.947123    9926 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 08e34a589c88"
	I1204 15:38:17.959770    9926 logs.go:123] Gathering logs for kube-scheduler [7fbf81020cc8] ...
	I1204 15:38:17.959783    9926 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7fbf81020cc8"
	I1204 15:38:17.976745    9926 logs.go:123] Gathering logs for kube-scheduler [f18e443b8788] ...
	I1204 15:38:17.976758    9926 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f18e443b8788"
	I1204 15:38:17.992529    9926 logs.go:123] Gathering logs for kube-proxy [5620f978e468] ...
	I1204 15:38:17.992540    9926 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5620f978e468"
	I1204 15:38:18.004263    9926 logs.go:123] Gathering logs for container status ...
	I1204 15:38:18.004276    9926 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1204 15:38:18.016468    9926 logs.go:123] Gathering logs for kube-controller-manager [e94aa66fc745] ...
	I1204 15:38:18.016482    9926 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e94aa66fc745"
	I1204 15:38:18.028137    9926 logs.go:123] Gathering logs for kubelet ...
	I1204 15:38:18.028149    9926 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1204 15:38:18.066337    9926 logs.go:123] Gathering logs for describe nodes ...
	I1204 15:38:18.066353    9926 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1204 15:38:18.102110    9926 logs.go:123] Gathering logs for kube-apiserver [e4f90e1b9024] ...
	I1204 15:38:18.102124    9926 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e4f90e1b9024"
	I1204 15:38:18.122356    9926 logs.go:123] Gathering logs for etcd [a32a7806b496] ...
	I1204 15:38:18.122367    9926 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a32a7806b496"
	I1204 15:38:18.135858    9926 logs.go:123] Gathering logs for kube-controller-manager [d8b20ca47793] ...
	I1204 15:38:18.135870    9926 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d8b20ca47793"
	I1204 15:38:18.153560    9926 logs.go:123] Gathering logs for dmesg ...
	I1204 15:38:18.153572    9926 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1204 15:38:20.659212    9926 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1204 15:38:25.661291    9926 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1204 15:38:25.661494    9926 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1204 15:38:25.673666    9926 logs.go:282] 2 containers: [9bb28cca02e8 e4f90e1b9024]
	I1204 15:38:25.673755    9926 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1204 15:38:25.684621    9926 logs.go:282] 2 containers: [a32a7806b496 ca0c907ad43c]
	I1204 15:38:25.684702    9926 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1204 15:38:25.695249    9926 logs.go:282] 1 containers: [08e34a589c88]
	I1204 15:38:25.695329    9926 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1204 15:38:25.706455    9926 logs.go:282] 2 containers: [7fbf81020cc8 f18e443b8788]
	I1204 15:38:25.706539    9926 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1204 15:38:25.717360    9926 logs.go:282] 1 containers: [5620f978e468]
	I1204 15:38:25.717431    9926 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1204 15:38:25.728251    9926 logs.go:282] 2 containers: [d8b20ca47793 e94aa66fc745]
	I1204 15:38:25.728330    9926 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1204 15:38:25.738413    9926 logs.go:282] 0 containers: []
	W1204 15:38:25.738428    9926 logs.go:284] No container was found matching "kindnet"
	I1204 15:38:25.738489    9926 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1204 15:38:25.756547    9926 logs.go:282] 0 containers: []
	W1204 15:38:25.756560    9926 logs.go:284] No container was found matching "storage-provisioner"
	I1204 15:38:25.756568    9926 logs.go:123] Gathering logs for etcd [a32a7806b496] ...
	I1204 15:38:25.756574    9926 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a32a7806b496"
	I1204 15:38:25.775622    9926 logs.go:123] Gathering logs for coredns [08e34a589c88] ...
	I1204 15:38:25.775633    9926 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 08e34a589c88"
	I1204 15:38:25.787104    9926 logs.go:123] Gathering logs for kube-apiserver [e4f90e1b9024] ...
	I1204 15:38:25.787115    9926 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e4f90e1b9024"
	I1204 15:38:25.805791    9926 logs.go:123] Gathering logs for kube-controller-manager [e94aa66fc745] ...
	I1204 15:38:25.805802    9926 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e94aa66fc745"
	I1204 15:38:25.818494    9926 logs.go:123] Gathering logs for container status ...
	I1204 15:38:25.818504    9926 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1204 15:38:25.830218    9926 logs.go:123] Gathering logs for kube-scheduler [7fbf81020cc8] ...
	I1204 15:38:25.830230    9926 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7fbf81020cc8"
	I1204 15:38:25.846883    9926 logs.go:123] Gathering logs for kube-apiserver [9bb28cca02e8] ...
	I1204 15:38:25.846892    9926 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9bb28cca02e8"
	I1204 15:38:25.870709    9926 logs.go:123] Gathering logs for kube-scheduler [f18e443b8788] ...
	I1204 15:38:25.870720    9926 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f18e443b8788"
	I1204 15:38:25.886138    9926 logs.go:123] Gathering logs for Docker ...
	I1204 15:38:25.886152    9926 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1204 15:38:25.910960    9926 logs.go:123] Gathering logs for dmesg ...
	I1204 15:38:25.910971    9926 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1204 15:38:25.915451    9926 logs.go:123] Gathering logs for describe nodes ...
	I1204 15:38:25.915461    9926 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1204 15:38:25.950504    9926 logs.go:123] Gathering logs for etcd [ca0c907ad43c] ...
	I1204 15:38:25.950515    9926 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ca0c907ad43c"
	I1204 15:38:25.968781    9926 logs.go:123] Gathering logs for kube-proxy [5620f978e468] ...
	I1204 15:38:25.968794    9926 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5620f978e468"
	I1204 15:38:25.982012    9926 logs.go:123] Gathering logs for kube-controller-manager [d8b20ca47793] ...
	I1204 15:38:25.982026    9926 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d8b20ca47793"
	I1204 15:38:25.999244    9926 logs.go:123] Gathering logs for kubelet ...
	I1204 15:38:25.999264    9926 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1204 15:38:28.541807    9926 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1204 15:38:33.542981    9926 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1204 15:38:33.543164    9926 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1204 15:38:33.563389    9926 logs.go:282] 2 containers: [9bb28cca02e8 e4f90e1b9024]
	I1204 15:38:33.563463    9926 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1204 15:38:33.575576    9926 logs.go:282] 2 containers: [a32a7806b496 ca0c907ad43c]
	I1204 15:38:33.575657    9926 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1204 15:38:33.586171    9926 logs.go:282] 1 containers: [08e34a589c88]
	I1204 15:38:33.586245    9926 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1204 15:38:33.596601    9926 logs.go:282] 2 containers: [7fbf81020cc8 f18e443b8788]
	I1204 15:38:33.596677    9926 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1204 15:38:33.610316    9926 logs.go:282] 1 containers: [5620f978e468]
	I1204 15:38:33.610389    9926 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1204 15:38:33.621059    9926 logs.go:282] 2 containers: [d8b20ca47793 e94aa66fc745]
	I1204 15:38:33.621147    9926 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1204 15:38:33.641115    9926 logs.go:282] 0 containers: []
	W1204 15:38:33.641128    9926 logs.go:284] No container was found matching "kindnet"
	I1204 15:38:33.641206    9926 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1204 15:38:33.651224    9926 logs.go:282] 0 containers: []
	W1204 15:38:33.651235    9926 logs.go:284] No container was found matching "storage-provisioner"
	I1204 15:38:33.651243    9926 logs.go:123] Gathering logs for kube-apiserver [e4f90e1b9024] ...
	I1204 15:38:33.651249    9926 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e4f90e1b9024"
	I1204 15:38:33.670384    9926 logs.go:123] Gathering logs for etcd [a32a7806b496] ...
	I1204 15:38:33.670394    9926 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a32a7806b496"
	I1204 15:38:33.684169    9926 logs.go:123] Gathering logs for kube-scheduler [7fbf81020cc8] ...
	I1204 15:38:33.684181    9926 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7fbf81020cc8"
	I1204 15:38:33.703134    9926 logs.go:123] Gathering logs for kube-controller-manager [d8b20ca47793] ...
	I1204 15:38:33.703147    9926 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d8b20ca47793"
	I1204 15:38:33.720889    9926 logs.go:123] Gathering logs for describe nodes ...
	I1204 15:38:33.720902    9926 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1204 15:38:33.760380    9926 logs.go:123] Gathering logs for kube-proxy [5620f978e468] ...
	I1204 15:38:33.760390    9926 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5620f978e468"
	I1204 15:38:33.772639    9926 logs.go:123] Gathering logs for kube-apiserver [9bb28cca02e8] ...
	I1204 15:38:33.772651    9926 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9bb28cca02e8"
	I1204 15:38:33.786833    9926 logs.go:123] Gathering logs for coredns [08e34a589c88] ...
	I1204 15:38:33.786846    9926 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 08e34a589c88"
	I1204 15:38:33.798573    9926 logs.go:123] Gathering logs for kube-scheduler [f18e443b8788] ...
	I1204 15:38:33.798585    9926 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f18e443b8788"
	I1204 15:38:33.815247    9926 logs.go:123] Gathering logs for container status ...
	I1204 15:38:33.815258    9926 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1204 15:38:33.827372    9926 logs.go:123] Gathering logs for kubelet ...
	I1204 15:38:33.827382    9926 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1204 15:38:33.865115    9926 logs.go:123] Gathering logs for dmesg ...
	I1204 15:38:33.865124    9926 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1204 15:38:33.869437    9926 logs.go:123] Gathering logs for etcd [ca0c907ad43c] ...
	I1204 15:38:33.869445    9926 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ca0c907ad43c"
	I1204 15:38:33.887049    9926 logs.go:123] Gathering logs for kube-controller-manager [e94aa66fc745] ...
	I1204 15:38:33.887061    9926 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e94aa66fc745"
	I1204 15:38:33.905197    9926 logs.go:123] Gathering logs for Docker ...
	I1204 15:38:33.905211    9926 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1204 15:38:36.430394    9926 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1204 15:38:41.432838    9926 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1204 15:38:41.433033    9926 kubeadm.go:597] duration metric: took 4m4.371478667s to restartPrimaryControlPlane
	W1204 15:38:41.433170    9926 out.go:270] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	! Unable to restart control-plane node(s), will reset cluster: <no value>
	I1204 15:38:41.433226    9926 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm reset --cri-socket /var/run/cri-dockerd.sock --force"
	I1204 15:38:42.357969    9926 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1204 15:38:42.363261    9926 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1204 15:38:42.367033    9926 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1204 15:38:42.369965    9926 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1204 15:38:42.369970    9926 kubeadm.go:157] found existing configuration files:
	
	I1204 15:38:42.369997    9926 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:61592 /etc/kubernetes/admin.conf
	I1204 15:38:42.372567    9926 kubeadm.go:163] "https://control-plane.minikube.internal:61592" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:61592 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1204 15:38:42.372611    9926 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1204 15:38:42.375317    9926 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:61592 /etc/kubernetes/kubelet.conf
	I1204 15:38:42.378189    9926 kubeadm.go:163] "https://control-plane.minikube.internal:61592" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:61592 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1204 15:38:42.378226    9926 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1204 15:38:42.380875    9926 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:61592 /etc/kubernetes/controller-manager.conf
	I1204 15:38:42.383156    9926 kubeadm.go:163] "https://control-plane.minikube.internal:61592" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:61592 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1204 15:38:42.383190    9926 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1204 15:38:42.386311    9926 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:61592 /etc/kubernetes/scheduler.conf
	I1204 15:38:42.388968    9926 kubeadm.go:163] "https://control-plane.minikube.internal:61592" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:61592 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1204 15:38:42.388997    9926 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
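The grep/rm pairs above are a stale-kubeconfig sweep: each file under /etc/kubernetes is grepped for the expected control-plane endpoint and removed when the endpoint cannot be confirmed (here every grep exits with status 2 simply because kubeadm reset already deleted the files). A hedged reconstruction of that loop, with the endpoint and file list taken from the log and runRemote as an invented stand-in for minikube's ssh_runner:

    // stale_config_sweep.go: illustrative reconstruction of the check logged
    // at kubeadm.go:163 above; not minikube's actual code. runRemote simply
    // runs the command locally in this sketch.
    package main

    import (
    	"fmt"
    	"os/exec"
    )

    func runRemote(cmd string) error {
    	return exec.Command("/bin/bash", "-c", cmd).Run()
    }

    func main() {
    	endpoint := "https://control-plane.minikube.internal:61592"
    	files := []string{
    		"/etc/kubernetes/admin.conf",
    		"/etc/kubernetes/kubelet.conf",
    		"/etc/kubernetes/controller-manager.conf",
    		"/etc/kubernetes/scheduler.conf",
    	}
    	for _, f := range files {
    		// grep exits non-zero when the endpoint is missing or the file
    		// does not exist; either way the config cannot be reused.
    		if err := runRemote(fmt.Sprintf("sudo grep %s %s", endpoint, f)); err != nil {
    			fmt.Printf("%q may not be in %s - will remove\n", endpoint, f)
    			_ = runRemote("sudo rm -f " + f)
    		}
    	}
    }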
	I1204 15:38:42.391475    9926 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I1204 15:38:42.408681    9926 kubeadm.go:310] [init] Using Kubernetes version: v1.24.1
	I1204 15:38:42.408715    9926 kubeadm.go:310] [preflight] Running pre-flight checks
	I1204 15:38:42.455281    9926 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1204 15:38:42.455334    9926 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1204 15:38:42.455382    9926 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1204 15:38:42.505833    9926 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1204 15:38:42.509798    9926 out.go:235]   - Generating certificates and keys ...
	I1204 15:38:42.509834    9926 kubeadm.go:310] [certs] Using existing ca certificate authority
	I1204 15:38:42.509863    9926 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I1204 15:38:42.509908    9926 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1204 15:38:42.509943    9926 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I1204 15:38:42.509977    9926 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I1204 15:38:42.510008    9926 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I1204 15:38:42.510040    9926 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I1204 15:38:42.510076    9926 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I1204 15:38:42.510115    9926 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1204 15:38:42.510158    9926 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1204 15:38:42.510178    9926 kubeadm.go:310] [certs] Using the existing "sa" key
	I1204 15:38:42.510214    9926 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1204 15:38:42.605058    9926 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1204 15:38:42.694610    9926 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1204 15:38:42.781829    9926 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1204 15:38:42.848324    9926 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1204 15:38:42.878483    9926 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1204 15:38:42.880367    9926 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1204 15:38:42.880392    9926 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I1204 15:38:42.944112    9926 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1204 15:38:42.947034    9926 out.go:235]   - Booting up control plane ...
	I1204 15:38:42.947083    9926 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1204 15:38:42.947130    9926 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1204 15:38:42.947200    9926 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1204 15:38:42.947246    9926 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1204 15:38:42.947319    9926 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I1204 15:38:47.447031    9926 kubeadm.go:310] [apiclient] All control plane components are healthy after 4.503116 seconds
	I1204 15:38:47.447128    9926 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1204 15:38:47.452125    9926 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1204 15:38:47.961886    9926 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I1204 15:38:47.962146    9926 kubeadm.go:310] [mark-control-plane] Marking the node running-upgrade-084000 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1204 15:38:48.466472    9926 kubeadm.go:310] [bootstrap-token] Using token: pkltab.sskucs47s1362brc
	I1204 15:38:48.470663    9926 out.go:235]   - Configuring RBAC rules ...
	I1204 15:38:48.470714    9926 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1204 15:38:48.470766    9926 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1204 15:38:48.477696    9926 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1204 15:38:48.478508    9926 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1204 15:38:48.479422    9926 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1204 15:38:48.480086    9926 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1204 15:38:48.483391    9926 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1204 15:38:48.636571    9926 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I1204 15:38:48.870866    9926 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I1204 15:38:48.871292    9926 kubeadm.go:310] 
	I1204 15:38:48.871332    9926 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I1204 15:38:48.871338    9926 kubeadm.go:310] 
	I1204 15:38:48.871380    9926 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I1204 15:38:48.871384    9926 kubeadm.go:310] 
	I1204 15:38:48.871397    9926 kubeadm.go:310]   mkdir -p $HOME/.kube
	I1204 15:38:48.871431    9926 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1204 15:38:48.871498    9926 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1204 15:38:48.871502    9926 kubeadm.go:310] 
	I1204 15:38:48.871538    9926 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I1204 15:38:48.871542    9926 kubeadm.go:310] 
	I1204 15:38:48.871564    9926 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1204 15:38:48.871579    9926 kubeadm.go:310] 
	I1204 15:38:48.871610    9926 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I1204 15:38:48.871648    9926 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1204 15:38:48.871704    9926 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1204 15:38:48.871712    9926 kubeadm.go:310] 
	I1204 15:38:48.871761    9926 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I1204 15:38:48.871806    9926 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I1204 15:38:48.871809    9926 kubeadm.go:310] 
	I1204 15:38:48.871857    9926 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token pkltab.sskucs47s1362brc \
	I1204 15:38:48.871915    9926 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:9ed783fc6ac587ac5303da44420d8c41896e6ac9083929196f4ee227216cf3a5 \
	I1204 15:38:48.871927    9926 kubeadm.go:310] 	--control-plane 
	I1204 15:38:48.871930    9926 kubeadm.go:310] 
	I1204 15:38:48.871972    9926 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I1204 15:38:48.871975    9926 kubeadm.go:310] 
	I1204 15:38:48.872026    9926 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token pkltab.sskucs47s1362brc \
	I1204 15:38:48.872087    9926 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:9ed783fc6ac587ac5303da44420d8c41896e6ac9083929196f4ee227216cf3a5 
	I1204 15:38:48.872160    9926 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
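The --discovery-token-ca-cert-hash in the join commands above follows kubeadm's documented format: "sha256:" plus the SHA-256 digest of the cluster CA certificate's DER-encoded Subject Public Key Info. A self-contained sketch that computes the same value from a CA certificate in PEM form (the ca.crt path is an assumption; on a minikube node the cluster CA sits under /var/lib/minikube/certs):

    // ca_cert_hash.go: compute a kubeadm-style discovery-token-ca-cert-hash,
    // i.e. sha256 over the CA certificate's DER-encoded SubjectPublicKeyInfo.
    package main

    import (
    	"crypto/sha256"
    	"crypto/x509"
    	"encoding/pem"
    	"fmt"
    	"os"
    )

    func main() {
    	// Path is illustrative; substitute the cluster CA certificate.
    	pemBytes, err := os.ReadFile("ca.crt")
    	if err != nil {
    		panic(err)
    	}
    	block, _ := pem.Decode(pemBytes)
    	if block == nil {
    		panic("no PEM block found")
    	}
    	cert, err := x509.ParseCertificate(block.Bytes)
    	if err != nil {
    		panic(err)
    	}
    	sum := sha256.Sum256(cert.RawSubjectPublicKeyInfo)
    	fmt.Printf("sha256:%x\n", sum)
    }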
	I1204 15:38:48.872166    9926 cni.go:84] Creating CNI manager for ""
	I1204 15:38:48.872177    9926 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I1204 15:38:48.875803    9926 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I1204 15:38:48.883747    9926 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1204 15:38:48.887194    9926 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
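The 496-byte /etc/cni/net.d/1-k8s.conflist copied here carries the bridge CNI configuration recommended two lines earlier for the qemu2 driver with the docker runtime. The log elides the file's contents; the following is an illustrative conflist of the usual shape (bridge plugin with host-local IPAM plus portmap), not necessarily the exact file minikube writes:

    {
      "cniVersion": "0.3.1",
      "name": "bridge",
      "plugins": [
        {
          "type": "bridge",
          "bridge": "bridge",
          "isDefaultGateway": true,
          "ipMasq": true,
          "hairpinMode": true,
          "ipam": { "type": "host-local", "subnet": "10.244.0.0/16" }
        },
        { "type": "portmap", "capabilities": { "portMappings": true } }
      ]
    }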
	I1204 15:38:48.892615    9926 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1204 15:38:48.892695    9926 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1204 15:38:48.892701    9926 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes running-upgrade-084000 minikube.k8s.io/updated_at=2024_12_04T15_38_48_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=efbd8efc50652fe861e71899e50212cc75e3480d minikube.k8s.io/name=running-upgrade-084000 minikube.k8s.io/primary=true
	I1204 15:38:48.895995    9926 ops.go:34] apiserver oom_adj: -16
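ops.go:34 reports the value read a few lines earlier via "cat /proc/$(pgrep kube-apiserver)/oom_adj"; -16 tells the kernel's OOM killer to strongly prefer sacrificing other processes over the apiserver. A small Go sketch of the same read, simplified to take the PID as an argument instead of resolving it with pgrep:

    // oom_adj.go: read a process's legacy OOM adjustment, mirroring the
    // "cat /proc/$(pgrep kube-apiserver)/oom_adj" probe in the log above.
    package main

    import (
    	"fmt"
    	"os"
    	"strings"
    )

    func main() {
    	if len(os.Args) != 2 {
    		fmt.Fprintln(os.Stderr, "usage: oom_adj <pid>")
    		os.Exit(1)
    	}
    	data, err := os.ReadFile("/proc/" + os.Args[1] + "/oom_adj")
    	if err != nil {
    		panic(err)
    	}
    	fmt.Printf("oom_adj: %s\n", strings.TrimSpace(string(data)))
    }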
	I1204 15:38:48.927663    9926 kubeadm.go:1113] duration metric: took 35.013ms to wait for elevateKubeSystemPrivileges
	I1204 15:38:48.938440    9926 kubeadm.go:394] duration metric: took 4m11.891651875s to StartCluster
	I1204 15:38:48.938467    9926 settings.go:142] acquiring lock: {Name:mkdd110867a4c47f742f3f13d7f418d838150f4f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1204 15:38:48.938656    9926 settings.go:150] Updating kubeconfig:  /Users/jenkins/minikube-integration/20045-6982/kubeconfig
	I1204 15:38:48.939077    9926 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/20045-6982/kubeconfig: {Name:mk101d59bd39dad79cc42c692d70ed55e90c94da Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1204 15:38:48.939260    9926 start.go:235] Will wait 6m0s for node &{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1204 15:38:48.939296    9926 addons.go:507] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1204 15:38:48.939335    9926 addons.go:69] Setting storage-provisioner=true in profile "running-upgrade-084000"
	I1204 15:38:48.939346    9926 addons.go:234] Setting addon storage-provisioner=true in "running-upgrade-084000"
	W1204 15:38:48.939349    9926 addons.go:243] addon storage-provisioner should already be in state true
	I1204 15:38:48.939364    9926 host.go:66] Checking if "running-upgrade-084000" exists ...
	I1204 15:38:48.939377    9926 addons.go:69] Setting default-storageclass=true in profile "running-upgrade-084000"
	I1204 15:38:48.939401    9926 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "running-upgrade-084000"
	I1204 15:38:48.939454    9926 config.go:182] Loaded profile config "running-upgrade-084000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I1204 15:38:48.940564    9926 kapi.go:59] client config for running-upgrade-084000: &rest.Config{Host:"https://10.0.2.15:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/20045-6982/.minikube/profiles/running-upgrade-084000/client.crt", KeyFile:"/Users/jenkins/minikube-integration/20045-6982/.minikube/profiles/running-upgrade-084000/client.key", CAFile:"/Users/jenkins/minikube-integration/20045-6982/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x10676f6e0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1204 15:38:48.940891    9926 addons.go:234] Setting addon default-storageclass=true in "running-upgrade-084000"
	W1204 15:38:48.940896    9926 addons.go:243] addon default-storageclass should already be in state true
	I1204 15:38:48.940903    9926 host.go:66] Checking if "running-upgrade-084000" exists ...
	I1204 15:38:48.943772    9926 out.go:177] * Verifying Kubernetes components...
	I1204 15:38:48.944140    9926 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I1204 15:38:48.947782    9926 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1204 15:38:48.947794    9926 sshutil.go:53] new ssh client: &{IP:localhost Port:61560 SSHKeyPath:/Users/jenkins/minikube-integration/20045-6982/.minikube/machines/running-upgrade-084000/id_rsa Username:docker}
	I1204 15:38:48.951662    9926 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1204 15:38:48.954743    9926 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1204 15:38:48.957699    9926 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1204 15:38:48.957707    9926 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1204 15:38:48.957712    9926 sshutil.go:53] new ssh client: &{IP:localhost Port:61560 SSHKeyPath:/Users/jenkins/minikube-integration/20045-6982/.minikube/machines/running-upgrade-084000/id_rsa Username:docker}
	I1204 15:38:49.027545    9926 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1204 15:38:49.032710    9926 api_server.go:52] waiting for apiserver process to appear ...
	I1204 15:38:49.032758    9926 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1204 15:38:49.036438    9926 api_server.go:72] duration metric: took 97.16625ms to wait for apiserver process to appear ...
	I1204 15:38:49.036446    9926 api_server.go:88] waiting for apiserver healthz status ...
	I1204 15:38:49.036453    9926 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1204 15:38:49.051995    9926 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1204 15:38:49.088724    9926 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1204 15:38:49.435088    9926 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I1204 15:38:49.435100    9926 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I1204 15:38:54.038683    9926 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1204 15:38:54.038785    9926 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1204 15:38:59.039449    9926 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1204 15:38:59.039492    9926 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1204 15:39:04.039990    9926 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1204 15:39:04.040014    9926 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1204 15:39:09.040673    9926 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1204 15:39:09.040729    9926 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1204 15:39:14.041589    9926 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1204 15:39:14.041616    9926 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1204 15:39:19.042990    9926 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1204 15:39:19.043031    9926 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	W1204 15:39:19.435880    9926 out.go:270] ! Enabling 'default-storageclass' returned an error: running callbacks: [Error making standard the default storage class: Error listing StorageClasses: Get "https://10.0.2.15:8443/apis/storage.k8s.io/v1/storageclasses": dial tcp 10.0.2.15:8443: i/o timeout]
	! Enabling 'default-storageclass' returned an error: running callbacks: [Error making standard the default storage class: Error listing StorageClasses: Get "https://10.0.2.15:8443/apis/storage.k8s.io/v1/storageclasses": dial tcp 10.0.2.15:8443: i/o timeout]
	I1204 15:39:19.441201    9926 out.go:177] * Enabled addons: storage-provisioner
	I1204 15:39:19.448150    9926 addons.go:510] duration metric: took 30.508583958s for enable addons: enabled=[storage-provisioner]
	I1204 15:39:24.043476    9926 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1204 15:39:24.043523    9926 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1204 15:39:29.044019    9926 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1204 15:39:29.044055    9926 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1204 15:39:34.045640    9926 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1204 15:39:34.045687    9926 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1204 15:39:39.047267    9926 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1204 15:39:39.047311    9926 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1204 15:39:44.048919    9926 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1204 15:39:44.048960    9926 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1204 15:39:49.051251    9926 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1204 15:39:49.051374    9926 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1204 15:39:49.062281    9926 logs.go:282] 1 containers: [f3328f94ed0d]
	I1204 15:39:49.062356    9926 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1204 15:39:49.073300    9926 logs.go:282] 1 containers: [17b0ed658f6c]
	I1204 15:39:49.073382    9926 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1204 15:39:49.084533    9926 logs.go:282] 2 containers: [0a3178099d31 c8ddfa007847]
	I1204 15:39:49.084605    9926 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1204 15:39:49.094874    9926 logs.go:282] 1 containers: [a82022fd2242]
	I1204 15:39:49.094968    9926 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1204 15:39:49.107157    9926 logs.go:282] 1 containers: [8f7d833df5c3]
	I1204 15:39:49.107246    9926 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1204 15:39:49.118449    9926 logs.go:282] 1 containers: [bdf1070f199f]
	I1204 15:39:49.118520    9926 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1204 15:39:49.129302    9926 logs.go:282] 0 containers: []
	W1204 15:39:49.129312    9926 logs.go:284] No container was found matching "kindnet"
	I1204 15:39:49.129373    9926 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1204 15:39:49.139777    9926 logs.go:282] 1 containers: [bdb47901ddb7]
	I1204 15:39:49.139795    9926 logs.go:123] Gathering logs for describe nodes ...
	I1204 15:39:49.139800    9926 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1204 15:39:49.175321    9926 logs.go:123] Gathering logs for coredns [0a3178099d31] ...
	I1204 15:39:49.175336    9926 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0a3178099d31"
	I1204 15:39:49.191840    9926 logs.go:123] Gathering logs for coredns [c8ddfa007847] ...
	I1204 15:39:49.191851    9926 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c8ddfa007847"
	I1204 15:39:49.203325    9926 logs.go:123] Gathering logs for kube-scheduler [a82022fd2242] ...
	I1204 15:39:49.203339    9926 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a82022fd2242"
	I1204 15:39:49.218223    9926 logs.go:123] Gathering logs for kube-controller-manager [bdf1070f199f] ...
	I1204 15:39:49.218234    9926 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bdf1070f199f"
	I1204 15:39:49.235891    9926 logs.go:123] Gathering logs for storage-provisioner [bdb47901ddb7] ...
	I1204 15:39:49.235901    9926 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bdb47901ddb7"
	I1204 15:39:49.247115    9926 logs.go:123] Gathering logs for container status ...
	I1204 15:39:49.247126    9926 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1204 15:39:49.258826    9926 logs.go:123] Gathering logs for dmesg ...
	I1204 15:39:49.258840    9926 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1204 15:39:49.263235    9926 logs.go:123] Gathering logs for kube-apiserver [f3328f94ed0d] ...
	I1204 15:39:49.263245    9926 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f3328f94ed0d"
	I1204 15:39:49.281850    9926 logs.go:123] Gathering logs for etcd [17b0ed658f6c] ...
	I1204 15:39:49.281863    9926 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 17b0ed658f6c"
	I1204 15:39:49.298398    9926 logs.go:123] Gathering logs for kube-proxy [8f7d833df5c3] ...
	I1204 15:39:49.298409    9926 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8f7d833df5c3"
	I1204 15:39:49.312450    9926 logs.go:123] Gathering logs for Docker ...
	I1204 15:39:49.312465    9926 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1204 15:39:49.338724    9926 logs.go:123] Gathering logs for kubelet ...
	I1204 15:39:49.338770    9926 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1204 15:39:51.878758    9926 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1204 15:39:56.881561    9926 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1204 15:39:56.881921    9926 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1204 15:39:56.912278    9926 logs.go:282] 1 containers: [f3328f94ed0d]
	I1204 15:39:56.912425    9926 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1204 15:39:56.930585    9926 logs.go:282] 1 containers: [17b0ed658f6c]
	I1204 15:39:56.930697    9926 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1204 15:39:56.944525    9926 logs.go:282] 2 containers: [0a3178099d31 c8ddfa007847]
	I1204 15:39:56.944615    9926 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1204 15:39:56.956571    9926 logs.go:282] 1 containers: [a82022fd2242]
	I1204 15:39:56.956656    9926 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1204 15:39:56.967455    9926 logs.go:282] 1 containers: [8f7d833df5c3]
	I1204 15:39:56.967538    9926 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1204 15:39:56.978355    9926 logs.go:282] 1 containers: [bdf1070f199f]
	I1204 15:39:56.978437    9926 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1204 15:39:56.989022    9926 logs.go:282] 0 containers: []
	W1204 15:39:56.989036    9926 logs.go:284] No container was found matching "kindnet"
	I1204 15:39:56.989108    9926 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1204 15:39:57.000371    9926 logs.go:282] 1 containers: [bdb47901ddb7]
	I1204 15:39:57.000386    9926 logs.go:123] Gathering logs for kube-apiserver [f3328f94ed0d] ...
	I1204 15:39:57.000391    9926 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f3328f94ed0d"
	I1204 15:39:57.014878    9926 logs.go:123] Gathering logs for coredns [0a3178099d31] ...
	I1204 15:39:57.014892    9926 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0a3178099d31"
	I1204 15:39:57.027054    9926 logs.go:123] Gathering logs for kube-scheduler [a82022fd2242] ...
	I1204 15:39:57.027068    9926 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a82022fd2242"
	I1204 15:39:57.042622    9926 logs.go:123] Gathering logs for kube-proxy [8f7d833df5c3] ...
	I1204 15:39:57.042633    9926 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8f7d833df5c3"
	I1204 15:39:57.055414    9926 logs.go:123] Gathering logs for kube-controller-manager [bdf1070f199f] ...
	I1204 15:39:57.055425    9926 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bdf1070f199f"
	I1204 15:39:57.072606    9926 logs.go:123] Gathering logs for storage-provisioner [bdb47901ddb7] ...
	I1204 15:39:57.072616    9926 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bdb47901ddb7"
	I1204 15:39:57.083943    9926 logs.go:123] Gathering logs for kubelet ...
	I1204 15:39:57.083955    9926 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1204 15:39:57.123256    9926 logs.go:123] Gathering logs for dmesg ...
	I1204 15:39:57.123266    9926 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1204 15:39:57.128471    9926 logs.go:123] Gathering logs for Docker ...
	I1204 15:39:57.128479    9926 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1204 15:39:57.153467    9926 logs.go:123] Gathering logs for container status ...
	I1204 15:39:57.153478    9926 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1204 15:39:57.164788    9926 logs.go:123] Gathering logs for coredns [c8ddfa007847] ...
	I1204 15:39:57.164799    9926 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c8ddfa007847"
	I1204 15:39:57.176644    9926 logs.go:123] Gathering logs for describe nodes ...
	I1204 15:39:57.176658    9926 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1204 15:39:57.210473    9926 logs.go:123] Gathering logs for etcd [17b0ed658f6c] ...
	I1204 15:39:57.210486    9926 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 17b0ed658f6c"
	I1204 15:39:59.729849    9926 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1204 15:40:04.732343    9926 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1204 15:40:04.732599    9926 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1204 15:40:04.755475    9926 logs.go:282] 1 containers: [f3328f94ed0d]
	I1204 15:40:04.755606    9926 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1204 15:40:04.778623    9926 logs.go:282] 1 containers: [17b0ed658f6c]
	I1204 15:40:04.778700    9926 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1204 15:40:04.790263    9926 logs.go:282] 2 containers: [0a3178099d31 c8ddfa007847]
	I1204 15:40:04.790334    9926 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1204 15:40:04.800553    9926 logs.go:282] 1 containers: [a82022fd2242]
	I1204 15:40:04.800638    9926 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1204 15:40:04.814573    9926 logs.go:282] 1 containers: [8f7d833df5c3]
	I1204 15:40:04.814657    9926 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1204 15:40:04.825085    9926 logs.go:282] 1 containers: [bdf1070f199f]
	I1204 15:40:04.825161    9926 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1204 15:40:04.834487    9926 logs.go:282] 0 containers: []
	W1204 15:40:04.834498    9926 logs.go:284] No container was found matching "kindnet"
	I1204 15:40:04.834560    9926 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1204 15:40:04.845290    9926 logs.go:282] 1 containers: [bdb47901ddb7]
	I1204 15:40:04.845305    9926 logs.go:123] Gathering logs for kubelet ...
	I1204 15:40:04.845311    9926 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1204 15:40:04.883556    9926 logs.go:123] Gathering logs for describe nodes ...
	I1204 15:40:04.883568    9926 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1204 15:40:04.929160    9926 logs.go:123] Gathering logs for kube-apiserver [f3328f94ed0d] ...
	I1204 15:40:04.929172    9926 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f3328f94ed0d"
	I1204 15:40:04.944660    9926 logs.go:123] Gathering logs for etcd [17b0ed658f6c] ...
	I1204 15:40:04.944679    9926 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 17b0ed658f6c"
	I1204 15:40:04.959157    9926 logs.go:123] Gathering logs for kube-scheduler [a82022fd2242] ...
	I1204 15:40:04.959170    9926 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a82022fd2242"
	I1204 15:40:04.974067    9926 logs.go:123] Gathering logs for container status ...
	I1204 15:40:04.974081    9926 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1204 15:40:04.985806    9926 logs.go:123] Gathering logs for dmesg ...
	I1204 15:40:04.985818    9926 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1204 15:40:04.990332    9926 logs.go:123] Gathering logs for coredns [0a3178099d31] ...
	I1204 15:40:04.990342    9926 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0a3178099d31"
	I1204 15:40:05.002169    9926 logs.go:123] Gathering logs for coredns [c8ddfa007847] ...
	I1204 15:40:05.002181    9926 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c8ddfa007847"
	I1204 15:40:05.013979    9926 logs.go:123] Gathering logs for kube-proxy [8f7d833df5c3] ...
	I1204 15:40:05.013990    9926 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8f7d833df5c3"
	I1204 15:40:05.030307    9926 logs.go:123] Gathering logs for kube-controller-manager [bdf1070f199f] ...
	I1204 15:40:05.030319    9926 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bdf1070f199f"
	I1204 15:40:05.047838    9926 logs.go:123] Gathering logs for storage-provisioner [bdb47901ddb7] ...
	I1204 15:40:05.047848    9926 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bdb47901ddb7"
	I1204 15:40:05.059665    9926 logs.go:123] Gathering logs for Docker ...
	I1204 15:40:05.059676    9926 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
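
	The probe loop above repeats throughout this test: each attempt is a GET against https://10.0.2.15:8443/healthz that gives up after the 5-second client timeout ("Client.Timeout exceeded while awaiting headers"), after which minikube falls back to gathering component logs before trying again. Below is a minimal, illustrative sketch of that probe pattern, not minikube's actual api_server.go; the URL and 5s timeout come from the log lines above, while the retry interval, TLS handling, and overall deadline are assumptions.

    // healthz_probe.go — illustrative sketch of the apiserver health probe
    // pattern seen in this log; not minikube's real implementation.
    package main

    import (
        "crypto/tls"
        "fmt"
        "net/http"
        "time"
    )

    func main() {
        client := &http.Client{
            Timeout: 5 * time.Second, // matches the ~5s "Client.Timeout exceeded" in the log
            Transport: &http.Transport{
                // Assumption: skip verification, as the apiserver serves a
                // self-signed certificate during cluster bring-up.
                TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
            },
        }
        deadline := time.Now().Add(4 * time.Minute) // assumed overall budget
        for time.Now().Before(deadline) {
            resp, err := client.Get("https://10.0.2.15:8443/healthz")
            if err == nil {
                resp.Body.Close()
                if resp.StatusCode == http.StatusOK {
                    fmt.Println("apiserver healthy")
                    return
                }
            }
            fmt.Printf("healthz not ready: %v\n", err)
            time.Sleep(3 * time.Second) // assumed back-off between probes
        }
        fmt.Println("gave up waiting for apiserver")
    }
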
	I1204 15:40:07.585610    9926 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1204 15:40:12.586412    9926 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1204 15:40:12.586636    9926 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1204 15:40:12.604657    9926 logs.go:282] 1 containers: [f3328f94ed0d]
	I1204 15:40:12.604768    9926 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1204 15:40:12.618629    9926 logs.go:282] 1 containers: [17b0ed658f6c]
	I1204 15:40:12.618721    9926 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1204 15:40:12.630113    9926 logs.go:282] 2 containers: [0a3178099d31 c8ddfa007847]
	I1204 15:40:12.630196    9926 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1204 15:40:12.645123    9926 logs.go:282] 1 containers: [a82022fd2242]
	I1204 15:40:12.645199    9926 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1204 15:40:12.655356    9926 logs.go:282] 1 containers: [8f7d833df5c3]
	I1204 15:40:12.655443    9926 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1204 15:40:12.665913    9926 logs.go:282] 1 containers: [bdf1070f199f]
	I1204 15:40:12.665989    9926 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1204 15:40:12.676076    9926 logs.go:282] 0 containers: []
	W1204 15:40:12.676090    9926 logs.go:284] No container was found matching "kindnet"
	I1204 15:40:12.676153    9926 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1204 15:40:12.686878    9926 logs.go:282] 1 containers: [bdb47901ddb7]
	I1204 15:40:12.686892    9926 logs.go:123] Gathering logs for coredns [c8ddfa007847] ...
	I1204 15:40:12.686898    9926 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c8ddfa007847"
	I1204 15:40:12.700261    9926 logs.go:123] Gathering logs for kube-scheduler [a82022fd2242] ...
	I1204 15:40:12.700272    9926 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a82022fd2242"
	I1204 15:40:12.716930    9926 logs.go:123] Gathering logs for kube-controller-manager [bdf1070f199f] ...
	I1204 15:40:12.716941    9926 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bdf1070f199f"
	I1204 15:40:12.737771    9926 logs.go:123] Gathering logs for storage-provisioner [bdb47901ddb7] ...
	I1204 15:40:12.737783    9926 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bdb47901ddb7"
	I1204 15:40:12.749143    9926 logs.go:123] Gathering logs for dmesg ...
	I1204 15:40:12.749154    9926 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1204 15:40:12.753777    9926 logs.go:123] Gathering logs for kube-apiserver [f3328f94ed0d] ...
	I1204 15:40:12.753784    9926 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f3328f94ed0d"
	I1204 15:40:12.767927    9926 logs.go:123] Gathering logs for etcd [17b0ed658f6c] ...
	I1204 15:40:12.767938    9926 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 17b0ed658f6c"
	I1204 15:40:12.781601    9926 logs.go:123] Gathering logs for coredns [0a3178099d31] ...
	I1204 15:40:12.781615    9926 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0a3178099d31"
	I1204 15:40:12.793913    9926 logs.go:123] Gathering logs for Docker ...
	I1204 15:40:12.793923    9926 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1204 15:40:12.818059    9926 logs.go:123] Gathering logs for container status ...
	I1204 15:40:12.818068    9926 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1204 15:40:12.830044    9926 logs.go:123] Gathering logs for kubelet ...
	I1204 15:40:12.830053    9926 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1204 15:40:12.867947    9926 logs.go:123] Gathering logs for describe nodes ...
	I1204 15:40:12.867955    9926 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1204 15:40:12.908907    9926 logs.go:123] Gathering logs for kube-proxy [8f7d833df5c3] ...
	I1204 15:40:12.908922    9926 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8f7d833df5c3"
	I1204 15:40:15.423320    9926 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1204 15:40:20.424250    9926 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1204 15:40:20.424764    9926 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1204 15:40:20.463846    9926 logs.go:282] 1 containers: [f3328f94ed0d]
	I1204 15:40:20.464004    9926 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1204 15:40:20.484745    9926 logs.go:282] 1 containers: [17b0ed658f6c]
	I1204 15:40:20.484884    9926 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1204 15:40:20.500083    9926 logs.go:282] 2 containers: [0a3178099d31 c8ddfa007847]
	I1204 15:40:20.500175    9926 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1204 15:40:20.513085    9926 logs.go:282] 1 containers: [a82022fd2242]
	I1204 15:40:20.513174    9926 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1204 15:40:20.524377    9926 logs.go:282] 1 containers: [8f7d833df5c3]
	I1204 15:40:20.524457    9926 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1204 15:40:20.535027    9926 logs.go:282] 1 containers: [bdf1070f199f]
	I1204 15:40:20.535109    9926 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1204 15:40:20.545514    9926 logs.go:282] 0 containers: []
	W1204 15:40:20.545524    9926 logs.go:284] No container was found matching "kindnet"
	I1204 15:40:20.545587    9926 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1204 15:40:20.556184    9926 logs.go:282] 1 containers: [bdb47901ddb7]
	I1204 15:40:20.556201    9926 logs.go:123] Gathering logs for coredns [0a3178099d31] ...
	I1204 15:40:20.556207    9926 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0a3178099d31"
	I1204 15:40:20.570857    9926 logs.go:123] Gathering logs for kube-scheduler [a82022fd2242] ...
	I1204 15:40:20.570868    9926 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a82022fd2242"
	I1204 15:40:20.586042    9926 logs.go:123] Gathering logs for kube-controller-manager [bdf1070f199f] ...
	I1204 15:40:20.586052    9926 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bdf1070f199f"
	I1204 15:40:20.603930    9926 logs.go:123] Gathering logs for storage-provisioner [bdb47901ddb7] ...
	I1204 15:40:20.603945    9926 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bdb47901ddb7"
	I1204 15:40:20.616149    9926 logs.go:123] Gathering logs for kubelet ...
	I1204 15:40:20.616160    9926 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1204 15:40:20.652821    9926 logs.go:123] Gathering logs for dmesg ...
	I1204 15:40:20.652830    9926 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1204 15:40:20.657313    9926 logs.go:123] Gathering logs for kube-apiserver [f3328f94ed0d] ...
	I1204 15:40:20.657320    9926 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f3328f94ed0d"
	I1204 15:40:20.672320    9926 logs.go:123] Gathering logs for etcd [17b0ed658f6c] ...
	I1204 15:40:20.672333    9926 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 17b0ed658f6c"
	I1204 15:40:20.687312    9926 logs.go:123] Gathering logs for container status ...
	I1204 15:40:20.687325    9926 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1204 15:40:20.699414    9926 logs.go:123] Gathering logs for describe nodes ...
	I1204 15:40:20.699424    9926 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1204 15:40:20.741409    9926 logs.go:123] Gathering logs for coredns [c8ddfa007847] ...
	I1204 15:40:20.741421    9926 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c8ddfa007847"
	I1204 15:40:20.752829    9926 logs.go:123] Gathering logs for kube-proxy [8f7d833df5c3] ...
	I1204 15:40:20.752844    9926 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8f7d833df5c3"
	I1204 15:40:20.764465    9926 logs.go:123] Gathering logs for Docker ...
	I1204 15:40:20.764478    9926 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1204 15:40:23.290247    9926 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1204 15:40:28.292536    9926 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1204 15:40:28.292676    9926 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1204 15:40:28.306827    9926 logs.go:282] 1 containers: [f3328f94ed0d]
	I1204 15:40:28.306913    9926 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1204 15:40:28.318591    9926 logs.go:282] 1 containers: [17b0ed658f6c]
	I1204 15:40:28.318672    9926 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1204 15:40:28.329601    9926 logs.go:282] 2 containers: [0a3178099d31 c8ddfa007847]
	I1204 15:40:28.329681    9926 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1204 15:40:28.340549    9926 logs.go:282] 1 containers: [a82022fd2242]
	I1204 15:40:28.340625    9926 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1204 15:40:28.351079    9926 logs.go:282] 1 containers: [8f7d833df5c3]
	I1204 15:40:28.351153    9926 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1204 15:40:28.361452    9926 logs.go:282] 1 containers: [bdf1070f199f]
	I1204 15:40:28.361529    9926 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1204 15:40:28.372151    9926 logs.go:282] 0 containers: []
	W1204 15:40:28.372163    9926 logs.go:284] No container was found matching "kindnet"
	I1204 15:40:28.372226    9926 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1204 15:40:28.382693    9926 logs.go:282] 1 containers: [bdb47901ddb7]
	I1204 15:40:28.382711    9926 logs.go:123] Gathering logs for kube-controller-manager [bdf1070f199f] ...
	I1204 15:40:28.382716    9926 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bdf1070f199f"
	I1204 15:40:28.399998    9926 logs.go:123] Gathering logs for dmesg ...
	I1204 15:40:28.400011    9926 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1204 15:40:28.404826    9926 logs.go:123] Gathering logs for describe nodes ...
	I1204 15:40:28.404833    9926 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1204 15:40:28.438997    9926 logs.go:123] Gathering logs for kube-apiserver [f3328f94ed0d] ...
	I1204 15:40:28.439009    9926 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f3328f94ed0d"
	I1204 15:40:28.453290    9926 logs.go:123] Gathering logs for etcd [17b0ed658f6c] ...
	I1204 15:40:28.453301    9926 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 17b0ed658f6c"
	I1204 15:40:28.467560    9926 logs.go:123] Gathering logs for coredns [c8ddfa007847] ...
	I1204 15:40:28.467574    9926 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c8ddfa007847"
	I1204 15:40:28.479120    9926 logs.go:123] Gathering logs for Docker ...
	I1204 15:40:28.479130    9926 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1204 15:40:28.502530    9926 logs.go:123] Gathering logs for container status ...
	I1204 15:40:28.502541    9926 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1204 15:40:28.513566    9926 logs.go:123] Gathering logs for kubelet ...
	I1204 15:40:28.513578    9926 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1204 15:40:28.553100    9926 logs.go:123] Gathering logs for coredns [0a3178099d31] ...
	I1204 15:40:28.553111    9926 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0a3178099d31"
	I1204 15:40:28.564738    9926 logs.go:123] Gathering logs for kube-scheduler [a82022fd2242] ...
	I1204 15:40:28.564749    9926 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a82022fd2242"
	I1204 15:40:28.579828    9926 logs.go:123] Gathering logs for kube-proxy [8f7d833df5c3] ...
	I1204 15:40:28.579840    9926 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8f7d833df5c3"
	I1204 15:40:28.592205    9926 logs.go:123] Gathering logs for storage-provisioner [bdb47901ddb7] ...
	I1204 15:40:28.592218    9926 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bdb47901ddb7"
	I1204 15:40:31.105937    9926 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1204 15:40:36.108243    9926 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1204 15:40:36.108514    9926 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1204 15:40:36.134302    9926 logs.go:282] 1 containers: [f3328f94ed0d]
	I1204 15:40:36.134446    9926 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1204 15:40:36.150860    9926 logs.go:282] 1 containers: [17b0ed658f6c]
	I1204 15:40:36.150968    9926 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1204 15:40:36.165725    9926 logs.go:282] 2 containers: [0a3178099d31 c8ddfa007847]
	I1204 15:40:36.165806    9926 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1204 15:40:36.176885    9926 logs.go:282] 1 containers: [a82022fd2242]
	I1204 15:40:36.176969    9926 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1204 15:40:36.187754    9926 logs.go:282] 1 containers: [8f7d833df5c3]
	I1204 15:40:36.187833    9926 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1204 15:40:36.198768    9926 logs.go:282] 1 containers: [bdf1070f199f]
	I1204 15:40:36.198849    9926 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1204 15:40:36.210438    9926 logs.go:282] 0 containers: []
	W1204 15:40:36.210449    9926 logs.go:284] No container was found matching "kindnet"
	I1204 15:40:36.210521    9926 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1204 15:40:36.220934    9926 logs.go:282] 1 containers: [bdb47901ddb7]
	I1204 15:40:36.220952    9926 logs.go:123] Gathering logs for kube-scheduler [a82022fd2242] ...
	I1204 15:40:36.220959    9926 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a82022fd2242"
	I1204 15:40:36.236909    9926 logs.go:123] Gathering logs for kube-proxy [8f7d833df5c3] ...
	I1204 15:40:36.236921    9926 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8f7d833df5c3"
	I1204 15:40:36.250190    9926 logs.go:123] Gathering logs for kubelet ...
	I1204 15:40:36.250203    9926 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1204 15:40:36.288586    9926 logs.go:123] Gathering logs for dmesg ...
	I1204 15:40:36.288604    9926 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1204 15:40:36.294045    9926 logs.go:123] Gathering logs for describe nodes ...
	I1204 15:40:36.294054    9926 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1204 15:40:36.328139    9926 logs.go:123] Gathering logs for kube-apiserver [f3328f94ed0d] ...
	I1204 15:40:36.328152    9926 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f3328f94ed0d"
	I1204 15:40:36.348505    9926 logs.go:123] Gathering logs for etcd [17b0ed658f6c] ...
	I1204 15:40:36.348517    9926 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 17b0ed658f6c"
	I1204 15:40:36.375147    9926 logs.go:123] Gathering logs for coredns [0a3178099d31] ...
	I1204 15:40:36.375159    9926 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0a3178099d31"
	I1204 15:40:36.390618    9926 logs.go:123] Gathering logs for kube-controller-manager [bdf1070f199f] ...
	I1204 15:40:36.390631    9926 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bdf1070f199f"
	I1204 15:40:36.409488    9926 logs.go:123] Gathering logs for coredns [c8ddfa007847] ...
	I1204 15:40:36.409503    9926 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c8ddfa007847"
	I1204 15:40:36.421126    9926 logs.go:123] Gathering logs for storage-provisioner [bdb47901ddb7] ...
	I1204 15:40:36.421137    9926 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bdb47901ddb7"
	I1204 15:40:36.432574    9926 logs.go:123] Gathering logs for Docker ...
	I1204 15:40:36.432584    9926 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1204 15:40:36.457856    9926 logs.go:123] Gathering logs for container status ...
	I1204 15:40:36.457866    9926 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1204 15:40:38.971990    9926 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1204 15:40:43.974535    9926 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1204 15:40:43.974803    9926 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1204 15:40:44.001190    9926 logs.go:282] 1 containers: [f3328f94ed0d]
	I1204 15:40:44.001306    9926 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1204 15:40:44.018839    9926 logs.go:282] 1 containers: [17b0ed658f6c]
	I1204 15:40:44.018939    9926 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1204 15:40:44.039495    9926 logs.go:282] 2 containers: [0a3178099d31 c8ddfa007847]
	I1204 15:40:44.039580    9926 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1204 15:40:44.051001    9926 logs.go:282] 1 containers: [a82022fd2242]
	I1204 15:40:44.051087    9926 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1204 15:40:44.062837    9926 logs.go:282] 1 containers: [8f7d833df5c3]
	I1204 15:40:44.062923    9926 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1204 15:40:44.074147    9926 logs.go:282] 1 containers: [bdf1070f199f]
	I1204 15:40:44.074226    9926 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1204 15:40:44.084467    9926 logs.go:282] 0 containers: []
	W1204 15:40:44.084478    9926 logs.go:284] No container was found matching "kindnet"
	I1204 15:40:44.084546    9926 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1204 15:40:44.095862    9926 logs.go:282] 1 containers: [bdb47901ddb7]
	I1204 15:40:44.095880    9926 logs.go:123] Gathering logs for kube-proxy [8f7d833df5c3] ...
	I1204 15:40:44.095887    9926 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8f7d833df5c3"
	I1204 15:40:44.107713    9926 logs.go:123] Gathering logs for kube-controller-manager [bdf1070f199f] ...
	I1204 15:40:44.107724    9926 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bdf1070f199f"
	I1204 15:40:44.125998    9926 logs.go:123] Gathering logs for storage-provisioner [bdb47901ddb7] ...
	I1204 15:40:44.126009    9926 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bdb47901ddb7"
	I1204 15:40:44.139290    9926 logs.go:123] Gathering logs for Docker ...
	I1204 15:40:44.139300    9926 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1204 15:40:44.163204    9926 logs.go:123] Gathering logs for container status ...
	I1204 15:40:44.163216    9926 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1204 15:40:44.175033    9926 logs.go:123] Gathering logs for kube-apiserver [f3328f94ed0d] ...
	I1204 15:40:44.175047    9926 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f3328f94ed0d"
	I1204 15:40:44.189260    9926 logs.go:123] Gathering logs for coredns [0a3178099d31] ...
	I1204 15:40:44.189270    9926 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0a3178099d31"
	I1204 15:40:44.201436    9926 logs.go:123] Gathering logs for kube-scheduler [a82022fd2242] ...
	I1204 15:40:44.201448    9926 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a82022fd2242"
	I1204 15:40:44.216731    9926 logs.go:123] Gathering logs for etcd [17b0ed658f6c] ...
	I1204 15:40:44.216743    9926 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 17b0ed658f6c"
	I1204 15:40:44.230553    9926 logs.go:123] Gathering logs for coredns [c8ddfa007847] ...
	I1204 15:40:44.230567    9926 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c8ddfa007847"
	I1204 15:40:44.242484    9926 logs.go:123] Gathering logs for kubelet ...
	I1204 15:40:44.242494    9926 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1204 15:40:44.280636    9926 logs.go:123] Gathering logs for dmesg ...
	I1204 15:40:44.280652    9926 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1204 15:40:44.285666    9926 logs.go:123] Gathering logs for describe nodes ...
	I1204 15:40:44.285676    9926 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1204 15:40:46.825192    9926 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1204 15:40:51.827452    9926 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1204 15:40:51.827606    9926 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1204 15:40:51.843378    9926 logs.go:282] 1 containers: [f3328f94ed0d]
	I1204 15:40:51.843479    9926 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1204 15:40:51.855877    9926 logs.go:282] 1 containers: [17b0ed658f6c]
	I1204 15:40:51.855964    9926 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1204 15:40:51.866360    9926 logs.go:282] 2 containers: [0a3178099d31 c8ddfa007847]
	I1204 15:40:51.866440    9926 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1204 15:40:51.877040    9926 logs.go:282] 1 containers: [a82022fd2242]
	I1204 15:40:51.877115    9926 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1204 15:40:51.887124    9926 logs.go:282] 1 containers: [8f7d833df5c3]
	I1204 15:40:51.887205    9926 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1204 15:40:51.897925    9926 logs.go:282] 1 containers: [bdf1070f199f]
	I1204 15:40:51.898006    9926 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1204 15:40:51.908306    9926 logs.go:282] 0 containers: []
	W1204 15:40:51.908320    9926 logs.go:284] No container was found matching "kindnet"
	I1204 15:40:51.908385    9926 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1204 15:40:51.919628    9926 logs.go:282] 1 containers: [bdb47901ddb7]
	I1204 15:40:51.919644    9926 logs.go:123] Gathering logs for kube-proxy [8f7d833df5c3] ...
	I1204 15:40:51.919650    9926 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8f7d833df5c3"
	I1204 15:40:51.932401    9926 logs.go:123] Gathering logs for dmesg ...
	I1204 15:40:51.932412    9926 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1204 15:40:51.937579    9926 logs.go:123] Gathering logs for describe nodes ...
	I1204 15:40:51.937585    9926 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1204 15:40:51.978703    9926 logs.go:123] Gathering logs for kube-apiserver [f3328f94ed0d] ...
	I1204 15:40:51.978718    9926 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f3328f94ed0d"
	I1204 15:40:51.997153    9926 logs.go:123] Gathering logs for etcd [17b0ed658f6c] ...
	I1204 15:40:51.997167    9926 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 17b0ed658f6c"
	I1204 15:40:52.010681    9926 logs.go:123] Gathering logs for coredns [0a3178099d31] ...
	I1204 15:40:52.010694    9926 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0a3178099d31"
	I1204 15:40:52.022642    9926 logs.go:123] Gathering logs for coredns [c8ddfa007847] ...
	I1204 15:40:52.022655    9926 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c8ddfa007847"
	I1204 15:40:52.034291    9926 logs.go:123] Gathering logs for kube-scheduler [a82022fd2242] ...
	I1204 15:40:52.034306    9926 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a82022fd2242"
	I1204 15:40:52.049254    9926 logs.go:123] Gathering logs for kube-controller-manager [bdf1070f199f] ...
	I1204 15:40:52.049264    9926 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bdf1070f199f"
	I1204 15:40:52.067078    9926 logs.go:123] Gathering logs for container status ...
	I1204 15:40:52.067088    9926 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1204 15:40:52.078487    9926 logs.go:123] Gathering logs for kubelet ...
	I1204 15:40:52.078501    9926 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1204 15:40:52.117054    9926 logs.go:123] Gathering logs for storage-provisioner [bdb47901ddb7] ...
	I1204 15:40:52.117064    9926 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bdb47901ddb7"
	I1204 15:40:52.132727    9926 logs.go:123] Gathering logs for Docker ...
	I1204 15:40:52.132737    9926 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1204 15:40:54.659590    9926 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1204 15:40:59.661920    9926 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1204 15:40:59.662032    9926 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1204 15:40:59.674746    9926 logs.go:282] 1 containers: [f3328f94ed0d]
	I1204 15:40:59.674838    9926 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1204 15:40:59.685364    9926 logs.go:282] 1 containers: [17b0ed658f6c]
	I1204 15:40:59.685448    9926 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1204 15:40:59.696101    9926 logs.go:282] 2 containers: [0a3178099d31 c8ddfa007847]
	I1204 15:40:59.696181    9926 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1204 15:40:59.707139    9926 logs.go:282] 1 containers: [a82022fd2242]
	I1204 15:40:59.707220    9926 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1204 15:40:59.717397    9926 logs.go:282] 1 containers: [8f7d833df5c3]
	I1204 15:40:59.717479    9926 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1204 15:40:59.728021    9926 logs.go:282] 1 containers: [bdf1070f199f]
	I1204 15:40:59.728098    9926 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1204 15:40:59.738520    9926 logs.go:282] 0 containers: []
	W1204 15:40:59.738533    9926 logs.go:284] No container was found matching "kindnet"
	I1204 15:40:59.738598    9926 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1204 15:40:59.752117    9926 logs.go:282] 1 containers: [bdb47901ddb7]
	I1204 15:40:59.752138    9926 logs.go:123] Gathering logs for coredns [0a3178099d31] ...
	I1204 15:40:59.752144    9926 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0a3178099d31"
	I1204 15:40:59.763744    9926 logs.go:123] Gathering logs for coredns [c8ddfa007847] ...
	I1204 15:40:59.763754    9926 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c8ddfa007847"
	I1204 15:40:59.776007    9926 logs.go:123] Gathering logs for kube-controller-manager [bdf1070f199f] ...
	I1204 15:40:59.776018    9926 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bdf1070f199f"
	I1204 15:40:59.798196    9926 logs.go:123] Gathering logs for storage-provisioner [bdb47901ddb7] ...
	I1204 15:40:59.798205    9926 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bdb47901ddb7"
	I1204 15:40:59.812662    9926 logs.go:123] Gathering logs for container status ...
	I1204 15:40:59.812677    9926 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1204 15:40:59.824671    9926 logs.go:123] Gathering logs for describe nodes ...
	I1204 15:40:59.824682    9926 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1204 15:40:59.859401    9926 logs.go:123] Gathering logs for kube-apiserver [f3328f94ed0d] ...
	I1204 15:40:59.859416    9926 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f3328f94ed0d"
	I1204 15:40:59.874034    9926 logs.go:123] Gathering logs for etcd [17b0ed658f6c] ...
	I1204 15:40:59.874045    9926 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 17b0ed658f6c"
	I1204 15:40:59.888970    9926 logs.go:123] Gathering logs for kube-scheduler [a82022fd2242] ...
	I1204 15:40:59.888986    9926 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a82022fd2242"
	I1204 15:40:59.904591    9926 logs.go:123] Gathering logs for kube-proxy [8f7d833df5c3] ...
	I1204 15:40:59.904605    9926 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8f7d833df5c3"
	I1204 15:40:59.916396    9926 logs.go:123] Gathering logs for Docker ...
	I1204 15:40:59.916408    9926 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1204 15:40:59.942081    9926 logs.go:123] Gathering logs for kubelet ...
	I1204 15:40:59.942089    9926 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1204 15:40:59.979634    9926 logs.go:123] Gathering logs for dmesg ...
	I1204 15:40:59.979657    9926 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
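
	Each gathering pass above follows the same shape: enumerate per-component container IDs with `docker ps -a --filter=name=k8s_<component> --format={{.ID}}`, warn when nothing matches (as with "kindnet" here), then dump the last 400 lines of each match with `docker logs --tail 400 <id>`. A minimal sketch of that enumeration loop, assuming a reachable docker CLI and written for illustration rather than as minikube's logs.go:

    // gather_logs.go — illustrative sketch of the per-component log
    // gathering pattern seen in this log; not minikube's real code.
    package main

    import (
        "fmt"
        "os/exec"
        "strings"
    )

    // containerIDs lists all container IDs whose name matches k8s_<name>.
    func containerIDs(name string) []string {
        out, err := exec.Command("docker", "ps", "-a",
            "--filter", "name=k8s_"+name, "--format", "{{.ID}}").Output()
        if err != nil {
            return nil
        }
        return strings.Fields(string(out))
    }

    func main() {
        for _, comp := range []string{"kube-apiserver", "etcd", "coredns",
            "kube-scheduler", "kube-proxy", "kube-controller-manager",
            "kindnet", "storage-provisioner"} {
            ids := containerIDs(comp)
            if len(ids) == 0 {
                fmt.Printf("no container was found matching %q\n", comp)
                continue
            }
            for _, id := range ids {
                // Tail the last 400 lines, mirroring the commands above.
                logs, _ := exec.Command("docker", "logs", "--tail", "400", id).CombinedOutput()
                fmt.Printf("==> %s [%s] <==\n%s\n", comp, id, logs)
            }
        }
    }
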
	I1204 15:41:02.485920    9926 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1204 15:41:07.488221    9926 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1204 15:41:07.488494    9926 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1204 15:41:07.513011    9926 logs.go:282] 1 containers: [f3328f94ed0d]
	I1204 15:41:07.513138    9926 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1204 15:41:07.529007    9926 logs.go:282] 1 containers: [17b0ed658f6c]
	I1204 15:41:07.529084    9926 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1204 15:41:07.545775    9926 logs.go:282] 4 containers: [11d10ac44548 722dc28f5a86 0a3178099d31 c8ddfa007847]
	I1204 15:41:07.545845    9926 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1204 15:41:07.559359    9926 logs.go:282] 1 containers: [a82022fd2242]
	I1204 15:41:07.559430    9926 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1204 15:41:07.570570    9926 logs.go:282] 1 containers: [8f7d833df5c3]
	I1204 15:41:07.570637    9926 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1204 15:41:07.581042    9926 logs.go:282] 1 containers: [bdf1070f199f]
	I1204 15:41:07.581105    9926 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1204 15:41:07.591201    9926 logs.go:282] 0 containers: []
	W1204 15:41:07.591221    9926 logs.go:284] No container was found matching "kindnet"
	I1204 15:41:07.591279    9926 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1204 15:41:07.602303    9926 logs.go:282] 1 containers: [bdb47901ddb7]
	I1204 15:41:07.602321    9926 logs.go:123] Gathering logs for describe nodes ...
	I1204 15:41:07.602327    9926 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1204 15:41:07.637660    9926 logs.go:123] Gathering logs for etcd [17b0ed658f6c] ...
	I1204 15:41:07.637671    9926 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 17b0ed658f6c"
	I1204 15:41:07.651713    9926 logs.go:123] Gathering logs for storage-provisioner [bdb47901ddb7] ...
	I1204 15:41:07.651723    9926 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bdb47901ddb7"
	I1204 15:41:07.663554    9926 logs.go:123] Gathering logs for container status ...
	I1204 15:41:07.663565    9926 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1204 15:41:07.676756    9926 logs.go:123] Gathering logs for kubelet ...
	I1204 15:41:07.676766    9926 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1204 15:41:07.715499    9926 logs.go:123] Gathering logs for dmesg ...
	I1204 15:41:07.715510    9926 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1204 15:41:07.720064    9926 logs.go:123] Gathering logs for kube-apiserver [f3328f94ed0d] ...
	I1204 15:41:07.720075    9926 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f3328f94ed0d"
	I1204 15:41:07.734317    9926 logs.go:123] Gathering logs for coredns [722dc28f5a86] ...
	I1204 15:41:07.734329    9926 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 722dc28f5a86"
	I1204 15:41:07.745437    9926 logs.go:123] Gathering logs for Docker ...
	I1204 15:41:07.745446    9926 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1204 15:41:07.768810    9926 logs.go:123] Gathering logs for coredns [11d10ac44548] ...
	I1204 15:41:07.768821    9926 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 11d10ac44548"
	I1204 15:41:07.779827    9926 logs.go:123] Gathering logs for coredns [c8ddfa007847] ...
	I1204 15:41:07.779838    9926 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c8ddfa007847"
	I1204 15:41:07.791712    9926 logs.go:123] Gathering logs for kube-scheduler [a82022fd2242] ...
	I1204 15:41:07.791724    9926 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a82022fd2242"
	I1204 15:41:07.807923    9926 logs.go:123] Gathering logs for kube-proxy [8f7d833df5c3] ...
	I1204 15:41:07.807934    9926 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8f7d833df5c3"
	I1204 15:41:07.819804    9926 logs.go:123] Gathering logs for coredns [0a3178099d31] ...
	I1204 15:41:07.819817    9926 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0a3178099d31"
	I1204 15:41:07.832243    9926 logs.go:123] Gathering logs for kube-controller-manager [bdf1070f199f] ...
	I1204 15:41:07.832253    9926 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bdf1070f199f"
	I1204 15:41:10.352635    9926 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1204 15:41:15.355277    9926 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1204 15:41:15.355684    9926 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1204 15:41:15.387166    9926 logs.go:282] 1 containers: [f3328f94ed0d]
	I1204 15:41:15.387322    9926 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1204 15:41:15.407099    9926 logs.go:282] 1 containers: [17b0ed658f6c]
	I1204 15:41:15.407201    9926 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1204 15:41:15.421122    9926 logs.go:282] 4 containers: [11d10ac44548 722dc28f5a86 0a3178099d31 c8ddfa007847]
	I1204 15:41:15.421217    9926 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1204 15:41:15.432700    9926 logs.go:282] 1 containers: [a82022fd2242]
	I1204 15:41:15.432778    9926 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1204 15:41:15.443965    9926 logs.go:282] 1 containers: [8f7d833df5c3]
	I1204 15:41:15.444045    9926 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1204 15:41:15.454319    9926 logs.go:282] 1 containers: [bdf1070f199f]
	I1204 15:41:15.454398    9926 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1204 15:41:15.464189    9926 logs.go:282] 0 containers: []
	W1204 15:41:15.464204    9926 logs.go:284] No container was found matching "kindnet"
	I1204 15:41:15.464278    9926 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1204 15:41:15.474566    9926 logs.go:282] 1 containers: [bdb47901ddb7]
	I1204 15:41:15.474584    9926 logs.go:123] Gathering logs for Docker ...
	I1204 15:41:15.474589    9926 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1204 15:41:15.498051    9926 logs.go:123] Gathering logs for dmesg ...
	I1204 15:41:15.498064    9926 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1204 15:41:15.502437    9926 logs.go:123] Gathering logs for coredns [0a3178099d31] ...
	I1204 15:41:15.502443    9926 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0a3178099d31"
	I1204 15:41:15.513730    9926 logs.go:123] Gathering logs for coredns [c8ddfa007847] ...
	I1204 15:41:15.513744    9926 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c8ddfa007847"
	I1204 15:41:15.526358    9926 logs.go:123] Gathering logs for kube-controller-manager [bdf1070f199f] ...
	I1204 15:41:15.526373    9926 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bdf1070f199f"
	I1204 15:41:15.544374    9926 logs.go:123] Gathering logs for kubelet ...
	I1204 15:41:15.544385    9926 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1204 15:41:15.583535    9926 logs.go:123] Gathering logs for coredns [722dc28f5a86] ...
	I1204 15:41:15.583547    9926 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 722dc28f5a86"
	I1204 15:41:15.596050    9926 logs.go:123] Gathering logs for container status ...
	I1204 15:41:15.596061    9926 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1204 15:41:15.621198    9926 logs.go:123] Gathering logs for describe nodes ...
	I1204 15:41:15.621210    9926 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1204 15:41:15.658817    9926 logs.go:123] Gathering logs for coredns [11d10ac44548] ...
	I1204 15:41:15.658831    9926 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 11d10ac44548"
	I1204 15:41:15.671064    9926 logs.go:123] Gathering logs for storage-provisioner [bdb47901ddb7] ...
	I1204 15:41:15.671078    9926 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bdb47901ddb7"
	I1204 15:41:15.688240    9926 logs.go:123] Gathering logs for kube-apiserver [f3328f94ed0d] ...
	I1204 15:41:15.688255    9926 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f3328f94ed0d"
	I1204 15:41:15.702515    9926 logs.go:123] Gathering logs for etcd [17b0ed658f6c] ...
	I1204 15:41:15.702527    9926 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 17b0ed658f6c"
	I1204 15:41:15.716368    9926 logs.go:123] Gathering logs for kube-scheduler [a82022fd2242] ...
	I1204 15:41:15.716382    9926 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a82022fd2242"
	I1204 15:41:15.732224    9926 logs.go:123] Gathering logs for kube-proxy [8f7d833df5c3] ...
	I1204 15:41:15.732236    9926 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8f7d833df5c3"
	I1204 15:41:18.246384    9926 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1204 15:41:23.248476    9926 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1204 15:41:23.248696    9926 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1204 15:41:23.269511    9926 logs.go:282] 1 containers: [f3328f94ed0d]
	I1204 15:41:23.269622    9926 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1204 15:41:23.284477    9926 logs.go:282] 1 containers: [17b0ed658f6c]
	I1204 15:41:23.284567    9926 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1204 15:41:23.299116    9926 logs.go:282] 4 containers: [11d10ac44548 722dc28f5a86 0a3178099d31 c8ddfa007847]
	I1204 15:41:23.299197    9926 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1204 15:41:23.309926    9926 logs.go:282] 1 containers: [a82022fd2242]
	I1204 15:41:23.310005    9926 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1204 15:41:23.320148    9926 logs.go:282] 1 containers: [8f7d833df5c3]
	I1204 15:41:23.320227    9926 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1204 15:41:23.331119    9926 logs.go:282] 1 containers: [bdf1070f199f]
	I1204 15:41:23.331201    9926 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1204 15:41:23.341440    9926 logs.go:282] 0 containers: []
	W1204 15:41:23.341451    9926 logs.go:284] No container was found matching "kindnet"
	I1204 15:41:23.341527    9926 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1204 15:41:23.351564    9926 logs.go:282] 1 containers: [bdb47901ddb7]
	I1204 15:41:23.351582    9926 logs.go:123] Gathering logs for container status ...
	I1204 15:41:23.351589    9926 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1204 15:41:23.363524    9926 logs.go:123] Gathering logs for dmesg ...
	I1204 15:41:23.363538    9926 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1204 15:41:23.368424    9926 logs.go:123] Gathering logs for kube-proxy [8f7d833df5c3] ...
	I1204 15:41:23.368431    9926 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8f7d833df5c3"
	I1204 15:41:23.380058    9926 logs.go:123] Gathering logs for etcd [17b0ed658f6c] ...
	I1204 15:41:23.380073    9926 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 17b0ed658f6c"
	I1204 15:41:23.394497    9926 logs.go:123] Gathering logs for coredns [11d10ac44548] ...
	I1204 15:41:23.394510    9926 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 11d10ac44548"
	I1204 15:41:23.406129    9926 logs.go:123] Gathering logs for coredns [0a3178099d31] ...
	I1204 15:41:23.406139    9926 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0a3178099d31"
	I1204 15:41:23.418115    9926 logs.go:123] Gathering logs for coredns [c8ddfa007847] ...
	I1204 15:41:23.418126    9926 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c8ddfa007847"
	I1204 15:41:23.429832    9926 logs.go:123] Gathering logs for storage-provisioner [bdb47901ddb7] ...
	I1204 15:41:23.429849    9926 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bdb47901ddb7"
	I1204 15:41:23.442179    9926 logs.go:123] Gathering logs for Docker ...
	I1204 15:41:23.442193    9926 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1204 15:41:23.465097    9926 logs.go:123] Gathering logs for kubelet ...
	I1204 15:41:23.465110    9926 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1204 15:41:23.501939    9926 logs.go:123] Gathering logs for describe nodes ...
	I1204 15:41:23.501949    9926 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1204 15:41:23.537216    9926 logs.go:123] Gathering logs for kube-apiserver [f3328f94ed0d] ...
	I1204 15:41:23.537226    9926 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f3328f94ed0d"
	I1204 15:41:23.551866    9926 logs.go:123] Gathering logs for kube-scheduler [a82022fd2242] ...
	I1204 15:41:23.551880    9926 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a82022fd2242"
	I1204 15:41:23.568223    9926 logs.go:123] Gathering logs for coredns [722dc28f5a86] ...
	I1204 15:41:23.568238    9926 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 722dc28f5a86"
	I1204 15:41:23.582474    9926 logs.go:123] Gathering logs for kube-controller-manager [bdf1070f199f] ...
	I1204 15:41:23.582488    9926 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bdf1070f199f"
	I1204 15:41:26.104121    9926 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1204 15:41:31.106604    9926 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1204 15:41:31.106928    9926 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1204 15:41:31.137856    9926 logs.go:282] 1 containers: [f3328f94ed0d]
	I1204 15:41:31.137958    9926 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1204 15:41:31.151097    9926 logs.go:282] 1 containers: [17b0ed658f6c]
	I1204 15:41:31.151178    9926 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1204 15:41:31.165869    9926 logs.go:282] 4 containers: [11d10ac44548 722dc28f5a86 0a3178099d31 c8ddfa007847]
	I1204 15:41:31.165943    9926 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1204 15:41:31.176511    9926 logs.go:282] 1 containers: [a82022fd2242]
	I1204 15:41:31.176588    9926 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1204 15:41:31.187685    9926 logs.go:282] 1 containers: [8f7d833df5c3]
	I1204 15:41:31.187765    9926 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1204 15:41:31.198360    9926 logs.go:282] 1 containers: [bdf1070f199f]
	I1204 15:41:31.198438    9926 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1204 15:41:31.208695    9926 logs.go:282] 0 containers: []
	W1204 15:41:31.208704    9926 logs.go:284] No container was found matching "kindnet"
	I1204 15:41:31.208771    9926 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1204 15:41:31.220729    9926 logs.go:282] 1 containers: [bdb47901ddb7]
	I1204 15:41:31.220751    9926 logs.go:123] Gathering logs for container status ...
	I1204 15:41:31.220758    9926 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1204 15:41:31.236556    9926 logs.go:123] Gathering logs for coredns [11d10ac44548] ...
	I1204 15:41:31.236569    9926 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 11d10ac44548"
	I1204 15:41:31.248753    9926 logs.go:123] Gathering logs for kube-scheduler [a82022fd2242] ...
	I1204 15:41:31.248768    9926 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a82022fd2242"
	I1204 15:41:31.263450    9926 logs.go:123] Gathering logs for storage-provisioner [bdb47901ddb7] ...
	I1204 15:41:31.263460    9926 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bdb47901ddb7"
	I1204 15:41:31.275292    9926 logs.go:123] Gathering logs for kube-proxy [8f7d833df5c3] ...
	I1204 15:41:31.275307    9926 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8f7d833df5c3"
	I1204 15:41:31.287210    9926 logs.go:123] Gathering logs for kube-controller-manager [bdf1070f199f] ...
	I1204 15:41:31.287221    9926 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bdf1070f199f"
	I1204 15:41:31.306081    9926 logs.go:123] Gathering logs for Docker ...
	I1204 15:41:31.306091    9926 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1204 15:41:31.330926    9926 logs.go:123] Gathering logs for dmesg ...
	I1204 15:41:31.330935    9926 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1204 15:41:31.335598    9926 logs.go:123] Gathering logs for etcd [17b0ed658f6c] ...
	I1204 15:41:31.335603    9926 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 17b0ed658f6c"
	I1204 15:41:31.349777    9926 logs.go:123] Gathering logs for coredns [c8ddfa007847] ...
	I1204 15:41:31.349791    9926 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c8ddfa007847"
	I1204 15:41:31.361313    9926 logs.go:123] Gathering logs for coredns [722dc28f5a86] ...
	I1204 15:41:31.361323    9926 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 722dc28f5a86"
	I1204 15:41:31.372724    9926 logs.go:123] Gathering logs for coredns [0a3178099d31] ...
	I1204 15:41:31.372738    9926 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0a3178099d31"
	I1204 15:41:31.384640    9926 logs.go:123] Gathering logs for kubelet ...
	I1204 15:41:31.384651    9926 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1204 15:41:31.423167    9926 logs.go:123] Gathering logs for describe nodes ...
	I1204 15:41:31.423179    9926 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1204 15:41:31.457522    9926 logs.go:123] Gathering logs for kube-apiserver [f3328f94ed0d] ...
	I1204 15:41:31.457537    9926 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f3328f94ed0d"
	I1204 15:41:33.974309    9926 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1204 15:41:38.976786    9926 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1204 15:41:38.976964    9926 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1204 15:41:38.991848    9926 logs.go:282] 1 containers: [f3328f94ed0d]
	I1204 15:41:38.991943    9926 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1204 15:41:39.002682    9926 logs.go:282] 1 containers: [17b0ed658f6c]
	I1204 15:41:39.002756    9926 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1204 15:41:39.012909    9926 logs.go:282] 4 containers: [11d10ac44548 722dc28f5a86 0a3178099d31 c8ddfa007847]
	I1204 15:41:39.012993    9926 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1204 15:41:39.023161    9926 logs.go:282] 1 containers: [a82022fd2242]
	I1204 15:41:39.023239    9926 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1204 15:41:39.033714    9926 logs.go:282] 1 containers: [8f7d833df5c3]
	I1204 15:41:39.033791    9926 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1204 15:41:39.044170    9926 logs.go:282] 1 containers: [bdf1070f199f]
	I1204 15:41:39.044238    9926 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1204 15:41:39.054550    9926 logs.go:282] 0 containers: []
	W1204 15:41:39.054561    9926 logs.go:284] No container was found matching "kindnet"
	I1204 15:41:39.054626    9926 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1204 15:41:39.064773    9926 logs.go:282] 1 containers: [bdb47901ddb7]
	I1204 15:41:39.064791    9926 logs.go:123] Gathering logs for dmesg ...
	I1204 15:41:39.064797    9926 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1204 15:41:39.069561    9926 logs.go:123] Gathering logs for coredns [11d10ac44548] ...
	I1204 15:41:39.069571    9926 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 11d10ac44548"
	I1204 15:41:39.081234    9926 logs.go:123] Gathering logs for coredns [0a3178099d31] ...
	I1204 15:41:39.081244    9926 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0a3178099d31"
	I1204 15:41:39.092967    9926 logs.go:123] Gathering logs for storage-provisioner [bdb47901ddb7] ...
	I1204 15:41:39.092977    9926 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bdb47901ddb7"
	I1204 15:41:39.104744    9926 logs.go:123] Gathering logs for Docker ...
	I1204 15:41:39.104755    9926 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1204 15:41:39.129966    9926 logs.go:123] Gathering logs for describe nodes ...
	I1204 15:41:39.129975    9926 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1204 15:41:39.164180    9926 logs.go:123] Gathering logs for etcd [17b0ed658f6c] ...
	I1204 15:41:39.164190    9926 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 17b0ed658f6c"
	I1204 15:41:39.178025    9926 logs.go:123] Gathering logs for coredns [722dc28f5a86] ...
	I1204 15:41:39.178037    9926 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 722dc28f5a86"
	I1204 15:41:39.189537    9926 logs.go:123] Gathering logs for kube-scheduler [a82022fd2242] ...
	I1204 15:41:39.189548    9926 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a82022fd2242"
	I1204 15:41:39.204733    9926 logs.go:123] Gathering logs for kubelet ...
	I1204 15:41:39.204742    9926 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1204 15:41:39.243239    9926 logs.go:123] Gathering logs for kube-apiserver [f3328f94ed0d] ...
	I1204 15:41:39.243248    9926 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f3328f94ed0d"
	I1204 15:41:39.257380    9926 logs.go:123] Gathering logs for coredns [c8ddfa007847] ...
	I1204 15:41:39.257393    9926 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c8ddfa007847"
	I1204 15:41:39.269619    9926 logs.go:123] Gathering logs for kube-proxy [8f7d833df5c3] ...
	I1204 15:41:39.269633    9926 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8f7d833df5c3"
	I1204 15:41:39.281469    9926 logs.go:123] Gathering logs for kube-controller-manager [bdf1070f199f] ...
	I1204 15:41:39.281481    9926 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bdf1070f199f"
	I1204 15:41:39.301569    9926 logs.go:123] Gathering logs for container status ...
	I1204 15:41:39.301580    9926 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
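Each retry cycle between probes has the same shape: enumerate the control-plane containers with `docker ps -a --filter=name=k8s_<component> --format={{.ID}}`, tail the last 400 lines of each, and finish with the container-status fallback `sudo `which crictl || echo crictl` ps -a || sudo docker ps -a` (prefer crictl if installed, otherwise plain docker). A local sketch of that loop, assuming docker is reachable directly rather than through the ssh_runner used in this log:

```go
// Sketch of the per-cycle log-gathering loop: list each k8s_<component>
// container, then tail its last 400 log lines. Assumption: docker runs
// locally here, whereas minikube executes these commands over SSH.
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

var components = []string{
	"kube-apiserver", "etcd", "coredns", "kube-scheduler",
	"kube-proxy", "kube-controller-manager", "kindnet", "storage-provisioner",
}

func main() {
	for _, c := range components {
		// docker ps -a --filter=name=k8s_<component> --format={{.ID}}
		out, err := exec.Command("docker", "ps", "-a",
			"--filter", "name=k8s_"+c, "--format", "{{.ID}}").Output()
		if err != nil {
			fmt.Printf("listing %s containers: %v\n", c, err)
			continue
		}
		ids := strings.Fields(string(out))
		if len(ids) == 0 {
			fmt.Printf("No container was found matching %q\n", c)
			continue
		}
		for _, id := range ids {
			// docker logs --tail 400 <id>  (logs may arrive on stderr, so capture both)
			logs, _ := exec.Command("docker", "logs", "--tail", "400", id).CombinedOutput()
			fmt.Printf("== %s [%s] ==\n%s\n", c, id, logs)
		}
	}
}
```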
	I1204 15:41:41.815546    9926 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1204 15:41:46.817875    9926 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1204 15:41:46.817987    9926 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1204 15:41:46.835111    9926 logs.go:282] 1 containers: [f3328f94ed0d]
	I1204 15:41:46.835192    9926 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1204 15:41:46.846017    9926 logs.go:282] 1 containers: [17b0ed658f6c]
	I1204 15:41:46.846105    9926 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1204 15:41:46.857258    9926 logs.go:282] 4 containers: [11d10ac44548 722dc28f5a86 0a3178099d31 c8ddfa007847]
	I1204 15:41:46.857342    9926 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1204 15:41:46.867642    9926 logs.go:282] 1 containers: [a82022fd2242]
	I1204 15:41:46.867717    9926 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1204 15:41:46.877851    9926 logs.go:282] 1 containers: [8f7d833df5c3]
	I1204 15:41:46.877935    9926 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1204 15:41:46.888805    9926 logs.go:282] 1 containers: [bdf1070f199f]
	I1204 15:41:46.888877    9926 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1204 15:41:46.899111    9926 logs.go:282] 0 containers: []
	W1204 15:41:46.899122    9926 logs.go:284] No container was found matching "kindnet"
	I1204 15:41:46.899193    9926 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1204 15:41:46.909765    9926 logs.go:282] 1 containers: [bdb47901ddb7]
	I1204 15:41:46.909783    9926 logs.go:123] Gathering logs for etcd [17b0ed658f6c] ...
	I1204 15:41:46.909790    9926 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 17b0ed658f6c"
	I1204 15:41:46.924139    9926 logs.go:123] Gathering logs for storage-provisioner [bdb47901ddb7] ...
	I1204 15:41:46.924151    9926 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bdb47901ddb7"
	I1204 15:41:46.935907    9926 logs.go:123] Gathering logs for coredns [722dc28f5a86] ...
	I1204 15:41:46.935921    9926 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 722dc28f5a86"
	I1204 15:41:46.957974    9926 logs.go:123] Gathering logs for coredns [c8ddfa007847] ...
	I1204 15:41:46.957986    9926 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c8ddfa007847"
	I1204 15:41:46.971748    9926 logs.go:123] Gathering logs for kube-proxy [8f7d833df5c3] ...
	I1204 15:41:46.971761    9926 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8f7d833df5c3"
	I1204 15:41:46.984908    9926 logs.go:123] Gathering logs for Docker ...
	I1204 15:41:46.984921    9926 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1204 15:41:47.011282    9926 logs.go:123] Gathering logs for dmesg ...
	I1204 15:41:47.011295    9926 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1204 15:41:47.016787    9926 logs.go:123] Gathering logs for describe nodes ...
	I1204 15:41:47.016798    9926 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1204 15:41:47.054467    9926 logs.go:123] Gathering logs for coredns [11d10ac44548] ...
	I1204 15:41:47.054484    9926 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 11d10ac44548"
	I1204 15:41:47.069260    9926 logs.go:123] Gathering logs for coredns [0a3178099d31] ...
	I1204 15:41:47.069273    9926 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0a3178099d31"
	I1204 15:41:47.082134    9926 logs.go:123] Gathering logs for kube-scheduler [a82022fd2242] ...
	I1204 15:41:47.082146    9926 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a82022fd2242"
	I1204 15:41:47.099254    9926 logs.go:123] Gathering logs for kubelet ...
	I1204 15:41:47.099274    9926 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1204 15:41:47.137355    9926 logs.go:123] Gathering logs for kube-apiserver [f3328f94ed0d] ...
	I1204 15:41:47.137368    9926 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f3328f94ed0d"
	I1204 15:41:47.151662    9926 logs.go:123] Gathering logs for kube-controller-manager [bdf1070f199f] ...
	I1204 15:41:47.151673    9926 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bdf1070f199f"
	I1204 15:41:47.177399    9926 logs.go:123] Gathering logs for container status ...
	I1204 15:41:47.177411    9926 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1204 15:41:49.692411    9926 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1204 15:41:54.694195    9926 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1204 15:41:54.694631    9926 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1204 15:41:54.732906    9926 logs.go:282] 1 containers: [f3328f94ed0d]
	I1204 15:41:54.733054    9926 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1204 15:41:54.752823    9926 logs.go:282] 1 containers: [17b0ed658f6c]
	I1204 15:41:54.752918    9926 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1204 15:41:54.767031    9926 logs.go:282] 4 containers: [11d10ac44548 722dc28f5a86 0a3178099d31 c8ddfa007847]
	I1204 15:41:54.767123    9926 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1204 15:41:54.782051    9926 logs.go:282] 1 containers: [a82022fd2242]
	I1204 15:41:54.782124    9926 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1204 15:41:54.792857    9926 logs.go:282] 1 containers: [8f7d833df5c3]
	I1204 15:41:54.792927    9926 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1204 15:41:54.803465    9926 logs.go:282] 1 containers: [bdf1070f199f]
	I1204 15:41:54.803562    9926 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1204 15:41:54.814140    9926 logs.go:282] 0 containers: []
	W1204 15:41:54.814157    9926 logs.go:284] No container was found matching "kindnet"
	I1204 15:41:54.814242    9926 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1204 15:41:54.825098    9926 logs.go:282] 1 containers: [bdb47901ddb7]
	I1204 15:41:54.825115    9926 logs.go:123] Gathering logs for kube-controller-manager [bdf1070f199f] ...
	I1204 15:41:54.825121    9926 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bdf1070f199f"
	I1204 15:41:54.842819    9926 logs.go:123] Gathering logs for container status ...
	I1204 15:41:54.842834    9926 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1204 15:41:54.855241    9926 logs.go:123] Gathering logs for kubelet ...
	I1204 15:41:54.855253    9926 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1204 15:41:54.893320    9926 logs.go:123] Gathering logs for dmesg ...
	I1204 15:41:54.893330    9926 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1204 15:41:54.898001    9926 logs.go:123] Gathering logs for kube-apiserver [f3328f94ed0d] ...
	I1204 15:41:54.898011    9926 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f3328f94ed0d"
	I1204 15:41:54.912593    9926 logs.go:123] Gathering logs for coredns [c8ddfa007847] ...
	I1204 15:41:54.912605    9926 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c8ddfa007847"
	I1204 15:41:54.923793    9926 logs.go:123] Gathering logs for coredns [722dc28f5a86] ...
	I1204 15:41:54.923807    9926 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 722dc28f5a86"
	I1204 15:41:54.936176    9926 logs.go:123] Gathering logs for coredns [0a3178099d31] ...
	I1204 15:41:54.936187    9926 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0a3178099d31"
	I1204 15:41:54.948564    9926 logs.go:123] Gathering logs for kube-scheduler [a82022fd2242] ...
	I1204 15:41:54.948578    9926 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a82022fd2242"
	I1204 15:41:54.966576    9926 logs.go:123] Gathering logs for Docker ...
	I1204 15:41:54.966587    9926 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1204 15:41:54.991686    9926 logs.go:123] Gathering logs for storage-provisioner [bdb47901ddb7] ...
	I1204 15:41:54.991696    9926 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bdb47901ddb7"
	I1204 15:41:55.004622    9926 logs.go:123] Gathering logs for describe nodes ...
	I1204 15:41:55.004633    9926 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1204 15:41:55.039172    9926 logs.go:123] Gathering logs for etcd [17b0ed658f6c] ...
	I1204 15:41:55.039184    9926 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 17b0ed658f6c"
	I1204 15:41:55.053535    9926 logs.go:123] Gathering logs for coredns [11d10ac44548] ...
	I1204 15:41:55.053546    9926 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 11d10ac44548"
	I1204 15:41:55.065364    9926 logs.go:123] Gathering logs for kube-proxy [8f7d833df5c3] ...
	I1204 15:41:55.065375    9926 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8f7d833df5c3"
	I1204 15:41:57.579251    9926 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1204 15:42:02.581761    9926 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1204 15:42:02.581983    9926 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1204 15:42:02.603560    9926 logs.go:282] 1 containers: [f3328f94ed0d]
	I1204 15:42:02.603684    9926 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1204 15:42:02.619197    9926 logs.go:282] 1 containers: [17b0ed658f6c]
	I1204 15:42:02.619292    9926 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1204 15:42:02.633347    9926 logs.go:282] 4 containers: [11d10ac44548 722dc28f5a86 0a3178099d31 c8ddfa007847]
	I1204 15:42:02.633436    9926 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1204 15:42:02.649634    9926 logs.go:282] 1 containers: [a82022fd2242]
	I1204 15:42:02.649709    9926 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1204 15:42:02.662590    9926 logs.go:282] 1 containers: [8f7d833df5c3]
	I1204 15:42:02.662660    9926 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1204 15:42:02.673006    9926 logs.go:282] 1 containers: [bdf1070f199f]
	I1204 15:42:02.673071    9926 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1204 15:42:02.683545    9926 logs.go:282] 0 containers: []
	W1204 15:42:02.683557    9926 logs.go:284] No container was found matching "kindnet"
	I1204 15:42:02.683623    9926 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1204 15:42:02.693656    9926 logs.go:282] 1 containers: [bdb47901ddb7]
	I1204 15:42:02.693672    9926 logs.go:123] Gathering logs for kubelet ...
	I1204 15:42:02.693678    9926 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1204 15:42:02.730025    9926 logs.go:123] Gathering logs for etcd [17b0ed658f6c] ...
	I1204 15:42:02.730039    9926 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 17b0ed658f6c"
	I1204 15:42:02.744597    9926 logs.go:123] Gathering logs for kube-scheduler [a82022fd2242] ...
	I1204 15:42:02.744612    9926 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a82022fd2242"
	I1204 15:42:02.759892    9926 logs.go:123] Gathering logs for kube-proxy [8f7d833df5c3] ...
	I1204 15:42:02.759907    9926 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8f7d833df5c3"
	I1204 15:42:02.774384    9926 logs.go:123] Gathering logs for Docker ...
	I1204 15:42:02.774395    9926 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1204 15:42:02.799258    9926 logs.go:123] Gathering logs for coredns [722dc28f5a86] ...
	I1204 15:42:02.799272    9926 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 722dc28f5a86"
	I1204 15:42:02.812041    9926 logs.go:123] Gathering logs for coredns [0a3178099d31] ...
	I1204 15:42:02.812051    9926 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0a3178099d31"
	I1204 15:42:02.824440    9926 logs.go:123] Gathering logs for storage-provisioner [bdb47901ddb7] ...
	I1204 15:42:02.824454    9926 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bdb47901ddb7"
	I1204 15:42:02.837351    9926 logs.go:123] Gathering logs for dmesg ...
	I1204 15:42:02.837364    9926 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1204 15:42:02.842220    9926 logs.go:123] Gathering logs for coredns [11d10ac44548] ...
	I1204 15:42:02.842228    9926 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 11d10ac44548"
	I1204 15:42:02.854182    9926 logs.go:123] Gathering logs for describe nodes ...
	I1204 15:42:02.854194    9926 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1204 15:42:02.892560    9926 logs.go:123] Gathering logs for kube-apiserver [f3328f94ed0d] ...
	I1204 15:42:02.892575    9926 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f3328f94ed0d"
	I1204 15:42:02.907263    9926 logs.go:123] Gathering logs for coredns [c8ddfa007847] ...
	I1204 15:42:02.907276    9926 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c8ddfa007847"
	I1204 15:42:02.918869    9926 logs.go:123] Gathering logs for kube-controller-manager [bdf1070f199f] ...
	I1204 15:42:02.918883    9926 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bdf1070f199f"
	I1204 15:42:02.942148    9926 logs.go:123] Gathering logs for container status ...
	I1204 15:42:02.942160    9926 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1204 15:42:05.456840    9926 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1204 15:42:10.459673    9926 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1204 15:42:10.459991    9926 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1204 15:42:10.493955    9926 logs.go:282] 1 containers: [f3328f94ed0d]
	I1204 15:42:10.494106    9926 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1204 15:42:10.513289    9926 logs.go:282] 1 containers: [17b0ed658f6c]
	I1204 15:42:10.513406    9926 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1204 15:42:10.528639    9926 logs.go:282] 4 containers: [11d10ac44548 722dc28f5a86 0a3178099d31 c8ddfa007847]
	I1204 15:42:10.528730    9926 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1204 15:42:10.540401    9926 logs.go:282] 1 containers: [a82022fd2242]
	I1204 15:42:10.540469    9926 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1204 15:42:10.551437    9926 logs.go:282] 1 containers: [8f7d833df5c3]
	I1204 15:42:10.551514    9926 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1204 15:42:10.562025    9926 logs.go:282] 1 containers: [bdf1070f199f]
	I1204 15:42:10.562109    9926 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1204 15:42:10.572117    9926 logs.go:282] 0 containers: []
	W1204 15:42:10.572132    9926 logs.go:284] No container was found matching "kindnet"
	I1204 15:42:10.572205    9926 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1204 15:42:10.582399    9926 logs.go:282] 1 containers: [bdb47901ddb7]
	I1204 15:42:10.582415    9926 logs.go:123] Gathering logs for coredns [11d10ac44548] ...
	I1204 15:42:10.582421    9926 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 11d10ac44548"
	I1204 15:42:10.593889    9926 logs.go:123] Gathering logs for container status ...
	I1204 15:42:10.593903    9926 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1204 15:42:10.606438    9926 logs.go:123] Gathering logs for kubelet ...
	I1204 15:42:10.606452    9926 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1204 15:42:10.643661    9926 logs.go:123] Gathering logs for kube-apiserver [f3328f94ed0d] ...
	I1204 15:42:10.643671    9926 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f3328f94ed0d"
	I1204 15:42:10.658460    9926 logs.go:123] Gathering logs for coredns [722dc28f5a86] ...
	I1204 15:42:10.658472    9926 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 722dc28f5a86"
	I1204 15:42:10.670055    9926 logs.go:123] Gathering logs for coredns [c8ddfa007847] ...
	I1204 15:42:10.670067    9926 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c8ddfa007847"
	I1204 15:42:10.687496    9926 logs.go:123] Gathering logs for storage-provisioner [bdb47901ddb7] ...
	I1204 15:42:10.687507    9926 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bdb47901ddb7"
	I1204 15:42:10.699808    9926 logs.go:123] Gathering logs for describe nodes ...
	I1204 15:42:10.699819    9926 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1204 15:42:10.743490    9926 logs.go:123] Gathering logs for kube-proxy [8f7d833df5c3] ...
	I1204 15:42:10.743504    9926 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8f7d833df5c3"
	I1204 15:42:10.755445    9926 logs.go:123] Gathering logs for kube-controller-manager [bdf1070f199f] ...
	I1204 15:42:10.755460    9926 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bdf1070f199f"
	I1204 15:42:10.773430    9926 logs.go:123] Gathering logs for kube-scheduler [a82022fd2242] ...
	I1204 15:42:10.773440    9926 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a82022fd2242"
	I1204 15:42:10.788189    9926 logs.go:123] Gathering logs for etcd [17b0ed658f6c] ...
	I1204 15:42:10.788203    9926 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 17b0ed658f6c"
	I1204 15:42:10.802067    9926 logs.go:123] Gathering logs for coredns [0a3178099d31] ...
	I1204 15:42:10.802079    9926 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0a3178099d31"
	I1204 15:42:10.814089    9926 logs.go:123] Gathering logs for Docker ...
	I1204 15:42:10.814099    9926 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1204 15:42:10.838932    9926 logs.go:123] Gathering logs for dmesg ...
	I1204 15:42:10.838943    9926 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1204 15:42:13.346138    9926 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1204 15:42:18.348528    9926 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1204 15:42:18.348641    9926 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1204 15:42:18.360037    9926 logs.go:282] 1 containers: [f3328f94ed0d]
	I1204 15:42:18.360121    9926 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1204 15:42:18.378088    9926 logs.go:282] 1 containers: [17b0ed658f6c]
	I1204 15:42:18.378173    9926 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1204 15:42:18.389330    9926 logs.go:282] 4 containers: [11d10ac44548 722dc28f5a86 0a3178099d31 c8ddfa007847]
	I1204 15:42:18.389412    9926 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1204 15:42:18.405731    9926 logs.go:282] 1 containers: [a82022fd2242]
	I1204 15:42:18.405810    9926 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1204 15:42:18.416987    9926 logs.go:282] 1 containers: [8f7d833df5c3]
	I1204 15:42:18.417070    9926 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1204 15:42:18.430891    9926 logs.go:282] 1 containers: [bdf1070f199f]
	I1204 15:42:18.430967    9926 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1204 15:42:18.441466    9926 logs.go:282] 0 containers: []
	W1204 15:42:18.441477    9926 logs.go:284] No container was found matching "kindnet"
	I1204 15:42:18.441544    9926 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1204 15:42:18.452580    9926 logs.go:282] 1 containers: [bdb47901ddb7]
	I1204 15:42:18.452602    9926 logs.go:123] Gathering logs for dmesg ...
	I1204 15:42:18.452609    9926 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1204 15:42:18.458340    9926 logs.go:123] Gathering logs for kube-apiserver [f3328f94ed0d] ...
	I1204 15:42:18.458352    9926 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f3328f94ed0d"
	I1204 15:42:18.473026    9926 logs.go:123] Gathering logs for etcd [17b0ed658f6c] ...
	I1204 15:42:18.473041    9926 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 17b0ed658f6c"
	I1204 15:42:18.493677    9926 logs.go:123] Gathering logs for coredns [722dc28f5a86] ...
	I1204 15:42:18.493697    9926 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 722dc28f5a86"
	I1204 15:42:18.506196    9926 logs.go:123] Gathering logs for kube-controller-manager [bdf1070f199f] ...
	I1204 15:42:18.506209    9926 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bdf1070f199f"
	I1204 15:42:18.524611    9926 logs.go:123] Gathering logs for kubelet ...
	I1204 15:42:18.524623    9926 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1204 15:42:18.564743    9926 logs.go:123] Gathering logs for describe nodes ...
	I1204 15:42:18.564763    9926 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1204 15:42:18.604864    9926 logs.go:123] Gathering logs for storage-provisioner [bdb47901ddb7] ...
	I1204 15:42:18.604877    9926 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bdb47901ddb7"
	I1204 15:42:18.617195    9926 logs.go:123] Gathering logs for Docker ...
	I1204 15:42:18.617208    9926 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1204 15:42:18.643270    9926 logs.go:123] Gathering logs for coredns [0a3178099d31] ...
	I1204 15:42:18.643284    9926 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0a3178099d31"
	I1204 15:42:18.655490    9926 logs.go:123] Gathering logs for coredns [c8ddfa007847] ...
	I1204 15:42:18.655502    9926 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c8ddfa007847"
	I1204 15:42:18.667635    9926 logs.go:123] Gathering logs for kube-scheduler [a82022fd2242] ...
	I1204 15:42:18.667646    9926 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a82022fd2242"
	I1204 15:42:18.683286    9926 logs.go:123] Gathering logs for coredns [11d10ac44548] ...
	I1204 15:42:18.683297    9926 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 11d10ac44548"
	I1204 15:42:18.702118    9926 logs.go:123] Gathering logs for kube-proxy [8f7d833df5c3] ...
	I1204 15:42:18.702129    9926 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8f7d833df5c3"
	I1204 15:42:18.715660    9926 logs.go:123] Gathering logs for container status ...
	I1204 15:42:18.715672    9926 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1204 15:42:21.230080    9926 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1204 15:42:26.232493    9926 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1204 15:42:26.232615    9926 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1204 15:42:26.245330    9926 logs.go:282] 1 containers: [f3328f94ed0d]
	I1204 15:42:26.245415    9926 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1204 15:42:26.256678    9926 logs.go:282] 1 containers: [17b0ed658f6c]
	I1204 15:42:26.256768    9926 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1204 15:42:26.267542    9926 logs.go:282] 4 containers: [11d10ac44548 722dc28f5a86 0a3178099d31 c8ddfa007847]
	I1204 15:42:26.267625    9926 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1204 15:42:26.278363    9926 logs.go:282] 1 containers: [a82022fd2242]
	I1204 15:42:26.278436    9926 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1204 15:42:26.293610    9926 logs.go:282] 1 containers: [8f7d833df5c3]
	I1204 15:42:26.293687    9926 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1204 15:42:26.304329    9926 logs.go:282] 1 containers: [bdf1070f199f]
	I1204 15:42:26.304409    9926 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1204 15:42:26.314739    9926 logs.go:282] 0 containers: []
	W1204 15:42:26.314752    9926 logs.go:284] No container was found matching "kindnet"
	I1204 15:42:26.314819    9926 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1204 15:42:26.325776    9926 logs.go:282] 1 containers: [bdb47901ddb7]
	I1204 15:42:26.325792    9926 logs.go:123] Gathering logs for dmesg ...
	I1204 15:42:26.325797    9926 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1204 15:42:26.330349    9926 logs.go:123] Gathering logs for coredns [0a3178099d31] ...
	I1204 15:42:26.330357    9926 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0a3178099d31"
	I1204 15:42:26.342155    9926 logs.go:123] Gathering logs for coredns [c8ddfa007847] ...
	I1204 15:42:26.342167    9926 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c8ddfa007847"
	I1204 15:42:26.355104    9926 logs.go:123] Gathering logs for kube-scheduler [a82022fd2242] ...
	I1204 15:42:26.355115    9926 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a82022fd2242"
	I1204 15:42:26.369951    9926 logs.go:123] Gathering logs for container status ...
	I1204 15:42:26.369961    9926 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1204 15:42:26.381893    9926 logs.go:123] Gathering logs for etcd [17b0ed658f6c] ...
	I1204 15:42:26.381903    9926 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 17b0ed658f6c"
	I1204 15:42:26.395469    9926 logs.go:123] Gathering logs for coredns [11d10ac44548] ...
	I1204 15:42:26.395480    9926 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 11d10ac44548"
	I1204 15:42:26.407252    9926 logs.go:123] Gathering logs for kube-proxy [8f7d833df5c3] ...
	I1204 15:42:26.407262    9926 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8f7d833df5c3"
	I1204 15:42:26.418765    9926 logs.go:123] Gathering logs for describe nodes ...
	I1204 15:42:26.418776    9926 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1204 15:42:26.454668    9926 logs.go:123] Gathering logs for storage-provisioner [bdb47901ddb7] ...
	I1204 15:42:26.454682    9926 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bdb47901ddb7"
	I1204 15:42:26.466699    9926 logs.go:123] Gathering logs for kubelet ...
	I1204 15:42:26.466712    9926 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1204 15:42:26.503053    9926 logs.go:123] Gathering logs for kube-apiserver [f3328f94ed0d] ...
	I1204 15:42:26.503064    9926 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f3328f94ed0d"
	I1204 15:42:26.517412    9926 logs.go:123] Gathering logs for coredns [722dc28f5a86] ...
	I1204 15:42:26.517426    9926 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 722dc28f5a86"
	I1204 15:42:26.533464    9926 logs.go:123] Gathering logs for kube-controller-manager [bdf1070f199f] ...
	I1204 15:42:26.533475    9926 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bdf1070f199f"
	I1204 15:42:26.551880    9926 logs.go:123] Gathering logs for Docker ...
	I1204 15:42:26.551892    9926 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1204 15:42:29.078589    9926 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1204 15:42:34.081017    9926 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1204 15:42:34.081133    9926 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1204 15:42:34.091751    9926 logs.go:282] 1 containers: [f3328f94ed0d]
	I1204 15:42:34.091834    9926 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1204 15:42:34.106224    9926 logs.go:282] 1 containers: [17b0ed658f6c]
	I1204 15:42:34.106296    9926 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1204 15:42:34.116847    9926 logs.go:282] 4 containers: [11d10ac44548 722dc28f5a86 0a3178099d31 c8ddfa007847]
	I1204 15:42:34.116917    9926 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1204 15:42:34.128733    9926 logs.go:282] 1 containers: [a82022fd2242]
	I1204 15:42:34.128811    9926 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1204 15:42:34.139682    9926 logs.go:282] 1 containers: [8f7d833df5c3]
	I1204 15:42:34.139759    9926 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1204 15:42:34.150937    9926 logs.go:282] 1 containers: [bdf1070f199f]
	I1204 15:42:34.151007    9926 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1204 15:42:34.161129    9926 logs.go:282] 0 containers: []
	W1204 15:42:34.161142    9926 logs.go:284] No container was found matching "kindnet"
	I1204 15:42:34.161216    9926 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1204 15:42:34.171526    9926 logs.go:282] 1 containers: [bdb47901ddb7]
	I1204 15:42:34.171544    9926 logs.go:123] Gathering logs for kube-controller-manager [bdf1070f199f] ...
	I1204 15:42:34.171550    9926 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bdf1070f199f"
	I1204 15:42:34.190145    9926 logs.go:123] Gathering logs for container status ...
	I1204 15:42:34.190159    9926 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1204 15:42:34.202454    9926 logs.go:123] Gathering logs for kube-apiserver [f3328f94ed0d] ...
	I1204 15:42:34.202470    9926 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f3328f94ed0d"
	I1204 15:42:34.216602    9926 logs.go:123] Gathering logs for coredns [c8ddfa007847] ...
	I1204 15:42:34.216614    9926 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c8ddfa007847"
	I1204 15:42:34.230453    9926 logs.go:123] Gathering logs for storage-provisioner [bdb47901ddb7] ...
	I1204 15:42:34.230465    9926 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bdb47901ddb7"
	I1204 15:42:34.242363    9926 logs.go:123] Gathering logs for dmesg ...
	I1204 15:42:34.242373    9926 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1204 15:42:34.246826    9926 logs.go:123] Gathering logs for describe nodes ...
	I1204 15:42:34.246837    9926 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1204 15:42:34.283344    9926 logs.go:123] Gathering logs for etcd [17b0ed658f6c] ...
	I1204 15:42:34.283354    9926 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 17b0ed658f6c"
	I1204 15:42:34.299456    9926 logs.go:123] Gathering logs for coredns [722dc28f5a86] ...
	I1204 15:42:34.299467    9926 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 722dc28f5a86"
	I1204 15:42:34.311356    9926 logs.go:123] Gathering logs for kube-scheduler [a82022fd2242] ...
	I1204 15:42:34.311368    9926 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a82022fd2242"
	I1204 15:42:34.327141    9926 logs.go:123] Gathering logs for kube-proxy [8f7d833df5c3] ...
	I1204 15:42:34.327154    9926 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8f7d833df5c3"
	I1204 15:42:34.338921    9926 logs.go:123] Gathering logs for Docker ...
	I1204 15:42:34.338931    9926 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1204 15:42:34.362374    9926 logs.go:123] Gathering logs for kubelet ...
	I1204 15:42:34.362382    9926 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1204 15:42:34.399625    9926 logs.go:123] Gathering logs for coredns [0a3178099d31] ...
	I1204 15:42:34.399640    9926 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0a3178099d31"
	I1204 15:42:34.411489    9926 logs.go:123] Gathering logs for coredns [11d10ac44548] ...
	I1204 15:42:34.411502    9926 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 11d10ac44548"
	I1204 15:42:36.925046    9926 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1204 15:42:41.927311    9926 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1204 15:42:41.927560    9926 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1204 15:42:41.951099    9926 logs.go:282] 1 containers: [f3328f94ed0d]
	I1204 15:42:41.951218    9926 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1204 15:42:41.966378    9926 logs.go:282] 1 containers: [17b0ed658f6c]
	I1204 15:42:41.966469    9926 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1204 15:42:41.979318    9926 logs.go:282] 4 containers: [11d10ac44548 722dc28f5a86 0a3178099d31 c8ddfa007847]
	I1204 15:42:41.979406    9926 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1204 15:42:41.995352    9926 logs.go:282] 1 containers: [a82022fd2242]
	I1204 15:42:41.995433    9926 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1204 15:42:42.010160    9926 logs.go:282] 1 containers: [8f7d833df5c3]
	I1204 15:42:42.010232    9926 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1204 15:42:42.020826    9926 logs.go:282] 1 containers: [bdf1070f199f]
	I1204 15:42:42.020903    9926 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1204 15:42:42.035457    9926 logs.go:282] 0 containers: []
	W1204 15:42:42.035469    9926 logs.go:284] No container was found matching "kindnet"
	I1204 15:42:42.035540    9926 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1204 15:42:42.045989    9926 logs.go:282] 1 containers: [bdb47901ddb7]
	I1204 15:42:42.046006    9926 logs.go:123] Gathering logs for kube-apiserver [f3328f94ed0d] ...
	I1204 15:42:42.046011    9926 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f3328f94ed0d"
	I1204 15:42:42.062128    9926 logs.go:123] Gathering logs for coredns [0a3178099d31] ...
	I1204 15:42:42.062142    9926 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0a3178099d31"
	I1204 15:42:42.077367    9926 logs.go:123] Gathering logs for coredns [c8ddfa007847] ...
	I1204 15:42:42.077381    9926 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c8ddfa007847"
	I1204 15:42:42.088614    9926 logs.go:123] Gathering logs for storage-provisioner [bdb47901ddb7] ...
	I1204 15:42:42.088626    9926 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bdb47901ddb7"
	I1204 15:42:42.100066    9926 logs.go:123] Gathering logs for describe nodes ...
	I1204 15:42:42.100080    9926 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1204 15:42:42.134423    9926 logs.go:123] Gathering logs for coredns [11d10ac44548] ...
	I1204 15:42:42.134434    9926 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 11d10ac44548"
	I1204 15:42:42.145785    9926 logs.go:123] Gathering logs for kube-scheduler [a82022fd2242] ...
	I1204 15:42:42.145797    9926 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a82022fd2242"
	I1204 15:42:42.160841    9926 logs.go:123] Gathering logs for Docker ...
	I1204 15:42:42.160854    9926 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1204 15:42:42.183575    9926 logs.go:123] Gathering logs for container status ...
	I1204 15:42:42.183584    9926 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1204 15:42:42.196190    9926 logs.go:123] Gathering logs for kubelet ...
	I1204 15:42:42.196200    9926 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1204 15:42:42.232867    9926 logs.go:123] Gathering logs for kube-controller-manager [bdf1070f199f] ...
	I1204 15:42:42.232875    9926 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bdf1070f199f"
	I1204 15:42:42.250372    9926 logs.go:123] Gathering logs for coredns [722dc28f5a86] ...
	I1204 15:42:42.250385    9926 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 722dc28f5a86"
	I1204 15:42:42.266143    9926 logs.go:123] Gathering logs for etcd [17b0ed658f6c] ...
	I1204 15:42:42.266153    9926 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 17b0ed658f6c"
	I1204 15:42:42.280248    9926 logs.go:123] Gathering logs for kube-proxy [8f7d833df5c3] ...
	I1204 15:42:42.280259    9926 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8f7d833df5c3"
	I1204 15:42:42.293102    9926 logs.go:123] Gathering logs for dmesg ...
	I1204 15:42:42.293113    9926 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1204 15:42:44.798417    9926 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1204 15:42:49.800735    9926 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1204 15:42:49.805870    9926 out.go:201] 
	W1204 15:42:49.809901    9926 out.go:270] X Exiting due to GUEST_START: failed to start node: wait 6m0s for node: wait for healthy API server: apiserver healthz never reported healthy: context deadline exceeded
	W1204 15:42:49.809908    9926 out.go:270] * 
	W1204 15:42:49.810344    9926 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1204 15:42:49.823806    9926 out.go:201] 

** /stderr **
version_upgrade_test.go:132: upgrade from v1.26.0 to HEAD failed: out/minikube-darwin-arm64 start -p running-upgrade-084000 --memory=2200 --alsologtostderr -v=1 --driver=qemu2 : exit status 80
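The failing scenario is a two-step in-place upgrade: the v1.26.0 release creates the cluster (which succeeded, per the Audit table below), then the freshly built HEAD binary runs `start` on the same profile and must bring the apiserver back to healthy within 6m0s. A sketch of that flow under stated assumptions: the old-binary path is hypothetical (the test fetches the release to a temporary location), and exit status 80 is simply what the HEAD start returned in this run:

```go
// Shape of the upgrade scenario that failed here. Binary paths and the
// profile name mirror this run's Audit table; they are not suite constants.
package main

import (
	"fmt"
	"os/exec"
)

func run(bin string, args ...string) error {
	cmd := exec.Command(bin, args...)
	out, err := cmd.CombinedOutput()
	fmt.Printf("$ %s %v\n%s\n", bin, args, out)
	return err // a non-nil *exec.ExitError here carried exit status 80
}

func main() {
	profile := "running-upgrade-084000"
	// 1. Old release creates the cluster (this step succeeded).
	_ = run("minikube-v1.26.0", "start", "-p", profile, "--memory=2200", "--vm-driver=qemu2")
	// 2. HEAD build upgrades it in place (this step timed out waiting for healthz).
	if err := run("out/minikube-darwin-arm64", "start", "-p", profile,
		"--memory=2200", "--alsologtostderr", "-v=1", "--driver=qemu2"); err != nil {
		fmt.Println("upgrade start failed:", err)
	}
}
```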
panic.go:629: *** TestRunningBinaryUpgrade FAILED at 2024-12-04 15:42:49.911127 -0800 PST m=+1314.932927668
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p running-upgrade-084000 -n running-upgrade-084000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p running-upgrade-084000 -n running-upgrade-084000: exit status 2 (15.698150167s)

-- stdout --
	Running

-- /stdout --
helpers_test.go:239: status error: exit status 2 (may be ok)
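The post-mortem starts by checking host state: `status --format={{.Host}}` prints "Running" yet exits 2, which the harness tolerates ("may be ok") because the VM can be up while kubelet or the apiserver are not. A sketch of reading both the printed state and the exit code; the exit-code interpretation is inferred from the harness note, not from minikube documentation:

```go
// Read both the Host state and the process exit code from `minikube status`.
// A non-zero exit alongside "Running" suggests unhealthy cluster components
// on a live VM (inferred from the harness's "may be ok" handling).
package main

import (
	"errors"
	"fmt"
	"os/exec"
)

func main() {
	cmd := exec.Command("out/minikube-darwin-arm64", "status",
		"--format", "{{.Host}}", "-p", "running-upgrade-084000")
	out, err := cmd.Output()
	fmt.Printf("host: %s", out) // e.g. "Running"
	var exitErr *exec.ExitError
	if errors.As(err, &exitErr) {
		fmt.Println("status exit code:", exitErr.ExitCode())
	}
}
```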
helpers_test.go:244: <<< TestRunningBinaryUpgrade FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestRunningBinaryUpgrade]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-darwin-arm64 -p running-upgrade-084000 logs -n 25
helpers_test.go:252: TestRunningBinaryUpgrade logs: 
-- stdout --
	
	==> Audit <==
	|---------|---------------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	| Command |                 Args                  |          Profile          |  User   | Version |     Start Time      |      End Time       |
	|---------|---------------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	| start   | -p force-systemd-flag-064000          | force-systemd-flag-064000 | jenkins | v1.34.0 | 04 Dec 24 15:33 PST |                     |
	|         | --memory=2048 --force-systemd         |                           |         |         |                     |                     |
	|         | --alsologtostderr -v=5                |                           |         |         |                     |                     |
	|         | --driver=qemu2                        |                           |         |         |                     |                     |
	| ssh     | force-systemd-env-829000              | force-systemd-env-829000  | jenkins | v1.34.0 | 04 Dec 24 15:33 PST |                     |
	|         | ssh docker info --format              |                           |         |         |                     |                     |
	|         | {{.CgroupDriver}}                     |                           |         |         |                     |                     |
	| delete  | -p force-systemd-env-829000           | force-systemd-env-829000  | jenkins | v1.34.0 | 04 Dec 24 15:33 PST | 04 Dec 24 15:33 PST |
	| start   | -p docker-flags-438000                | docker-flags-438000       | jenkins | v1.34.0 | 04 Dec 24 15:33 PST |                     |
	|         | --cache-images=false                  |                           |         |         |                     |                     |
	|         | --memory=2048                         |                           |         |         |                     |                     |
	|         | --install-addons=false                |                           |         |         |                     |                     |
	|         | --wait=false                          |                           |         |         |                     |                     |
	|         | --docker-env=FOO=BAR                  |                           |         |         |                     |                     |
	|         | --docker-env=BAZ=BAT                  |                           |         |         |                     |                     |
	|         | --docker-opt=debug                    |                           |         |         |                     |                     |
	|         | --docker-opt=icc=true                 |                           |         |         |                     |                     |
	|         | --alsologtostderr -v=5                |                           |         |         |                     |                     |
	|         | --driver=qemu2                        |                           |         |         |                     |                     |
	| ssh     | force-systemd-flag-064000             | force-systemd-flag-064000 | jenkins | v1.34.0 | 04 Dec 24 15:33 PST |                     |
	|         | ssh docker info --format              |                           |         |         |                     |                     |
	|         | {{.CgroupDriver}}                     |                           |         |         |                     |                     |
	| delete  | -p force-systemd-flag-064000          | force-systemd-flag-064000 | jenkins | v1.34.0 | 04 Dec 24 15:33 PST | 04 Dec 24 15:33 PST |
	| start   | -p cert-expiration-397000             | cert-expiration-397000    | jenkins | v1.34.0 | 04 Dec 24 15:33 PST |                     |
	|         | --memory=2048                         |                           |         |         |                     |                     |
	|         | --cert-expiration=3m                  |                           |         |         |                     |                     |
	|         | --driver=qemu2                        |                           |         |         |                     |                     |
	| ssh     | docker-flags-438000 ssh               | docker-flags-438000       | jenkins | v1.34.0 | 04 Dec 24 15:33 PST |                     |
	|         | sudo systemctl show docker            |                           |         |         |                     |                     |
	|         | --property=Environment                |                           |         |         |                     |                     |
	|         | --no-pager                            |                           |         |         |                     |                     |
	| ssh     | docker-flags-438000 ssh               | docker-flags-438000       | jenkins | v1.34.0 | 04 Dec 24 15:33 PST |                     |
	|         | sudo systemctl show docker            |                           |         |         |                     |                     |
	|         | --property=ExecStart                  |                           |         |         |                     |                     |
	|         | --no-pager                            |                           |         |         |                     |                     |
	| delete  | -p docker-flags-438000                | docker-flags-438000       | jenkins | v1.34.0 | 04 Dec 24 15:33 PST | 04 Dec 24 15:33 PST |
	| start   | -p cert-options-100000                | cert-options-100000       | jenkins | v1.34.0 | 04 Dec 24 15:33 PST |                     |
	|         | --memory=2048                         |                           |         |         |                     |                     |
	|         | --apiserver-ips=127.0.0.1             |                           |         |         |                     |                     |
	|         | --apiserver-ips=192.168.15.15         |                           |         |         |                     |                     |
	|         | --apiserver-names=localhost           |                           |         |         |                     |                     |
	|         | --apiserver-names=www.google.com      |                           |         |         |                     |                     |
	|         | --apiserver-port=8555                 |                           |         |         |                     |                     |
	|         | --driver=qemu2                        |                           |         |         |                     |                     |
	| ssh     | cert-options-100000 ssh               | cert-options-100000       | jenkins | v1.34.0 | 04 Dec 24 15:33 PST |                     |
	|         | openssl x509 -text -noout -in         |                           |         |         |                     |                     |
	|         | /var/lib/minikube/certs/apiserver.crt |                           |         |         |                     |                     |
	| ssh     | -p cert-options-100000 -- sudo        | cert-options-100000       | jenkins | v1.34.0 | 04 Dec 24 15:33 PST |                     |
	|         | cat /etc/kubernetes/admin.conf        |                           |         |         |                     |                     |
	| delete  | -p cert-options-100000                | cert-options-100000       | jenkins | v1.34.0 | 04 Dec 24 15:33 PST | 04 Dec 24 15:33 PST |
	| start   | -p running-upgrade-084000             | minikube                  | jenkins | v1.26.0 | 04 Dec 24 15:33 PST | 04 Dec 24 15:34 PST |
	|         | --memory=2200                         |                           |         |         |                     |                     |
	|         | --vm-driver=qemu2                     |                           |         |         |                     |                     |
	| start   | -p running-upgrade-084000             | running-upgrade-084000    | jenkins | v1.34.0 | 04 Dec 24 15:34 PST |                     |
	|         | --memory=2200                         |                           |         |         |                     |                     |
	|         | --alsologtostderr -v=1                |                           |         |         |                     |                     |
	|         | --driver=qemu2                        |                           |         |         |                     |                     |
	| start   | -p cert-expiration-397000             | cert-expiration-397000    | jenkins | v1.34.0 | 04 Dec 24 15:36 PST |                     |
	|         | --memory=2048                         |                           |         |         |                     |                     |
	|         | --cert-expiration=8760h               |                           |         |         |                     |                     |
	|         | --driver=qemu2                        |                           |         |         |                     |                     |
	| delete  | -p cert-expiration-397000             | cert-expiration-397000    | jenkins | v1.34.0 | 04 Dec 24 15:36 PST | 04 Dec 24 15:36 PST |
	| start   | -p kubernetes-upgrade-989000          | kubernetes-upgrade-989000 | jenkins | v1.34.0 | 04 Dec 24 15:36 PST |                     |
	|         | --memory=2200                         |                           |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0          |                           |         |         |                     |                     |
	|         | --alsologtostderr -v=1                |                           |         |         |                     |                     |
	|         | --driver=qemu2                        |                           |         |         |                     |                     |
	| stop    | -p kubernetes-upgrade-989000          | kubernetes-upgrade-989000 | jenkins | v1.34.0 | 04 Dec 24 15:36 PST | 04 Dec 24 15:36 PST |
	| start   | -p kubernetes-upgrade-989000          | kubernetes-upgrade-989000 | jenkins | v1.34.0 | 04 Dec 24 15:36 PST |                     |
	|         | --memory=2200                         |                           |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.2          |                           |         |         |                     |                     |
	|         | --alsologtostderr -v=1                |                           |         |         |                     |                     |
	|         | --driver=qemu2                        |                           |         |         |                     |                     |
	| delete  | -p kubernetes-upgrade-989000          | kubernetes-upgrade-989000 | jenkins | v1.34.0 | 04 Dec 24 15:36 PST | 04 Dec 24 15:36 PST |
	| start   | -p stopped-upgrade-377000             | minikube                  | jenkins | v1.26.0 | 04 Dec 24 15:36 PST | 04 Dec 24 15:37 PST |
	|         | --memory=2200                         |                           |         |         |                     |                     |
	|         | --vm-driver=qemu2                     |                           |         |         |                     |                     |
	| stop    | stopped-upgrade-377000 stop           | minikube                  | jenkins | v1.26.0 | 04 Dec 24 15:37 PST | 04 Dec 24 15:37 PST |
	| start   | -p stopped-upgrade-377000             | stopped-upgrade-377000    | jenkins | v1.34.0 | 04 Dec 24 15:37 PST |                     |
	|         | --memory=2200                         |                           |         |         |                     |                     |
	|         | --alsologtostderr -v=1                |                           |         |         |                     |                     |
	|         | --driver=qemu2                        |                           |         |         |                     |                     |
	|---------|---------------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/12/04 15:37:41
	Running on machine: MacOS-M1-Agent-1
	Binary: Built with gc go1.23.2 for darwin/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
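
The line above documents the glog-style prefix every entry below carries: a severity letter (I/W/E/F), the date as mmdd, a microsecond wall-clock time, the emitting thread/process id, and the source file:line. A minimal Go sketch for splitting such lines (the regexp and field names are mine, not part of minikube):

	// Parse "[IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg" entries.
	package main

	import (
		"fmt"
		"regexp"
	)

	var logLine = regexp.MustCompile(`^([IWEF])(\d{4}) (\d{2}:\d{2}:\d{2}\.\d+)\s+(\d+) ([\w./-]+:\d+)\] (.*)$`)

	func main() {
		sample := "I1204 15:37:41.779892   10206 out.go:345] Setting OutFile to fd 1 ..."
		if m := logLine.FindStringSubmatch(sample); m != nil {
			fmt.Printf("level=%s date=%s time=%s tid=%s src=%s msg=%q\n",
				m[1], m[2], m[3], m[4], m[5], m[6])
		}
	}
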
	I1204 15:37:41.779892   10206 out.go:345] Setting OutFile to fd 1 ...
	I1204 15:37:41.780089   10206 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1204 15:37:41.780094   10206 out.go:358] Setting ErrFile to fd 2...
	I1204 15:37:41.780096   10206 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1204 15:37:41.780250   10206 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/20045-6982/.minikube/bin
	I1204 15:37:41.781433   10206 out.go:352] Setting JSON to false
	I1204 15:37:41.801300   10206 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":5831,"bootTime":1733349630,"procs":549,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"15.0.1","kernelVersion":"24.0.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W1204 15:37:41.801407   10206 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I1204 15:37:38.307304    9926 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1204 15:37:38.307581    9926 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1204 15:37:38.329002    9926 logs.go:282] 2 containers: [9bb28cca02e8 e4f90e1b9024]
	I1204 15:37:38.329121    9926 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1204 15:37:38.346014    9926 logs.go:282] 2 containers: [a32a7806b496 ca0c907ad43c]
	I1204 15:37:38.346101    9926 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1204 15:37:38.360685    9926 logs.go:282] 1 containers: [08e34a589c88]
	I1204 15:37:38.360772    9926 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1204 15:37:38.371177    9926 logs.go:282] 2 containers: [7fbf81020cc8 f18e443b8788]
	I1204 15:37:38.371251    9926 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1204 15:37:38.381918    9926 logs.go:282] 1 containers: [5620f978e468]
	I1204 15:37:38.382003    9926 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1204 15:37:38.393741    9926 logs.go:282] 2 containers: [d8b20ca47793 e94aa66fc745]
	I1204 15:37:38.393824    9926 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1204 15:37:38.411894    9926 logs.go:282] 0 containers: []
	W1204 15:37:38.411908    9926 logs.go:284] No container was found matching "kindnet"
	I1204 15:37:38.411972    9926 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1204 15:37:38.422085    9926 logs.go:282] 0 containers: []
	W1204 15:37:38.422098    9926 logs.go:284] No container was found matching "storage-provisioner"
	I1204 15:37:38.422106    9926 logs.go:123] Gathering logs for kube-scheduler [f18e443b8788] ...
	I1204 15:37:38.422112    9926 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f18e443b8788"
	I1204 15:37:38.438044    9926 logs.go:123] Gathering logs for kube-apiserver [9bb28cca02e8] ...
	I1204 15:37:38.438054    9926 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9bb28cca02e8"
	I1204 15:37:38.457067    9926 logs.go:123] Gathering logs for etcd [ca0c907ad43c] ...
	I1204 15:37:38.457081    9926 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ca0c907ad43c"
	I1204 15:37:38.475103    9926 logs.go:123] Gathering logs for kube-scheduler [7fbf81020cc8] ...
	I1204 15:37:38.475115    9926 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7fbf81020cc8"
	I1204 15:37:38.493773    9926 logs.go:123] Gathering logs for kube-controller-manager [d8b20ca47793] ...
	I1204 15:37:38.493786    9926 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d8b20ca47793"
	I1204 15:37:38.510889    9926 logs.go:123] Gathering logs for container status ...
	I1204 15:37:38.510901    9926 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1204 15:37:38.522516    9926 logs.go:123] Gathering logs for etcd [a32a7806b496] ...
	I1204 15:37:38.522527    9926 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a32a7806b496"
	I1204 15:37:38.537058    9926 logs.go:123] Gathering logs for describe nodes ...
	I1204 15:37:38.537068    9926 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1204 15:37:38.570339    9926 logs.go:123] Gathering logs for kube-apiserver [e4f90e1b9024] ...
	I1204 15:37:38.570354    9926 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e4f90e1b9024"
	I1204 15:37:38.589143    9926 logs.go:123] Gathering logs for coredns [08e34a589c88] ...
	I1204 15:37:38.589154    9926 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 08e34a589c88"
	I1204 15:37:38.599919    9926 logs.go:123] Gathering logs for kube-proxy [5620f978e468] ...
	I1204 15:37:38.599931    9926 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5620f978e468"
	I1204 15:37:38.611638    9926 logs.go:123] Gathering logs for dmesg ...
	I1204 15:37:38.611651    9926 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1204 15:37:38.615944    9926 logs.go:123] Gathering logs for kube-controller-manager [e94aa66fc745] ...
	I1204 15:37:38.615954    9926 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e94aa66fc745"
	I1204 15:37:38.627184    9926 logs.go:123] Gathering logs for Docker ...
	I1204 15:37:38.627197    9926 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1204 15:37:38.652253    9926 logs.go:123] Gathering logs for kubelet ...
	I1204 15:37:38.652262    9926 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1204 15:37:41.191015    9926 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
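
Two log streams are interleaved here: pid 10206 is this "Last Start" of stopped-upgrade-377000, while pid 9926 appears to belong to the earlier `start -p running-upgrade-084000` (begun at 15:34 and still unfinished in the table above), which is stuck polling its apiserver's /healthz and re-gathering component logs after every failure. That is why the timestamps jump backward between bursts. Each poll gives up about five seconds after its "Checking apiserver healthz" line with "context deadline exceeded", consistent with a plain HTTP client timeout. A minimal sketch of such a probe, assuming a 5s timeout (illustrative, not minikube's actual code):

	// healthz probe sketch: times out after 5s, as the pid-9926 lines suggest.
	package main

	import (
		"crypto/tls"
		"fmt"
		"net/http"
		"time"
	)

	func main() {
		client := &http.Client{
			Timeout: 5 * time.Second,
			// Sketch only: skip verification; minikube actually trusts its own cluster CA.
			Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
		}
		resp, err := client.Get("https://10.0.2.15:8443/healthz")
		if err != nil {
			fmt.Println("stopped:", err) // e.g. "context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
			return
		}
		defer resp.Body.Close()
		fmt.Println("healthz:", resp.Status)
	}
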
	I1204 15:37:41.805942   10206 out.go:177] * [stopped-upgrade-377000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	I1204 15:37:41.816847   10206 out.go:177]   - MINIKUBE_LOCATION=20045
	I1204 15:37:41.816851   10206 notify.go:220] Checking for updates...
	I1204 15:37:41.824813   10206 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/20045-6982/kubeconfig
	I1204 15:37:41.828794   10206 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I1204 15:37:41.832832   10206 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1204 15:37:41.835818   10206 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/20045-6982/.minikube
	I1204 15:37:41.839840   10206 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1204 15:37:41.844087   10206 config.go:182] Loaded profile config "stopped-upgrade-377000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I1204 15:37:41.848813   10206 out.go:177] * Kubernetes 1.31.2 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.31.2
	I1204 15:37:41.852814   10206 driver.go:394] Setting default libvirt URI to qemu:///system
	I1204 15:37:41.856791   10206 out.go:177] * Using the qemu2 driver based on existing profile
	I1204 15:37:41.864814   10206 start.go:297] selected driver: qemu2
	I1204 15:37:41.864820   10206 start.go:901] validating driver "qemu2" against &{Name:stopped-upgrade-377000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.26.0-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:61834 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.1 ClusterName:stopped-upgrade-377000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:0s}
	I1204 15:37:41.864863   10206 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1204 15:37:41.867855   10206 cni.go:84] Creating CNI manager for ""
	I1204 15:37:41.867885   10206 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I1204 15:37:41.867923   10206 start.go:340] cluster config:
	{Name:stopped-upgrade-377000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.26.0-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:61834 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.1 ClusterName:stopped-upgrade-377000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:0s}
	I1204 15:37:41.867986   10206 iso.go:125] acquiring lock: {Name:mk798ab7885b095925d55c843b0600e7ea181045 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1204 15:37:41.876880   10206 out.go:177] * Starting "stopped-upgrade-377000" primary control-plane node in "stopped-upgrade-377000" cluster
	I1204 15:37:41.880840   10206 preload.go:131] Checking if preload exists for k8s version v1.24.1 and runtime docker
	I1204 15:37:41.880866   10206 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/20045-6982/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.1-docker-overlay2-arm64.tar.lz4
	I1204 15:37:41.880877   10206 cache.go:56] Caching tarball of preloaded images
	I1204 15:37:41.880957   10206 preload.go:172] Found /Users/jenkins/minikube-integration/20045-6982/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I1204 15:37:41.880964   10206 cache.go:59] Finished verifying existence of preloaded tar for v1.24.1 on docker
	I1204 15:37:41.881049   10206 profile.go:143] Saving config to /Users/jenkins/minikube-integration/20045-6982/.minikube/profiles/stopped-upgrade-377000/config.json ...
	I1204 15:37:41.881610   10206 start.go:360] acquireMachinesLock for stopped-upgrade-377000: {Name:mkab5eeb7049d26d12029fc21a1411c94c7c7493 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1204 15:37:41.881642   10206 start.go:364] duration metric: took 24.75µs to acquireMachinesLock for "stopped-upgrade-377000"
	I1204 15:37:41.881653   10206 start.go:96] Skipping create...Using existing machine configuration
	I1204 15:37:41.881657   10206 fix.go:54] fixHost starting: 
	I1204 15:37:41.881772   10206 fix.go:112] recreateIfNeeded on stopped-upgrade-377000: state=Stopped err=<nil>
	W1204 15:37:41.881782   10206 fix.go:138] unexpected machine state, will restart: <nil>
	I1204 15:37:41.886815   10206 out.go:177] * Restarting existing qemu2 VM for "stopped-upgrade-377000" ...
	I1204 15:37:41.894753   10206 qemu.go:418] Using hvf for hardware acceleration
	I1204 15:37:41.894831   10206 main.go:141] libmachine: executing: qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/Cellar/qemu/9.1.2/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/20045-6982/.minikube/machines/stopped-upgrade-377000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/20045-6982/.minikube/machines/stopped-upgrade-377000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/20045-6982/.minikube/machines/stopped-upgrade-377000/qemu.pid -nic user,model=virtio,hostfwd=tcp::61799-:22,hostfwd=tcp::61800-:2376,hostname=stopped-upgrade-377000 -daemonize /Users/jenkins/minikube-integration/20045-6982/.minikube/machines/stopped-upgrade-377000/disk.qcow2
	I1204 15:37:41.941994   10206 main.go:141] libmachine: STDOUT: 
	I1204 15:37:41.942020   10206 main.go:141] libmachine: STDERR: 
	I1204 15:37:41.942030   10206 main.go:141] libmachine: Waiting for VM to start (ssh -p 61799 docker@127.0.0.1)...
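
The qemu-system-aarch64 command above uses user-mode networking with two port forwards (hostfwd=tcp::61799-:22 and hostfwd=tcp::61800-:2376), so the guest's SSH daemon and Docker API become reachable on localhost. "Waiting for VM to start" therefore reduces to waiting for the forwarded SSH port to accept connections, roughly like this sketch (illustrative, with an assumed 5-minute cap, not minikube's actual loop):

	// Poll the forwarded guest SSH port until it accepts a TCP connection.
	package main

	import (
		"fmt"
		"net"
		"time"
	)

	func main() {
		addr := "127.0.0.1:61799" // hostfwd=tcp::61799-:22 from the qemu command above
		deadline := time.Now().Add(5 * time.Minute)
		for time.Now().Before(deadline) {
			if conn, err := net.DialTimeout("tcp", addr, time.Second); err == nil {
				conn.Close()
				fmt.Println("VM is reachable on", addr)
				return
			}
			time.Sleep(500 * time.Millisecond)
		}
		fmt.Println("timed out waiting for", addr)
	}
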
	I1204 15:37:46.193553    9926 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1204 15:37:46.194149    9926 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1204 15:37:46.241037    9926 logs.go:282] 2 containers: [9bb28cca02e8 e4f90e1b9024]
	I1204 15:37:46.241248    9926 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1204 15:37:46.260346    9926 logs.go:282] 2 containers: [a32a7806b496 ca0c907ad43c]
	I1204 15:37:46.260450    9926 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1204 15:37:46.274343    9926 logs.go:282] 1 containers: [08e34a589c88]
	I1204 15:37:46.274440    9926 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1204 15:37:46.286789    9926 logs.go:282] 2 containers: [7fbf81020cc8 f18e443b8788]
	I1204 15:37:46.286866    9926 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1204 15:37:46.302165    9926 logs.go:282] 1 containers: [5620f978e468]
	I1204 15:37:46.302244    9926 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1204 15:37:46.313316    9926 logs.go:282] 2 containers: [d8b20ca47793 e94aa66fc745]
	I1204 15:37:46.313394    9926 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1204 15:37:46.323858    9926 logs.go:282] 0 containers: []
	W1204 15:37:46.323870    9926 logs.go:284] No container was found matching "kindnet"
	I1204 15:37:46.323944    9926 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1204 15:37:46.334465    9926 logs.go:282] 0 containers: []
	W1204 15:37:46.334476    9926 logs.go:284] No container was found matching "storage-provisioner"
	I1204 15:37:46.334487    9926 logs.go:123] Gathering logs for dmesg ...
	I1204 15:37:46.334493    9926 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1204 15:37:46.339328    9926 logs.go:123] Gathering logs for describe nodes ...
	I1204 15:37:46.339338    9926 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1204 15:37:46.377269    9926 logs.go:123] Gathering logs for kube-apiserver [9bb28cca02e8] ...
	I1204 15:37:46.377280    9926 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9bb28cca02e8"
	I1204 15:37:46.395936    9926 logs.go:123] Gathering logs for etcd [ca0c907ad43c] ...
	I1204 15:37:46.395950    9926 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ca0c907ad43c"
	I1204 15:37:46.419661    9926 logs.go:123] Gathering logs for kube-controller-manager [d8b20ca47793] ...
	I1204 15:37:46.419678    9926 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d8b20ca47793"
	I1204 15:37:46.437420    9926 logs.go:123] Gathering logs for kube-apiserver [e4f90e1b9024] ...
	I1204 15:37:46.437431    9926 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e4f90e1b9024"
	I1204 15:37:46.457080    9926 logs.go:123] Gathering logs for kube-scheduler [7fbf81020cc8] ...
	I1204 15:37:46.457093    9926 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7fbf81020cc8"
	I1204 15:37:46.473121    9926 logs.go:123] Gathering logs for kube-scheduler [f18e443b8788] ...
	I1204 15:37:46.473134    9926 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f18e443b8788"
	I1204 15:37:46.496366    9926 logs.go:123] Gathering logs for Docker ...
	I1204 15:37:46.496378    9926 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1204 15:37:46.519649    9926 logs.go:123] Gathering logs for container status ...
	I1204 15:37:46.519659    9926 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1204 15:37:46.533996    9926 logs.go:123] Gathering logs for kube-proxy [5620f978e468] ...
	I1204 15:37:46.534009    9926 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5620f978e468"
	I1204 15:37:46.545758    9926 logs.go:123] Gathering logs for kubelet ...
	I1204 15:37:46.545772    9926 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1204 15:37:46.582953    9926 logs.go:123] Gathering logs for etcd [a32a7806b496] ...
	I1204 15:37:46.582963    9926 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a32a7806b496"
	I1204 15:37:46.596804    9926 logs.go:123] Gathering logs for coredns [08e34a589c88] ...
	I1204 15:37:46.596816    9926 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 08e34a589c88"
	I1204 15:37:46.607939    9926 logs.go:123] Gathering logs for kube-controller-manager [e94aa66fc745] ...
	I1204 15:37:46.607952    9926 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e94aa66fc745"
	I1204 15:37:49.124544    9926 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1204 15:37:54.125458    9926 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1204 15:37:54.125603    9926 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1204 15:37:54.140032    9926 logs.go:282] 2 containers: [9bb28cca02e8 e4f90e1b9024]
	I1204 15:37:54.140124    9926 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1204 15:37:54.151880    9926 logs.go:282] 2 containers: [a32a7806b496 ca0c907ad43c]
	I1204 15:37:54.151966    9926 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1204 15:37:54.163054    9926 logs.go:282] 1 containers: [08e34a589c88]
	I1204 15:37:54.163136    9926 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1204 15:37:54.176613    9926 logs.go:282] 2 containers: [7fbf81020cc8 f18e443b8788]
	I1204 15:37:54.176719    9926 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1204 15:37:54.188193    9926 logs.go:282] 1 containers: [5620f978e468]
	I1204 15:37:54.188333    9926 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1204 15:37:54.200012    9926 logs.go:282] 2 containers: [d8b20ca47793 e94aa66fc745]
	I1204 15:37:54.200088    9926 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1204 15:37:54.210944    9926 logs.go:282] 0 containers: []
	W1204 15:37:54.210954    9926 logs.go:284] No container was found matching "kindnet"
	I1204 15:37:54.211025    9926 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1204 15:37:54.221452    9926 logs.go:282] 0 containers: []
	W1204 15:37:54.221463    9926 logs.go:284] No container was found matching "storage-provisioner"
	I1204 15:37:54.221473    9926 logs.go:123] Gathering logs for kube-apiserver [e4f90e1b9024] ...
	I1204 15:37:54.221479    9926 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e4f90e1b9024"
	I1204 15:37:54.241005    9926 logs.go:123] Gathering logs for kube-scheduler [f18e443b8788] ...
	I1204 15:37:54.241016    9926 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f18e443b8788"
	I1204 15:37:54.257078    9926 logs.go:123] Gathering logs for kube-controller-manager [d8b20ca47793] ...
	I1204 15:37:54.257090    9926 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d8b20ca47793"
	I1204 15:37:54.275524    9926 logs.go:123] Gathering logs for kubelet ...
	I1204 15:37:54.275538    9926 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1204 15:37:54.314059    9926 logs.go:123] Gathering logs for etcd [a32a7806b496] ...
	I1204 15:37:54.314074    9926 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a32a7806b496"
	I1204 15:37:54.329137    9926 logs.go:123] Gathering logs for etcd [ca0c907ad43c] ...
	I1204 15:37:54.329150    9926 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ca0c907ad43c"
	I1204 15:37:54.348198    9926 logs.go:123] Gathering logs for kube-controller-manager [e94aa66fc745] ...
	I1204 15:37:54.348211    9926 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e94aa66fc745"
	I1204 15:37:54.360228    9926 logs.go:123] Gathering logs for container status ...
	I1204 15:37:54.360239    9926 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1204 15:37:54.373685    9926 logs.go:123] Gathering logs for dmesg ...
	I1204 15:37:54.373698    9926 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1204 15:37:54.379139    9926 logs.go:123] Gathering logs for kube-apiserver [9bb28cca02e8] ...
	I1204 15:37:54.379151    9926 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9bb28cca02e8"
	I1204 15:37:54.393517    9926 logs.go:123] Gathering logs for kube-scheduler [7fbf81020cc8] ...
	I1204 15:37:54.393532    9926 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7fbf81020cc8"
	I1204 15:37:54.410574    9926 logs.go:123] Gathering logs for kube-proxy [5620f978e468] ...
	I1204 15:37:54.410585    9926 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5620f978e468"
	I1204 15:37:54.422859    9926 logs.go:123] Gathering logs for Docker ...
	I1204 15:37:54.422874    9926 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1204 15:37:54.447647    9926 logs.go:123] Gathering logs for describe nodes ...
	I1204 15:37:54.447655    9926 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1204 15:37:54.483378    9926 logs.go:123] Gathering logs for coredns [08e34a589c88] ...
	I1204 15:37:54.483393    9926 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 08e34a589c88"
	I1204 15:37:56.997328    9926 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1204 15:38:02.122757   10206 profile.go:143] Saving config to /Users/jenkins/minikube-integration/20045-6982/.minikube/profiles/stopped-upgrade-377000/config.json ...
	I1204 15:38:02.123007   10206 machine.go:93] provisionDockerMachine start ...
	I1204 15:38:02.123070   10206 main.go:141] libmachine: Using SSH client type: native
	I1204 15:38:02.123196   10206 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x102902f60] 0x1029057a0 <nil>  [] 0s} localhost 61799 <nil> <nil>}
	I1204 15:38:02.123202   10206 main.go:141] libmachine: About to run SSH command:
	hostname
	I1204 15:38:02.188194   10206 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I1204 15:38:02.188213   10206 buildroot.go:166] provisioning hostname "stopped-upgrade-377000"
	I1204 15:38:02.188304   10206 main.go:141] libmachine: Using SSH client type: native
	I1204 15:38:02.188436   10206 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x102902f60] 0x1029057a0 <nil>  [] 0s} localhost 61799 <nil> <nil>}
	I1204 15:38:02.188442   10206 main.go:141] libmachine: About to run SSH command:
	sudo hostname stopped-upgrade-377000 && echo "stopped-upgrade-377000" | sudo tee /etc/hostname
	I1204 15:38:02.259418   10206 main.go:141] libmachine: SSH cmd err, output: <nil>: stopped-upgrade-377000
	
	I1204 15:38:02.259489   10206 main.go:141] libmachine: Using SSH client type: native
	I1204 15:38:02.259607   10206 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x102902f60] 0x1029057a0 <nil>  [] 0s} localhost 61799 <nil> <nil>}
	I1204 15:38:02.259616   10206 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sstopped-upgrade-377000' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 stopped-upgrade-377000/g' /etc/hosts;
				else 
					echo '127.0.1.1 stopped-upgrade-377000' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1204 15:38:02.326616   10206 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1204 15:38:02.326632   10206 buildroot.go:172] set auth options {CertDir:/Users/jenkins/minikube-integration/20045-6982/.minikube CaCertPath:/Users/jenkins/minikube-integration/20045-6982/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/20045-6982/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/20045-6982/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/20045-6982/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/20045-6982/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/20045-6982/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/20045-6982/.minikube}
	I1204 15:38:02.326642   10206 buildroot.go:174] setting up certificates
	I1204 15:38:02.326647   10206 provision.go:84] configureAuth start
	I1204 15:38:02.326656   10206 provision.go:143] copyHostCerts
	I1204 15:38:02.326728   10206 exec_runner.go:144] found /Users/jenkins/minikube-integration/20045-6982/.minikube/ca.pem, removing ...
	I1204 15:38:02.326737   10206 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/20045-6982/.minikube/ca.pem
	I1204 15:38:02.326831   10206 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/20045-6982/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/20045-6982/.minikube/ca.pem (1078 bytes)
	I1204 15:38:02.327024   10206 exec_runner.go:144] found /Users/jenkins/minikube-integration/20045-6982/.minikube/cert.pem, removing ...
	I1204 15:38:02.327029   10206 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/20045-6982/.minikube/cert.pem
	I1204 15:38:02.327071   10206 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/20045-6982/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/20045-6982/.minikube/cert.pem (1123 bytes)
	I1204 15:38:02.327177   10206 exec_runner.go:144] found /Users/jenkins/minikube-integration/20045-6982/.minikube/key.pem, removing ...
	I1204 15:38:02.327181   10206 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/20045-6982/.minikube/key.pem
	I1204 15:38:02.327218   10206 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/20045-6982/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/20045-6982/.minikube/key.pem (1679 bytes)
	I1204 15:38:02.327312   10206 provision.go:117] generating server cert: /Users/jenkins/minikube-integration/20045-6982/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/20045-6982/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/20045-6982/.minikube/certs/ca-key.pem org=jenkins.stopped-upgrade-377000 san=[127.0.0.1 localhost minikube stopped-upgrade-377000]
	I1204 15:38:02.403905   10206 provision.go:177] copyRemoteCerts
	I1204 15:38:02.403969   10206 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1204 15:38:02.403978   10206 sshutil.go:53] new ssh client: &{IP:localhost Port:61799 SSHKeyPath:/Users/jenkins/minikube-integration/20045-6982/.minikube/machines/stopped-upgrade-377000/id_rsa Username:docker}
	I1204 15:38:02.437666   10206 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/20045-6982/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1204 15:38:02.444445   10206 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/20045-6982/.minikube/machines/server.pem --> /etc/docker/server.pem (1224 bytes)
	I1204 15:38:02.451008   10206 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/20045-6982/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1204 15:38:02.458132   10206 provision.go:87] duration metric: took 131.472458ms to configureAuth
	I1204 15:38:02.458140   10206 buildroot.go:189] setting minikube options for container-runtime
	I1204 15:38:02.458235   10206 config.go:182] Loaded profile config "stopped-upgrade-377000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I1204 15:38:02.458293   10206 main.go:141] libmachine: Using SSH client type: native
	I1204 15:38:02.458380   10206 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x102902f60] 0x1029057a0 <nil>  [] 0s} localhost 61799 <nil> <nil>}
	I1204 15:38:02.458385   10206 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I1204 15:38:02.523495   10206 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I1204 15:38:02.523503   10206 buildroot.go:70] root file system type: tmpfs
	I1204 15:38:02.523555   10206 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I1204 15:38:02.523605   10206 main.go:141] libmachine: Using SSH client type: native
	I1204 15:38:02.523706   10206 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x102902f60] 0x1029057a0 <nil>  [] 0s} localhost 61799 <nil> <nil>}
	I1204 15:38:02.523740   10206 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=qemu2 --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I1204 15:38:02.591381   10206 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=qemu2 --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I1204 15:38:02.591444   10206 main.go:141] libmachine: Using SSH client type: native
	I1204 15:38:02.591551   10206 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x102902f60] 0x1029057a0 <nil>  [] 0s} localhost 61799 <nil> <nil>}
	I1204 15:38:02.591563   10206 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I1204 15:38:02.974752   10206 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
	I1204 15:38:02.974766   10206 machine.go:96] duration metric: took 851.744583ms to provisionDockerMachine
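
The diff failure above is the expected path on a freshly restored VM: /lib/systemd/system/docker.service does not exist yet, so diff exits non-zero and the || branch installs docker.service.new, reloads systemd, then enables and restarts Docker; the "Created symlink" line is `systemctl enable` at work. (The effective command line can later be confirmed with `sudo systemctl show docker --property=ExecStart`, the same probe the docker-flags test issues in the table at the top of this section.)
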
	I1204 15:38:02.974775   10206 start.go:293] postStartSetup for "stopped-upgrade-377000" (driver="qemu2")
	I1204 15:38:02.974781   10206 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1204 15:38:02.974857   10206 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1204 15:38:02.974869   10206 sshutil.go:53] new ssh client: &{IP:localhost Port:61799 SSHKeyPath:/Users/jenkins/minikube-integration/20045-6982/.minikube/machines/stopped-upgrade-377000/id_rsa Username:docker}
	I1204 15:38:03.009529   10206 ssh_runner.go:195] Run: cat /etc/os-release
	I1204 15:38:03.010839   10206 info.go:137] Remote host: Buildroot 2021.02.12
	I1204 15:38:03.010847   10206 filesync.go:126] Scanning /Users/jenkins/minikube-integration/20045-6982/.minikube/addons for local assets ...
	I1204 15:38:03.010927   10206 filesync.go:126] Scanning /Users/jenkins/minikube-integration/20045-6982/.minikube/files for local assets ...
	I1204 15:38:03.011018   10206 filesync.go:149] local asset: /Users/jenkins/minikube-integration/20045-6982/.minikube/files/etc/ssl/certs/74952.pem -> 74952.pem in /etc/ssl/certs
	I1204 15:38:03.011123   10206 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1204 15:38:03.014156   10206 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/20045-6982/.minikube/files/etc/ssl/certs/74952.pem --> /etc/ssl/certs/74952.pem (1708 bytes)
	I1204 15:38:03.021344   10206 start.go:296] duration metric: took 46.563459ms for postStartSetup
	I1204 15:38:03.021357   10206 fix.go:56] duration metric: took 21.139505708s for fixHost
	I1204 15:38:03.021404   10206 main.go:141] libmachine: Using SSH client type: native
	I1204 15:38:03.021505   10206 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x102902f60] 0x1029057a0 <nil>  [] 0s} localhost 61799 <nil> <nil>}
	I1204 15:38:03.021509   10206 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1204 15:38:03.087300   10206 main.go:141] libmachine: SSH cmd err, output: <nil>: 1733355483.344829629
	
	I1204 15:38:03.087310   10206 fix.go:216] guest clock: 1733355483.344829629
	I1204 15:38:03.087314   10206 fix.go:229] Guest: 2024-12-04 15:38:03.344829629 -0800 PST Remote: 2024-12-04 15:38:03.021359 -0800 PST m=+21.273255293 (delta=323.470629ms)
	I1204 15:38:03.087325   10206 fix.go:200] guest clock delta is within tolerance: 323.470629ms
	I1204 15:38:03.087327   10206 start.go:83] releasing machines lock for "stopped-upgrade-377000", held for 21.205485417s
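
The guest-clock check above runs `date +%s.%N` inside the VM and compares it against the host's wall clock at the moment the command returned; the ~323ms delta is within tolerance, so no adjustment is made. The arithmetic, redone as a sketch from the two timestamps in the log:

	// Recompute the guest/host clock delta reported above.
	package main

	import (
		"fmt"
		"time"
	)

	func main() {
		guest := time.Unix(1733355483, 344829629) // guest `date +%s.%N`
		host := time.Date(2024, 12, 4, 15, 38, 3, 21359000, time.FixedZone("PST", -8*3600))
		fmt.Println("delta:", guest.Sub(host)) // prints 323.470629ms
	}
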
	I1204 15:38:03.087410   10206 ssh_runner.go:195] Run: cat /version.json
	I1204 15:38:03.087420   10206 sshutil.go:53] new ssh client: &{IP:localhost Port:61799 SSHKeyPath:/Users/jenkins/minikube-integration/20045-6982/.minikube/machines/stopped-upgrade-377000/id_rsa Username:docker}
	I1204 15:38:03.087410   10206 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1204 15:38:03.087451   10206 sshutil.go:53] new ssh client: &{IP:localhost Port:61799 SSHKeyPath:/Users/jenkins/minikube-integration/20045-6982/.minikube/machines/stopped-upgrade-377000/id_rsa Username:docker}
	W1204 15:38:03.087998   10206 sshutil.go:64] dial failure (will retry): dial tcp [::1]:61799: connect: connection refused
	I1204 15:38:03.088017   10206 retry.go:31] will retry after 257.8977ms: dial tcp [::1]:61799: connect: connection refused
	W1204 15:38:03.119309   10206 start.go:419] Unable to open version.json: cat /version.json: Process exited with status 1
	stdout:
	
	stderr:
	cat: /version.json: No such file or directory
	I1204 15:38:03.119364   10206 ssh_runner.go:195] Run: systemctl --version
	I1204 15:38:03.121181   10206 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1204 15:38:03.122745   10206 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1204 15:38:03.122782   10206 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *bridge* -not -name *podman* -not -name *.mk_disabled -printf "%p, " -exec sh -c "sudo sed -i -r -e '/"dst": ".*:.*"/d' -e 's|^(.*)"dst": (.*)[,*]$|\1"dst": \2|g' -e '/"subnet": ".*:.*"/d' -e 's|^(.*)"subnet": ".*"(.*)[,*]$|\1"subnet": "10.244.0.0/16"\2|g' {}" ;
	I1204 15:38:03.125607   10206 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *podman* -not -name *.mk_disabled -printf "%p, " -exec sh -c "sudo sed -i -r -e 's|^(.*)"subnet": ".*"(.*)$|\1"subnet": "10.244.0.0/16"\2|g' -e 's|^(.*)"gateway": ".*"(.*)$|\1"gateway": "10.244.0.1"\2|g' {}" ;
	I1204 15:38:03.130143   10206 cni.go:308] configured [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1204 15:38:03.130150   10206 start.go:495] detecting cgroup driver to use...
	I1204 15:38:03.130225   10206 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1204 15:38:03.137796   10206 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.7"|' /etc/containerd/config.toml"
	I1204 15:38:03.141116   10206 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I1204 15:38:03.143837   10206 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I1204 15:38:03.143862   10206 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I1204 15:38:03.146936   10206 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1204 15:38:03.150199   10206 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I1204 15:38:03.153897   10206 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1204 15:38:03.157208   10206 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1204 15:38:03.159899   10206 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I1204 15:38:03.162775   10206 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I1204 15:38:03.165871   10206 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I1204 15:38:03.168895   10206 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1204 15:38:03.171445   10206 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1204 15:38:03.174528   10206 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1204 15:38:03.254939   10206 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I1204 15:38:03.261417   10206 start.go:495] detecting cgroup driver to use...
	I1204 15:38:03.261506   10206 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I1204 15:38:03.267244   10206 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1204 15:38:03.272302   10206 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1204 15:38:03.281836   10206 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1204 15:38:03.286651   10206 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1204 15:38:03.291617   10206 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I1204 15:38:03.353039   10206 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1204 15:38:03.358033   10206 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1204 15:38:03.363706   10206 ssh_runner.go:195] Run: which cri-dockerd
	I1204 15:38:03.365151   10206 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I1204 15:38:03.367620   10206 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I1204 15:38:03.372708   10206 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I1204 15:38:03.449930   10206 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I1204 15:38:03.528168   10206 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I1204 15:38:03.528237   10206 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I1204 15:38:03.533213   10206 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1204 15:38:03.610271   10206 ssh_runner.go:195] Run: sudo systemctl restart docker
	I1204 15:38:04.768459   10206 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1.158161792s)
	I1204 15:38:04.768528   10206 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I1204 15:38:04.772758   10206 ssh_runner.go:195] Run: sudo systemctl stop cri-docker.socket
	I1204 15:38:04.777695   10206 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I1204 15:38:04.782996   10206 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I1204 15:38:04.866033   10206 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I1204 15:38:04.938167   10206 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1204 15:38:05.018856   10206 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I1204 15:38:05.025207   10206 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I1204 15:38:05.029437   10206 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1204 15:38:05.100589   10206 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I1204 15:38:05.142282   10206 start.go:542] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I1204 15:38:05.142384   10206 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I1204 15:38:05.144411   10206 start.go:563] Will wait 60s for crictl version
	I1204 15:38:05.144449   10206 ssh_runner.go:195] Run: which crictl
	I1204 15:38:05.145641   10206 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1204 15:38:05.161978   10206 start.go:579] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  20.10.16
	RuntimeApiVersion:  1.41.0
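
This stanza wires Docker up as a CRI runtime. Kubernetes removed the in-tree dockershim in v1.24, so for this v1.24.1 cluster minikube points crictl at the cri-dockerd adapter socket (/var/run/cri-dockerd.sock) and enables cri-docker.socket and cri-docker.service alongside Docker itself; the crictl version probe above confirms the adapter is answering CRI calls for Docker 20.10.16.
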
	I1204 15:38:05.162061   10206 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I1204 15:38:05.179898   10206 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I1204 15:38:05.203292   10206 out.go:235] * Preparing Kubernetes v1.24.1 on Docker 20.10.16 ...
	I1204 15:38:05.203369   10206 ssh_runner.go:195] Run: grep 10.0.2.2	host.minikube.internal$ /etc/hosts
	I1204 15:38:05.204766   10206 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "10.0.2.2	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
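
The /etc/hosts update uses a filter-append-copy idiom: strip any stale host.minikube.internal entry, append a fresh mapping to 10.0.2.2 (the address QEMU's user-mode network gives the guest for reaching the host), write the result to a temp file, and cp it back over /etc/hosts. Using cp rather than mv rewrites the file in place, which (presumably the point here) keeps its inode and permissions intact.
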
	I1204 15:38:05.208173   10206 kubeadm.go:883] updating cluster {Name:stopped-upgrade-377000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19917/minikube-v1.34.0-1730913550-19917-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:61834 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.1 ClusterName:stopped-upgrade-377000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:0s} ...
	I1204 15:38:05.208216   10206 preload.go:131] Checking if preload exists for k8s version v1.24.1 and runtime docker
	I1204 15:38:05.208266   10206 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I1204 15:38:05.222021   10206 docker.go:689] Got preloaded images: -- stdout --
	k8s.gcr.io/kube-apiserver:v1.24.1
	k8s.gcr.io/kube-proxy:v1.24.1
	k8s.gcr.io/kube-controller-manager:v1.24.1
	k8s.gcr.io/kube-scheduler:v1.24.1
	k8s.gcr.io/etcd:3.5.3-0
	k8s.gcr.io/pause:3.7
	k8s.gcr.io/coredns/coredns:v1.8.6
	k8s.gcr.io/pause:3.6
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I1204 15:38:05.222028   10206 docker.go:695] registry.k8s.io/kube-apiserver:v1.24.1 wasn't preloaded
	I1204 15:38:05.222084   10206 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I1204 15:38:05.225448   10206 ssh_runner.go:195] Run: which lz4
	I1204 15:38:05.226752   10206 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I1204 15:38:05.228167   10206 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I1204 15:38:05.228177   10206 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/20045-6982/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.1-docker-overlay2-arm64.tar.lz4 --> /preloaded.tar.lz4 (359514331 bytes)
	I1204 15:38:06.198538   10206 docker.go:653] duration metric: took 971.822667ms to copy over tarball
	I1204 15:38:06.198608   10206 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
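The three steps above form a stat-then-copy-then-extract pattern: probe the target path (exit status 1 means absent), scp the ~360 MB tarball in, and unpack it with tar's lz4 filter while preserving security xattrs. A local sketch under those assumptions (paths are illustrative; the real stat and copy run over SSH via ssh_runner.go:352/362):

```go
// Local stand-in for the preload flow: copy the tarball only if the target
// is missing, then unpack with tar's lz4 decompressor.
package main

import (
	"io"
	"os"
	"os/exec"
)

func main() {
	src, dst := "preloaded-images.tar.lz4", "/tmp/preloaded.tar.lz4" // hypothetical paths

	if _, err := os.Stat(dst); os.IsNotExist(err) { // existence check, like `stat -c "%s %y"`
		in, err := os.Open(src)
		if err != nil {
			panic(err)
		}
		defer in.Close()
		out, err := os.Create(dst)
		if err != nil {
			panic(err)
		}
		if _, err := io.Copy(out, in); err != nil { // stands in for the scp
			panic(err)
		}
		out.Close()
	}

	// Same extraction the log runs: lz4 filter, security xattrs preserved.
	cmd := exec.Command("tar", "--xattrs", "--xattrs-include", "security.capability",
		"-I", "lz4", "-C", "/tmp", "-xf", dst)
	cmd.Stderr = os.Stderr
	if err := cmd.Run(); err != nil {
		panic(err)
	}
}
```

From here the transcript interleaves two test processes: pid 10206 driving stopped-upgrade-377000, and pid 9926 repeatedly polling a different cluster's apiserver, so the timestamps jump back and forth.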
	I1204 15:38:02.000077    9926 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1204 15:38:02.000297    9926 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1204 15:38:02.017650    9926 logs.go:282] 2 containers: [9bb28cca02e8 e4f90e1b9024]
	I1204 15:38:02.017765    9926 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1204 15:38:02.032023    9926 logs.go:282] 2 containers: [a32a7806b496 ca0c907ad43c]
	I1204 15:38:02.032118    9926 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1204 15:38:02.044320    9926 logs.go:282] 1 containers: [08e34a589c88]
	I1204 15:38:02.044398    9926 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1204 15:38:02.056787    9926 logs.go:282] 2 containers: [7fbf81020cc8 f18e443b8788]
	I1204 15:38:02.056876    9926 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1204 15:38:02.067087    9926 logs.go:282] 1 containers: [5620f978e468]
	I1204 15:38:02.067164    9926 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1204 15:38:02.077548    9926 logs.go:282] 2 containers: [d8b20ca47793 e94aa66fc745]
	I1204 15:38:02.077630    9926 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1204 15:38:02.093180    9926 logs.go:282] 0 containers: []
	W1204 15:38:02.093190    9926 logs.go:284] No container was found matching "kindnet"
	I1204 15:38:02.093268    9926 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1204 15:38:02.107905    9926 logs.go:282] 0 containers: []
	W1204 15:38:02.107917    9926 logs.go:284] No container was found matching "storage-provisioner"
	I1204 15:38:02.107928    9926 logs.go:123] Gathering logs for etcd [a32a7806b496] ...
	I1204 15:38:02.107934    9926 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a32a7806b496"
	I1204 15:38:02.122013    9926 logs.go:123] Gathering logs for coredns [08e34a589c88] ...
	I1204 15:38:02.122025    9926 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 08e34a589c88"
	I1204 15:38:02.137661    9926 logs.go:123] Gathering logs for kubelet ...
	I1204 15:38:02.137674    9926 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1204 15:38:02.178811    9926 logs.go:123] Gathering logs for kube-apiserver [9bb28cca02e8] ...
	I1204 15:38:02.178823    9926 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9bb28cca02e8"
	I1204 15:38:02.196874    9926 logs.go:123] Gathering logs for kube-controller-manager [d8b20ca47793] ...
	I1204 15:38:02.196887    9926 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d8b20ca47793"
	I1204 15:38:02.214612    9926 logs.go:123] Gathering logs for container status ...
	I1204 15:38:02.214626    9926 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1204 15:38:02.226821    9926 logs.go:123] Gathering logs for Docker ...
	I1204 15:38:02.226831    9926 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1204 15:38:02.252866    9926 logs.go:123] Gathering logs for dmesg ...
	I1204 15:38:02.252877    9926 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1204 15:38:02.257883    9926 logs.go:123] Gathering logs for kube-apiserver [e4f90e1b9024] ...
	I1204 15:38:02.257893    9926 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e4f90e1b9024"
	I1204 15:38:02.277772    9926 logs.go:123] Gathering logs for kube-scheduler [f18e443b8788] ...
	I1204 15:38:02.277786    9926 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f18e443b8788"
	I1204 15:38:02.293260    9926 logs.go:123] Gathering logs for kube-proxy [5620f978e468] ...
	I1204 15:38:02.293272    9926 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5620f978e468"
	I1204 15:38:02.305349    9926 logs.go:123] Gathering logs for kube-controller-manager [e94aa66fc745] ...
	I1204 15:38:02.305362    9926 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e94aa66fc745"
	I1204 15:38:02.316972    9926 logs.go:123] Gathering logs for describe nodes ...
	I1204 15:38:02.316986    9926 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1204 15:38:02.353527    9926 logs.go:123] Gathering logs for etcd [ca0c907ad43c] ...
	I1204 15:38:02.353538    9926 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ca0c907ad43c"
	I1204 15:38:02.372029    9926 logs.go:123] Gathering logs for kube-scheduler [7fbf81020cc8] ...
	I1204 15:38:02.372038    9926 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7fbf81020cc8"
	I1204 15:38:04.890685    9926 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
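The pid-9926 block above is one complete diagnostics pass: while /healthz stays down, each component's container IDs are resolved with a docker ps name filter and its last 400 log lines are tailed. A sketch of that loop (component list abbreviated; the real code also gathers kubelet, dmesg, and `describe nodes`):

```go
// Sketch of one "Gathering logs" pass: resolve container IDs by name filter,
// then tail each one's logs -- the same docker commands the transcript shows.
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	for _, name := range []string{"kube-apiserver", "etcd", "coredns", "kube-scheduler"} {
		out, _ := exec.Command("docker", "ps", "-a",
			"--filter", "name=k8s_"+name, "--format", "{{.ID}}").Output()
		ids := strings.Fields(string(out))
		fmt.Printf("%d containers: %v\n", len(ids), ids)
		for _, id := range ids {
			logs, _ := exec.Command("docker", "logs", "--tail", "400", id).CombinedOutput()
			_ = logs // the real code feeds this into the minikube log bundle
		}
	}
}
```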
	I1204 15:38:07.370329   10206 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (1.171694792s)
	I1204 15:38:07.370341   10206 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I1204 15:38:07.386199   10206 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I1204 15:38:07.389817   10206 ssh_runner.go:362] scp memory --> /var/lib/docker/image/overlay2/repositories.json (2523 bytes)
	I1204 15:38:07.395164   10206 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1204 15:38:07.474623   10206 ssh_runner.go:195] Run: sudo systemctl restart docker
	I1204 15:38:09.078684   10206 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1.604013917s)
	I1204 15:38:09.078794   10206 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I1204 15:38:09.089819   10206 docker.go:689] Got preloaded images: -- stdout --
	k8s.gcr.io/kube-apiserver:v1.24.1
	k8s.gcr.io/kube-proxy:v1.24.1
	k8s.gcr.io/kube-controller-manager:v1.24.1
	k8s.gcr.io/kube-scheduler:v1.24.1
	k8s.gcr.io/etcd:3.5.3-0
	k8s.gcr.io/pause:3.7
	k8s.gcr.io/coredns/coredns:v1.8.6
	<none>:<none>
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I1204 15:38:09.089828   10206 docker.go:695] registry.k8s.io/kube-apiserver:v1.24.1 wasn't preloaded
	I1204 15:38:09.089833   10206 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.24.1 registry.k8s.io/kube-controller-manager:v1.24.1 registry.k8s.io/kube-scheduler:v1.24.1 registry.k8s.io/kube-proxy:v1.24.1 registry.k8s.io/pause:3.7 registry.k8s.io/etcd:3.5.3-0 registry.k8s.io/coredns/coredns:v1.8.6 gcr.io/k8s-minikube/storage-provisioner:v5]
	I1204 15:38:09.096418   10206 image.go:135] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1204 15:38:09.098533   10206 image.go:135] retrieving image: registry.k8s.io/kube-proxy:v1.24.1
	I1204 15:38:09.099958   10206 image.go:135] retrieving image: registry.k8s.io/kube-apiserver:v1.24.1
	I1204 15:38:09.100337   10206 image.go:178] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1204 15:38:09.102029   10206 image.go:178] daemon lookup for registry.k8s.io/kube-proxy:v1.24.1: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.24.1
	I1204 15:38:09.102128   10206 image.go:135] retrieving image: registry.k8s.io/kube-controller-manager:v1.24.1
	I1204 15:38:09.103441   10206 image.go:135] retrieving image: registry.k8s.io/kube-scheduler:v1.24.1
	I1204 15:38:09.103705   10206 image.go:178] daemon lookup for registry.k8s.io/kube-apiserver:v1.24.1: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.24.1
	I1204 15:38:09.104760   10206 image.go:135] retrieving image: registry.k8s.io/etcd:3.5.3-0
	I1204 15:38:09.104953   10206 image.go:178] daemon lookup for registry.k8s.io/kube-controller-manager:v1.24.1: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.24.1
	I1204 15:38:09.106038   10206 image.go:135] retrieving image: registry.k8s.io/pause:3.7
	I1204 15:38:09.106044   10206 image.go:178] daemon lookup for registry.k8s.io/kube-scheduler:v1.24.1: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.24.1
	I1204 15:38:09.107028   10206 image.go:178] daemon lookup for registry.k8s.io/etcd:3.5.3-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.3-0
	I1204 15:38:09.107369   10206 image.go:135] retrieving image: registry.k8s.io/coredns/coredns:v1.8.6
	I1204 15:38:09.108166   10206 image.go:178] daemon lookup for registry.k8s.io/pause:3.7: Error response from daemon: No such image: registry.k8s.io/pause:3.7
	I1204 15:38:09.109027   10206 image.go:178] daemon lookup for registry.k8s.io/coredns/coredns:v1.8.6: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.8.6
	I1204 15:38:09.658616   10206 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.24.1
	I1204 15:38:09.666630   10206 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.24.1
	I1204 15:38:09.670763   10206 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.24.1" needs transfer: "registry.k8s.io/kube-apiserver:v1.24.1" does not exist at hash "7c5896a75862a8bc252122185a929cec1393db2c525f6440137d4fbf46bbf6f9" in container runtime
	I1204 15:38:09.670797   10206 docker.go:337] Removing image: registry.k8s.io/kube-apiserver:v1.24.1
	I1204 15:38:09.670854   10206 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-apiserver:v1.24.1
	I1204 15:38:09.685703   10206 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.24.1" needs transfer: "registry.k8s.io/kube-proxy:v1.24.1" does not exist at hash "fcbd620bbac080b658602c597709b377cb2b3fec134a097a27f94cba9b2ed2fa" in container runtime
	I1204 15:38:09.685734   10206 docker.go:337] Removing image: registry.k8s.io/kube-proxy:v1.24.1
	I1204 15:38:09.685714   10206 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/20045-6982/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.24.1
	I1204 15:38:09.685769   10206 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-proxy:v1.24.1
	I1204 15:38:09.696704   10206 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/20045-6982/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.24.1
	I1204 15:38:09.710948   10206 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.24.1
	I1204 15:38:09.722387   10206 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.24.1" needs transfer: "registry.k8s.io/kube-controller-manager:v1.24.1" does not exist at hash "f61bbe9259d7caa580deb6c8e4bfd1780c7b5887efe3aa3adc7cc74f68a27c1b" in container runtime
	I1204 15:38:09.722444   10206 docker.go:337] Removing image: registry.k8s.io/kube-controller-manager:v1.24.1
	I1204 15:38:09.722498   10206 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-controller-manager:v1.24.1
	I1204 15:38:09.732708   10206 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/20045-6982/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.24.1
	I1204 15:38:09.740151   10206 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.24.1
	I1204 15:38:09.753261   10206 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.24.1" needs transfer: "registry.k8s.io/kube-scheduler:v1.24.1" does not exist at hash "000c19baf6bba51ff7ae5449f4c7a16d9190cef0263f58070dbf62cea9c4982f" in container runtime
	I1204 15:38:09.753282   10206 docker.go:337] Removing image: registry.k8s.io/kube-scheduler:v1.24.1
	I1204 15:38:09.753352   10206 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-scheduler:v1.24.1
	I1204 15:38:09.763596   10206 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/20045-6982/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.24.1
	I1204 15:38:09.865484   10206 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/etcd:3.5.3-0
	I1204 15:38:09.877867   10206 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/pause:3.7
	I1204 15:38:09.884235   10206 cache_images.go:116] "registry.k8s.io/etcd:3.5.3-0" needs transfer: "registry.k8s.io/etcd:3.5.3-0" does not exist at hash "a9a710bb96df080e6b9c720eb85dc5b832ff84abf77263548d74fedec6466a5a" in container runtime
	I1204 15:38:09.884255   10206 docker.go:337] Removing image: registry.k8s.io/etcd:3.5.3-0
	I1204 15:38:09.884319   10206 ssh_runner.go:195] Run: docker rmi registry.k8s.io/etcd:3.5.3-0
	I1204 15:38:09.897968   10206 cache_images.go:116] "registry.k8s.io/pause:3.7" needs transfer: "registry.k8s.io/pause:3.7" does not exist at hash "e5a475a0380575fb5df454b2e32bdec93e1ec0094d8a61e895b41567cb884550" in container runtime
	I1204 15:38:09.897990   10206 docker.go:337] Removing image: registry.k8s.io/pause:3.7
	I1204 15:38:09.898056   10206 ssh_runner.go:195] Run: docker rmi registry.k8s.io/pause:3.7
	I1204 15:38:09.907892   10206 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/20045-6982/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.3-0
	I1204 15:38:09.911927   10206 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/20045-6982/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7
	I1204 15:38:09.912086   10206 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/pause_3.7
	I1204 15:38:09.913724   10206 ssh_runner.go:352] existence check for /var/lib/minikube/images/pause_3.7: stat -c "%s %y" /var/lib/minikube/images/pause_3.7: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/pause_3.7': No such file or directory
	I1204 15:38:09.913740   10206 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/20045-6982/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7 --> /var/lib/minikube/images/pause_3.7 (268288 bytes)
	I1204 15:38:09.923036   10206 docker.go:304] Loading image: /var/lib/minikube/images/pause_3.7
	I1204 15:38:09.923051   10206 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/pause_3.7 | docker load"
	I1204 15:38:09.952351   10206 cache_images.go:321] Transferred and loaded /Users/jenkins/minikube-integration/20045-6982/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7 from cache
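The pause:3.7 cycle just completed illustrates the general LoadCachedImages step, repeated below for coredns and storage-provisioner: inspect the image ID in the runtime, and when it doesn't match the cached hash, remove the stale tag, transfer the cached tarball, and pipe it to `docker load`. A sketch under those assumptions (expectedID and paths are illustrative; the real flow scp's the file into the guest first and runs docker over SSH):

```go
// Sketch of one cache-load cycle: inspect, remove stale tag, docker load.
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func loadIfStale(image, expectedID, cachedTar string) error {
	out, _ := exec.Command("docker", "image", "inspect", "--format", "{{.Id}}", image).Output()
	if strings.TrimSpace(string(out)) == expectedID {
		return nil // already present at the right hash, nothing to transfer
	}
	exec.Command("docker", "rmi", image).Run() // drop the wrong-arch/wrong-registry tag
	// The real flow copies cachedTar into the guest before this step.
	cmd := exec.Command("/bin/bash", "-c", fmt.Sprintf("cat %s | docker load", cachedTar))
	return cmd.Run()
}

func main() {
	_ = loadIfStale("registry.k8s.io/pause:3.7",
		"sha256:e5a475a03805", // truncated, illustrative
		"/var/lib/minikube/images/pause_3.7")
}
```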
	W1204 15:38:09.963172   10206 image.go:283] image registry.k8s.io/coredns/coredns:v1.8.6 arch mismatch: want arm64 got amd64. fixing
	I1204 15:38:09.963327   10206 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/coredns/coredns:v1.8.6
	I1204 15:38:09.974372   10206 cache_images.go:116] "registry.k8s.io/coredns/coredns:v1.8.6" needs transfer: "registry.k8s.io/coredns/coredns:v1.8.6" does not exist at hash "6af7f860a8197bfa3fdb7dec2061aa33870253e87a1e91c492d55b8a4fd38d14" in container runtime
	I1204 15:38:09.974394   10206 docker.go:337] Removing image: registry.k8s.io/coredns/coredns:v1.8.6
	I1204 15:38:09.974459   10206 ssh_runner.go:195] Run: docker rmi registry.k8s.io/coredns/coredns:v1.8.6
	I1204 15:38:09.986249   10206 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/20045-6982/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6
	I1204 15:38:09.986397   10206 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.8.6
	I1204 15:38:09.988162   10206 ssh_runner.go:352] existence check for /var/lib/minikube/images/coredns_v1.8.6: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.8.6: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/coredns_v1.8.6': No such file or directory
	I1204 15:38:09.988185   10206 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/20045-6982/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6 --> /var/lib/minikube/images/coredns_v1.8.6 (12318720 bytes)
	I1204 15:38:10.031602   10206 docker.go:304] Loading image: /var/lib/minikube/images/coredns_v1.8.6
	I1204 15:38:10.031626   10206 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/coredns_v1.8.6 | docker load"
	W1204 15:38:10.033847   10206 image.go:283] image gcr.io/k8s-minikube/storage-provisioner:v5 arch mismatch: want arm64 got amd64. fixing
	I1204 15:38:10.034149   10206 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I1204 15:38:10.078489   10206 cache_images.go:321] Transferred and loaded /Users/jenkins/minikube-integration/20045-6982/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6 from cache
	I1204 15:38:10.078540   10206 cache_images.go:116] "gcr.io/k8s-minikube/storage-provisioner:v5" needs transfer: "gcr.io/k8s-minikube/storage-provisioner:v5" does not exist at hash "66749159455b3f08c8318fe0233122f54d0f5889f9c5fdfb73c3fd9d99895b51" in container runtime
	I1204 15:38:10.078566   10206 docker.go:337] Removing image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1204 15:38:10.078637   10206 ssh_runner.go:195] Run: docker rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I1204 15:38:10.092999   10206 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/20045-6982/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5
	I1204 15:38:10.093149   10206 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5
	I1204 15:38:10.094524   10206 ssh_runner.go:352] existence check for /var/lib/minikube/images/storage-provisioner_v5: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/storage-provisioner_v5': No such file or directory
	I1204 15:38:10.094536   10206 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/20045-6982/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 --> /var/lib/minikube/images/storage-provisioner_v5 (8035840 bytes)
	I1204 15:38:10.128264   10206 docker.go:304] Loading image: /var/lib/minikube/images/storage-provisioner_v5
	I1204 15:38:10.128279   10206 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/storage-provisioner_v5 | docker load"
	I1204 15:38:10.379703   10206 cache_images.go:321] Transferred and loaded /Users/jenkins/minikube-integration/20045-6982/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 from cache
	I1204 15:38:10.379735   10206 cache_images.go:92] duration metric: took 1.289883666s to LoadCachedImages
	W1204 15:38:10.379781   10206 out.go:270] X Unable to load cached images: LoadCachedImages: stat /Users/jenkins/minikube-integration/20045-6982/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.24.1: no such file or directory
	I1204 15:38:10.379788   10206 kubeadm.go:934] updating node { 10.0.2.15 8443 v1.24.1 docker true true} ...
	I1204 15:38:10.379853   10206 kubeadm.go:946] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.24.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/cri-dockerd.sock --hostname-override=stopped-upgrade-377000 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=10.0.2.15
	
	[Install]
	 config:
	{KubernetesVersion:v1.24.1 ClusterName:stopped-upgrade-377000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
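The kubelet drop-in printed above is rendered from the node parameters (binary version, hostname override, node IP) before being written out as 10-kubeadm.conf. A sketch of that templating, with illustrative field names and a shortened flag list (not minikube's actual template):

```go
// Sketch: rendering the kubelet systemd drop-in from node parameters.
package main

import (
	"os"
	"text/template"
)

const unit = `[Unit]
Wants=docker.socket

[Service]
ExecStart=
ExecStart=/var/lib/minikube/binaries/{{.Version}}/kubelet --container-runtime-endpoint=unix:///var/run/cri-dockerd.sock --hostname-override={{.Node}} --node-ip={{.IP}}

[Install]
`

func main() {
	t := template.Must(template.New("kubelet").Parse(unit))
	_ = t.Execute(os.Stdout, struct{ Version, Node, IP string }{
		"v1.24.1", "stopped-upgrade-377000", "10.0.2.15"})
}
```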
	I1204 15:38:10.379935   10206 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I1204 15:38:10.393880   10206 cni.go:84] Creating CNI manager for ""
	I1204 15:38:10.393896   10206 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I1204 15:38:10.393909   10206 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I1204 15:38:10.393918   10206 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:10.0.2.15 APIServerPort:8443 KubernetesVersion:v1.24.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:stopped-upgrade-377000 NodeName:stopped-upgrade-377000 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "10.0.2.15"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:10.0.2.15 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1204 15:38:10.393992   10206 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 10.0.2.15
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "stopped-upgrade-377000"
	  kubeletExtraArgs:
	    node-ip: 10.0.2.15
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "10.0.2.15"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.24.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1204 15:38:10.394078   10206 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.24.1
	I1204 15:38:10.397281   10206 binaries.go:44] Found k8s binaries, skipping transfer
	I1204 15:38:10.397320   10206 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1204 15:38:10.399866   10206 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (380 bytes)
	I1204 15:38:10.404728   10206 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1204 15:38:10.409699   10206 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2096 bytes)
	I1204 15:38:10.415103   10206 ssh_runner.go:195] Run: grep 10.0.2.15	control-plane.minikube.internal$ /etc/hosts
	I1204 15:38:10.416397   10206 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "10.0.2.15	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
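The one-liner above is an idempotent /etc/hosts edit: strip any existing control-plane.minikube.internal line, append the current mapping, and swap the file in via a temp copy, so repeated runs converge on a single correct entry. The same logic in Go, against a local stand-in file (hostsFile is illustrative; the real command needs the /tmp/h.$$ temp file plus `sudo cp` for permissions):

```go
// Sketch: idempotent hosts-entry rewrite, mirroring the bash one-liner.
package main

import (
	"os"
	"strings"
)

func main() {
	const hostsFile = "hosts.test" // illustrative stand-in for /etc/hosts
	data, _ := os.ReadFile(hostsFile)
	var kept []string
	for _, line := range strings.Split(string(data), "\n") {
		// Drop any stale mapping, like the `grep -v` in the log.
		if !strings.HasSuffix(line, "\tcontrol-plane.minikube.internal") {
			kept = append(kept, line)
		}
	}
	kept = append(kept, "10.0.2.15\tcontrol-plane.minikube.internal")
	os.WriteFile(hostsFile, []byte(strings.Join(kept, "\n")+"\n"), 0644)
}
```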
	I1204 15:38:10.419758   10206 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1204 15:38:10.506718   10206 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1204 15:38:10.516663   10206 certs.go:68] Setting up /Users/jenkins/minikube-integration/20045-6982/.minikube/profiles/stopped-upgrade-377000 for IP: 10.0.2.15
	I1204 15:38:10.516672   10206 certs.go:194] generating shared ca certs ...
	I1204 15:38:10.516680   10206 certs.go:226] acquiring lock for ca certs: {Name:mkc3a39b491c90031583eb49eb548c7e4c1f6091 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1204 15:38:10.516853   10206 certs.go:235] skipping valid "minikubeCA" ca cert: /Users/jenkins/minikube-integration/20045-6982/.minikube/ca.key
	I1204 15:38:10.516893   10206 certs.go:235] skipping valid "proxyClientCA" ca cert: /Users/jenkins/minikube-integration/20045-6982/.minikube/proxy-client-ca.key
	I1204 15:38:10.516899   10206 certs.go:256] generating profile certs ...
	I1204 15:38:10.516960   10206 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /Users/jenkins/minikube-integration/20045-6982/.minikube/profiles/stopped-upgrade-377000/client.key
	I1204 15:38:10.516981   10206 certs.go:363] generating signed profile cert for "minikube": /Users/jenkins/minikube-integration/20045-6982/.minikube/profiles/stopped-upgrade-377000/apiserver.key.4b76073c
	I1204 15:38:10.516993   10206 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/20045-6982/.minikube/profiles/stopped-upgrade-377000/apiserver.crt.4b76073c with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 10.0.2.15]
	I1204 15:38:10.726470   10206 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/20045-6982/.minikube/profiles/stopped-upgrade-377000/apiserver.crt.4b76073c ...
	I1204 15:38:10.726487   10206 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/20045-6982/.minikube/profiles/stopped-upgrade-377000/apiserver.crt.4b76073c: {Name:mk84cbc2c89a4a537c79a32039bed9e1b6cb0cda Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1204 15:38:10.726901   10206 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/20045-6982/.minikube/profiles/stopped-upgrade-377000/apiserver.key.4b76073c ...
	I1204 15:38:10.726906   10206 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/20045-6982/.minikube/profiles/stopped-upgrade-377000/apiserver.key.4b76073c: {Name:mk0b5c8865ca5f079bc764078e2a2d884bfbc5b6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1204 15:38:10.727078   10206 certs.go:381] copying /Users/jenkins/minikube-integration/20045-6982/.minikube/profiles/stopped-upgrade-377000/apiserver.crt.4b76073c -> /Users/jenkins/minikube-integration/20045-6982/.minikube/profiles/stopped-upgrade-377000/apiserver.crt
	I1204 15:38:10.727732   10206 certs.go:385] copying /Users/jenkins/minikube-integration/20045-6982/.minikube/profiles/stopped-upgrade-377000/apiserver.key.4b76073c -> /Users/jenkins/minikube-integration/20045-6982/.minikube/profiles/stopped-upgrade-377000/apiserver.key
	I1204 15:38:10.727913   10206 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /Users/jenkins/minikube-integration/20045-6982/.minikube/profiles/stopped-upgrade-377000/proxy-client.key
	I1204 15:38:10.728070   10206 certs.go:484] found cert: /Users/jenkins/minikube-integration/20045-6982/.minikube/certs/7495.pem (1338 bytes)
	W1204 15:38:10.728099   10206 certs.go:480] ignoring /Users/jenkins/minikube-integration/20045-6982/.minikube/certs/7495_empty.pem, impossibly tiny 0 bytes
	I1204 15:38:10.728105   10206 certs.go:484] found cert: /Users/jenkins/minikube-integration/20045-6982/.minikube/certs/ca-key.pem (1675 bytes)
	I1204 15:38:10.728130   10206 certs.go:484] found cert: /Users/jenkins/minikube-integration/20045-6982/.minikube/certs/ca.pem (1078 bytes)
	I1204 15:38:10.728150   10206 certs.go:484] found cert: /Users/jenkins/minikube-integration/20045-6982/.minikube/certs/cert.pem (1123 bytes)
	I1204 15:38:10.728172   10206 certs.go:484] found cert: /Users/jenkins/minikube-integration/20045-6982/.minikube/certs/key.pem (1679 bytes)
	I1204 15:38:10.728213   10206 certs.go:484] found cert: /Users/jenkins/minikube-integration/20045-6982/.minikube/files/etc/ssl/certs/74952.pem (1708 bytes)
	I1204 15:38:10.728587   10206 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/20045-6982/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1204 15:38:10.735833   10206 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/20045-6982/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1204 15:38:10.743016   10206 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/20045-6982/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1204 15:38:10.750481   10206 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/20045-6982/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1204 15:38:10.757683   10206 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/20045-6982/.minikube/profiles/stopped-upgrade-377000/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I1204 15:38:10.764799   10206 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/20045-6982/.minikube/profiles/stopped-upgrade-377000/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1204 15:38:10.771592   10206 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/20045-6982/.minikube/profiles/stopped-upgrade-377000/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1204 15:38:10.778621   10206 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/20045-6982/.minikube/profiles/stopped-upgrade-377000/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1204 15:38:10.786255   10206 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/20045-6982/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1204 15:38:10.793483   10206 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/20045-6982/.minikube/certs/7495.pem --> /usr/share/ca-certificates/7495.pem (1338 bytes)
	I1204 15:38:10.800236   10206 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/20045-6982/.minikube/files/etc/ssl/certs/74952.pem --> /usr/share/ca-certificates/74952.pem (1708 bytes)
	I1204 15:38:10.806958   10206 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1204 15:38:10.812452   10206 ssh_runner.go:195] Run: openssl version
	I1204 15:38:10.814357   10206 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1204 15:38:10.817787   10206 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1204 15:38:10.819121   10206 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec  4 23:34 /usr/share/ca-certificates/minikubeCA.pem
	I1204 15:38:10.819161   10206 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1204 15:38:10.820847   10206 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
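This sequence installs minikubeCA into the system trust store the way OpenSSL expects: `openssl x509 -hash -noout` prints the certificate's subject-name hash (b5213941 here), and a `<hash>.0` symlink in /etc/ssl/certs is what OpenSSL's CApath lookup resolves; the `test -L || ln -fs` guard makes the step idempotent. The same pattern repeats below for the two /usr/share/ca-certificates/*.pem files. A sketch with an illustrative local path:

```go
// Sketch: create the <subject-hash>.0 symlink OpenSSL uses to find a CA.
package main

import (
	"fmt"
	"os"
	"os/exec"
	"strings"
)

func main() {
	pem := "minikubeCA.pem" // illustrative local path
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pem).Output()
	if err != nil {
		panic(err)
	}
	link := fmt.Sprintf("%s.0", strings.TrimSpace(string(out))) // e.g. b5213941.0
	os.Remove(link) // mirror `ln -fs`: replace any stale link
	if err := os.Symlink(pem, link); err != nil {
		panic(err)
	}
}
```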
	I1204 15:38:10.823738   10206 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/7495.pem && ln -fs /usr/share/ca-certificates/7495.pem /etc/ssl/certs/7495.pem"
	I1204 15:38:10.827064   10206 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/7495.pem
	I1204 15:38:10.828648   10206 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec  4 23:22 /usr/share/ca-certificates/7495.pem
	I1204 15:38:10.828680   10206 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/7495.pem
	I1204 15:38:10.830296   10206 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/7495.pem /etc/ssl/certs/51391683.0"
	I1204 15:38:10.833462   10206 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/74952.pem && ln -fs /usr/share/ca-certificates/74952.pem /etc/ssl/certs/74952.pem"
	I1204 15:38:10.836256   10206 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/74952.pem
	I1204 15:38:10.837691   10206 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec  4 23:22 /usr/share/ca-certificates/74952.pem
	I1204 15:38:10.837716   10206 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/74952.pem
	I1204 15:38:10.839443   10206 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/74952.pem /etc/ssl/certs/3ec20f2e.0"
	I1204 15:38:10.842762   10206 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1204 15:38:10.844242   10206 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1204 15:38:10.846977   10206 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1204 15:38:10.848938   10206 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1204 15:38:10.850890   10206 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1204 15:38:10.852881   10206 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1204 15:38:10.854637   10206 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
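Each `-checkend 86400` call above is a pure exit-code probe: openssl exits 0 if the certificate will still be valid 24 hours from now and non-zero otherwise, which is how the restart path decides whether any control-plane cert needs regeneration. For example (path illustrative):

```go
// Sketch: 24-hour cert-expiry probe via openssl's exit code.
package main

import (
	"fmt"
	"os/exec"
)

func main() {
	cmd := exec.Command("openssl", "x509", "-noout",
		"-in", "apiserver.crt", "-checkend", "86400")
	if err := cmd.Run(); err != nil {
		fmt.Println("cert expires within 24h (or could not be read); regenerate")
		return
	}
	fmt.Println("cert valid for at least 24h more")
}
```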
	I1204 15:38:10.856464   10206 kubeadm.go:392] StartCluster: {Name:stopped-upgrade-377000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19917/minikube-v1.34.0-1730913550-19917-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:61834 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.1 ClusterName:stopped-upgrade-377000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:0s}
	I1204 15:38:10.856535   10206 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I1204 15:38:10.867059   10206 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1204 15:38:10.870966   10206 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I1204 15:38:10.870976   10206 kubeadm.go:593] restartPrimaryControlPlane start ...
	I1204 15:38:10.871007   10206 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1204 15:38:10.873955   10206 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1204 15:38:10.874265   10206 kubeconfig.go:47] verify endpoint returned: get endpoint: "stopped-upgrade-377000" does not appear in /Users/jenkins/minikube-integration/20045-6982/kubeconfig
	I1204 15:38:10.874363   10206 kubeconfig.go:62] /Users/jenkins/minikube-integration/20045-6982/kubeconfig needs updating (will repair): [kubeconfig missing "stopped-upgrade-377000" cluster setting kubeconfig missing "stopped-upgrade-377000" context setting]
	I1204 15:38:10.874578   10206 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/20045-6982/kubeconfig: {Name:mk101d59bd39dad79cc42c692d70ed55e90c94da Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1204 15:38:10.875024   10206 kapi.go:59] client config for stopped-upgrade-377000: &rest.Config{Host:"https://10.0.2.15:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/20045-6982/.minikube/profiles/stopped-upgrade-377000/client.crt", KeyFile:"/Users/jenkins/minikube-integration/20045-6982/.minikube/profiles/stopped-upgrade-377000/client.key", CAFile:"/Users/jenkins/minikube-integration/20045-6982/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x10435f6e0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1204 15:38:10.875373   10206 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1204 15:38:10.878139   10206 kubeadm.go:640] detected kubeadm config drift (will reconfigure cluster from new /var/tmp/minikube/kubeadm.yaml):
	-- stdout --
	--- /var/tmp/minikube/kubeadm.yaml
	+++ /var/tmp/minikube/kubeadm.yaml.new
	@@ -11,7 +11,7 @@
	       - signing
	       - authentication
	 nodeRegistration:
	-  criSocket: /var/run/cri-dockerd.sock
	+  criSocket: unix:///var/run/cri-dockerd.sock
	   name: "stopped-upgrade-377000"
	   kubeletExtraArgs:
	     node-ip: 10.0.2.15
	@@ -49,7 +49,9 @@
	 authentication:
	   x509:
	     clientCAFile: /var/lib/minikube/certs/ca.crt
	-cgroupDriver: systemd
	+cgroupDriver: cgroupfs
	+hairpinMode: hairpin-veth
	+runtimeRequestTimeout: 15m
	 clusterDomain: "cluster.local"
	 # disable disk resource management by default
	 imageGCHighThresholdPercent: 100
	
	-- /stdout --
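The drift check renders a fresh kubeadm.yaml.new and lets `diff -u` against the deployed file decide whether to reconfigure; here the diff shows the CRI socket gaining its unix:// scheme and the cgroup driver switching from systemd to cgroupfs across the upgrade. A sketch of the decision (diff exits with status 1 when the files differ, which is the "reconfigure" signal; paths mirror the log):

```go
// Sketch: diff-based kubeadm config drift detection.
package main

import (
	"fmt"
	"os/exec"
)

func main() {
	cmd := exec.Command("diff", "-u",
		"/var/tmp/minikube/kubeadm.yaml", "/var/tmp/minikube/kubeadm.yaml.new")
	out, err := cmd.Output() // out still holds the diff text on exit status 1
	if err != nil {
		fmt.Printf("config drift detected, will reconfigure:\n%s", out)
		return
	}
	fmt.Println("kubeadm config unchanged")
}
```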
	I1204 15:38:10.878144   10206 kubeadm.go:1160] stopping kube-system containers ...
	I1204 15:38:10.878191   10206 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I1204 15:38:10.888489   10206 docker.go:483] Stopping containers: [4f1790594676 63473edefa8f 1d1ae4543cd6 9b5d6b3a7511 b0ad1b935d01 7e96315d0637 93b18643529f 931f0e7873ab]
	I1204 15:38:10.888567   10206 ssh_runner.go:195] Run: docker stop 4f1790594676 63473edefa8f 1d1ae4543cd6 9b5d6b3a7511 b0ad1b935d01 7e96315d0637 93b18643529f 931f0e7873ab
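Stopping the control plane is a two-command affair: one `docker ps` with the k8s_.*_(kube-system)_ name regex selects every kube-system pod container, and the resulting IDs go to a single `docker stop`. A sketch:

```go
// Sketch: stop all kube-system containers by name-filter, as the log does.
package main

import (
	"os/exec"
	"strings"
)

func main() {
	out, err := exec.Command("docker", "ps", "-a",
		"--filter", "name=k8s_.*_(kube-system)_", "--format", "{{.ID}}").Output()
	if err != nil {
		panic(err)
	}
	ids := strings.Fields(string(out))
	if len(ids) > 0 {
		args := append([]string{"stop"}, ids...)
		exec.Command("docker", args...).Run()
	}
}
```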
	I1204 15:38:10.899263   10206 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I1204 15:38:10.905195   10206 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1204 15:38:10.907895   10206 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1204 15:38:10.907901   10206 kubeadm.go:157] found existing configuration files:
	
	I1204 15:38:10.907929   10206 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:61834 /etc/kubernetes/admin.conf
	I1204 15:38:10.910675   10206 kubeadm.go:163] "https://control-plane.minikube.internal:61834" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:61834 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1204 15:38:10.910710   10206 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1204 15:38:10.913756   10206 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:61834 /etc/kubernetes/kubelet.conf
	I1204 15:38:10.916322   10206 kubeadm.go:163] "https://control-plane.minikube.internal:61834" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:61834 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1204 15:38:10.916354   10206 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1204 15:38:10.918961   10206 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:61834 /etc/kubernetes/controller-manager.conf
	I1204 15:38:10.921958   10206 kubeadm.go:163] "https://control-plane.minikube.internal:61834" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:61834 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1204 15:38:10.921988   10206 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1204 15:38:10.924813   10206 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:61834 /etc/kubernetes/scheduler.conf
	I1204 15:38:10.927203   10206 kubeadm.go:163] "https://control-plane.minikube.internal:61834" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:61834 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1204 15:38:10.927232   10206 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
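The four grep/rm pairs above implement a stale-config sweep: any /etc/kubernetes/*.conf that doesn't mention the expected control-plane endpoint is removed (status 2 here just means the file is missing) so that the following `kubeadm init phase kubeconfig` recreates it. Condensed into a sketch, with the endpoint and paths taken from the log:

```go
// Sketch: remove kubeconfigs that don't point at the expected endpoint.
package main

import (
	"os"
	"os/exec"
)

func main() {
	endpoint := "https://control-plane.minikube.internal:61834"
	for _, conf := range []string{
		"/etc/kubernetes/admin.conf",
		"/etc/kubernetes/kubelet.conf",
		"/etc/kubernetes/controller-manager.conf",
		"/etc/kubernetes/scheduler.conf",
	} {
		if err := exec.Command("grep", endpoint, conf).Run(); err != nil {
			os.Remove(conf) // missing or pointing elsewhere: let kubeadm regenerate it
		}
	}
}
```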
	I1204 15:38:10.930193   10206 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1204 15:38:10.933341   10206 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I1204 15:38:10.955661   10206 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I1204 15:38:11.743883   10206 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I1204 15:38:09.892897    9926 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1204 15:38:09.892995    9926 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1204 15:38:09.904983    9926 logs.go:282] 2 containers: [9bb28cca02e8 e4f90e1b9024]
	I1204 15:38:09.905071    9926 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1204 15:38:09.917311    9926 logs.go:282] 2 containers: [a32a7806b496 ca0c907ad43c]
	I1204 15:38:09.917385    9926 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1204 15:38:09.928895    9926 logs.go:282] 1 containers: [08e34a589c88]
	I1204 15:38:09.928981    9926 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1204 15:38:09.944510    9926 logs.go:282] 2 containers: [7fbf81020cc8 f18e443b8788]
	I1204 15:38:09.944596    9926 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1204 15:38:09.956160    9926 logs.go:282] 1 containers: [5620f978e468]
	I1204 15:38:09.956238    9926 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1204 15:38:09.967318    9926 logs.go:282] 2 containers: [d8b20ca47793 e94aa66fc745]
	I1204 15:38:09.967401    9926 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1204 15:38:09.979867    9926 logs.go:282] 0 containers: []
	W1204 15:38:09.979879    9926 logs.go:284] No container was found matching "kindnet"
	I1204 15:38:09.979950    9926 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1204 15:38:09.991850    9926 logs.go:282] 0 containers: []
	W1204 15:38:09.991864    9926 logs.go:284] No container was found matching "storage-provisioner"
	I1204 15:38:09.991873    9926 logs.go:123] Gathering logs for coredns [08e34a589c88] ...
	I1204 15:38:09.991882    9926 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 08e34a589c88"
	I1204 15:38:10.009156    9926 logs.go:123] Gathering logs for etcd [ca0c907ad43c] ...
	I1204 15:38:10.009169    9926 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ca0c907ad43c"
	I1204 15:38:10.029959    9926 logs.go:123] Gathering logs for kube-scheduler [f18e443b8788] ...
	I1204 15:38:10.029973    9926 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f18e443b8788"
	I1204 15:38:10.047100    9926 logs.go:123] Gathering logs for kube-controller-manager [d8b20ca47793] ...
	I1204 15:38:10.047117    9926 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d8b20ca47793"
	I1204 15:38:10.069421    9926 logs.go:123] Gathering logs for Docker ...
	I1204 15:38:10.069449    9926 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1204 15:38:10.095730    9926 logs.go:123] Gathering logs for kube-scheduler [7fbf81020cc8] ...
	I1204 15:38:10.095742    9926 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7fbf81020cc8"
	I1204 15:38:10.122778    9926 logs.go:123] Gathering logs for describe nodes ...
	I1204 15:38:10.122790    9926 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1204 15:38:10.160729    9926 logs.go:123] Gathering logs for kube-apiserver [9bb28cca02e8] ...
	I1204 15:38:10.160744    9926 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9bb28cca02e8"
	I1204 15:38:10.175804    9926 logs.go:123] Gathering logs for kube-apiserver [e4f90e1b9024] ...
	I1204 15:38:10.175818    9926 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e4f90e1b9024"
	I1204 15:38:10.196162    9926 logs.go:123] Gathering logs for etcd [a32a7806b496] ...
	I1204 15:38:10.196180    9926 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a32a7806b496"
	I1204 15:38:10.215182    9926 logs.go:123] Gathering logs for kube-controller-manager [e94aa66fc745] ...
	I1204 15:38:10.215195    9926 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e94aa66fc745"
	I1204 15:38:10.227770    9926 logs.go:123] Gathering logs for container status ...
	I1204 15:38:10.227782    9926 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1204 15:38:10.240311    9926 logs.go:123] Gathering logs for dmesg ...
	I1204 15:38:10.240324    9926 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1204 15:38:10.245024    9926 logs.go:123] Gathering logs for kube-proxy [5620f978e468] ...
	I1204 15:38:10.245032    9926 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5620f978e468"
	I1204 15:38:10.257140    9926 logs.go:123] Gathering logs for kubelet ...
	I1204 15:38:10.257154    9926 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1204 15:38:11.877024   10206 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I1204 15:38:11.897710   10206 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I1204 15:38:11.924404   10206 api_server.go:52] waiting for apiserver process to appear ...
	I1204 15:38:11.924502   10206 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1204 15:38:12.426570   10206 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1204 15:38:12.926591   10206 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1204 15:38:12.930746   10206 api_server.go:72] duration metric: took 1.006335417s to wait for apiserver process to appear ...
	I1204 15:38:12.930760   10206 api_server.go:88] waiting for apiserver healthz status ...
	I1204 15:38:12.930776   10206 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
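The healthz wait is a plain poll: GET https://10.0.2.15:8443/healthz with a short per-request timeout, retrying on failure; every "Client.Timeout exceeded" line in this transcript is one such timed-out probe. A sketch (retry count and sleep are illustrative; TLS verification is skipped only because this targets a self-signed test cluster):

```go
// Sketch: poll the apiserver's /healthz until it answers.
package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

func main() {
	client := &http.Client{
		Timeout:   5 * time.Second, // source of the "Client.Timeout exceeded" lines
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	for i := 0; i < 10; i++ {
		resp, err := client.Get("https://10.0.2.15:8443/healthz")
		if err != nil {
			fmt.Println("stopped:", err) // matches the api_server.go:269 lines
			time.Sleep(2 * time.Second)
			continue
		}
		body, _ := io.ReadAll(resp.Body)
		resp.Body.Close()
		fmt.Printf("healthz: %s\n", body)
		return
	}
}
```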
	I1204 15:38:12.798558    9926 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1204 15:38:17.932872   10206 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1204 15:38:17.932889   10206 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1204 15:38:17.800841    9926 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1204 15:38:17.801009    9926 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1204 15:38:17.812441    9926 logs.go:282] 2 containers: [9bb28cca02e8 e4f90e1b9024]
	I1204 15:38:17.812524    9926 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1204 15:38:17.823432    9926 logs.go:282] 2 containers: [a32a7806b496 ca0c907ad43c]
	I1204 15:38:17.823515    9926 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1204 15:38:17.837025    9926 logs.go:282] 1 containers: [08e34a589c88]
	I1204 15:38:17.837102    9926 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1204 15:38:17.847625    9926 logs.go:282] 2 containers: [7fbf81020cc8 f18e443b8788]
	I1204 15:38:17.847731    9926 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1204 15:38:17.858797    9926 logs.go:282] 1 containers: [5620f978e468]
	I1204 15:38:17.858878    9926 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1204 15:38:17.869216    9926 logs.go:282] 2 containers: [d8b20ca47793 e94aa66fc745]
	I1204 15:38:17.869293    9926 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1204 15:38:17.879761    9926 logs.go:282] 0 containers: []
	W1204 15:38:17.879774    9926 logs.go:284] No container was found matching "kindnet"
	I1204 15:38:17.879837    9926 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1204 15:38:17.889944    9926 logs.go:282] 0 containers: []
	W1204 15:38:17.889955    9926 logs.go:284] No container was found matching "storage-provisioner"
	I1204 15:38:17.889964    9926 logs.go:123] Gathering logs for Docker ...
	I1204 15:38:17.889969    9926 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1204 15:38:17.914724    9926 logs.go:123] Gathering logs for kube-apiserver [9bb28cca02e8] ...
	I1204 15:38:17.914737    9926 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9bb28cca02e8"
	I1204 15:38:17.929386    9926 logs.go:123] Gathering logs for etcd [ca0c907ad43c] ...
	I1204 15:38:17.929399    9926 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ca0c907ad43c"
	I1204 15:38:17.947113    9926 logs.go:123] Gathering logs for coredns [08e34a589c88] ...
	I1204 15:38:17.947123    9926 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 08e34a589c88"
	I1204 15:38:17.959770    9926 logs.go:123] Gathering logs for kube-scheduler [7fbf81020cc8] ...
	I1204 15:38:17.959783    9926 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7fbf81020cc8"
	I1204 15:38:17.976745    9926 logs.go:123] Gathering logs for kube-scheduler [f18e443b8788] ...
	I1204 15:38:17.976758    9926 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f18e443b8788"
	I1204 15:38:17.992529    9926 logs.go:123] Gathering logs for kube-proxy [5620f978e468] ...
	I1204 15:38:17.992540    9926 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5620f978e468"
	I1204 15:38:18.004263    9926 logs.go:123] Gathering logs for container status ...
	I1204 15:38:18.004276    9926 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1204 15:38:18.016468    9926 logs.go:123] Gathering logs for kube-controller-manager [e94aa66fc745] ...
	I1204 15:38:18.016482    9926 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e94aa66fc745"
	I1204 15:38:18.028137    9926 logs.go:123] Gathering logs for kubelet ...
	I1204 15:38:18.028149    9926 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1204 15:38:18.066337    9926 logs.go:123] Gathering logs for describe nodes ...
	I1204 15:38:18.066353    9926 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1204 15:38:18.102110    9926 logs.go:123] Gathering logs for kube-apiserver [e4f90e1b9024] ...
	I1204 15:38:18.102124    9926 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e4f90e1b9024"
	I1204 15:38:18.122356    9926 logs.go:123] Gathering logs for etcd [a32a7806b496] ...
	I1204 15:38:18.122367    9926 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a32a7806b496"
	I1204 15:38:18.135858    9926 logs.go:123] Gathering logs for kube-controller-manager [d8b20ca47793] ...
	I1204 15:38:18.135870    9926 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d8b20ca47793"
	I1204 15:38:18.153560    9926 logs.go:123] Gathering logs for dmesg ...
	I1204 15:38:18.153572    9926 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1204 15:38:20.659212    9926 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1204 15:38:22.933178   10206 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1204 15:38:22.933228   10206 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1204 15:38:25.661291    9926 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1204 15:38:25.661494    9926 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1204 15:38:25.673666    9926 logs.go:282] 2 containers: [9bb28cca02e8 e4f90e1b9024]
	I1204 15:38:25.673755    9926 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1204 15:38:25.684621    9926 logs.go:282] 2 containers: [a32a7806b496 ca0c907ad43c]
	I1204 15:38:25.684702    9926 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1204 15:38:25.695249    9926 logs.go:282] 1 containers: [08e34a589c88]
	I1204 15:38:25.695329    9926 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1204 15:38:25.706455    9926 logs.go:282] 2 containers: [7fbf81020cc8 f18e443b8788]
	I1204 15:38:25.706539    9926 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1204 15:38:25.717360    9926 logs.go:282] 1 containers: [5620f978e468]
	I1204 15:38:25.717431    9926 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1204 15:38:25.728251    9926 logs.go:282] 2 containers: [d8b20ca47793 e94aa66fc745]
	I1204 15:38:25.728330    9926 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1204 15:38:25.738413    9926 logs.go:282] 0 containers: []
	W1204 15:38:25.738428    9926 logs.go:284] No container was found matching "kindnet"
	I1204 15:38:25.738489    9926 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1204 15:38:25.756547    9926 logs.go:282] 0 containers: []
	W1204 15:38:25.756560    9926 logs.go:284] No container was found matching "storage-provisioner"
	I1204 15:38:25.756568    9926 logs.go:123] Gathering logs for etcd [a32a7806b496] ...
	I1204 15:38:25.756574    9926 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a32a7806b496"
	I1204 15:38:25.775622    9926 logs.go:123] Gathering logs for coredns [08e34a589c88] ...
	I1204 15:38:25.775633    9926 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 08e34a589c88"
	I1204 15:38:25.787104    9926 logs.go:123] Gathering logs for kube-apiserver [e4f90e1b9024] ...
	I1204 15:38:25.787115    9926 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e4f90e1b9024"
	I1204 15:38:25.805791    9926 logs.go:123] Gathering logs for kube-controller-manager [e94aa66fc745] ...
	I1204 15:38:25.805802    9926 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e94aa66fc745"
	I1204 15:38:25.818494    9926 logs.go:123] Gathering logs for container status ...
	I1204 15:38:25.818504    9926 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1204 15:38:25.830218    9926 logs.go:123] Gathering logs for kube-scheduler [7fbf81020cc8] ...
	I1204 15:38:25.830230    9926 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7fbf81020cc8"
	I1204 15:38:25.846883    9926 logs.go:123] Gathering logs for kube-apiserver [9bb28cca02e8] ...
	I1204 15:38:25.846892    9926 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9bb28cca02e8"
	I1204 15:38:25.870709    9926 logs.go:123] Gathering logs for kube-scheduler [f18e443b8788] ...
	I1204 15:38:25.870720    9926 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f18e443b8788"
	I1204 15:38:25.886138    9926 logs.go:123] Gathering logs for Docker ...
	I1204 15:38:25.886152    9926 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1204 15:38:25.910960    9926 logs.go:123] Gathering logs for dmesg ...
	I1204 15:38:25.910971    9926 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1204 15:38:25.915451    9926 logs.go:123] Gathering logs for describe nodes ...
	I1204 15:38:25.915461    9926 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1204 15:38:25.950504    9926 logs.go:123] Gathering logs for etcd [ca0c907ad43c] ...
	I1204 15:38:25.950515    9926 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ca0c907ad43c"
	I1204 15:38:25.968781    9926 logs.go:123] Gathering logs for kube-proxy [5620f978e468] ...
	I1204 15:38:25.968794    9926 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5620f978e468"
	I1204 15:38:25.982012    9926 logs.go:123] Gathering logs for kube-controller-manager [d8b20ca47793] ...
	I1204 15:38:25.982026    9926 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d8b20ca47793"
	I1204 15:38:25.999244    9926 logs.go:123] Gathering logs for kubelet ...
	I1204 15:38:25.999264    9926 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1204 15:38:27.933685   10206 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1204 15:38:27.933709   10206 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1204 15:38:28.541807    9926 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1204 15:38:32.934191   10206 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1204 15:38:32.934243   10206 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1204 15:38:33.542981    9926 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1204 15:38:33.543164    9926 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1204 15:38:33.563389    9926 logs.go:282] 2 containers: [9bb28cca02e8 e4f90e1b9024]
	I1204 15:38:33.563463    9926 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1204 15:38:33.575576    9926 logs.go:282] 2 containers: [a32a7806b496 ca0c907ad43c]
	I1204 15:38:33.575657    9926 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1204 15:38:33.586171    9926 logs.go:282] 1 containers: [08e34a589c88]
	I1204 15:38:33.586245    9926 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1204 15:38:33.596601    9926 logs.go:282] 2 containers: [7fbf81020cc8 f18e443b8788]
	I1204 15:38:33.596677    9926 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1204 15:38:33.610316    9926 logs.go:282] 1 containers: [5620f978e468]
	I1204 15:38:33.610389    9926 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1204 15:38:33.621059    9926 logs.go:282] 2 containers: [d8b20ca47793 e94aa66fc745]
	I1204 15:38:33.621147    9926 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1204 15:38:33.641115    9926 logs.go:282] 0 containers: []
	W1204 15:38:33.641128    9926 logs.go:284] No container was found matching "kindnet"
	I1204 15:38:33.641206    9926 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1204 15:38:33.651224    9926 logs.go:282] 0 containers: []
	W1204 15:38:33.651235    9926 logs.go:284] No container was found matching "storage-provisioner"
	I1204 15:38:33.651243    9926 logs.go:123] Gathering logs for kube-apiserver [e4f90e1b9024] ...
	I1204 15:38:33.651249    9926 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e4f90e1b9024"
	I1204 15:38:33.670384    9926 logs.go:123] Gathering logs for etcd [a32a7806b496] ...
	I1204 15:38:33.670394    9926 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a32a7806b496"
	I1204 15:38:33.684169    9926 logs.go:123] Gathering logs for kube-scheduler [7fbf81020cc8] ...
	I1204 15:38:33.684181    9926 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7fbf81020cc8"
	I1204 15:38:33.703134    9926 logs.go:123] Gathering logs for kube-controller-manager [d8b20ca47793] ...
	I1204 15:38:33.703147    9926 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d8b20ca47793"
	I1204 15:38:33.720889    9926 logs.go:123] Gathering logs for describe nodes ...
	I1204 15:38:33.720902    9926 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1204 15:38:33.760380    9926 logs.go:123] Gathering logs for kube-proxy [5620f978e468] ...
	I1204 15:38:33.760390    9926 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5620f978e468"
	I1204 15:38:33.772639    9926 logs.go:123] Gathering logs for kube-apiserver [9bb28cca02e8] ...
	I1204 15:38:33.772651    9926 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9bb28cca02e8"
	I1204 15:38:33.786833    9926 logs.go:123] Gathering logs for coredns [08e34a589c88] ...
	I1204 15:38:33.786846    9926 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 08e34a589c88"
	I1204 15:38:33.798573    9926 logs.go:123] Gathering logs for kube-scheduler [f18e443b8788] ...
	I1204 15:38:33.798585    9926 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f18e443b8788"
	I1204 15:38:33.815247    9926 logs.go:123] Gathering logs for container status ...
	I1204 15:38:33.815258    9926 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1204 15:38:33.827372    9926 logs.go:123] Gathering logs for kubelet ...
	I1204 15:38:33.827382    9926 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1204 15:38:33.865115    9926 logs.go:123] Gathering logs for dmesg ...
	I1204 15:38:33.865124    9926 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1204 15:38:33.869437    9926 logs.go:123] Gathering logs for etcd [ca0c907ad43c] ...
	I1204 15:38:33.869445    9926 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ca0c907ad43c"
	I1204 15:38:33.887049    9926 logs.go:123] Gathering logs for kube-controller-manager [e94aa66fc745] ...
	I1204 15:38:33.887061    9926 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e94aa66fc745"
	I1204 15:38:33.905197    9926 logs.go:123] Gathering logs for Docker ...
	I1204 15:38:33.905211    9926 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1204 15:38:36.430394    9926 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1204 15:38:37.935055   10206 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1204 15:38:37.935101   10206 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1204 15:38:41.432838    9926 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1204 15:38:41.433033    9926 kubeadm.go:597] duration metric: took 4m4.371478667s to restartPrimaryControlPlane
	W1204 15:38:41.433170    9926 out.go:270] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I1204 15:38:41.433226    9926 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm reset --cri-socket /var/run/cri-dockerd.sock --force"
	I1204 15:38:42.357969    9926 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1204 15:38:42.363261    9926 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1204 15:38:42.367033    9926 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1204 15:38:42.369965    9926 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1204 15:38:42.369970    9926 kubeadm.go:157] found existing configuration files:
	
	I1204 15:38:42.369997    9926 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:61592 /etc/kubernetes/admin.conf
	I1204 15:38:42.372567    9926 kubeadm.go:163] "https://control-plane.minikube.internal:61592" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:61592 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1204 15:38:42.372611    9926 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1204 15:38:42.375317    9926 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:61592 /etc/kubernetes/kubelet.conf
	I1204 15:38:42.378189    9926 kubeadm.go:163] "https://control-plane.minikube.internal:61592" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:61592 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1204 15:38:42.378226    9926 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1204 15:38:42.380875    9926 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:61592 /etc/kubernetes/controller-manager.conf
	I1204 15:38:42.383156    9926 kubeadm.go:163] "https://control-plane.minikube.internal:61592" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:61592 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1204 15:38:42.383190    9926 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1204 15:38:42.386311    9926 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:61592 /etc/kubernetes/scheduler.conf
	I1204 15:38:42.388968    9926 kubeadm.go:163] "https://control-plane.minikube.internal:61592" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:61592 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1204 15:38:42.388997    9926 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1204 15:38:42.391475    9926 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I1204 15:38:42.408681    9926 kubeadm.go:310] [init] Using Kubernetes version: v1.24.1
	I1204 15:38:42.408715    9926 kubeadm.go:310] [preflight] Running pre-flight checks
	I1204 15:38:42.455281    9926 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1204 15:38:42.455334    9926 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1204 15:38:42.455382    9926 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I1204 15:38:42.505833    9926 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1204 15:38:42.509798    9926 out.go:235]   - Generating certificates and keys ...
	I1204 15:38:42.509834    9926 kubeadm.go:310] [certs] Using existing ca certificate authority
	I1204 15:38:42.509863    9926 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I1204 15:38:42.509908    9926 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1204 15:38:42.509943    9926 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I1204 15:38:42.509977    9926 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I1204 15:38:42.510008    9926 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I1204 15:38:42.510040    9926 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I1204 15:38:42.510076    9926 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I1204 15:38:42.510115    9926 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1204 15:38:42.510158    9926 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1204 15:38:42.510178    9926 kubeadm.go:310] [certs] Using the existing "sa" key
	I1204 15:38:42.510214    9926 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1204 15:38:42.605058    9926 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1204 15:38:42.694610    9926 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1204 15:38:42.781829    9926 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1204 15:38:42.848324    9926 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1204 15:38:42.878483    9926 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1204 15:38:42.880367    9926 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1204 15:38:42.880392    9926 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I1204 15:38:42.944112    9926 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1204 15:38:42.936012   10206 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1204 15:38:42.936038   10206 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1204 15:38:42.947034    9926 out.go:235]   - Booting up control plane ...
	I1204 15:38:42.947083    9926 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1204 15:38:42.947130    9926 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1204 15:38:42.947200    9926 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1204 15:38:42.947246    9926 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1204 15:38:42.947319    9926 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I1204 15:38:47.447031    9926 kubeadm.go:310] [apiclient] All control plane components are healthy after 4.503116 seconds
	I1204 15:38:47.447128    9926 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1204 15:38:47.452125    9926 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1204 15:38:47.961886    9926 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I1204 15:38:47.962146    9926 kubeadm.go:310] [mark-control-plane] Marking the node running-upgrade-084000 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1204 15:38:48.466472    9926 kubeadm.go:310] [bootstrap-token] Using token: pkltab.sskucs47s1362brc
	I1204 15:38:48.470663    9926 out.go:235]   - Configuring RBAC rules ...
	I1204 15:38:48.470714    9926 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1204 15:38:48.470766    9926 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1204 15:38:48.477696    9926 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1204 15:38:48.478508    9926 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1204 15:38:48.479422    9926 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1204 15:38:48.480086    9926 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1204 15:38:48.483391    9926 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1204 15:38:48.636571    9926 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I1204 15:38:48.870866    9926 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I1204 15:38:48.871292    9926 kubeadm.go:310] 
	I1204 15:38:48.871332    9926 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I1204 15:38:48.871338    9926 kubeadm.go:310] 
	I1204 15:38:48.871380    9926 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I1204 15:38:48.871384    9926 kubeadm.go:310] 
	I1204 15:38:48.871397    9926 kubeadm.go:310]   mkdir -p $HOME/.kube
	I1204 15:38:48.871431    9926 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1204 15:38:48.871498    9926 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1204 15:38:48.871502    9926 kubeadm.go:310] 
	I1204 15:38:48.871538    9926 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I1204 15:38:48.871542    9926 kubeadm.go:310] 
	I1204 15:38:48.871564    9926 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1204 15:38:48.871579    9926 kubeadm.go:310] 
	I1204 15:38:48.871610    9926 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I1204 15:38:48.871648    9926 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1204 15:38:48.871704    9926 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1204 15:38:48.871712    9926 kubeadm.go:310] 
	I1204 15:38:48.871761    9926 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I1204 15:38:48.871806    9926 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I1204 15:38:48.871809    9926 kubeadm.go:310] 
	I1204 15:38:48.871857    9926 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token pkltab.sskucs47s1362brc \
	I1204 15:38:48.871915    9926 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:9ed783fc6ac587ac5303da44420d8c41896e6ac9083929196f4ee227216cf3a5 \
	I1204 15:38:48.871927    9926 kubeadm.go:310] 	--control-plane 
	I1204 15:38:48.871930    9926 kubeadm.go:310] 
	I1204 15:38:48.871972    9926 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I1204 15:38:48.871975    9926 kubeadm.go:310] 
	I1204 15:38:48.872026    9926 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token pkltab.sskucs47s1362brc \
	I1204 15:38:48.872087    9926 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:9ed783fc6ac587ac5303da44420d8c41896e6ac9083929196f4ee227216cf3a5 
	I1204 15:38:48.872160    9926 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1204 15:38:48.872166    9926 cni.go:84] Creating CNI manager for ""
	I1204 15:38:48.872177    9926 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I1204 15:38:48.875803    9926 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I1204 15:38:48.883747    9926 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1204 15:38:48.887194    9926 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I1204 15:38:48.892615    9926 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1204 15:38:48.892695    9926 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1204 15:38:48.892701    9926 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes running-upgrade-084000 minikube.k8s.io/updated_at=2024_12_04T15_38_48_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=efbd8efc50652fe861e71899e50212cc75e3480d minikube.k8s.io/name=running-upgrade-084000 minikube.k8s.io/primary=true
	I1204 15:38:48.895995    9926 ops.go:34] apiserver oom_adj: -16
	I1204 15:38:48.927663    9926 kubeadm.go:1113] duration metric: took 35.013ms to wait for elevateKubeSystemPrivileges
	I1204 15:38:48.938440    9926 kubeadm.go:394] duration metric: took 4m11.891651875s to StartCluster
	I1204 15:38:48.938467    9926 settings.go:142] acquiring lock: {Name:mkdd110867a4c47f742f3f13d7f418d838150f4f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1204 15:38:48.938656    9926 settings.go:150] Updating kubeconfig:  /Users/jenkins/minikube-integration/20045-6982/kubeconfig
	I1204 15:38:48.939077    9926 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/20045-6982/kubeconfig: {Name:mk101d59bd39dad79cc42c692d70ed55e90c94da Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1204 15:38:48.939260    9926 start.go:235] Will wait 6m0s for node &{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1204 15:38:48.939296    9926 addons.go:507] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1204 15:38:48.939335    9926 addons.go:69] Setting storage-provisioner=true in profile "running-upgrade-084000"
	I1204 15:38:48.939346    9926 addons.go:234] Setting addon storage-provisioner=true in "running-upgrade-084000"
	W1204 15:38:48.939349    9926 addons.go:243] addon storage-provisioner should already be in state true
	I1204 15:38:48.939364    9926 host.go:66] Checking if "running-upgrade-084000" exists ...
	I1204 15:38:48.939377    9926 addons.go:69] Setting default-storageclass=true in profile "running-upgrade-084000"
	I1204 15:38:48.939401    9926 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "running-upgrade-084000"
	I1204 15:38:48.939454    9926 config.go:182] Loaded profile config "running-upgrade-084000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I1204 15:38:48.940564    9926 kapi.go:59] client config for running-upgrade-084000: &rest.Config{Host:"https://10.0.2.15:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/20045-6982/.minikube/profiles/running-upgrade-084000/client.crt", KeyFile:"/Users/jenkins/minikube-integration/20045-6982/.minikube/profiles/running-upgrade-084000/client.key", CAFile:"/Users/jenkins/minikube-integration/20045-6982/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x10676f6e0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1204 15:38:48.940891    9926 addons.go:234] Setting addon default-storageclass=true in "running-upgrade-084000"
	W1204 15:38:48.940896    9926 addons.go:243] addon default-storageclass should already be in state true
	I1204 15:38:48.940903    9926 host.go:66] Checking if "running-upgrade-084000" exists ...
	I1204 15:38:48.943772    9926 out.go:177] * Verifying Kubernetes components...
	I1204 15:38:48.944140    9926 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I1204 15:38:48.947782    9926 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1204 15:38:48.947794    9926 sshutil.go:53] new ssh client: &{IP:localhost Port:61560 SSHKeyPath:/Users/jenkins/minikube-integration/20045-6982/.minikube/machines/running-upgrade-084000/id_rsa Username:docker}
	I1204 15:38:48.951662    9926 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1204 15:38:47.937523   10206 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1204 15:38:47.937573   10206 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1204 15:38:48.954743    9926 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1204 15:38:48.957699    9926 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1204 15:38:48.957707    9926 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1204 15:38:48.957712    9926 sshutil.go:53] new ssh client: &{IP:localhost Port:61560 SSHKeyPath:/Users/jenkins/minikube-integration/20045-6982/.minikube/machines/running-upgrade-084000/id_rsa Username:docker}
	I1204 15:38:49.027545    9926 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1204 15:38:49.032710    9926 api_server.go:52] waiting for apiserver process to appear ...
	I1204 15:38:49.032758    9926 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1204 15:38:49.036438    9926 api_server.go:72] duration metric: took 97.16625ms to wait for apiserver process to appear ...
	I1204 15:38:49.036446    9926 api_server.go:88] waiting for apiserver healthz status ...
	I1204 15:38:49.036453    9926 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1204 15:38:49.051995    9926 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1204 15:38:49.088724    9926 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1204 15:38:49.435088    9926 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I1204 15:38:49.435100    9926 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I1204 15:38:52.939106   10206 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1204 15:38:52.939186   10206 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1204 15:38:54.038683    9926 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1204 15:38:54.038785    9926 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1204 15:38:57.941427   10206 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1204 15:38:57.941468   10206 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1204 15:38:59.039449    9926 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1204 15:38:59.039492    9926 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1204 15:39:02.943781   10206 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1204 15:39:02.943800   10206 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1204 15:39:04.039990    9926 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1204 15:39:04.040014    9926 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1204 15:39:07.944086   10206 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1204 15:39:07.944131   10206 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1204 15:39:09.040673    9926 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1204 15:39:09.040729    9926 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1204 15:39:12.946468   10206 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1204 15:39:12.946606   10206 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1204 15:39:12.957822   10206 logs.go:282] 2 containers: [bc235f7c7828 1d1ae4543cd6]
	I1204 15:39:12.957897   10206 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1204 15:39:12.968095   10206 logs.go:282] 2 containers: [fc48923b85fc 4f1790594676]
	I1204 15:39:12.968171   10206 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1204 15:39:12.978698   10206 logs.go:282] 1 containers: [467f47d9f689]
	I1204 15:39:12.978773   10206 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1204 15:39:12.995946   10206 logs.go:282] 2 containers: [1f92512b0d23 b0ad1b935d01]
	I1204 15:39:12.996022   10206 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1204 15:39:13.006533   10206 logs.go:282] 1 containers: [e290e266aab8]
	I1204 15:39:13.006612   10206 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1204 15:39:13.017101   10206 logs.go:282] 2 containers: [4469186baff0 63473edefa8f]
	I1204 15:39:13.017178   10206 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1204 15:39:13.027229   10206 logs.go:282] 0 containers: []
	W1204 15:39:13.027240   10206 logs.go:284] No container was found matching "kindnet"
	I1204 15:39:13.027303   10206 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1204 15:39:13.038194   10206 logs.go:282] 2 containers: [4bf3c6abead9 4a54859eb1e5]
	I1204 15:39:13.038214   10206 logs.go:123] Gathering logs for Docker ...
	I1204 15:39:13.038220   10206 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1204 15:39:13.062220   10206 logs.go:123] Gathering logs for kube-apiserver [1d1ae4543cd6] ...
	I1204 15:39:13.062227   10206 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1d1ae4543cd6"
	I1204 15:39:13.104561   10206 logs.go:123] Gathering logs for etcd [fc48923b85fc] ...
	I1204 15:39:13.104578   10206 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fc48923b85fc"
	I1204 15:39:13.119469   10206 logs.go:123] Gathering logs for etcd [4f1790594676] ...
	I1204 15:39:13.119482   10206 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4f1790594676"
	I1204 15:39:13.137942   10206 logs.go:123] Gathering logs for storage-provisioner [4bf3c6abead9] ...
	I1204 15:39:13.137954   10206 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4bf3c6abead9"
	I1204 15:39:13.149289   10206 logs.go:123] Gathering logs for storage-provisioner [4a54859eb1e5] ...
	I1204 15:39:13.149299   10206 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4a54859eb1e5"
	I1204 15:39:13.160461   10206 logs.go:123] Gathering logs for container status ...
	I1204 15:39:13.160474   10206 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1204 15:39:13.172856   10206 logs.go:123] Gathering logs for kubelet ...
	I1204 15:39:13.172866   10206 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1204 15:39:13.212457   10206 logs.go:123] Gathering logs for coredns [467f47d9f689] ...
	I1204 15:39:13.212472   10206 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 467f47d9f689"
	I1204 15:39:13.226665   10206 logs.go:123] Gathering logs for kube-scheduler [b0ad1b935d01] ...
	I1204 15:39:13.226678   10206 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b0ad1b935d01"
	I1204 15:39:13.243888   10206 logs.go:123] Gathering logs for dmesg ...
	I1204 15:39:13.243898   10206 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1204 15:39:13.248428   10206 logs.go:123] Gathering logs for kube-scheduler [1f92512b0d23] ...
	I1204 15:39:13.248434   10206 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1f92512b0d23"
	I1204 15:39:13.260493   10206 logs.go:123] Gathering logs for kube-controller-manager [63473edefa8f] ...
	I1204 15:39:13.260505   10206 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 63473edefa8f"
	I1204 15:39:13.273014   10206 logs.go:123] Gathering logs for kube-controller-manager [4469186baff0] ...
	I1204 15:39:13.273025   10206 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4469186baff0"
	I1204 15:39:13.290465   10206 logs.go:123] Gathering logs for describe nodes ...
	I1204 15:39:13.290475   10206 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1204 15:39:13.379546   10206 logs.go:123] Gathering logs for kube-apiserver [bc235f7c7828] ...
	I1204 15:39:13.379560   10206 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bc235f7c7828"
	I1204 15:39:13.393814   10206 logs.go:123] Gathering logs for kube-proxy [e290e266aab8] ...
	I1204 15:39:13.393824   10206 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e290e266aab8"
	I1204 15:39:15.907759   10206 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1204 15:39:14.041589    9926 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1204 15:39:14.041616    9926 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1204 15:39:19.042990    9926 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1204 15:39:19.043031    9926 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	W1204 15:39:19.435880    9926 out.go:270] ! Enabling 'default-storageclass' returned an error: running callbacks: [Error making standard the default storage class: Error listing StorageClasses: Get "https://10.0.2.15:8443/apis/storage.k8s.io/v1/storageclasses": dial tcp 10.0.2.15:8443: i/o timeout]
	I1204 15:39:19.441201    9926 out.go:177] * Enabled addons: storage-provisioner
	I1204 15:39:20.910122   10206 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1204 15:39:20.910308   10206 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1204 15:39:20.926432   10206 logs.go:282] 2 containers: [bc235f7c7828 1d1ae4543cd6]
	I1204 15:39:20.926532   10206 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1204 15:39:20.939165   10206 logs.go:282] 2 containers: [fc48923b85fc 4f1790594676]
	I1204 15:39:20.939252   10206 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1204 15:39:20.950183   10206 logs.go:282] 1 containers: [467f47d9f689]
	I1204 15:39:20.950255   10206 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1204 15:39:20.961160   10206 logs.go:282] 2 containers: [1f92512b0d23 b0ad1b935d01]
	I1204 15:39:20.961239   10206 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1204 15:39:20.971968   10206 logs.go:282] 1 containers: [e290e266aab8]
	I1204 15:39:20.972051   10206 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1204 15:39:20.983312   10206 logs.go:282] 2 containers: [4469186baff0 63473edefa8f]
	I1204 15:39:20.983395   10206 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1204 15:39:20.994716   10206 logs.go:282] 0 containers: []
	W1204 15:39:20.994726   10206 logs.go:284] No container was found matching "kindnet"
	I1204 15:39:20.994801   10206 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1204 15:39:21.005186   10206 logs.go:282] 2 containers: [4bf3c6abead9 4a54859eb1e5]
	I1204 15:39:21.005205   10206 logs.go:123] Gathering logs for describe nodes ...
	I1204 15:39:21.005211   10206 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1204 15:39:21.042226   10206 logs.go:123] Gathering logs for kube-apiserver [1d1ae4543cd6] ...
	I1204 15:39:21.042237   10206 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1d1ae4543cd6"
	I1204 15:39:21.080701   10206 logs.go:123] Gathering logs for etcd [4f1790594676] ...
	I1204 15:39:21.080715   10206 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4f1790594676"
	I1204 15:39:21.096028   10206 logs.go:123] Gathering logs for kube-proxy [e290e266aab8] ...
	I1204 15:39:21.096040   10206 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e290e266aab8"
	I1204 15:39:21.107958   10206 logs.go:123] Gathering logs for kube-apiserver [bc235f7c7828] ...
	I1204 15:39:21.107969   10206 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bc235f7c7828"
	I1204 15:39:21.122372   10206 logs.go:123] Gathering logs for etcd [fc48923b85fc] ...
	I1204 15:39:21.122382   10206 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fc48923b85fc"
	I1204 15:39:21.136589   10206 logs.go:123] Gathering logs for kube-scheduler [1f92512b0d23] ...
	I1204 15:39:21.136599   10206 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1f92512b0d23"
	I1204 15:39:21.148319   10206 logs.go:123] Gathering logs for kube-controller-manager [4469186baff0] ...
	I1204 15:39:21.148330   10206 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4469186baff0"
	I1204 15:39:21.166149   10206 logs.go:123] Gathering logs for storage-provisioner [4bf3c6abead9] ...
	I1204 15:39:21.166160   10206 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4bf3c6abead9"
	I1204 15:39:21.177453   10206 logs.go:123] Gathering logs for Docker ...
	I1204 15:39:21.177463   10206 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1204 15:39:21.201477   10206 logs.go:123] Gathering logs for dmesg ...
	I1204 15:39:21.201487   10206 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1204 15:39:21.205337   10206 logs.go:123] Gathering logs for coredns [467f47d9f689] ...
	I1204 15:39:21.205343   10206 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 467f47d9f689"
	I1204 15:39:21.216802   10206 logs.go:123] Gathering logs for kube-scheduler [b0ad1b935d01] ...
	I1204 15:39:21.216813   10206 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b0ad1b935d01"
	I1204 15:39:21.231595   10206 logs.go:123] Gathering logs for storage-provisioner [4a54859eb1e5] ...
	I1204 15:39:21.231606   10206 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4a54859eb1e5"
	I1204 15:39:21.242630   10206 logs.go:123] Gathering logs for container status ...
	I1204 15:39:21.242640   10206 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1204 15:39:21.254957   10206 logs.go:123] Gathering logs for kubelet ...
	I1204 15:39:21.254971   10206 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1204 15:39:21.294164   10206 logs.go:123] Gathering logs for kube-controller-manager [63473edefa8f] ...
	I1204 15:39:21.294175   10206 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 63473edefa8f"
	I1204 15:39:19.448150    9926 addons.go:510] duration metric: took 30.508583958s for enable addons: enabled=[storage-provisioner]
	I1204 15:39:23.809518   10206 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1204 15:39:24.043476    9926 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1204 15:39:24.043523    9926 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1204 15:39:28.811905   10206 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1204 15:39:28.812079   10206 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1204 15:39:28.823534   10206 logs.go:282] 2 containers: [bc235f7c7828 1d1ae4543cd6]
	I1204 15:39:28.823613   10206 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1204 15:39:28.834203   10206 logs.go:282] 2 containers: [fc48923b85fc 4f1790594676]
	I1204 15:39:28.834278   10206 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1204 15:39:28.848874   10206 logs.go:282] 1 containers: [467f47d9f689]
	I1204 15:39:28.848954   10206 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1204 15:39:28.859197   10206 logs.go:282] 2 containers: [1f92512b0d23 b0ad1b935d01]
	I1204 15:39:28.859277   10206 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1204 15:39:28.869625   10206 logs.go:282] 1 containers: [e290e266aab8]
	I1204 15:39:28.869696   10206 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1204 15:39:28.880352   10206 logs.go:282] 2 containers: [4469186baff0 63473edefa8f]
	I1204 15:39:28.880419   10206 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1204 15:39:28.895338   10206 logs.go:282] 0 containers: []
	W1204 15:39:28.895352   10206 logs.go:284] No container was found matching "kindnet"
	I1204 15:39:28.895425   10206 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1204 15:39:28.907916   10206 logs.go:282] 2 containers: [4bf3c6abead9 4a54859eb1e5]
	I1204 15:39:28.907938   10206 logs.go:123] Gathering logs for kube-apiserver [bc235f7c7828] ...
	I1204 15:39:28.907943   10206 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bc235f7c7828"
	I1204 15:39:28.921889   10206 logs.go:123] Gathering logs for etcd [fc48923b85fc] ...
	I1204 15:39:28.921901   10206 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fc48923b85fc"
	I1204 15:39:28.935415   10206 logs.go:123] Gathering logs for etcd [4f1790594676] ...
	I1204 15:39:28.935426   10206 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4f1790594676"
	I1204 15:39:28.949703   10206 logs.go:123] Gathering logs for kube-scheduler [1f92512b0d23] ...
	I1204 15:39:28.949713   10206 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1f92512b0d23"
	I1204 15:39:28.961404   10206 logs.go:123] Gathering logs for kube-controller-manager [4469186baff0] ...
	I1204 15:39:28.961415   10206 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4469186baff0"
	I1204 15:39:28.978334   10206 logs.go:123] Gathering logs for kube-controller-manager [63473edefa8f] ...
	I1204 15:39:28.978344   10206 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 63473edefa8f"
	I1204 15:39:28.993521   10206 logs.go:123] Gathering logs for kubelet ...
	I1204 15:39:28.993533   10206 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1204 15:39:29.032810   10206 logs.go:123] Gathering logs for dmesg ...
	I1204 15:39:29.032821   10206 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1204 15:39:29.037480   10206 logs.go:123] Gathering logs for kube-proxy [e290e266aab8] ...
	I1204 15:39:29.037489   10206 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e290e266aab8"
	I1204 15:39:29.051951   10206 logs.go:123] Gathering logs for storage-provisioner [4bf3c6abead9] ...
	I1204 15:39:29.051961   10206 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4bf3c6abead9"
	I1204 15:39:29.064183   10206 logs.go:123] Gathering logs for container status ...
	I1204 15:39:29.064194   10206 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1204 15:39:29.076357   10206 logs.go:123] Gathering logs for kube-apiserver [1d1ae4543cd6] ...
	I1204 15:39:29.076370   10206 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1d1ae4543cd6"
	I1204 15:39:29.119194   10206 logs.go:123] Gathering logs for coredns [467f47d9f689] ...
	I1204 15:39:29.119207   10206 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 467f47d9f689"
	I1204 15:39:29.130964   10206 logs.go:123] Gathering logs for kube-scheduler [b0ad1b935d01] ...
	I1204 15:39:29.130976   10206 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b0ad1b935d01"
	I1204 15:39:29.150521   10206 logs.go:123] Gathering logs for describe nodes ...
	I1204 15:39:29.150532   10206 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1204 15:39:29.185584   10206 logs.go:123] Gathering logs for storage-provisioner [4a54859eb1e5] ...
	I1204 15:39:29.185596   10206 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4a54859eb1e5"
	I1204 15:39:29.197369   10206 logs.go:123] Gathering logs for Docker ...
	I1204 15:39:29.197382   10206 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1204 15:39:31.724167   10206 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1204 15:39:29.044019    9926 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1204 15:39:29.044055    9926 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1204 15:39:36.726618   10206 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1204 15:39:36.726839   10206 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1204 15:39:36.746903   10206 logs.go:282] 2 containers: [bc235f7c7828 1d1ae4543cd6]
	I1204 15:39:36.747013   10206 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1204 15:39:36.760964   10206 logs.go:282] 2 containers: [fc48923b85fc 4f1790594676]
	I1204 15:39:36.761049   10206 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1204 15:39:36.772469   10206 logs.go:282] 1 containers: [467f47d9f689]
	I1204 15:39:36.772560   10206 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1204 15:39:34.045640    9926 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1204 15:39:34.045687    9926 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1204 15:39:36.782811   10206 logs.go:282] 2 containers: [1f92512b0d23 b0ad1b935d01]
	I1204 15:39:36.782911   10206 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1204 15:39:36.793334   10206 logs.go:282] 1 containers: [e290e266aab8]
	I1204 15:39:36.793401   10206 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1204 15:39:36.804992   10206 logs.go:282] 2 containers: [4469186baff0 63473edefa8f]
	I1204 15:39:36.805067   10206 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1204 15:39:36.815849   10206 logs.go:282] 0 containers: []
	W1204 15:39:36.815863   10206 logs.go:284] No container was found matching "kindnet"
	I1204 15:39:36.815928   10206 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1204 15:39:36.831130   10206 logs.go:282] 2 containers: [4bf3c6abead9 4a54859eb1e5]
	I1204 15:39:36.831150   10206 logs.go:123] Gathering logs for dmesg ...
	I1204 15:39:36.831155   10206 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1204 15:39:36.845862   10206 logs.go:123] Gathering logs for etcd [4f1790594676] ...
	I1204 15:39:36.845873   10206 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4f1790594676"
	I1204 15:39:36.866037   10206 logs.go:123] Gathering logs for kube-controller-manager [63473edefa8f] ...
	I1204 15:39:36.866047   10206 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 63473edefa8f"
	I1204 15:39:36.890945   10206 logs.go:123] Gathering logs for Docker ...
	I1204 15:39:36.890956   10206 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1204 15:39:36.916905   10206 logs.go:123] Gathering logs for kubelet ...
	I1204 15:39:36.916914   10206 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1204 15:39:36.956697   10206 logs.go:123] Gathering logs for describe nodes ...
	I1204 15:39:36.956709   10206 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1204 15:39:36.997177   10206 logs.go:123] Gathering logs for kube-apiserver [bc235f7c7828] ...
	I1204 15:39:36.997192   10206 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bc235f7c7828"
	I1204 15:39:37.011335   10206 logs.go:123] Gathering logs for coredns [467f47d9f689] ...
	I1204 15:39:37.011345   10206 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 467f47d9f689"
	I1204 15:39:37.026486   10206 logs.go:123] Gathering logs for kube-controller-manager [4469186baff0] ...
	I1204 15:39:37.026497   10206 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4469186baff0"
	I1204 15:39:37.048729   10206 logs.go:123] Gathering logs for kube-apiserver [1d1ae4543cd6] ...
	I1204 15:39:37.048745   10206 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1d1ae4543cd6"
	I1204 15:39:37.087842   10206 logs.go:123] Gathering logs for storage-provisioner [4a54859eb1e5] ...
	I1204 15:39:37.087854   10206 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4a54859eb1e5"
	I1204 15:39:37.099142   10206 logs.go:123] Gathering logs for container status ...
	I1204 15:39:37.099156   10206 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1204 15:39:37.111866   10206 logs.go:123] Gathering logs for etcd [fc48923b85fc] ...
	I1204 15:39:37.111878   10206 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fc48923b85fc"
	I1204 15:39:37.126949   10206 logs.go:123] Gathering logs for kube-scheduler [1f92512b0d23] ...
	I1204 15:39:37.126991   10206 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1f92512b0d23"
	I1204 15:39:37.140836   10206 logs.go:123] Gathering logs for kube-scheduler [b0ad1b935d01] ...
	I1204 15:39:37.140844   10206 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b0ad1b935d01"
	I1204 15:39:37.156020   10206 logs.go:123] Gathering logs for kube-proxy [e290e266aab8] ...
	I1204 15:39:37.156036   10206 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e290e266aab8"
	I1204 15:39:37.167543   10206 logs.go:123] Gathering logs for storage-provisioner [4bf3c6abead9] ...
	I1204 15:39:37.167553   10206 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4bf3c6abead9"
	I1204 15:39:39.680799   10206 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1204 15:39:39.047267    9926 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1204 15:39:39.047311    9926 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1204 15:39:44.683156   10206 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1204 15:39:44.683406   10206 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1204 15:39:44.717417   10206 logs.go:282] 2 containers: [bc235f7c7828 1d1ae4543cd6]
	I1204 15:39:44.717532   10206 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1204 15:39:44.733342   10206 logs.go:282] 2 containers: [fc48923b85fc 4f1790594676]
	I1204 15:39:44.733430   10206 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1204 15:39:44.745288   10206 logs.go:282] 1 containers: [467f47d9f689]
	I1204 15:39:44.745371   10206 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1204 15:39:44.756495   10206 logs.go:282] 2 containers: [1f92512b0d23 b0ad1b935d01]
	I1204 15:39:44.756577   10206 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1204 15:39:44.766907   10206 logs.go:282] 1 containers: [e290e266aab8]
	I1204 15:39:44.766985   10206 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1204 15:39:44.782194   10206 logs.go:282] 2 containers: [4469186baff0 63473edefa8f]
	I1204 15:39:44.782274   10206 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1204 15:39:44.792720   10206 logs.go:282] 0 containers: []
	W1204 15:39:44.792732   10206 logs.go:284] No container was found matching "kindnet"
	I1204 15:39:44.792802   10206 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1204 15:39:44.805885   10206 logs.go:282] 2 containers: [4bf3c6abead9 4a54859eb1e5]
	I1204 15:39:44.805903   10206 logs.go:123] Gathering logs for describe nodes ...
	I1204 15:39:44.805909   10206 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1204 15:39:44.841033   10206 logs.go:123] Gathering logs for etcd [fc48923b85fc] ...
	I1204 15:39:44.841046   10206 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fc48923b85fc"
	I1204 15:39:44.860926   10206 logs.go:123] Gathering logs for storage-provisioner [4bf3c6abead9] ...
	I1204 15:39:44.860939   10206 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4bf3c6abead9"
	I1204 15:39:44.872657   10206 logs.go:123] Gathering logs for Docker ...
	I1204 15:39:44.872669   10206 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1204 15:39:44.897654   10206 logs.go:123] Gathering logs for dmesg ...
	I1204 15:39:44.897664   10206 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1204 15:39:44.901779   10206 logs.go:123] Gathering logs for kube-apiserver [bc235f7c7828] ...
	I1204 15:39:44.901786   10206 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bc235f7c7828"
	I1204 15:39:44.919543   10206 logs.go:123] Gathering logs for kube-scheduler [1f92512b0d23] ...
	I1204 15:39:44.919557   10206 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1f92512b0d23"
	I1204 15:39:44.935044   10206 logs.go:123] Gathering logs for kube-controller-manager [63473edefa8f] ...
	I1204 15:39:44.935057   10206 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 63473edefa8f"
	I1204 15:39:44.947749   10206 logs.go:123] Gathering logs for kube-apiserver [1d1ae4543cd6] ...
	I1204 15:39:44.947761   10206 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1d1ae4543cd6"
	I1204 15:39:44.985918   10206 logs.go:123] Gathering logs for etcd [4f1790594676] ...
	I1204 15:39:44.985931   10206 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4f1790594676"
	I1204 15:39:45.001073   10206 logs.go:123] Gathering logs for storage-provisioner [4a54859eb1e5] ...
	I1204 15:39:45.001083   10206 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4a54859eb1e5"
	I1204 15:39:45.012770   10206 logs.go:123] Gathering logs for container status ...
	I1204 15:39:45.012781   10206 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1204 15:39:45.025513   10206 logs.go:123] Gathering logs for kube-controller-manager [4469186baff0] ...
	I1204 15:39:45.025523   10206 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4469186baff0"
	I1204 15:39:45.043229   10206 logs.go:123] Gathering logs for kubelet ...
	I1204 15:39:45.043242   10206 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1204 15:39:45.080926   10206 logs.go:123] Gathering logs for coredns [467f47d9f689] ...
	I1204 15:39:45.080936   10206 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 467f47d9f689"
	I1204 15:39:45.092246   10206 logs.go:123] Gathering logs for kube-scheduler [b0ad1b935d01] ...
	I1204 15:39:45.092257   10206 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b0ad1b935d01"
	I1204 15:39:45.107800   10206 logs.go:123] Gathering logs for kube-proxy [e290e266aab8] ...
	I1204 15:39:45.107811   10206 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e290e266aab8"
	I1204 15:39:44.048919    9926 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1204 15:39:44.048960    9926 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1204 15:39:47.621948   10206 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1204 15:39:49.051251    9926 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1204 15:39:49.051374    9926 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1204 15:39:49.062281    9926 logs.go:282] 1 containers: [f3328f94ed0d]
	I1204 15:39:49.062356    9926 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1204 15:39:49.073300    9926 logs.go:282] 1 containers: [17b0ed658f6c]
	I1204 15:39:49.073382    9926 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1204 15:39:49.084533    9926 logs.go:282] 2 containers: [0a3178099d31 c8ddfa007847]
	I1204 15:39:49.084605    9926 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1204 15:39:49.094874    9926 logs.go:282] 1 containers: [a82022fd2242]
	I1204 15:39:49.094968    9926 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1204 15:39:49.107157    9926 logs.go:282] 1 containers: [8f7d833df5c3]
	I1204 15:39:49.107246    9926 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1204 15:39:49.118449    9926 logs.go:282] 1 containers: [bdf1070f199f]
	I1204 15:39:49.118520    9926 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1204 15:39:49.129302    9926 logs.go:282] 0 containers: []
	W1204 15:39:49.129312    9926 logs.go:284] No container was found matching "kindnet"
	I1204 15:39:49.129373    9926 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1204 15:39:49.139777    9926 logs.go:282] 1 containers: [bdb47901ddb7]
	I1204 15:39:49.139795    9926 logs.go:123] Gathering logs for describe nodes ...
	I1204 15:39:49.139800    9926 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1204 15:39:49.175321    9926 logs.go:123] Gathering logs for coredns [0a3178099d31] ...
	I1204 15:39:49.175336    9926 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0a3178099d31"
	I1204 15:39:49.191840    9926 logs.go:123] Gathering logs for coredns [c8ddfa007847] ...
	I1204 15:39:49.191851    9926 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c8ddfa007847"
	I1204 15:39:49.203325    9926 logs.go:123] Gathering logs for kube-scheduler [a82022fd2242] ...
	I1204 15:39:49.203339    9926 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a82022fd2242"
	I1204 15:39:49.218223    9926 logs.go:123] Gathering logs for kube-controller-manager [bdf1070f199f] ...
	I1204 15:39:49.218234    9926 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bdf1070f199f"
	I1204 15:39:49.235891    9926 logs.go:123] Gathering logs for storage-provisioner [bdb47901ddb7] ...
	I1204 15:39:49.235901    9926 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bdb47901ddb7"
	I1204 15:39:49.247115    9926 logs.go:123] Gathering logs for container status ...
	I1204 15:39:49.247126    9926 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1204 15:39:49.258826    9926 logs.go:123] Gathering logs for dmesg ...
	I1204 15:39:49.258840    9926 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1204 15:39:49.263235    9926 logs.go:123] Gathering logs for kube-apiserver [f3328f94ed0d] ...
	I1204 15:39:49.263245    9926 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f3328f94ed0d"
	I1204 15:39:49.281850    9926 logs.go:123] Gathering logs for etcd [17b0ed658f6c] ...
	I1204 15:39:49.281863    9926 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 17b0ed658f6c"
	I1204 15:39:49.298398    9926 logs.go:123] Gathering logs for kube-proxy [8f7d833df5c3] ...
	I1204 15:39:49.298409    9926 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8f7d833df5c3"
	I1204 15:39:49.312450    9926 logs.go:123] Gathering logs for Docker ...
	I1204 15:39:49.312465    9926 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1204 15:39:49.338724    9926 logs.go:123] Gathering logs for kubelet ...
	I1204 15:39:49.338770    9926 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1204 15:39:52.624374   10206 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1204 15:39:52.624545   10206 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1204 15:39:52.636847   10206 logs.go:282] 2 containers: [bc235f7c7828 1d1ae4543cd6]
	I1204 15:39:52.636934   10206 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1204 15:39:52.648061   10206 logs.go:282] 2 containers: [fc48923b85fc 4f1790594676]
	I1204 15:39:52.648138   10206 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1204 15:39:52.658851   10206 logs.go:282] 1 containers: [467f47d9f689]
	I1204 15:39:52.658932   10206 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1204 15:39:52.673000   10206 logs.go:282] 2 containers: [1f92512b0d23 b0ad1b935d01]
	I1204 15:39:52.673080   10206 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1204 15:39:52.683192   10206 logs.go:282] 1 containers: [e290e266aab8]
	I1204 15:39:52.683274   10206 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1204 15:39:52.700660   10206 logs.go:282] 2 containers: [4469186baff0 63473edefa8f]
	I1204 15:39:52.700733   10206 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1204 15:39:52.713316   10206 logs.go:282] 0 containers: []
	W1204 15:39:52.713327   10206 logs.go:284] No container was found matching "kindnet"
	I1204 15:39:52.713401   10206 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1204 15:39:52.723617   10206 logs.go:282] 2 containers: [4bf3c6abead9 4a54859eb1e5]
	I1204 15:39:52.723643   10206 logs.go:123] Gathering logs for etcd [4f1790594676] ...
	I1204 15:39:52.723655   10206 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4f1790594676"
	I1204 15:39:52.738330   10206 logs.go:123] Gathering logs for coredns [467f47d9f689] ...
	I1204 15:39:52.738344   10206 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 467f47d9f689"
	I1204 15:39:52.751263   10206 logs.go:123] Gathering logs for kube-scheduler [b0ad1b935d01] ...
	I1204 15:39:52.751276   10206 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b0ad1b935d01"
	I1204 15:39:52.771471   10206 logs.go:123] Gathering logs for kube-controller-manager [4469186baff0] ...
	I1204 15:39:52.771484   10206 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4469186baff0"
	I1204 15:39:52.788985   10206 logs.go:123] Gathering logs for kube-controller-manager [63473edefa8f] ...
	I1204 15:39:52.788995   10206 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 63473edefa8f"
	I1204 15:39:52.802441   10206 logs.go:123] Gathering logs for kubelet ...
	I1204 15:39:52.802456   10206 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1204 15:39:52.839973   10206 logs.go:123] Gathering logs for kube-scheduler [1f92512b0d23] ...
	I1204 15:39:52.839987   10206 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1f92512b0d23"
	I1204 15:39:52.851565   10206 logs.go:123] Gathering logs for describe nodes ...
	I1204 15:39:52.851576   10206 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1204 15:39:52.888837   10206 logs.go:123] Gathering logs for kube-proxy [e290e266aab8] ...
	I1204 15:39:52.888848   10206 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e290e266aab8"
	I1204 15:39:52.900697   10206 logs.go:123] Gathering logs for Docker ...
	I1204 15:39:52.900707   10206 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1204 15:39:52.924250   10206 logs.go:123] Gathering logs for dmesg ...
	I1204 15:39:52.924260   10206 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1204 15:39:52.929349   10206 logs.go:123] Gathering logs for kube-apiserver [1d1ae4543cd6] ...
	I1204 15:39:52.929357   10206 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1d1ae4543cd6"
	I1204 15:39:52.972535   10206 logs.go:123] Gathering logs for etcd [fc48923b85fc] ...
	I1204 15:39:52.972545   10206 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fc48923b85fc"
	I1204 15:39:52.986453   10206 logs.go:123] Gathering logs for storage-provisioner [4bf3c6abead9] ...
	I1204 15:39:52.986466   10206 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4bf3c6abead9"
	I1204 15:39:52.998801   10206 logs.go:123] Gathering logs for storage-provisioner [4a54859eb1e5] ...
	I1204 15:39:52.998813   10206 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4a54859eb1e5"
	I1204 15:39:53.009826   10206 logs.go:123] Gathering logs for container status ...
	I1204 15:39:53.009837   10206 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1204 15:39:53.022167   10206 logs.go:123] Gathering logs for kube-apiserver [bc235f7c7828] ...
	I1204 15:39:53.022179   10206 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bc235f7c7828"
	I1204 15:39:55.538830   10206 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1204 15:39:51.878758    9926 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1204 15:40:00.541284   10206 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1204 15:40:00.541547   10206 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1204 15:40:00.565963   10206 logs.go:282] 2 containers: [bc235f7c7828 1d1ae4543cd6]
	I1204 15:40:00.566095   10206 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1204 15:40:00.582174   10206 logs.go:282] 2 containers: [fc48923b85fc 4f1790594676]
	I1204 15:40:00.582276   10206 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1204 15:40:00.595008   10206 logs.go:282] 1 containers: [467f47d9f689]
	I1204 15:40:00.595083   10206 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1204 15:40:00.606117   10206 logs.go:282] 2 containers: [1f92512b0d23 b0ad1b935d01]
	I1204 15:40:00.606201   10206 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1204 15:40:00.616277   10206 logs.go:282] 1 containers: [e290e266aab8]
	I1204 15:40:00.616352   10206 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1204 15:40:00.627122   10206 logs.go:282] 2 containers: [4469186baff0 63473edefa8f]
	I1204 15:40:00.627204   10206 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1204 15:40:00.637742   10206 logs.go:282] 0 containers: []
	W1204 15:40:00.637755   10206 logs.go:284] No container was found matching "kindnet"
	I1204 15:40:00.637821   10206 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1204 15:40:00.648926   10206 logs.go:282] 2 containers: [4bf3c6abead9 4a54859eb1e5]
	I1204 15:40:00.648944   10206 logs.go:123] Gathering logs for etcd [4f1790594676] ...
	I1204 15:40:00.648950   10206 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4f1790594676"
	I1204 15:40:00.663760   10206 logs.go:123] Gathering logs for kube-proxy [e290e266aab8] ...
	I1204 15:40:00.663774   10206 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e290e266aab8"
	I1204 15:40:00.676074   10206 logs.go:123] Gathering logs for dmesg ...
	I1204 15:40:00.676085   10206 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1204 15:40:00.680857   10206 logs.go:123] Gathering logs for kube-apiserver [1d1ae4543cd6] ...
	I1204 15:40:00.680866   10206 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1d1ae4543cd6"
	I1204 15:40:00.718318   10206 logs.go:123] Gathering logs for kube-controller-manager [63473edefa8f] ...
	I1204 15:40:00.718330   10206 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 63473edefa8f"
	I1204 15:40:00.739084   10206 logs.go:123] Gathering logs for container status ...
	I1204 15:40:00.739096   10206 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1204 15:40:00.751307   10206 logs.go:123] Gathering logs for kubelet ...
	I1204 15:40:00.751317   10206 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1204 15:40:00.791121   10206 logs.go:123] Gathering logs for kube-scheduler [1f92512b0d23] ...
	I1204 15:40:00.791131   10206 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1f92512b0d23"
	I1204 15:40:00.802764   10206 logs.go:123] Gathering logs for coredns [467f47d9f689] ...
	I1204 15:40:00.802773   10206 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 467f47d9f689"
	I1204 15:40:00.813862   10206 logs.go:123] Gathering logs for storage-provisioner [4bf3c6abead9] ...
	I1204 15:40:00.813872   10206 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4bf3c6abead9"
	I1204 15:40:00.825626   10206 logs.go:123] Gathering logs for etcd [fc48923b85fc] ...
	I1204 15:40:00.825639   10206 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fc48923b85fc"
	I1204 15:40:00.839573   10206 logs.go:123] Gathering logs for kube-scheduler [b0ad1b935d01] ...
	I1204 15:40:00.839584   10206 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b0ad1b935d01"
	I1204 15:40:00.854781   10206 logs.go:123] Gathering logs for kube-controller-manager [4469186baff0] ...
	I1204 15:40:00.854791   10206 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4469186baff0"
	I1204 15:40:00.871924   10206 logs.go:123] Gathering logs for storage-provisioner [4a54859eb1e5] ...
	I1204 15:40:00.871934   10206 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4a54859eb1e5"
	I1204 15:40:00.886356   10206 logs.go:123] Gathering logs for Docker ...
	I1204 15:40:00.886366   10206 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1204 15:40:00.910877   10206 logs.go:123] Gathering logs for describe nodes ...
	I1204 15:40:00.910887   10206 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1204 15:40:00.946768   10206 logs.go:123] Gathering logs for kube-apiserver [bc235f7c7828] ...
	I1204 15:40:00.946785   10206 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bc235f7c7828"
	I1204 15:39:56.881561    9926 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1204 15:39:56.881921    9926 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1204 15:39:56.912278    9926 logs.go:282] 1 containers: [f3328f94ed0d]
	I1204 15:39:56.912425    9926 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1204 15:39:56.930585    9926 logs.go:282] 1 containers: [17b0ed658f6c]
	I1204 15:39:56.930697    9926 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1204 15:39:56.944525    9926 logs.go:282] 2 containers: [0a3178099d31 c8ddfa007847]
	I1204 15:39:56.944615    9926 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1204 15:39:56.956571    9926 logs.go:282] 1 containers: [a82022fd2242]
	I1204 15:39:56.956656    9926 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1204 15:39:56.967455    9926 logs.go:282] 1 containers: [8f7d833df5c3]
	I1204 15:39:56.967538    9926 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1204 15:39:56.978355    9926 logs.go:282] 1 containers: [bdf1070f199f]
	I1204 15:39:56.978437    9926 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1204 15:39:56.989022    9926 logs.go:282] 0 containers: []
	W1204 15:39:56.989036    9926 logs.go:284] No container was found matching "kindnet"
	I1204 15:39:56.989108    9926 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1204 15:39:57.000371    9926 logs.go:282] 1 containers: [bdb47901ddb7]
	I1204 15:39:57.000386    9926 logs.go:123] Gathering logs for kube-apiserver [f3328f94ed0d] ...
	I1204 15:39:57.000391    9926 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f3328f94ed0d"
	I1204 15:39:57.014878    9926 logs.go:123] Gathering logs for coredns [0a3178099d31] ...
	I1204 15:39:57.014892    9926 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0a3178099d31"
	I1204 15:39:57.027054    9926 logs.go:123] Gathering logs for kube-scheduler [a82022fd2242] ...
	I1204 15:39:57.027068    9926 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a82022fd2242"
	I1204 15:39:57.042622    9926 logs.go:123] Gathering logs for kube-proxy [8f7d833df5c3] ...
	I1204 15:39:57.042633    9926 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8f7d833df5c3"
	I1204 15:39:57.055414    9926 logs.go:123] Gathering logs for kube-controller-manager [bdf1070f199f] ...
	I1204 15:39:57.055425    9926 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bdf1070f199f"
	I1204 15:39:57.072606    9926 logs.go:123] Gathering logs for storage-provisioner [bdb47901ddb7] ...
	I1204 15:39:57.072616    9926 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bdb47901ddb7"
	I1204 15:39:57.083943    9926 logs.go:123] Gathering logs for kubelet ...
	I1204 15:39:57.083955    9926 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1204 15:39:57.123256    9926 logs.go:123] Gathering logs for dmesg ...
	I1204 15:39:57.123266    9926 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1204 15:39:57.128471    9926 logs.go:123] Gathering logs for Docker ...
	I1204 15:39:57.128479    9926 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1204 15:39:57.153467    9926 logs.go:123] Gathering logs for container status ...
	I1204 15:39:57.153478    9926 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1204 15:39:57.164788    9926 logs.go:123] Gathering logs for coredns [c8ddfa007847] ...
	I1204 15:39:57.164799    9926 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c8ddfa007847"
	I1204 15:39:57.176644    9926 logs.go:123] Gathering logs for describe nodes ...
	I1204 15:39:57.176658    9926 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1204 15:39:57.210473    9926 logs.go:123] Gathering logs for etcd [17b0ed658f6c] ...
	I1204 15:39:57.210486    9926 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 17b0ed658f6c"
	I1204 15:39:59.729849    9926 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1204 15:40:03.462719   10206 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1204 15:40:04.732343    9926 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1204 15:40:04.732599    9926 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1204 15:40:04.755475    9926 logs.go:282] 1 containers: [f3328f94ed0d]
	I1204 15:40:04.755606    9926 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1204 15:40:04.778623    9926 logs.go:282] 1 containers: [17b0ed658f6c]
	I1204 15:40:04.778700    9926 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1204 15:40:04.790263    9926 logs.go:282] 2 containers: [0a3178099d31 c8ddfa007847]
	I1204 15:40:04.790334    9926 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1204 15:40:04.800553    9926 logs.go:282] 1 containers: [a82022fd2242]
	I1204 15:40:04.800638    9926 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1204 15:40:04.814573    9926 logs.go:282] 1 containers: [8f7d833df5c3]
	I1204 15:40:04.814657    9926 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1204 15:40:04.825085    9926 logs.go:282] 1 containers: [bdf1070f199f]
	I1204 15:40:04.825161    9926 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1204 15:40:04.834487    9926 logs.go:282] 0 containers: []
	W1204 15:40:04.834498    9926 logs.go:284] No container was found matching "kindnet"
	I1204 15:40:04.834560    9926 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1204 15:40:04.845290    9926 logs.go:282] 1 containers: [bdb47901ddb7]
	I1204 15:40:04.845305    9926 logs.go:123] Gathering logs for kubelet ...
	I1204 15:40:04.845311    9926 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1204 15:40:04.883556    9926 logs.go:123] Gathering logs for describe nodes ...
	I1204 15:40:04.883568    9926 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1204 15:40:04.929160    9926 logs.go:123] Gathering logs for kube-apiserver [f3328f94ed0d] ...
	I1204 15:40:04.929172    9926 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f3328f94ed0d"
	I1204 15:40:04.944660    9926 logs.go:123] Gathering logs for etcd [17b0ed658f6c] ...
	I1204 15:40:04.944679    9926 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 17b0ed658f6c"
	I1204 15:40:04.959157    9926 logs.go:123] Gathering logs for kube-scheduler [a82022fd2242] ...
	I1204 15:40:04.959170    9926 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a82022fd2242"
	I1204 15:40:04.974067    9926 logs.go:123] Gathering logs for container status ...
	I1204 15:40:04.974081    9926 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1204 15:40:04.985806    9926 logs.go:123] Gathering logs for dmesg ...
	I1204 15:40:04.985818    9926 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1204 15:40:04.990332    9926 logs.go:123] Gathering logs for coredns [0a3178099d31] ...
	I1204 15:40:04.990342    9926 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0a3178099d31"
	I1204 15:40:05.002169    9926 logs.go:123] Gathering logs for coredns [c8ddfa007847] ...
	I1204 15:40:05.002181    9926 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c8ddfa007847"
	I1204 15:40:05.013979    9926 logs.go:123] Gathering logs for kube-proxy [8f7d833df5c3] ...
	I1204 15:40:05.013990    9926 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8f7d833df5c3"
	I1204 15:40:05.030307    9926 logs.go:123] Gathering logs for kube-controller-manager [bdf1070f199f] ...
	I1204 15:40:05.030319    9926 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bdf1070f199f"
	I1204 15:40:05.047838    9926 logs.go:123] Gathering logs for storage-provisioner [bdb47901ddb7] ...
	I1204 15:40:05.047848    9926 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bdb47901ddb7"
	I1204 15:40:05.059665    9926 logs.go:123] Gathering logs for Docker ...
	I1204 15:40:05.059676    9926 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1204 15:40:08.465166   10206 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1204 15:40:08.465548   10206 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1204 15:40:08.496457   10206 logs.go:282] 2 containers: [bc235f7c7828 1d1ae4543cd6]
	I1204 15:40:08.496599   10206 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1204 15:40:08.514955   10206 logs.go:282] 2 containers: [fc48923b85fc 4f1790594676]
	I1204 15:40:08.515064   10206 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1204 15:40:08.528919   10206 logs.go:282] 1 containers: [467f47d9f689]
	I1204 15:40:08.529004   10206 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1204 15:40:08.541262   10206 logs.go:282] 2 containers: [1f92512b0d23 b0ad1b935d01]
	I1204 15:40:08.541370   10206 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1204 15:40:08.551749   10206 logs.go:282] 1 containers: [e290e266aab8]
	I1204 15:40:08.551823   10206 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1204 15:40:08.575004   10206 logs.go:282] 2 containers: [4469186baff0 63473edefa8f]
	I1204 15:40:08.575083   10206 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1204 15:40:08.585601   10206 logs.go:282] 0 containers: []
	W1204 15:40:08.585612   10206 logs.go:284] No container was found matching "kindnet"
	I1204 15:40:08.585672   10206 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1204 15:40:08.596468   10206 logs.go:282] 2 containers: [4bf3c6abead9 4a54859eb1e5]
	I1204 15:40:08.596487   10206 logs.go:123] Gathering logs for kube-apiserver [bc235f7c7828] ...
	I1204 15:40:08.596492   10206 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bc235f7c7828"
	I1204 15:40:08.610780   10206 logs.go:123] Gathering logs for kube-controller-manager [63473edefa8f] ...
	I1204 15:40:08.610793   10206 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 63473edefa8f"
	I1204 15:40:08.627049   10206 logs.go:123] Gathering logs for storage-provisioner [4a54859eb1e5] ...
	I1204 15:40:08.627062   10206 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4a54859eb1e5"
	I1204 15:40:08.638681   10206 logs.go:123] Gathering logs for container status ...
	I1204 15:40:08.638692   10206 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1204 15:40:08.650893   10206 logs.go:123] Gathering logs for kube-scheduler [1f92512b0d23] ...
	I1204 15:40:08.650907   10206 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1f92512b0d23"
	I1204 15:40:08.662722   10206 logs.go:123] Gathering logs for kube-scheduler [b0ad1b935d01] ...
	I1204 15:40:08.662732   10206 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b0ad1b935d01"
	I1204 15:40:08.678207   10206 logs.go:123] Gathering logs for Docker ...
	I1204 15:40:08.678217   10206 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1204 15:40:08.702861   10206 logs.go:123] Gathering logs for storage-provisioner [4bf3c6abead9] ...
	I1204 15:40:08.702868   10206 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4bf3c6abead9"
	I1204 15:40:08.726088   10206 logs.go:123] Gathering logs for describe nodes ...
	I1204 15:40:08.726101   10206 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1204 15:40:08.760789   10206 logs.go:123] Gathering logs for kube-apiserver [1d1ae4543cd6] ...
	I1204 15:40:08.760803   10206 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1d1ae4543cd6"
	I1204 15:40:08.798173   10206 logs.go:123] Gathering logs for coredns [467f47d9f689] ...
	I1204 15:40:08.798184   10206 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 467f47d9f689"
	I1204 15:40:08.809252   10206 logs.go:123] Gathering logs for etcd [4f1790594676] ...
	I1204 15:40:08.809265   10206 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4f1790594676"
	I1204 15:40:08.825480   10206 logs.go:123] Gathering logs for kube-proxy [e290e266aab8] ...
	I1204 15:40:08.825491   10206 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e290e266aab8"
	I1204 15:40:08.837823   10206 logs.go:123] Gathering logs for kube-controller-manager [4469186baff0] ...
	I1204 15:40:08.837836   10206 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4469186baff0"
	I1204 15:40:08.856047   10206 logs.go:123] Gathering logs for kubelet ...
	I1204 15:40:08.856058   10206 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1204 15:40:08.893515   10206 logs.go:123] Gathering logs for dmesg ...
	I1204 15:40:08.893526   10206 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1204 15:40:08.897407   10206 logs.go:123] Gathering logs for etcd [fc48923b85fc] ...
	I1204 15:40:08.897413   10206 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fc48923b85fc"
	I1204 15:40:11.413758   10206 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1204 15:40:07.585610    9926 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1204 15:40:16.416065   10206 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1204 15:40:16.416172   10206 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1204 15:40:16.428721   10206 logs.go:282] 2 containers: [bc235f7c7828 1d1ae4543cd6]
	I1204 15:40:16.428806   10206 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1204 15:40:16.439292   10206 logs.go:282] 2 containers: [fc48923b85fc 4f1790594676]
	I1204 15:40:16.439370   10206 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1204 15:40:16.449554   10206 logs.go:282] 1 containers: [467f47d9f689]
	I1204 15:40:16.449633   10206 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1204 15:40:16.459909   10206 logs.go:282] 2 containers: [1f92512b0d23 b0ad1b935d01]
	I1204 15:40:16.459984   10206 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1204 15:40:16.470092   10206 logs.go:282] 1 containers: [e290e266aab8]
	I1204 15:40:16.470173   10206 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1204 15:40:16.480922   10206 logs.go:282] 2 containers: [4469186baff0 63473edefa8f]
	I1204 15:40:16.481000   10206 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1204 15:40:16.496130   10206 logs.go:282] 0 containers: []
	W1204 15:40:16.496140   10206 logs.go:284] No container was found matching "kindnet"
	I1204 15:40:16.496209   10206 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1204 15:40:16.506437   10206 logs.go:282] 2 containers: [4bf3c6abead9 4a54859eb1e5]
	I1204 15:40:16.506455   10206 logs.go:123] Gathering logs for coredns [467f47d9f689] ...
	I1204 15:40:16.506459   10206 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 467f47d9f689"
	I1204 15:40:16.517611   10206 logs.go:123] Gathering logs for kube-proxy [e290e266aab8] ...
	I1204 15:40:16.517622   10206 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e290e266aab8"
	I1204 15:40:16.534633   10206 logs.go:123] Gathering logs for kube-apiserver [bc235f7c7828] ...
	I1204 15:40:16.534646   10206 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bc235f7c7828"
	I1204 15:40:16.548947   10206 logs.go:123] Gathering logs for kube-apiserver [1d1ae4543cd6] ...
	I1204 15:40:16.548957   10206 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1d1ae4543cd6"
	I1204 15:40:16.586084   10206 logs.go:123] Gathering logs for dmesg ...
	I1204 15:40:16.586095   10206 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1204 15:40:16.590214   10206 logs.go:123] Gathering logs for kube-scheduler [1f92512b0d23] ...
	I1204 15:40:16.590222   10206 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1f92512b0d23"
	I1204 15:40:16.601951   10206 logs.go:123] Gathering logs for kube-scheduler [b0ad1b935d01] ...
	I1204 15:40:16.601961   10206 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b0ad1b935d01"
	I1204 15:40:16.617763   10206 logs.go:123] Gathering logs for kube-controller-manager [63473edefa8f] ...
	I1204 15:40:16.617774   10206 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 63473edefa8f"
	I1204 15:40:16.631266   10206 logs.go:123] Gathering logs for Docker ...
	I1204 15:40:16.631277   10206 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1204 15:40:16.656193   10206 logs.go:123] Gathering logs for container status ...
	I1204 15:40:16.656200   10206 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1204 15:40:16.668121   10206 logs.go:123] Gathering logs for kubelet ...
	I1204 15:40:16.668131   10206 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1204 15:40:16.707274   10206 logs.go:123] Gathering logs for etcd [4f1790594676] ...
	I1204 15:40:16.707285   10206 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4f1790594676"
	I1204 15:40:16.722248   10206 logs.go:123] Gathering logs for kube-controller-manager [4469186baff0] ...
	I1204 15:40:16.722259   10206 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4469186baff0"
	I1204 15:40:16.739247   10206 logs.go:123] Gathering logs for storage-provisioner [4bf3c6abead9] ...
	I1204 15:40:16.739260   10206 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4bf3c6abead9"
	I1204 15:40:16.750608   10206 logs.go:123] Gathering logs for storage-provisioner [4a54859eb1e5] ...
	I1204 15:40:16.750618   10206 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4a54859eb1e5"
	I1204 15:40:16.761841   10206 logs.go:123] Gathering logs for describe nodes ...
	I1204 15:40:16.761852   10206 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1204 15:40:12.586412    9926 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1204 15:40:12.586636    9926 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1204 15:40:12.604657    9926 logs.go:282] 1 containers: [f3328f94ed0d]
	I1204 15:40:12.604768    9926 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1204 15:40:12.618629    9926 logs.go:282] 1 containers: [17b0ed658f6c]
	I1204 15:40:12.618721    9926 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1204 15:40:12.630113    9926 logs.go:282] 2 containers: [0a3178099d31 c8ddfa007847]
	I1204 15:40:12.630196    9926 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1204 15:40:12.645123    9926 logs.go:282] 1 containers: [a82022fd2242]
	I1204 15:40:12.645199    9926 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1204 15:40:12.655356    9926 logs.go:282] 1 containers: [8f7d833df5c3]
	I1204 15:40:12.655443    9926 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1204 15:40:12.665913    9926 logs.go:282] 1 containers: [bdf1070f199f]
	I1204 15:40:12.665989    9926 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1204 15:40:12.676076    9926 logs.go:282] 0 containers: []
	W1204 15:40:12.676090    9926 logs.go:284] No container was found matching "kindnet"
	I1204 15:40:12.676153    9926 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1204 15:40:12.686878    9926 logs.go:282] 1 containers: [bdb47901ddb7]
	I1204 15:40:12.686892    9926 logs.go:123] Gathering logs for coredns [c8ddfa007847] ...
	I1204 15:40:12.686898    9926 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c8ddfa007847"
	I1204 15:40:12.700261    9926 logs.go:123] Gathering logs for kube-scheduler [a82022fd2242] ...
	I1204 15:40:12.700272    9926 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a82022fd2242"
	I1204 15:40:12.716930    9926 logs.go:123] Gathering logs for kube-controller-manager [bdf1070f199f] ...
	I1204 15:40:12.716941    9926 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bdf1070f199f"
	I1204 15:40:12.737771    9926 logs.go:123] Gathering logs for storage-provisioner [bdb47901ddb7] ...
	I1204 15:40:12.737783    9926 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bdb47901ddb7"
	I1204 15:40:12.749143    9926 logs.go:123] Gathering logs for dmesg ...
	I1204 15:40:12.749154    9926 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1204 15:40:12.753777    9926 logs.go:123] Gathering logs for kube-apiserver [f3328f94ed0d] ...
	I1204 15:40:12.753784    9926 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f3328f94ed0d"
	I1204 15:40:12.767927    9926 logs.go:123] Gathering logs for etcd [17b0ed658f6c] ...
	I1204 15:40:12.767938    9926 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 17b0ed658f6c"
	I1204 15:40:12.781601    9926 logs.go:123] Gathering logs for coredns [0a3178099d31] ...
	I1204 15:40:12.781615    9926 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0a3178099d31"
	I1204 15:40:12.793913    9926 logs.go:123] Gathering logs for Docker ...
	I1204 15:40:12.793923    9926 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1204 15:40:12.818059    9926 logs.go:123] Gathering logs for container status ...
	I1204 15:40:12.818068    9926 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1204 15:40:12.830044    9926 logs.go:123] Gathering logs for kubelet ...
	I1204 15:40:12.830053    9926 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1204 15:40:12.867947    9926 logs.go:123] Gathering logs for describe nodes ...
	I1204 15:40:12.867955    9926 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1204 15:40:12.908907    9926 logs.go:123] Gathering logs for kube-proxy [8f7d833df5c3] ...
	I1204 15:40:12.908922    9926 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8f7d833df5c3"
	I1204 15:40:15.423320    9926 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1204 15:40:16.797813   10206 logs.go:123] Gathering logs for etcd [fc48923b85fc] ...
	I1204 15:40:16.797824   10206 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fc48923b85fc"
	I1204 15:40:19.313748   10206 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1204 15:40:20.424250    9926 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1204 15:40:20.424764    9926 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1204 15:40:20.463846    9926 logs.go:282] 1 containers: [f3328f94ed0d]
	I1204 15:40:20.464004    9926 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1204 15:40:20.484745    9926 logs.go:282] 1 containers: [17b0ed658f6c]
	I1204 15:40:20.484884    9926 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1204 15:40:20.500083    9926 logs.go:282] 2 containers: [0a3178099d31 c8ddfa007847]
	I1204 15:40:20.500175    9926 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1204 15:40:20.513085    9926 logs.go:282] 1 containers: [a82022fd2242]
	I1204 15:40:20.513174    9926 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1204 15:40:20.524377    9926 logs.go:282] 1 containers: [8f7d833df5c3]
	I1204 15:40:20.524457    9926 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1204 15:40:20.535027    9926 logs.go:282] 1 containers: [bdf1070f199f]
	I1204 15:40:20.535109    9926 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1204 15:40:20.545514    9926 logs.go:282] 0 containers: []
	W1204 15:40:20.545524    9926 logs.go:284] No container was found matching "kindnet"
	I1204 15:40:20.545587    9926 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1204 15:40:20.556184    9926 logs.go:282] 1 containers: [bdb47901ddb7]
	I1204 15:40:20.556201    9926 logs.go:123] Gathering logs for coredns [0a3178099d31] ...
	I1204 15:40:20.556207    9926 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0a3178099d31"
	I1204 15:40:20.570857    9926 logs.go:123] Gathering logs for kube-scheduler [a82022fd2242] ...
	I1204 15:40:20.570868    9926 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a82022fd2242"
	I1204 15:40:20.586042    9926 logs.go:123] Gathering logs for kube-controller-manager [bdf1070f199f] ...
	I1204 15:40:20.586052    9926 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bdf1070f199f"
	I1204 15:40:20.603930    9926 logs.go:123] Gathering logs for storage-provisioner [bdb47901ddb7] ...
	I1204 15:40:20.603945    9926 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bdb47901ddb7"
	I1204 15:40:20.616149    9926 logs.go:123] Gathering logs for kubelet ...
	I1204 15:40:20.616160    9926 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1204 15:40:20.652821    9926 logs.go:123] Gathering logs for dmesg ...
	I1204 15:40:20.652830    9926 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1204 15:40:20.657313    9926 logs.go:123] Gathering logs for kube-apiserver [f3328f94ed0d] ...
	I1204 15:40:20.657320    9926 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f3328f94ed0d"
	I1204 15:40:20.672320    9926 logs.go:123] Gathering logs for etcd [17b0ed658f6c] ...
	I1204 15:40:20.672333    9926 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 17b0ed658f6c"
	I1204 15:40:20.687312    9926 logs.go:123] Gathering logs for container status ...
	I1204 15:40:20.687325    9926 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1204 15:40:20.699414    9926 logs.go:123] Gathering logs for describe nodes ...
	I1204 15:40:20.699424    9926 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1204 15:40:20.741409    9926 logs.go:123] Gathering logs for coredns [c8ddfa007847] ...
	I1204 15:40:20.741421    9926 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c8ddfa007847"
	I1204 15:40:20.752829    9926 logs.go:123] Gathering logs for kube-proxy [8f7d833df5c3] ...
	I1204 15:40:20.752844    9926 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8f7d833df5c3"
	I1204 15:40:20.764465    9926 logs.go:123] Gathering logs for Docker ...
	I1204 15:40:20.764478    9926 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1204 15:40:24.315479   10206 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1204 15:40:24.315649   10206 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1204 15:40:24.332046   10206 logs.go:282] 2 containers: [bc235f7c7828 1d1ae4543cd6]
	I1204 15:40:24.332148   10206 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1204 15:40:24.346645   10206 logs.go:282] 2 containers: [fc48923b85fc 4f1790594676]
	I1204 15:40:24.346722   10206 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1204 15:40:24.357300   10206 logs.go:282] 1 containers: [467f47d9f689]
	I1204 15:40:24.357384   10206 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1204 15:40:24.367715   10206 logs.go:282] 2 containers: [1f92512b0d23 b0ad1b935d01]
	I1204 15:40:24.367799   10206 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1204 15:40:24.378308   10206 logs.go:282] 1 containers: [e290e266aab8]
	I1204 15:40:24.378384   10206 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1204 15:40:24.388970   10206 logs.go:282] 2 containers: [4469186baff0 63473edefa8f]
	I1204 15:40:24.389050   10206 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1204 15:40:24.399912   10206 logs.go:282] 0 containers: []
	W1204 15:40:24.399923   10206 logs.go:284] No container was found matching "kindnet"
	I1204 15:40:24.399992   10206 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1204 15:40:24.410658   10206 logs.go:282] 2 containers: [4bf3c6abead9 4a54859eb1e5]
	I1204 15:40:24.410678   10206 logs.go:123] Gathering logs for coredns [467f47d9f689] ...
	I1204 15:40:24.410683   10206 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 467f47d9f689"
	I1204 15:40:24.422482   10206 logs.go:123] Gathering logs for kube-scheduler [1f92512b0d23] ...
	I1204 15:40:24.422493   10206 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1f92512b0d23"
	I1204 15:40:24.434317   10206 logs.go:123] Gathering logs for kube-scheduler [b0ad1b935d01] ...
	I1204 15:40:24.434328   10206 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b0ad1b935d01"
	I1204 15:40:24.449407   10206 logs.go:123] Gathering logs for kube-proxy [e290e266aab8] ...
	I1204 15:40:24.449418   10206 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e290e266aab8"
	I1204 15:40:24.460770   10206 logs.go:123] Gathering logs for kubelet ...
	I1204 15:40:24.460779   10206 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1204 15:40:24.497732   10206 logs.go:123] Gathering logs for describe nodes ...
	I1204 15:40:24.497742   10206 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1204 15:40:24.532685   10206 logs.go:123] Gathering logs for etcd [fc48923b85fc] ...
	I1204 15:40:24.532695   10206 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fc48923b85fc"
	I1204 15:40:24.546916   10206 logs.go:123] Gathering logs for kube-controller-manager [4469186baff0] ...
	I1204 15:40:24.546928   10206 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4469186baff0"
	I1204 15:40:24.564311   10206 logs.go:123] Gathering logs for storage-provisioner [4a54859eb1e5] ...
	I1204 15:40:24.564325   10206 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4a54859eb1e5"
	I1204 15:40:24.578354   10206 logs.go:123] Gathering logs for dmesg ...
	I1204 15:40:24.578367   10206 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1204 15:40:24.582541   10206 logs.go:123] Gathering logs for kube-apiserver [1d1ae4543cd6] ...
	I1204 15:40:24.582549   10206 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1d1ae4543cd6"
	I1204 15:40:24.620056   10206 logs.go:123] Gathering logs for etcd [4f1790594676] ...
	I1204 15:40:24.620068   10206 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4f1790594676"
	I1204 15:40:24.634711   10206 logs.go:123] Gathering logs for container status ...
	I1204 15:40:24.634721   10206 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1204 15:40:24.647114   10206 logs.go:123] Gathering logs for kube-apiserver [bc235f7c7828] ...
	I1204 15:40:24.647126   10206 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bc235f7c7828"
	I1204 15:40:24.661446   10206 logs.go:123] Gathering logs for kube-controller-manager [63473edefa8f] ...
	I1204 15:40:24.661457   10206 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 63473edefa8f"
	I1204 15:40:24.674271   10206 logs.go:123] Gathering logs for storage-provisioner [4bf3c6abead9] ...
	I1204 15:40:24.674282   10206 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4bf3c6abead9"
	I1204 15:40:24.685899   10206 logs.go:123] Gathering logs for Docker ...
	I1204 15:40:24.685910   10206 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
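[editor's note] The api_server.go lines bracket each of these log sweeps: minikube GETs /healthz on the apiserver inside the VM with a client-side timeout, and "context deadline exceeded (Client.Timeout exceeded while awaiting headers)" means the Go HTTP client gave up before the apiserver sent response headers. A rough curl equivalent of a single probe (a sketch, not minikube's implementation; the 4-second budget is our approximation of the observed retry spacing):

    # One health probe against the apiserver in the guest.
    # -k skips TLS verification; --max-time caps the whole request.
    curl -k --max-time 4 https://10.0.2.15:8443/healthz || echo "healthz probe timed out"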
	I1204 15:40:23.290247    9926 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1204 15:40:27.213232   10206 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1204 15:40:28.292536    9926 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1204 15:40:28.292676    9926 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1204 15:40:28.306827    9926 logs.go:282] 1 containers: [f3328f94ed0d]
	I1204 15:40:28.306913    9926 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1204 15:40:28.318591    9926 logs.go:282] 1 containers: [17b0ed658f6c]
	I1204 15:40:28.318672    9926 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1204 15:40:28.329601    9926 logs.go:282] 2 containers: [0a3178099d31 c8ddfa007847]
	I1204 15:40:28.329681    9926 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1204 15:40:28.340549    9926 logs.go:282] 1 containers: [a82022fd2242]
	I1204 15:40:28.340625    9926 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1204 15:40:28.351079    9926 logs.go:282] 1 containers: [8f7d833df5c3]
	I1204 15:40:28.351153    9926 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1204 15:40:28.361452    9926 logs.go:282] 1 containers: [bdf1070f199f]
	I1204 15:40:28.361529    9926 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1204 15:40:28.372151    9926 logs.go:282] 0 containers: []
	W1204 15:40:28.372163    9926 logs.go:284] No container was found matching "kindnet"
	I1204 15:40:28.372226    9926 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1204 15:40:28.382693    9926 logs.go:282] 1 containers: [bdb47901ddb7]
	I1204 15:40:28.382711    9926 logs.go:123] Gathering logs for kube-controller-manager [bdf1070f199f] ...
	I1204 15:40:28.382716    9926 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bdf1070f199f"
	I1204 15:40:28.399998    9926 logs.go:123] Gathering logs for dmesg ...
	I1204 15:40:28.400011    9926 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1204 15:40:28.404826    9926 logs.go:123] Gathering logs for describe nodes ...
	I1204 15:40:28.404833    9926 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1204 15:40:28.438997    9926 logs.go:123] Gathering logs for kube-apiserver [f3328f94ed0d] ...
	I1204 15:40:28.439009    9926 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f3328f94ed0d"
	I1204 15:40:28.453290    9926 logs.go:123] Gathering logs for etcd [17b0ed658f6c] ...
	I1204 15:40:28.453301    9926 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 17b0ed658f6c"
	I1204 15:40:28.467560    9926 logs.go:123] Gathering logs for coredns [c8ddfa007847] ...
	I1204 15:40:28.467574    9926 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c8ddfa007847"
	I1204 15:40:28.479120    9926 logs.go:123] Gathering logs for Docker ...
	I1204 15:40:28.479130    9926 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1204 15:40:28.502530    9926 logs.go:123] Gathering logs for container status ...
	I1204 15:40:28.502541    9926 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1204 15:40:28.513566    9926 logs.go:123] Gathering logs for kubelet ...
	I1204 15:40:28.513578    9926 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1204 15:40:28.553100    9926 logs.go:123] Gathering logs for coredns [0a3178099d31] ...
	I1204 15:40:28.553111    9926 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0a3178099d31"
	I1204 15:40:28.564738    9926 logs.go:123] Gathering logs for kube-scheduler [a82022fd2242] ...
	I1204 15:40:28.564749    9926 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a82022fd2242"
	I1204 15:40:28.579828    9926 logs.go:123] Gathering logs for kube-proxy [8f7d833df5c3] ...
	I1204 15:40:28.579840    9926 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8f7d833df5c3"
	I1204 15:40:28.592205    9926 logs.go:123] Gathering logs for storage-provisioner [bdb47901ddb7] ...
	I1204 15:40:28.592218    9926 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bdb47901ddb7"
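[editor's note] Each sweep begins by resolving component names to container IDs with a docker name filter; kubelet-managed containers are named k8s_<component>_..., which is why the filter matches and why zero "kindnet" hits simply means no CNI pod was ever scheduled. A loop equivalent to the eight docker ps calls above (a sketch; the component list mirrors the log):

    # Resolve each control-plane component to its container ID(s).
    for c in kube-apiserver etcd coredns kube-scheduler kube-proxy \
             kube-controller-manager kindnet storage-provisioner; do
      ids=$(docker ps -a --filter "name=k8s_${c}" --format '{{.ID}}')
      echo "$c: ${ids:-<none>}"
    done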
	I1204 15:40:31.105937    9926 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1204 15:40:32.215568   10206 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1204 15:40:32.215690   10206 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1204 15:40:32.228059   10206 logs.go:282] 2 containers: [bc235f7c7828 1d1ae4543cd6]
	I1204 15:40:32.228141   10206 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1204 15:40:32.241256   10206 logs.go:282] 2 containers: [fc48923b85fc 4f1790594676]
	I1204 15:40:32.241333   10206 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1204 15:40:32.252162   10206 logs.go:282] 1 containers: [467f47d9f689]
	I1204 15:40:32.252248   10206 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1204 15:40:32.263299   10206 logs.go:282] 2 containers: [1f92512b0d23 b0ad1b935d01]
	I1204 15:40:32.263383   10206 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1204 15:40:32.274742   10206 logs.go:282] 1 containers: [e290e266aab8]
	I1204 15:40:32.274817   10206 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1204 15:40:32.285280   10206 logs.go:282] 2 containers: [4469186baff0 63473edefa8f]
	I1204 15:40:32.285353   10206 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1204 15:40:32.296138   10206 logs.go:282] 0 containers: []
	W1204 15:40:32.296149   10206 logs.go:284] No container was found matching "kindnet"
	I1204 15:40:32.296210   10206 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1204 15:40:32.306682   10206 logs.go:282] 2 containers: [4bf3c6abead9 4a54859eb1e5]
	I1204 15:40:32.306702   10206 logs.go:123] Gathering logs for kube-proxy [e290e266aab8] ...
	I1204 15:40:32.306707   10206 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e290e266aab8"
	I1204 15:40:32.318867   10206 logs.go:123] Gathering logs for kube-controller-manager [63473edefa8f] ...
	I1204 15:40:32.318877   10206 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 63473edefa8f"
	I1204 15:40:32.332966   10206 logs.go:123] Gathering logs for storage-provisioner [4bf3c6abead9] ...
	I1204 15:40:32.332977   10206 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4bf3c6abead9"
	I1204 15:40:32.344593   10206 logs.go:123] Gathering logs for kube-scheduler [1f92512b0d23] ...
	I1204 15:40:32.344604   10206 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1f92512b0d23"
	I1204 15:40:32.355970   10206 logs.go:123] Gathering logs for kube-apiserver [1d1ae4543cd6] ...
	I1204 15:40:32.355983   10206 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1d1ae4543cd6"
	I1204 15:40:32.404514   10206 logs.go:123] Gathering logs for storage-provisioner [4a54859eb1e5] ...
	I1204 15:40:32.404524   10206 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4a54859eb1e5"
	I1204 15:40:32.416474   10206 logs.go:123] Gathering logs for container status ...
	I1204 15:40:32.416486   10206 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1204 15:40:32.428483   10206 logs.go:123] Gathering logs for kubelet ...
	I1204 15:40:32.428493   10206 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1204 15:40:32.466557   10206 logs.go:123] Gathering logs for kube-apiserver [bc235f7c7828] ...
	I1204 15:40:32.466567   10206 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bc235f7c7828"
	I1204 15:40:32.480595   10206 logs.go:123] Gathering logs for etcd [fc48923b85fc] ...
	I1204 15:40:32.480607   10206 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fc48923b85fc"
	I1204 15:40:32.494656   10206 logs.go:123] Gathering logs for etcd [4f1790594676] ...
	I1204 15:40:32.494665   10206 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4f1790594676"
	I1204 15:40:32.508996   10206 logs.go:123] Gathering logs for coredns [467f47d9f689] ...
	I1204 15:40:32.509009   10206 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 467f47d9f689"
	I1204 15:40:32.520411   10206 logs.go:123] Gathering logs for kube-scheduler [b0ad1b935d01] ...
	I1204 15:40:32.520421   10206 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b0ad1b935d01"
	I1204 15:40:32.536901   10206 logs.go:123] Gathering logs for kube-controller-manager [4469186baff0] ...
	I1204 15:40:32.536912   10206 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4469186baff0"
	I1204 15:40:32.554249   10206 logs.go:123] Gathering logs for dmesg ...
	I1204 15:40:32.554259   10206 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1204 15:40:32.558350   10206 logs.go:123] Gathering logs for Docker ...
	I1204 15:40:32.558356   10206 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1204 15:40:32.580969   10206 logs.go:123] Gathering logs for describe nodes ...
	I1204 15:40:32.580979   10206 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1204 15:40:35.119051   10206 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1204 15:40:36.108243    9926 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1204 15:40:36.108514    9926 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1204 15:40:36.134302    9926 logs.go:282] 1 containers: [f3328f94ed0d]
	I1204 15:40:36.134446    9926 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1204 15:40:36.150860    9926 logs.go:282] 1 containers: [17b0ed658f6c]
	I1204 15:40:36.150968    9926 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1204 15:40:36.165725    9926 logs.go:282] 2 containers: [0a3178099d31 c8ddfa007847]
	I1204 15:40:36.165806    9926 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1204 15:40:36.176885    9926 logs.go:282] 1 containers: [a82022fd2242]
	I1204 15:40:36.176969    9926 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1204 15:40:36.187754    9926 logs.go:282] 1 containers: [8f7d833df5c3]
	I1204 15:40:36.187833    9926 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1204 15:40:36.198768    9926 logs.go:282] 1 containers: [bdf1070f199f]
	I1204 15:40:36.198849    9926 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1204 15:40:36.210438    9926 logs.go:282] 0 containers: []
	W1204 15:40:36.210449    9926 logs.go:284] No container was found matching "kindnet"
	I1204 15:40:36.210521    9926 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1204 15:40:36.220934    9926 logs.go:282] 1 containers: [bdb47901ddb7]
	I1204 15:40:36.220952    9926 logs.go:123] Gathering logs for kube-scheduler [a82022fd2242] ...
	I1204 15:40:36.220959    9926 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a82022fd2242"
	I1204 15:40:36.236909    9926 logs.go:123] Gathering logs for kube-proxy [8f7d833df5c3] ...
	I1204 15:40:36.236921    9926 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8f7d833df5c3"
	I1204 15:40:36.250190    9926 logs.go:123] Gathering logs for kubelet ...
	I1204 15:40:36.250203    9926 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1204 15:40:36.288586    9926 logs.go:123] Gathering logs for dmesg ...
	I1204 15:40:36.288604    9926 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1204 15:40:36.294045    9926 logs.go:123] Gathering logs for describe nodes ...
	I1204 15:40:36.294054    9926 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1204 15:40:36.328139    9926 logs.go:123] Gathering logs for kube-apiserver [f3328f94ed0d] ...
	I1204 15:40:36.328152    9926 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f3328f94ed0d"
	I1204 15:40:36.348505    9926 logs.go:123] Gathering logs for etcd [17b0ed658f6c] ...
	I1204 15:40:36.348517    9926 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 17b0ed658f6c"
	I1204 15:40:36.375147    9926 logs.go:123] Gathering logs for coredns [0a3178099d31] ...
	I1204 15:40:36.375159    9926 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0a3178099d31"
	I1204 15:40:36.390618    9926 logs.go:123] Gathering logs for kube-controller-manager [bdf1070f199f] ...
	I1204 15:40:36.390631    9926 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bdf1070f199f"
	I1204 15:40:36.409488    9926 logs.go:123] Gathering logs for coredns [c8ddfa007847] ...
	I1204 15:40:36.409503    9926 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c8ddfa007847"
	I1204 15:40:36.421126    9926 logs.go:123] Gathering logs for storage-provisioner [bdb47901ddb7] ...
	I1204 15:40:36.421137    9926 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bdb47901ddb7"
	I1204 15:40:36.432574    9926 logs.go:123] Gathering logs for Docker ...
	I1204 15:40:36.432584    9926 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1204 15:40:36.457856    9926 logs.go:123] Gathering logs for container status ...
	I1204 15:40:36.457866    9926 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
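[editor's note] With the IDs in hand, every "Gathering logs for ..." entry is one bounded read: container logs are tailed at 400 lines, and host-level sources come from journalctl, all capped so a crash-looping component cannot flood the report. Representative commands, matching the ones run over ssh above (the container ID is a placeholder):

    docker logs --tail 400 <container-id>              # one component's recent output
    sudo journalctl -u kubelet -n 400                  # kubelet unit, last 400 entries
    sudo journalctl -u docker -u cri-docker -n 400     # container-runtime units combined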
	I1204 15:40:40.121563   10206 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1204 15:40:40.121771   10206 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1204 15:40:40.148592   10206 logs.go:282] 2 containers: [bc235f7c7828 1d1ae4543cd6]
	I1204 15:40:40.148725   10206 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1204 15:40:40.170784   10206 logs.go:282] 2 containers: [fc48923b85fc 4f1790594676]
	I1204 15:40:40.170884   10206 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1204 15:40:40.183166   10206 logs.go:282] 1 containers: [467f47d9f689]
	I1204 15:40:40.183248   10206 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1204 15:40:40.196430   10206 logs.go:282] 2 containers: [1f92512b0d23 b0ad1b935d01]
	I1204 15:40:40.196506   10206 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1204 15:40:40.206930   10206 logs.go:282] 1 containers: [e290e266aab8]
	I1204 15:40:40.207022   10206 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1204 15:40:40.219611   10206 logs.go:282] 2 containers: [4469186baff0 63473edefa8f]
	I1204 15:40:40.219689   10206 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1204 15:40:40.229847   10206 logs.go:282] 0 containers: []
	W1204 15:40:40.229884   10206 logs.go:284] No container was found matching "kindnet"
	I1204 15:40:40.229949   10206 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1204 15:40:40.243850   10206 logs.go:282] 2 containers: [4bf3c6abead9 4a54859eb1e5]
	I1204 15:40:40.243866   10206 logs.go:123] Gathering logs for kubelet ...
	I1204 15:40:40.243871   10206 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1204 15:40:40.282291   10206 logs.go:123] Gathering logs for etcd [4f1790594676] ...
	I1204 15:40:40.282301   10206 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4f1790594676"
	I1204 15:40:40.297904   10206 logs.go:123] Gathering logs for coredns [467f47d9f689] ...
	I1204 15:40:40.297914   10206 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 467f47d9f689"
	I1204 15:40:40.309880   10206 logs.go:123] Gathering logs for kube-controller-manager [4469186baff0] ...
	I1204 15:40:40.309893   10206 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4469186baff0"
	I1204 15:40:40.328593   10206 logs.go:123] Gathering logs for kube-controller-manager [63473edefa8f] ...
	I1204 15:40:40.328607   10206 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 63473edefa8f"
	I1204 15:40:40.341506   10206 logs.go:123] Gathering logs for storage-provisioner [4a54859eb1e5] ...
	I1204 15:40:40.341518   10206 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4a54859eb1e5"
	I1204 15:40:40.353369   10206 logs.go:123] Gathering logs for container status ...
	I1204 15:40:40.353381   10206 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1204 15:40:40.365161   10206 logs.go:123] Gathering logs for dmesg ...
	I1204 15:40:40.365171   10206 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1204 15:40:40.369766   10206 logs.go:123] Gathering logs for kube-apiserver [bc235f7c7828] ...
	I1204 15:40:40.369771   10206 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bc235f7c7828"
	I1204 15:40:40.390388   10206 logs.go:123] Gathering logs for kube-scheduler [1f92512b0d23] ...
	I1204 15:40:40.390399   10206 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1f92512b0d23"
	I1204 15:40:40.406423   10206 logs.go:123] Gathering logs for kube-scheduler [b0ad1b935d01] ...
	I1204 15:40:40.406437   10206 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b0ad1b935d01"
	I1204 15:40:40.421764   10206 logs.go:123] Gathering logs for storage-provisioner [4bf3c6abead9] ...
	I1204 15:40:40.421775   10206 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4bf3c6abead9"
	I1204 15:40:40.433834   10206 logs.go:123] Gathering logs for Docker ...
	I1204 15:40:40.433844   10206 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1204 15:40:40.458451   10206 logs.go:123] Gathering logs for kube-apiserver [1d1ae4543cd6] ...
	I1204 15:40:40.458459   10206 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1d1ae4543cd6"
	I1204 15:40:40.496002   10206 logs.go:123] Gathering logs for describe nodes ...
	I1204 15:40:40.496013   10206 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1204 15:40:40.530671   10206 logs.go:123] Gathering logs for etcd [fc48923b85fc] ...
	I1204 15:40:40.530682   10206 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fc48923b85fc"
	I1204 15:40:40.544772   10206 logs.go:123] Gathering logs for kube-proxy [e290e266aab8] ...
	I1204 15:40:40.544784   10206 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e290e266aab8"
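[editor's note] For this second cluster (pid 10206) most filters return two IDs per component, e.g. kube-apiserver [bc235f7c7828 1d1ae4543cd6]: the likely reading is that the older container exited and kubelet started a replacement, consistent with the apiserver never passing /healthz. Adding Status to the format string separates the live container from the dead one (a sketch; the columns are docker's own):

    # Show both apiserver containers with their states to spot the restart.
    docker ps -a --filter name=k8s_kube-apiserver --format '{{.ID}}  {{.Status}}'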
	I1204 15:40:38.971990    9926 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1204 15:40:43.058763   10206 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1204 15:40:43.974535    9926 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1204 15:40:43.974803    9926 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1204 15:40:44.001190    9926 logs.go:282] 1 containers: [f3328f94ed0d]
	I1204 15:40:44.001306    9926 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1204 15:40:44.018839    9926 logs.go:282] 1 containers: [17b0ed658f6c]
	I1204 15:40:44.018939    9926 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1204 15:40:44.039495    9926 logs.go:282] 2 containers: [0a3178099d31 c8ddfa007847]
	I1204 15:40:44.039580    9926 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1204 15:40:44.051001    9926 logs.go:282] 1 containers: [a82022fd2242]
	I1204 15:40:44.051087    9926 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1204 15:40:44.062837    9926 logs.go:282] 1 containers: [8f7d833df5c3]
	I1204 15:40:44.062923    9926 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1204 15:40:44.074147    9926 logs.go:282] 1 containers: [bdf1070f199f]
	I1204 15:40:44.074226    9926 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1204 15:40:44.084467    9926 logs.go:282] 0 containers: []
	W1204 15:40:44.084478    9926 logs.go:284] No container was found matching "kindnet"
	I1204 15:40:44.084546    9926 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1204 15:40:44.095862    9926 logs.go:282] 1 containers: [bdb47901ddb7]
	I1204 15:40:44.095880    9926 logs.go:123] Gathering logs for kube-proxy [8f7d833df5c3] ...
	I1204 15:40:44.095887    9926 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8f7d833df5c3"
	I1204 15:40:44.107713    9926 logs.go:123] Gathering logs for kube-controller-manager [bdf1070f199f] ...
	I1204 15:40:44.107724    9926 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bdf1070f199f"
	I1204 15:40:44.125998    9926 logs.go:123] Gathering logs for storage-provisioner [bdb47901ddb7] ...
	I1204 15:40:44.126009    9926 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bdb47901ddb7"
	I1204 15:40:44.139290    9926 logs.go:123] Gathering logs for Docker ...
	I1204 15:40:44.139300    9926 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1204 15:40:44.163204    9926 logs.go:123] Gathering logs for container status ...
	I1204 15:40:44.163216    9926 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1204 15:40:44.175033    9926 logs.go:123] Gathering logs for kube-apiserver [f3328f94ed0d] ...
	I1204 15:40:44.175047    9926 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f3328f94ed0d"
	I1204 15:40:44.189260    9926 logs.go:123] Gathering logs for coredns [0a3178099d31] ...
	I1204 15:40:44.189270    9926 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0a3178099d31"
	I1204 15:40:44.201436    9926 logs.go:123] Gathering logs for kube-scheduler [a82022fd2242] ...
	I1204 15:40:44.201448    9926 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a82022fd2242"
	I1204 15:40:44.216731    9926 logs.go:123] Gathering logs for etcd [17b0ed658f6c] ...
	I1204 15:40:44.216743    9926 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 17b0ed658f6c"
	I1204 15:40:44.230553    9926 logs.go:123] Gathering logs for coredns [c8ddfa007847] ...
	I1204 15:40:44.230567    9926 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c8ddfa007847"
	I1204 15:40:44.242484    9926 logs.go:123] Gathering logs for kubelet ...
	I1204 15:40:44.242494    9926 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1204 15:40:44.280636    9926 logs.go:123] Gathering logs for dmesg ...
	I1204 15:40:44.280652    9926 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1204 15:40:44.285666    9926 logs.go:123] Gathering logs for describe nodes ...
	I1204 15:40:44.285676    9926 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1204 15:40:48.061518   10206 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1204 15:40:48.061765   10206 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1204 15:40:48.079209   10206 logs.go:282] 2 containers: [bc235f7c7828 1d1ae4543cd6]
	I1204 15:40:48.079311   10206 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1204 15:40:48.092216   10206 logs.go:282] 2 containers: [fc48923b85fc 4f1790594676]
	I1204 15:40:48.092298   10206 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1204 15:40:48.105436   10206 logs.go:282] 1 containers: [467f47d9f689]
	I1204 15:40:48.105510   10206 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1204 15:40:48.116124   10206 logs.go:282] 2 containers: [1f92512b0d23 b0ad1b935d01]
	I1204 15:40:48.116208   10206 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1204 15:40:48.126936   10206 logs.go:282] 1 containers: [e290e266aab8]
	I1204 15:40:48.127012   10206 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1204 15:40:48.137487   10206 logs.go:282] 2 containers: [4469186baff0 63473edefa8f]
	I1204 15:40:48.137580   10206 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1204 15:40:48.151326   10206 logs.go:282] 0 containers: []
	W1204 15:40:48.151338   10206 logs.go:284] No container was found matching "kindnet"
	I1204 15:40:48.151401   10206 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1204 15:40:48.162105   10206 logs.go:282] 2 containers: [4bf3c6abead9 4a54859eb1e5]
	I1204 15:40:48.162124   10206 logs.go:123] Gathering logs for storage-provisioner [4a54859eb1e5] ...
	I1204 15:40:48.162128   10206 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4a54859eb1e5"
	I1204 15:40:48.173549   10206 logs.go:123] Gathering logs for kubelet ...
	I1204 15:40:48.173560   10206 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1204 15:40:48.211307   10206 logs.go:123] Gathering logs for describe nodes ...
	I1204 15:40:48.211319   10206 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1204 15:40:48.246158   10206 logs.go:123] Gathering logs for etcd [fc48923b85fc] ...
	I1204 15:40:48.246172   10206 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fc48923b85fc"
	I1204 15:40:48.260318   10206 logs.go:123] Gathering logs for coredns [467f47d9f689] ...
	I1204 15:40:48.260330   10206 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 467f47d9f689"
	I1204 15:40:48.271391   10206 logs.go:123] Gathering logs for kube-proxy [e290e266aab8] ...
	I1204 15:40:48.271404   10206 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e290e266aab8"
	I1204 15:40:48.286652   10206 logs.go:123] Gathering logs for storage-provisioner [4bf3c6abead9] ...
	I1204 15:40:48.286661   10206 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4bf3c6abead9"
	I1204 15:40:48.298901   10206 logs.go:123] Gathering logs for Docker ...
	I1204 15:40:48.298911   10206 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1204 15:40:48.322014   10206 logs.go:123] Gathering logs for dmesg ...
	I1204 15:40:48.322024   10206 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1204 15:40:48.326444   10206 logs.go:123] Gathering logs for kube-apiserver [bc235f7c7828] ...
	I1204 15:40:48.326451   10206 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bc235f7c7828"
	I1204 15:40:48.341300   10206 logs.go:123] Gathering logs for kube-apiserver [1d1ae4543cd6] ...
	I1204 15:40:48.341312   10206 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1d1ae4543cd6"
	I1204 15:40:48.380512   10206 logs.go:123] Gathering logs for etcd [4f1790594676] ...
	I1204 15:40:48.380523   10206 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4f1790594676"
	I1204 15:40:48.394480   10206 logs.go:123] Gathering logs for kube-scheduler [b0ad1b935d01] ...
	I1204 15:40:48.394493   10206 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b0ad1b935d01"
	I1204 15:40:48.409577   10206 logs.go:123] Gathering logs for container status ...
	I1204 15:40:48.409586   10206 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1204 15:40:48.421697   10206 logs.go:123] Gathering logs for kube-scheduler [1f92512b0d23] ...
	I1204 15:40:48.421708   10206 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1f92512b0d23"
	I1204 15:40:48.434104   10206 logs.go:123] Gathering logs for kube-controller-manager [4469186baff0] ...
	I1204 15:40:48.434114   10206 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4469186baff0"
	I1204 15:40:48.455743   10206 logs.go:123] Gathering logs for kube-controller-manager [63473edefa8f] ...
	I1204 15:40:48.455755   10206 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 63473edefa8f"
	I1204 15:40:50.971411   10206 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1204 15:40:46.825192    9926 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1204 15:40:55.973879   10206 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1204 15:40:55.974138   10206 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1204 15:40:56.009503   10206 logs.go:282] 2 containers: [bc235f7c7828 1d1ae4543cd6]
	I1204 15:40:56.009617   10206 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1204 15:40:56.025920   10206 logs.go:282] 2 containers: [fc48923b85fc 4f1790594676]
	I1204 15:40:56.026009   10206 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1204 15:40:56.038550   10206 logs.go:282] 1 containers: [467f47d9f689]
	I1204 15:40:56.038623   10206 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1204 15:40:56.049309   10206 logs.go:282] 2 containers: [1f92512b0d23 b0ad1b935d01]
	I1204 15:40:56.049394   10206 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1204 15:40:56.059526   10206 logs.go:282] 1 containers: [e290e266aab8]
	I1204 15:40:56.059604   10206 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1204 15:40:56.070125   10206 logs.go:282] 2 containers: [4469186baff0 63473edefa8f]
	I1204 15:40:56.070204   10206 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1204 15:40:56.080445   10206 logs.go:282] 0 containers: []
	W1204 15:40:56.080458   10206 logs.go:284] No container was found matching "kindnet"
	I1204 15:40:56.080526   10206 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1204 15:40:56.091036   10206 logs.go:282] 2 containers: [4bf3c6abead9 4a54859eb1e5]
	I1204 15:40:56.091054   10206 logs.go:123] Gathering logs for storage-provisioner [4a54859eb1e5] ...
	I1204 15:40:56.091059   10206 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4a54859eb1e5"
	I1204 15:40:56.102520   10206 logs.go:123] Gathering logs for kube-apiserver [bc235f7c7828] ...
	I1204 15:40:56.102531   10206 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bc235f7c7828"
	I1204 15:40:56.116475   10206 logs.go:123] Gathering logs for kube-apiserver [1d1ae4543cd6] ...
	I1204 15:40:56.116486   10206 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1d1ae4543cd6"
	I1204 15:40:56.155582   10206 logs.go:123] Gathering logs for etcd [fc48923b85fc] ...
	I1204 15:40:56.155594   10206 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fc48923b85fc"
	I1204 15:40:56.170185   10206 logs.go:123] Gathering logs for coredns [467f47d9f689] ...
	I1204 15:40:56.170198   10206 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 467f47d9f689"
	I1204 15:40:56.184430   10206 logs.go:123] Gathering logs for kube-proxy [e290e266aab8] ...
	I1204 15:40:56.184440   10206 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e290e266aab8"
	I1204 15:40:56.197582   10206 logs.go:123] Gathering logs for kube-controller-manager [4469186baff0] ...
	I1204 15:40:56.197593   10206 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4469186baff0"
	I1204 15:40:56.215256   10206 logs.go:123] Gathering logs for dmesg ...
	I1204 15:40:56.215269   10206 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1204 15:40:56.219550   10206 logs.go:123] Gathering logs for etcd [4f1790594676] ...
	I1204 15:40:56.219559   10206 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4f1790594676"
	I1204 15:40:56.234454   10206 logs.go:123] Gathering logs for kube-scheduler [1f92512b0d23] ...
	I1204 15:40:56.234464   10206 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1f92512b0d23"
	I1204 15:40:56.246281   10206 logs.go:123] Gathering logs for kubelet ...
	I1204 15:40:56.246294   10206 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1204 15:40:56.283210   10206 logs.go:123] Gathering logs for kube-scheduler [b0ad1b935d01] ...
	I1204 15:40:56.283219   10206 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b0ad1b935d01"
	I1204 15:40:56.298744   10206 logs.go:123] Gathering logs for kube-controller-manager [63473edefa8f] ...
	I1204 15:40:56.298759   10206 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 63473edefa8f"
	I1204 15:40:56.311180   10206 logs.go:123] Gathering logs for container status ...
	I1204 15:40:56.311190   10206 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1204 15:40:56.327022   10206 logs.go:123] Gathering logs for describe nodes ...
	I1204 15:40:56.327034   10206 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1204 15:40:56.364034   10206 logs.go:123] Gathering logs for storage-provisioner [4bf3c6abead9] ...
	I1204 15:40:56.364045   10206 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4bf3c6abead9"
	I1204 15:40:56.375945   10206 logs.go:123] Gathering logs for Docker ...
	I1204 15:40:56.375956   10206 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1204 15:40:51.827452    9926 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1204 15:40:51.827606    9926 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1204 15:40:51.843378    9926 logs.go:282] 1 containers: [f3328f94ed0d]
	I1204 15:40:51.843479    9926 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1204 15:40:51.855877    9926 logs.go:282] 1 containers: [17b0ed658f6c]
	I1204 15:40:51.855964    9926 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1204 15:40:51.866360    9926 logs.go:282] 2 containers: [0a3178099d31 c8ddfa007847]
	I1204 15:40:51.866440    9926 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1204 15:40:51.877040    9926 logs.go:282] 1 containers: [a82022fd2242]
	I1204 15:40:51.877115    9926 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1204 15:40:51.887124    9926 logs.go:282] 1 containers: [8f7d833df5c3]
	I1204 15:40:51.887205    9926 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1204 15:40:51.897925    9926 logs.go:282] 1 containers: [bdf1070f199f]
	I1204 15:40:51.898006    9926 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1204 15:40:51.908306    9926 logs.go:282] 0 containers: []
	W1204 15:40:51.908320    9926 logs.go:284] No container was found matching "kindnet"
	I1204 15:40:51.908385    9926 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1204 15:40:51.919628    9926 logs.go:282] 1 containers: [bdb47901ddb7]
	I1204 15:40:51.919644    9926 logs.go:123] Gathering logs for kube-proxy [8f7d833df5c3] ...
	I1204 15:40:51.919650    9926 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8f7d833df5c3"
	I1204 15:40:51.932401    9926 logs.go:123] Gathering logs for dmesg ...
	I1204 15:40:51.932412    9926 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1204 15:40:51.937579    9926 logs.go:123] Gathering logs for describe nodes ...
	I1204 15:40:51.937585    9926 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1204 15:40:51.978703    9926 logs.go:123] Gathering logs for kube-apiserver [f3328f94ed0d] ...
	I1204 15:40:51.978718    9926 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f3328f94ed0d"
	I1204 15:40:51.997153    9926 logs.go:123] Gathering logs for etcd [17b0ed658f6c] ...
	I1204 15:40:51.997167    9926 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 17b0ed658f6c"
	I1204 15:40:52.010681    9926 logs.go:123] Gathering logs for coredns [0a3178099d31] ...
	I1204 15:40:52.010694    9926 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0a3178099d31"
	I1204 15:40:52.022642    9926 logs.go:123] Gathering logs for coredns [c8ddfa007847] ...
	I1204 15:40:52.022655    9926 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c8ddfa007847"
	I1204 15:40:52.034291    9926 logs.go:123] Gathering logs for kube-scheduler [a82022fd2242] ...
	I1204 15:40:52.034306    9926 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a82022fd2242"
	I1204 15:40:52.049254    9926 logs.go:123] Gathering logs for kube-controller-manager [bdf1070f199f] ...
	I1204 15:40:52.049264    9926 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bdf1070f199f"
	I1204 15:40:52.067078    9926 logs.go:123] Gathering logs for container status ...
	I1204 15:40:52.067088    9926 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1204 15:40:52.078487    9926 logs.go:123] Gathering logs for kubelet ...
	I1204 15:40:52.078501    9926 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1204 15:40:52.117054    9926 logs.go:123] Gathering logs for storage-provisioner [bdb47901ddb7] ...
	I1204 15:40:52.117064    9926 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bdb47901ddb7"
	I1204 15:40:52.132727    9926 logs.go:123] Gathering logs for Docker ...
	I1204 15:40:52.132737    9926 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
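[editor's note] The dmesg sweep keeps only kernel messages at warning severity or worse and strips anything interactive, so the output is safe to capture non-interactively. The same command with the flags spelled out (annotations ours; flag meanings per util-linux dmesg):

    sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400
    # -H        human-readable timestamps
    # -P        do not pipe through a pager (a pager is otherwise implied by -H)
    # -L=never  disable colored output
    # --level   keep only the listed priorities
    # tail      cap the capture at 400 lines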
	I1204 15:40:54.659590    9926 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1204 15:40:58.901458   10206 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1204 15:40:59.661920    9926 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1204 15:40:59.662032    9926 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1204 15:40:59.674746    9926 logs.go:282] 1 containers: [f3328f94ed0d]
	I1204 15:40:59.674838    9926 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1204 15:40:59.685364    9926 logs.go:282] 1 containers: [17b0ed658f6c]
	I1204 15:40:59.685448    9926 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1204 15:40:59.696101    9926 logs.go:282] 2 containers: [0a3178099d31 c8ddfa007847]
	I1204 15:40:59.696181    9926 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1204 15:40:59.707139    9926 logs.go:282] 1 containers: [a82022fd2242]
	I1204 15:40:59.707220    9926 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1204 15:40:59.717397    9926 logs.go:282] 1 containers: [8f7d833df5c3]
	I1204 15:40:59.717479    9926 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1204 15:40:59.728021    9926 logs.go:282] 1 containers: [bdf1070f199f]
	I1204 15:40:59.728098    9926 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1204 15:40:59.738520    9926 logs.go:282] 0 containers: []
	W1204 15:40:59.738533    9926 logs.go:284] No container was found matching "kindnet"
	I1204 15:40:59.738598    9926 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1204 15:40:59.752117    9926 logs.go:282] 1 containers: [bdb47901ddb7]
	I1204 15:40:59.752138    9926 logs.go:123] Gathering logs for coredns [0a3178099d31] ...
	I1204 15:40:59.752144    9926 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0a3178099d31"
	I1204 15:40:59.763744    9926 logs.go:123] Gathering logs for coredns [c8ddfa007847] ...
	I1204 15:40:59.763754    9926 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c8ddfa007847"
	I1204 15:40:59.776007    9926 logs.go:123] Gathering logs for kube-controller-manager [bdf1070f199f] ...
	I1204 15:40:59.776018    9926 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bdf1070f199f"
	I1204 15:40:59.798196    9926 logs.go:123] Gathering logs for storage-provisioner [bdb47901ddb7] ...
	I1204 15:40:59.798205    9926 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bdb47901ddb7"
	I1204 15:40:59.812662    9926 logs.go:123] Gathering logs for container status ...
	I1204 15:40:59.812677    9926 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1204 15:40:59.824671    9926 logs.go:123] Gathering logs for describe nodes ...
	I1204 15:40:59.824682    9926 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1204 15:40:59.859401    9926 logs.go:123] Gathering logs for kube-apiserver [f3328f94ed0d] ...
	I1204 15:40:59.859416    9926 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f3328f94ed0d"
	I1204 15:40:59.874034    9926 logs.go:123] Gathering logs for etcd [17b0ed658f6c] ...
	I1204 15:40:59.874045    9926 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 17b0ed658f6c"
	I1204 15:40:59.888970    9926 logs.go:123] Gathering logs for kube-scheduler [a82022fd2242] ...
	I1204 15:40:59.888986    9926 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a82022fd2242"
	I1204 15:40:59.904591    9926 logs.go:123] Gathering logs for kube-proxy [8f7d833df5c3] ...
	I1204 15:40:59.904605    9926 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8f7d833df5c3"
	I1204 15:40:59.916396    9926 logs.go:123] Gathering logs for Docker ...
	I1204 15:40:59.916408    9926 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1204 15:40:59.942081    9926 logs.go:123] Gathering logs for kubelet ...
	I1204 15:40:59.942089    9926 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1204 15:40:59.979634    9926 logs.go:123] Gathering logs for dmesg ...
	I1204 15:40:59.979657    9926 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1204 15:41:03.903875   10206 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1204 15:41:03.904128   10206 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1204 15:41:03.932347   10206 logs.go:282] 2 containers: [bc235f7c7828 1d1ae4543cd6]
	I1204 15:41:03.932471   10206 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1204 15:41:03.950474   10206 logs.go:282] 2 containers: [fc48923b85fc 4f1790594676]
	I1204 15:41:03.950571   10206 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1204 15:41:03.964100   10206 logs.go:282] 1 containers: [467f47d9f689]
	I1204 15:41:03.964171   10206 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1204 15:41:03.976453   10206 logs.go:282] 2 containers: [1f92512b0d23 b0ad1b935d01]
	I1204 15:41:03.976535   10206 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1204 15:41:03.986968   10206 logs.go:282] 1 containers: [e290e266aab8]
	I1204 15:41:03.987039   10206 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1204 15:41:03.998885   10206 logs.go:282] 2 containers: [4469186baff0 63473edefa8f]
	I1204 15:41:03.998963   10206 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1204 15:41:04.013564   10206 logs.go:282] 0 containers: []
	W1204 15:41:04.013580   10206 logs.go:284] No container was found matching "kindnet"
	I1204 15:41:04.013647   10206 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1204 15:41:04.024222   10206 logs.go:282] 2 containers: [4bf3c6abead9 4a54859eb1e5]
	I1204 15:41:04.024241   10206 logs.go:123] Gathering logs for etcd [4f1790594676] ...
	I1204 15:41:04.024246   10206 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4f1790594676"
	I1204 15:41:04.039808   10206 logs.go:123] Gathering logs for describe nodes ...
	I1204 15:41:04.039821   10206 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1204 15:41:04.076335   10206 logs.go:123] Gathering logs for kube-proxy [e290e266aab8] ...
	I1204 15:41:04.076347   10206 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e290e266aab8"
	I1204 15:41:04.088598   10206 logs.go:123] Gathering logs for Docker ...
	I1204 15:41:04.088611   10206 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1204 15:41:04.112116   10206 logs.go:123] Gathering logs for container status ...
	I1204 15:41:04.112128   10206 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1204 15:41:04.124419   10206 logs.go:123] Gathering logs for kube-controller-manager [63473edefa8f] ...
	I1204 15:41:04.124430   10206 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 63473edefa8f"
	I1204 15:41:04.136600   10206 logs.go:123] Gathering logs for kubelet ...
	I1204 15:41:04.136610   10206 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1204 15:41:04.173909   10206 logs.go:123] Gathering logs for dmesg ...
	I1204 15:41:04.173917   10206 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1204 15:41:04.178260   10206 logs.go:123] Gathering logs for kube-apiserver [bc235f7c7828] ...
	I1204 15:41:04.178269   10206 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bc235f7c7828"
	I1204 15:41:04.192034   10206 logs.go:123] Gathering logs for kube-apiserver [1d1ae4543cd6] ...
	I1204 15:41:04.192044   10206 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1d1ae4543cd6"
	I1204 15:41:04.231430   10206 logs.go:123] Gathering logs for coredns [467f47d9f689] ...
	I1204 15:41:04.231440   10206 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 467f47d9f689"
	I1204 15:41:04.242611   10206 logs.go:123] Gathering logs for kube-scheduler [1f92512b0d23] ...
	I1204 15:41:04.242623   10206 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1f92512b0d23"
	I1204 15:41:04.254247   10206 logs.go:123] Gathering logs for etcd [fc48923b85fc] ...
	I1204 15:41:04.254261   10206 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fc48923b85fc"
	I1204 15:41:04.267893   10206 logs.go:123] Gathering logs for kube-scheduler [b0ad1b935d01] ...
	I1204 15:41:04.267906   10206 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b0ad1b935d01"
	I1204 15:41:04.283177   10206 logs.go:123] Gathering logs for kube-controller-manager [4469186baff0] ...
	I1204 15:41:04.283190   10206 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4469186baff0"
	I1204 15:41:04.306921   10206 logs.go:123] Gathering logs for storage-provisioner [4bf3c6abead9] ...
	I1204 15:41:04.306931   10206 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4bf3c6abead9"
	I1204 15:41:04.323758   10206 logs.go:123] Gathering logs for storage-provisioner [4a54859eb1e5] ...
	I1204 15:41:04.323771   10206 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4a54859eb1e5"
	I1204 15:41:02.485920    9926 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1204 15:41:06.837076   10206 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1204 15:41:07.488221    9926 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1204 15:41:07.488494    9926 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1204 15:41:07.513011    9926 logs.go:282] 1 containers: [f3328f94ed0d]
	I1204 15:41:07.513138    9926 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1204 15:41:07.529007    9926 logs.go:282] 1 containers: [17b0ed658f6c]
	I1204 15:41:07.529084    9926 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1204 15:41:07.545775    9926 logs.go:282] 4 containers: [11d10ac44548 722dc28f5a86 0a3178099d31 c8ddfa007847]
	I1204 15:41:07.545845    9926 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1204 15:41:07.559359    9926 logs.go:282] 1 containers: [a82022fd2242]
	I1204 15:41:07.559430    9926 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1204 15:41:07.570570    9926 logs.go:282] 1 containers: [8f7d833df5c3]
	I1204 15:41:07.570637    9926 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1204 15:41:07.581042    9926 logs.go:282] 1 containers: [bdf1070f199f]
	I1204 15:41:07.581105    9926 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1204 15:41:07.591201    9926 logs.go:282] 0 containers: []
	W1204 15:41:07.591221    9926 logs.go:284] No container was found matching "kindnet"
	I1204 15:41:07.591279    9926 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1204 15:41:07.602303    9926 logs.go:282] 1 containers: [bdb47901ddb7]
	I1204 15:41:07.602321    9926 logs.go:123] Gathering logs for describe nodes ...
	I1204 15:41:07.602327    9926 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1204 15:41:07.637660    9926 logs.go:123] Gathering logs for etcd [17b0ed658f6c] ...
	I1204 15:41:07.637671    9926 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 17b0ed658f6c"
	I1204 15:41:07.651713    9926 logs.go:123] Gathering logs for storage-provisioner [bdb47901ddb7] ...
	I1204 15:41:07.651723    9926 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bdb47901ddb7"
	I1204 15:41:07.663554    9926 logs.go:123] Gathering logs for container status ...
	I1204 15:41:07.663565    9926 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1204 15:41:07.676756    9926 logs.go:123] Gathering logs for kubelet ...
	I1204 15:41:07.676766    9926 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1204 15:41:07.715499    9926 logs.go:123] Gathering logs for dmesg ...
	I1204 15:41:07.715510    9926 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1204 15:41:07.720064    9926 logs.go:123] Gathering logs for kube-apiserver [f3328f94ed0d] ...
	I1204 15:41:07.720075    9926 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f3328f94ed0d"
	I1204 15:41:07.734317    9926 logs.go:123] Gathering logs for coredns [722dc28f5a86] ...
	I1204 15:41:07.734329    9926 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 722dc28f5a86"
	I1204 15:41:07.745437    9926 logs.go:123] Gathering logs for Docker ...
	I1204 15:41:07.745446    9926 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1204 15:41:07.768810    9926 logs.go:123] Gathering logs for coredns [11d10ac44548] ...
	I1204 15:41:07.768821    9926 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 11d10ac44548"
	I1204 15:41:07.779827    9926 logs.go:123] Gathering logs for coredns [c8ddfa007847] ...
	I1204 15:41:07.779838    9926 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c8ddfa007847"
	I1204 15:41:07.791712    9926 logs.go:123] Gathering logs for kube-scheduler [a82022fd2242] ...
	I1204 15:41:07.791724    9926 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a82022fd2242"
	I1204 15:41:07.807923    9926 logs.go:123] Gathering logs for kube-proxy [8f7d833df5c3] ...
	I1204 15:41:07.807934    9926 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8f7d833df5c3"
	I1204 15:41:07.819804    9926 logs.go:123] Gathering logs for coredns [0a3178099d31] ...
	I1204 15:41:07.819817    9926 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0a3178099d31"
	I1204 15:41:07.832243    9926 logs.go:123] Gathering logs for kube-controller-manager [bdf1070f199f] ...
	I1204 15:41:07.832253    9926 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bdf1070f199f"
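	[Editor's sketch] The repeated "api_server.go:269] stopped:" lines above carry the exact error string Go's net/http client produces when a request's overall timeout expires before any response headers arrive. A minimal probe sketch, assuming a 5-second client timeout and skipped TLS verification (neither value appears in the log; both are illustrative):

	package main

	import (
		"crypto/tls"
		"fmt"
		"net/http"
		"time"
	)

	func main() {
		// A hung apiserver makes this Get fail with:
		//   context deadline exceeded (Client.Timeout exceeded while awaiting headers)
		// which is the error string in the stopped: lines above.
		client := &http.Client{
			Timeout: 5 * time.Second, // assumed; the real probe's timeout is not shown in the log
			Transport: &http.Transport{
				TLSClientConfig: &tls.Config{InsecureSkipVerify: true}, // illustrative only
			},
		}
		resp, err := client.Get("https://10.0.2.15:8443/healthz")
		if err != nil {
			fmt.Println("stopped:", err)
			return
		}
		defer resp.Body.Close()
		fmt.Println("healthz:", resp.Status)
	}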
	I1204 15:41:10.352635    9926 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1204 15:41:11.839439   10206 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1204 15:41:11.839653   10206 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1204 15:41:11.860842   10206 logs.go:282] 2 containers: [bc235f7c7828 1d1ae4543cd6]
	I1204 15:41:11.860951   10206 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1204 15:41:11.875923   10206 logs.go:282] 2 containers: [fc48923b85fc 4f1790594676]
	I1204 15:41:11.876016   10206 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1204 15:41:11.888157   10206 logs.go:282] 1 containers: [467f47d9f689]
	I1204 15:41:11.888238   10206 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1204 15:41:11.899595   10206 logs.go:282] 2 containers: [1f92512b0d23 b0ad1b935d01]
	I1204 15:41:11.899677   10206 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1204 15:41:11.910289   10206 logs.go:282] 1 containers: [e290e266aab8]
	I1204 15:41:11.910362   10206 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1204 15:41:11.920782   10206 logs.go:282] 2 containers: [4469186baff0 63473edefa8f]
	I1204 15:41:11.920862   10206 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1204 15:41:11.930814   10206 logs.go:282] 0 containers: []
	W1204 15:41:11.930824   10206 logs.go:284] No container was found matching "kindnet"
	I1204 15:41:11.930881   10206 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1204 15:41:11.941242   10206 logs.go:282] 2 containers: [4bf3c6abead9 4a54859eb1e5]
	I1204 15:41:11.941268   10206 logs.go:123] Gathering logs for etcd [fc48923b85fc] ...
	I1204 15:41:11.941274   10206 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fc48923b85fc"
	I1204 15:41:11.955336   10206 logs.go:123] Gathering logs for kube-scheduler [b0ad1b935d01] ...
	I1204 15:41:11.955346   10206 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b0ad1b935d01"
	I1204 15:41:11.970691   10206 logs.go:123] Gathering logs for kube-controller-manager [4469186baff0] ...
	I1204 15:41:11.970701   10206 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4469186baff0"
	I1204 15:41:11.989516   10206 logs.go:123] Gathering logs for Docker ...
	I1204 15:41:11.989536   10206 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1204 15:41:12.014256   10206 logs.go:123] Gathering logs for kubelet ...
	I1204 15:41:12.014267   10206 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1204 15:41:12.053787   10206 logs.go:123] Gathering logs for dmesg ...
	I1204 15:41:12.053796   10206 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1204 15:41:12.058467   10206 logs.go:123] Gathering logs for describe nodes ...
	I1204 15:41:12.058476   10206 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1204 15:41:12.092499   10206 logs.go:123] Gathering logs for kube-apiserver [bc235f7c7828] ...
	I1204 15:41:12.092509   10206 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bc235f7c7828"
	I1204 15:41:12.106904   10206 logs.go:123] Gathering logs for kube-apiserver [1d1ae4543cd6] ...
	I1204 15:41:12.106915   10206 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1d1ae4543cd6"
	I1204 15:41:12.146397   10206 logs.go:123] Gathering logs for coredns [467f47d9f689] ...
	I1204 15:41:12.146410   10206 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 467f47d9f689"
	I1204 15:41:12.164469   10206 logs.go:123] Gathering logs for storage-provisioner [4bf3c6abead9] ...
	I1204 15:41:12.164483   10206 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4bf3c6abead9"
	I1204 15:41:12.175727   10206 logs.go:123] Gathering logs for kube-scheduler [1f92512b0d23] ...
	I1204 15:41:12.175741   10206 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1f92512b0d23"
	I1204 15:41:12.187731   10206 logs.go:123] Gathering logs for kube-proxy [e290e266aab8] ...
	I1204 15:41:12.187741   10206 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e290e266aab8"
	I1204 15:41:12.199472   10206 logs.go:123] Gathering logs for kube-controller-manager [63473edefa8f] ...
	I1204 15:41:12.199485   10206 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 63473edefa8f"
	I1204 15:41:12.214817   10206 logs.go:123] Gathering logs for container status ...
	I1204 15:41:12.214827   10206 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1204 15:41:12.227777   10206 logs.go:123] Gathering logs for etcd [4f1790594676] ...
	I1204 15:41:12.227788   10206 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4f1790594676"
	I1204 15:41:12.245935   10206 logs.go:123] Gathering logs for storage-provisioner [4a54859eb1e5] ...
	I1204 15:41:12.245948   10206 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4a54859eb1e5"
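	[Editor's sketch] Each gathering cycle starts by discovering container IDs per control-plane component with docker ps -a --filter=name=k8s_<component> --format={{.ID}}; the k8s_ prefix is the naming convention kubelet's Docker integration applies to pod containers, and -a includes exited containers, so crashed components still show up (hence two IDs per component for pid 10206). A sketch of that step, run locally with os/exec instead of over the VM's SSH session:

	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	// containerIDs lists all containers, running or exited, whose Docker
	// name carries the k8s_<component> prefix, and returns their short
	// IDs newest first, like the "N containers: [...]" lines above.
	func containerIDs(component string) ([]string, error) {
		out, err := exec.Command("docker", "ps", "-a",
			"--filter", "name=k8s_"+component,
			"--format", "{{.ID}}").Output()
		if err != nil {
			return nil, err
		}
		return strings.Fields(string(out)), nil
	}

	func main() {
		for _, c := range []string{"kube-apiserver", "etcd", "coredns",
			"kube-scheduler", "kube-proxy", "kube-controller-manager"} {
			ids, err := containerIDs(c)
			if err != nil {
				fmt.Println(c, "error:", err)
				continue
			}
			fmt.Printf("%s: %d containers: %v\n", c, len(ids), ids)
		}
	}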
	I1204 15:41:14.759368   10206 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1204 15:41:15.355277    9926 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1204 15:41:15.355684    9926 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1204 15:41:15.387166    9926 logs.go:282] 1 containers: [f3328f94ed0d]
	I1204 15:41:15.387322    9926 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1204 15:41:15.407099    9926 logs.go:282] 1 containers: [17b0ed658f6c]
	I1204 15:41:15.407201    9926 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1204 15:41:15.421122    9926 logs.go:282] 4 containers: [11d10ac44548 722dc28f5a86 0a3178099d31 c8ddfa007847]
	I1204 15:41:15.421217    9926 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1204 15:41:15.432700    9926 logs.go:282] 1 containers: [a82022fd2242]
	I1204 15:41:15.432778    9926 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1204 15:41:15.443965    9926 logs.go:282] 1 containers: [8f7d833df5c3]
	I1204 15:41:15.444045    9926 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1204 15:41:15.454319    9926 logs.go:282] 1 containers: [bdf1070f199f]
	I1204 15:41:15.454398    9926 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1204 15:41:15.464189    9926 logs.go:282] 0 containers: []
	W1204 15:41:15.464204    9926 logs.go:284] No container was found matching "kindnet"
	I1204 15:41:15.464278    9926 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1204 15:41:15.474566    9926 logs.go:282] 1 containers: [bdb47901ddb7]
	I1204 15:41:15.474584    9926 logs.go:123] Gathering logs for Docker ...
	I1204 15:41:15.474589    9926 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1204 15:41:15.498051    9926 logs.go:123] Gathering logs for dmesg ...
	I1204 15:41:15.498064    9926 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1204 15:41:15.502437    9926 logs.go:123] Gathering logs for coredns [0a3178099d31] ...
	I1204 15:41:15.502443    9926 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0a3178099d31"
	I1204 15:41:15.513730    9926 logs.go:123] Gathering logs for coredns [c8ddfa007847] ...
	I1204 15:41:15.513744    9926 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c8ddfa007847"
	I1204 15:41:15.526358    9926 logs.go:123] Gathering logs for kube-controller-manager [bdf1070f199f] ...
	I1204 15:41:15.526373    9926 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bdf1070f199f"
	I1204 15:41:15.544374    9926 logs.go:123] Gathering logs for kubelet ...
	I1204 15:41:15.544385    9926 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1204 15:41:15.583535    9926 logs.go:123] Gathering logs for coredns [722dc28f5a86] ...
	I1204 15:41:15.583547    9926 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 722dc28f5a86"
	I1204 15:41:15.596050    9926 logs.go:123] Gathering logs for container status ...
	I1204 15:41:15.596061    9926 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1204 15:41:15.621198    9926 logs.go:123] Gathering logs for describe nodes ...
	I1204 15:41:15.621210    9926 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1204 15:41:15.658817    9926 logs.go:123] Gathering logs for coredns [11d10ac44548] ...
	I1204 15:41:15.658831    9926 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 11d10ac44548"
	I1204 15:41:15.671064    9926 logs.go:123] Gathering logs for storage-provisioner [bdb47901ddb7] ...
	I1204 15:41:15.671078    9926 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bdb47901ddb7"
	I1204 15:41:15.688240    9926 logs.go:123] Gathering logs for kube-apiserver [f3328f94ed0d] ...
	I1204 15:41:15.688255    9926 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f3328f94ed0d"
	I1204 15:41:15.702515    9926 logs.go:123] Gathering logs for etcd [17b0ed658f6c] ...
	I1204 15:41:15.702527    9926 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 17b0ed658f6c"
	I1204 15:41:15.716368    9926 logs.go:123] Gathering logs for kube-scheduler [a82022fd2242] ...
	I1204 15:41:15.716382    9926 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a82022fd2242"
	I1204 15:41:15.732224    9926 logs.go:123] Gathering logs for kube-proxy [8f7d833df5c3] ...
	I1204 15:41:15.732236    9926 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8f7d833df5c3"
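	[Editor's sketch] Once the IDs are known, the cycle fans out over a fixed set of sources: docker logs --tail 400 per container, journalctl for the kubelet and docker/cri-docker units, a filtered dmesg, kubectl describe nodes, and the container-status listing. A condensed sketch of that fan-out; the real code buffers each command's output into the problem report rather than printing it, and runs everything through the VM's SSH session:

	package main

	import (
		"fmt"
		"os/exec"
	)

	// gather runs one log source through bash, mirroring the
	// ssh_runner.go `Run: /bin/bash -c ...` lines; output handling is
	// reduced to a byte count for this sketch.
	func gather(name, cmd string) {
		fmt.Printf("Gathering logs for %s ...\n", name)
		out, _ := exec.Command("/bin/bash", "-c", cmd).CombinedOutput()
		fmt.Printf("  %d bytes\n", len(out))
	}

	func main() {
		// Container IDs taken from the pid 9926 cycle above.
		for name, id := range map[string]string{
			"kube-apiserver": "f3328f94ed0d",
			"etcd":           "17b0ed658f6c",
		} {
			gather(name, "docker logs --tail 400 "+id)
		}
		gather("kubelet", "sudo journalctl -u kubelet -n 400")
		gather("Docker", "sudo journalctl -u docker -u cri-docker -n 400")
		gather("dmesg", "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400")
	}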
	I1204 15:41:19.761785   10206 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1204 15:41:19.762055   10206 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1204 15:41:19.789441   10206 logs.go:282] 2 containers: [bc235f7c7828 1d1ae4543cd6]
	I1204 15:41:19.789570   10206 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1204 15:41:19.806013   10206 logs.go:282] 2 containers: [fc48923b85fc 4f1790594676]
	I1204 15:41:19.806111   10206 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1204 15:41:19.819318   10206 logs.go:282] 1 containers: [467f47d9f689]
	I1204 15:41:19.819401   10206 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1204 15:41:19.839752   10206 logs.go:282] 2 containers: [1f92512b0d23 b0ad1b935d01]
	I1204 15:41:19.839832   10206 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1204 15:41:19.850803   10206 logs.go:282] 1 containers: [e290e266aab8]
	I1204 15:41:19.850879   10206 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1204 15:41:19.867852   10206 logs.go:282] 2 containers: [4469186baff0 63473edefa8f]
	I1204 15:41:19.867923   10206 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1204 15:41:19.878228   10206 logs.go:282] 0 containers: []
	W1204 15:41:19.878241   10206 logs.go:284] No container was found matching "kindnet"
	I1204 15:41:19.878301   10206 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1204 15:41:19.888782   10206 logs.go:282] 2 containers: [4bf3c6abead9 4a54859eb1e5]
	I1204 15:41:19.888800   10206 logs.go:123] Gathering logs for dmesg ...
	I1204 15:41:19.888806   10206 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1204 15:41:19.893678   10206 logs.go:123] Gathering logs for kube-apiserver [1d1ae4543cd6] ...
	I1204 15:41:19.893684   10206 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1d1ae4543cd6"
	I1204 15:41:19.932569   10206 logs.go:123] Gathering logs for etcd [fc48923b85fc] ...
	I1204 15:41:19.932581   10206 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fc48923b85fc"
	I1204 15:41:19.946304   10206 logs.go:123] Gathering logs for etcd [4f1790594676] ...
	I1204 15:41:19.946314   10206 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4f1790594676"
	I1204 15:41:19.960840   10206 logs.go:123] Gathering logs for kube-scheduler [1f92512b0d23] ...
	I1204 15:41:19.960852   10206 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1f92512b0d23"
	I1204 15:41:19.972521   10206 logs.go:123] Gathering logs for kube-scheduler [b0ad1b935d01] ...
	I1204 15:41:19.972533   10206 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b0ad1b935d01"
	I1204 15:41:19.992248   10206 logs.go:123] Gathering logs for kube-controller-manager [63473edefa8f] ...
	I1204 15:41:19.992260   10206 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 63473edefa8f"
	I1204 15:41:20.004704   10206 logs.go:123] Gathering logs for Docker ...
	I1204 15:41:20.004717   10206 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1204 15:41:20.027448   10206 logs.go:123] Gathering logs for describe nodes ...
	I1204 15:41:20.027457   10206 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1204 15:41:20.062516   10206 logs.go:123] Gathering logs for kube-proxy [e290e266aab8] ...
	I1204 15:41:20.062526   10206 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e290e266aab8"
	I1204 15:41:20.074818   10206 logs.go:123] Gathering logs for storage-provisioner [4bf3c6abead9] ...
	I1204 15:41:20.074827   10206 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4bf3c6abead9"
	I1204 15:41:20.086706   10206 logs.go:123] Gathering logs for kubelet ...
	I1204 15:41:20.086715   10206 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1204 15:41:20.124592   10206 logs.go:123] Gathering logs for kube-apiserver [bc235f7c7828] ...
	I1204 15:41:20.124602   10206 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bc235f7c7828"
	I1204 15:41:20.138194   10206 logs.go:123] Gathering logs for coredns [467f47d9f689] ...
	I1204 15:41:20.138210   10206 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 467f47d9f689"
	I1204 15:41:20.149771   10206 logs.go:123] Gathering logs for storage-provisioner [4a54859eb1e5] ...
	I1204 15:41:20.149784   10206 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4a54859eb1e5"
	I1204 15:41:20.172433   10206 logs.go:123] Gathering logs for container status ...
	I1204 15:41:20.172445   10206 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1204 15:41:20.185399   10206 logs.go:123] Gathering logs for kube-controller-manager [4469186baff0] ...
	I1204 15:41:20.185409   10206 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4469186baff0"
	I1204 15:41:18.246384    9926 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1204 15:41:22.710798   10206 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1204 15:41:23.248476    9926 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1204 15:41:23.248696    9926 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1204 15:41:23.269511    9926 logs.go:282] 1 containers: [f3328f94ed0d]
	I1204 15:41:23.269622    9926 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1204 15:41:23.284477    9926 logs.go:282] 1 containers: [17b0ed658f6c]
	I1204 15:41:23.284567    9926 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1204 15:41:23.299116    9926 logs.go:282] 4 containers: [11d10ac44548 722dc28f5a86 0a3178099d31 c8ddfa007847]
	I1204 15:41:23.299197    9926 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1204 15:41:23.309926    9926 logs.go:282] 1 containers: [a82022fd2242]
	I1204 15:41:23.310005    9926 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1204 15:41:23.320148    9926 logs.go:282] 1 containers: [8f7d833df5c3]
	I1204 15:41:23.320227    9926 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1204 15:41:23.331119    9926 logs.go:282] 1 containers: [bdf1070f199f]
	I1204 15:41:23.331201    9926 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1204 15:41:23.341440    9926 logs.go:282] 0 containers: []
	W1204 15:41:23.341451    9926 logs.go:284] No container was found matching "kindnet"
	I1204 15:41:23.341527    9926 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1204 15:41:23.351564    9926 logs.go:282] 1 containers: [bdb47901ddb7]
	I1204 15:41:23.351582    9926 logs.go:123] Gathering logs for container status ...
	I1204 15:41:23.351589    9926 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1204 15:41:23.363524    9926 logs.go:123] Gathering logs for dmesg ...
	I1204 15:41:23.363538    9926 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1204 15:41:23.368424    9926 logs.go:123] Gathering logs for kube-proxy [8f7d833df5c3] ...
	I1204 15:41:23.368431    9926 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8f7d833df5c3"
	I1204 15:41:23.380058    9926 logs.go:123] Gathering logs for etcd [17b0ed658f6c] ...
	I1204 15:41:23.380073    9926 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 17b0ed658f6c"
	I1204 15:41:23.394497    9926 logs.go:123] Gathering logs for coredns [11d10ac44548] ...
	I1204 15:41:23.394510    9926 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 11d10ac44548"
	I1204 15:41:23.406129    9926 logs.go:123] Gathering logs for coredns [0a3178099d31] ...
	I1204 15:41:23.406139    9926 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0a3178099d31"
	I1204 15:41:23.418115    9926 logs.go:123] Gathering logs for coredns [c8ddfa007847] ...
	I1204 15:41:23.418126    9926 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c8ddfa007847"
	I1204 15:41:23.429832    9926 logs.go:123] Gathering logs for storage-provisioner [bdb47901ddb7] ...
	I1204 15:41:23.429849    9926 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bdb47901ddb7"
	I1204 15:41:23.442179    9926 logs.go:123] Gathering logs for Docker ...
	I1204 15:41:23.442193    9926 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1204 15:41:23.465097    9926 logs.go:123] Gathering logs for kubelet ...
	I1204 15:41:23.465110    9926 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1204 15:41:23.501939    9926 logs.go:123] Gathering logs for describe nodes ...
	I1204 15:41:23.501949    9926 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1204 15:41:23.537216    9926 logs.go:123] Gathering logs for kube-apiserver [f3328f94ed0d] ...
	I1204 15:41:23.537226    9926 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f3328f94ed0d"
	I1204 15:41:23.551866    9926 logs.go:123] Gathering logs for kube-scheduler [a82022fd2242] ...
	I1204 15:41:23.551880    9926 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a82022fd2242"
	I1204 15:41:23.568223    9926 logs.go:123] Gathering logs for coredns [722dc28f5a86] ...
	I1204 15:41:23.568238    9926 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 722dc28f5a86"
	I1204 15:41:23.582474    9926 logs.go:123] Gathering logs for kube-controller-manager [bdf1070f199f] ...
	I1204 15:41:23.582488    9926 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bdf1070f199f"
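	[Editor's sketch] Two minikube processes (pids 9926 and 10206) are probing here, which is why their cycles interleave and the wall-clock timestamps occasionally step backwards between adjacent lines; 10.0.2.15 is the default guest IP under QEMU's user-mode networking, so both VMs report the same address. Each process's loop has the shape probe, dump, sleep, repeat. A minimal sketch with an assumed fixed interval; the real loop in api_server.go manages its own timing and deadline:

	package main

	import (
		"errors"
		"fmt"
		"time"
	)

	// waitForAPIServer probes /healthz until it succeeds or the deadline
	// passes; every failed probe triggers a full log-gathering pass,
	// which is what produces the repeating blocks above.
	func waitForAPIServer(check func() error, gather func(), deadline time.Time) error {
		for time.Now().Before(deadline) {
			if err := check(); err != nil {
				fmt.Println("stopped:", err)
				gather()
				time.Sleep(3 * time.Second) // assumed interval
				continue
			}
			return nil
		}
		return errors.New("apiserver never reported healthy")
	}

	func main() {
		err := waitForAPIServer(
			func() error { return errors.New("context deadline exceeded") }, // stand-in probe
			func() { fmt.Println("gathering component logs ...") },
			time.Now().Add(10*time.Second),
		)
		fmt.Println(err)
	}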
	I1204 15:41:26.104121    9926 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1204 15:41:27.711904   10206 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1204 15:41:27.712039   10206 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1204 15:41:27.723839   10206 logs.go:282] 2 containers: [bc235f7c7828 1d1ae4543cd6]
	I1204 15:41:27.723922   10206 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1204 15:41:27.734092   10206 logs.go:282] 2 containers: [fc48923b85fc 4f1790594676]
	I1204 15:41:27.734173   10206 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1204 15:41:27.745781   10206 logs.go:282] 1 containers: [467f47d9f689]
	I1204 15:41:27.745858   10206 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1204 15:41:27.756200   10206 logs.go:282] 2 containers: [1f92512b0d23 b0ad1b935d01]
	I1204 15:41:27.756281   10206 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1204 15:41:27.767062   10206 logs.go:282] 1 containers: [e290e266aab8]
	I1204 15:41:27.767147   10206 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1204 15:41:27.778107   10206 logs.go:282] 2 containers: [4469186baff0 63473edefa8f]
	I1204 15:41:27.778182   10206 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1204 15:41:27.788167   10206 logs.go:282] 0 containers: []
	W1204 15:41:27.788181   10206 logs.go:284] No container was found matching "kindnet"
	I1204 15:41:27.788254   10206 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1204 15:41:27.800007   10206 logs.go:282] 2 containers: [4bf3c6abead9 4a54859eb1e5]
	I1204 15:41:27.800027   10206 logs.go:123] Gathering logs for kube-scheduler [1f92512b0d23] ...
	I1204 15:41:27.800032   10206 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1f92512b0d23"
	I1204 15:41:27.811798   10206 logs.go:123] Gathering logs for kube-scheduler [b0ad1b935d01] ...
	I1204 15:41:27.811810   10206 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b0ad1b935d01"
	I1204 15:41:27.826433   10206 logs.go:123] Gathering logs for storage-provisioner [4bf3c6abead9] ...
	I1204 15:41:27.826442   10206 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4bf3c6abead9"
	I1204 15:41:27.837920   10206 logs.go:123] Gathering logs for storage-provisioner [4a54859eb1e5] ...
	I1204 15:41:27.837931   10206 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4a54859eb1e5"
	I1204 15:41:27.849494   10206 logs.go:123] Gathering logs for Docker ...
	I1204 15:41:27.849505   10206 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1204 15:41:27.873788   10206 logs.go:123] Gathering logs for describe nodes ...
	I1204 15:41:27.873796   10206 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1204 15:41:27.908412   10206 logs.go:123] Gathering logs for coredns [467f47d9f689] ...
	I1204 15:41:27.908425   10206 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 467f47d9f689"
	I1204 15:41:27.921722   10206 logs.go:123] Gathering logs for etcd [fc48923b85fc] ...
	I1204 15:41:27.921733   10206 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fc48923b85fc"
	I1204 15:41:27.935540   10206 logs.go:123] Gathering logs for etcd [4f1790594676] ...
	I1204 15:41:27.935553   10206 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4f1790594676"
	I1204 15:41:27.950340   10206 logs.go:123] Gathering logs for kube-controller-manager [4469186baff0] ...
	I1204 15:41:27.950349   10206 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4469186baff0"
	I1204 15:41:27.968478   10206 logs.go:123] Gathering logs for kube-controller-manager [63473edefa8f] ...
	I1204 15:41:27.968488   10206 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 63473edefa8f"
	I1204 15:41:27.980969   10206 logs.go:123] Gathering logs for container status ...
	I1204 15:41:27.980979   10206 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1204 15:41:27.996076   10206 logs.go:123] Gathering logs for dmesg ...
	I1204 15:41:27.996087   10206 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1204 15:41:28.000274   10206 logs.go:123] Gathering logs for kube-apiserver [bc235f7c7828] ...
	I1204 15:41:28.000283   10206 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bc235f7c7828"
	I1204 15:41:28.014781   10206 logs.go:123] Gathering logs for kube-proxy [e290e266aab8] ...
	I1204 15:41:28.014790   10206 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e290e266aab8"
	I1204 15:41:28.026911   10206 logs.go:123] Gathering logs for kubelet ...
	I1204 15:41:28.026924   10206 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1204 15:41:28.064434   10206 logs.go:123] Gathering logs for kube-apiserver [1d1ae4543cd6] ...
	I1204 15:41:28.064442   10206 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1d1ae4543cd6"
	I1204 15:41:30.603906   10206 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1204 15:41:31.106604    9926 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1204 15:41:31.106928    9926 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1204 15:41:31.137856    9926 logs.go:282] 1 containers: [f3328f94ed0d]
	I1204 15:41:31.137958    9926 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1204 15:41:31.151097    9926 logs.go:282] 1 containers: [17b0ed658f6c]
	I1204 15:41:31.151178    9926 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1204 15:41:31.165869    9926 logs.go:282] 4 containers: [11d10ac44548 722dc28f5a86 0a3178099d31 c8ddfa007847]
	I1204 15:41:31.165943    9926 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1204 15:41:31.176511    9926 logs.go:282] 1 containers: [a82022fd2242]
	I1204 15:41:31.176588    9926 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1204 15:41:31.187685    9926 logs.go:282] 1 containers: [8f7d833df5c3]
	I1204 15:41:31.187765    9926 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1204 15:41:31.198360    9926 logs.go:282] 1 containers: [bdf1070f199f]
	I1204 15:41:31.198438    9926 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1204 15:41:31.208695    9926 logs.go:282] 0 containers: []
	W1204 15:41:31.208704    9926 logs.go:284] No container was found matching "kindnet"
	I1204 15:41:31.208771    9926 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1204 15:41:31.220729    9926 logs.go:282] 1 containers: [bdb47901ddb7]
	I1204 15:41:31.220751    9926 logs.go:123] Gathering logs for container status ...
	I1204 15:41:31.220758    9926 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1204 15:41:31.236556    9926 logs.go:123] Gathering logs for coredns [11d10ac44548] ...
	I1204 15:41:31.236569    9926 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 11d10ac44548"
	I1204 15:41:31.248753    9926 logs.go:123] Gathering logs for kube-scheduler [a82022fd2242] ...
	I1204 15:41:31.248768    9926 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a82022fd2242"
	I1204 15:41:31.263450    9926 logs.go:123] Gathering logs for storage-provisioner [bdb47901ddb7] ...
	I1204 15:41:31.263460    9926 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bdb47901ddb7"
	I1204 15:41:31.275292    9926 logs.go:123] Gathering logs for kube-proxy [8f7d833df5c3] ...
	I1204 15:41:31.275307    9926 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8f7d833df5c3"
	I1204 15:41:31.287210    9926 logs.go:123] Gathering logs for kube-controller-manager [bdf1070f199f] ...
	I1204 15:41:31.287221    9926 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bdf1070f199f"
	I1204 15:41:31.306081    9926 logs.go:123] Gathering logs for Docker ...
	I1204 15:41:31.306091    9926 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1204 15:41:31.330926    9926 logs.go:123] Gathering logs for dmesg ...
	I1204 15:41:31.330935    9926 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1204 15:41:31.335598    9926 logs.go:123] Gathering logs for etcd [17b0ed658f6c] ...
	I1204 15:41:31.335603    9926 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 17b0ed658f6c"
	I1204 15:41:31.349777    9926 logs.go:123] Gathering logs for coredns [c8ddfa007847] ...
	I1204 15:41:31.349791    9926 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c8ddfa007847"
	I1204 15:41:31.361313    9926 logs.go:123] Gathering logs for coredns [722dc28f5a86] ...
	I1204 15:41:31.361323    9926 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 722dc28f5a86"
	I1204 15:41:31.372724    9926 logs.go:123] Gathering logs for coredns [0a3178099d31] ...
	I1204 15:41:31.372738    9926 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0a3178099d31"
	I1204 15:41:31.384640    9926 logs.go:123] Gathering logs for kubelet ...
	I1204 15:41:31.384651    9926 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1204 15:41:31.423167    9926 logs.go:123] Gathering logs for describe nodes ...
	I1204 15:41:31.423179    9926 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1204 15:41:31.457522    9926 logs.go:123] Gathering logs for kube-apiserver [f3328f94ed0d] ...
	I1204 15:41:31.457537    9926 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f3328f94ed0d"
	I1204 15:41:35.606334   10206 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1204 15:41:35.606785   10206 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1204 15:41:35.669138   10206 logs.go:282] 2 containers: [bc235f7c7828 1d1ae4543cd6]
	I1204 15:41:35.669252   10206 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1204 15:41:35.697918   10206 logs.go:282] 2 containers: [fc48923b85fc 4f1790594676]
	I1204 15:41:35.698088   10206 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1204 15:41:35.719009   10206 logs.go:282] 1 containers: [467f47d9f689]
	I1204 15:41:35.719093   10206 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1204 15:41:35.730086   10206 logs.go:282] 2 containers: [1f92512b0d23 b0ad1b935d01]
	I1204 15:41:35.730169   10206 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1204 15:41:35.741175   10206 logs.go:282] 1 containers: [e290e266aab8]
	I1204 15:41:35.741259   10206 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1204 15:41:35.751528   10206 logs.go:282] 2 containers: [4469186baff0 63473edefa8f]
	I1204 15:41:35.751606   10206 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1204 15:41:35.761769   10206 logs.go:282] 0 containers: []
	W1204 15:41:35.761782   10206 logs.go:284] No container was found matching "kindnet"
	I1204 15:41:35.761846   10206 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1204 15:41:35.772561   10206 logs.go:282] 2 containers: [4bf3c6abead9 4a54859eb1e5]
	I1204 15:41:35.772578   10206 logs.go:123] Gathering logs for storage-provisioner [4a54859eb1e5] ...
	I1204 15:41:35.772582   10206 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4a54859eb1e5"
	I1204 15:41:35.783924   10206 logs.go:123] Gathering logs for Docker ...
	I1204 15:41:35.783936   10206 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1204 15:41:35.807224   10206 logs.go:123] Gathering logs for kubelet ...
	I1204 15:41:35.807233   10206 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1204 15:41:35.846496   10206 logs.go:123] Gathering logs for dmesg ...
	I1204 15:41:35.846506   10206 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1204 15:41:35.850923   10206 logs.go:123] Gathering logs for etcd [4f1790594676] ...
	I1204 15:41:35.850932   10206 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4f1790594676"
	I1204 15:41:35.865352   10206 logs.go:123] Gathering logs for kube-proxy [e290e266aab8] ...
	I1204 15:41:35.865362   10206 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e290e266aab8"
	I1204 15:41:35.876963   10206 logs.go:123] Gathering logs for storage-provisioner [4bf3c6abead9] ...
	I1204 15:41:35.876977   10206 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4bf3c6abead9"
	I1204 15:41:35.888334   10206 logs.go:123] Gathering logs for kube-apiserver [1d1ae4543cd6] ...
	I1204 15:41:35.888345   10206 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1d1ae4543cd6"
	I1204 15:41:35.927698   10206 logs.go:123] Gathering logs for kube-scheduler [1f92512b0d23] ...
	I1204 15:41:35.927710   10206 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1f92512b0d23"
	I1204 15:41:35.940052   10206 logs.go:123] Gathering logs for kube-controller-manager [4469186baff0] ...
	I1204 15:41:35.940066   10206 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4469186baff0"
	I1204 15:41:35.956780   10206 logs.go:123] Gathering logs for describe nodes ...
	I1204 15:41:35.956790   10206 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1204 15:41:35.997729   10206 logs.go:123] Gathering logs for kube-apiserver [bc235f7c7828] ...
	I1204 15:41:35.997741   10206 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bc235f7c7828"
	I1204 15:41:36.011714   10206 logs.go:123] Gathering logs for etcd [fc48923b85fc] ...
	I1204 15:41:36.011726   10206 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fc48923b85fc"
	I1204 15:41:36.025460   10206 logs.go:123] Gathering logs for coredns [467f47d9f689] ...
	I1204 15:41:36.025470   10206 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 467f47d9f689"
	I1204 15:41:36.037074   10206 logs.go:123] Gathering logs for kube-scheduler [b0ad1b935d01] ...
	I1204 15:41:36.037085   10206 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b0ad1b935d01"
	I1204 15:41:36.052673   10206 logs.go:123] Gathering logs for kube-controller-manager [63473edefa8f] ...
	I1204 15:41:36.052685   10206 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 63473edefa8f"
	I1204 15:41:36.067130   10206 logs.go:123] Gathering logs for container status ...
	I1204 15:41:36.067142   10206 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1204 15:41:33.974309    9926 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1204 15:41:38.581602   10206 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1204 15:41:38.976786    9926 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1204 15:41:38.976964    9926 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1204 15:41:38.991848    9926 logs.go:282] 1 containers: [f3328f94ed0d]
	I1204 15:41:38.991943    9926 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1204 15:41:39.002682    9926 logs.go:282] 1 containers: [17b0ed658f6c]
	I1204 15:41:39.002756    9926 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1204 15:41:39.012909    9926 logs.go:282] 4 containers: [11d10ac44548 722dc28f5a86 0a3178099d31 c8ddfa007847]
	I1204 15:41:39.012993    9926 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1204 15:41:39.023161    9926 logs.go:282] 1 containers: [a82022fd2242]
	I1204 15:41:39.023239    9926 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1204 15:41:39.033714    9926 logs.go:282] 1 containers: [8f7d833df5c3]
	I1204 15:41:39.033791    9926 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1204 15:41:39.044170    9926 logs.go:282] 1 containers: [bdf1070f199f]
	I1204 15:41:39.044238    9926 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1204 15:41:39.054550    9926 logs.go:282] 0 containers: []
	W1204 15:41:39.054561    9926 logs.go:284] No container was found matching "kindnet"
	I1204 15:41:39.054626    9926 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1204 15:41:39.064773    9926 logs.go:282] 1 containers: [bdb47901ddb7]
	I1204 15:41:39.064791    9926 logs.go:123] Gathering logs for dmesg ...
	I1204 15:41:39.064797    9926 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1204 15:41:39.069561    9926 logs.go:123] Gathering logs for coredns [11d10ac44548] ...
	I1204 15:41:39.069571    9926 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 11d10ac44548"
	I1204 15:41:39.081234    9926 logs.go:123] Gathering logs for coredns [0a3178099d31] ...
	I1204 15:41:39.081244    9926 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0a3178099d31"
	I1204 15:41:39.092967    9926 logs.go:123] Gathering logs for storage-provisioner [bdb47901ddb7] ...
	I1204 15:41:39.092977    9926 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bdb47901ddb7"
	I1204 15:41:39.104744    9926 logs.go:123] Gathering logs for Docker ...
	I1204 15:41:39.104755    9926 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1204 15:41:39.129966    9926 logs.go:123] Gathering logs for describe nodes ...
	I1204 15:41:39.129975    9926 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1204 15:41:39.164180    9926 logs.go:123] Gathering logs for etcd [17b0ed658f6c] ...
	I1204 15:41:39.164190    9926 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 17b0ed658f6c"
	I1204 15:41:39.178025    9926 logs.go:123] Gathering logs for coredns [722dc28f5a86] ...
	I1204 15:41:39.178037    9926 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 722dc28f5a86"
	I1204 15:41:39.189537    9926 logs.go:123] Gathering logs for kube-scheduler [a82022fd2242] ...
	I1204 15:41:39.189548    9926 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a82022fd2242"
	I1204 15:41:39.204733    9926 logs.go:123] Gathering logs for kubelet ...
	I1204 15:41:39.204742    9926 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1204 15:41:39.243239    9926 logs.go:123] Gathering logs for kube-apiserver [f3328f94ed0d] ...
	I1204 15:41:39.243248    9926 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f3328f94ed0d"
	I1204 15:41:39.257380    9926 logs.go:123] Gathering logs for coredns [c8ddfa007847] ...
	I1204 15:41:39.257393    9926 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c8ddfa007847"
	I1204 15:41:39.269619    9926 logs.go:123] Gathering logs for kube-proxy [8f7d833df5c3] ...
	I1204 15:41:39.269633    9926 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8f7d833df5c3"
	I1204 15:41:39.281469    9926 logs.go:123] Gathering logs for kube-controller-manager [bdf1070f199f] ...
	I1204 15:41:39.281481    9926 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bdf1070f199f"
	I1204 15:41:39.301569    9926 logs.go:123] Gathering logs for container status ...
	I1204 15:41:39.301580    9926 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1204 15:41:43.584425   10206 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1204 15:41:43.584962   10206 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1204 15:41:43.625945   10206 logs.go:282] 2 containers: [bc235f7c7828 1d1ae4543cd6]
	I1204 15:41:43.626097   10206 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1204 15:41:43.648177   10206 logs.go:282] 2 containers: [fc48923b85fc 4f1790594676]
	I1204 15:41:43.648312   10206 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1204 15:41:43.664415   10206 logs.go:282] 1 containers: [467f47d9f689]
	I1204 15:41:43.664495   10206 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1204 15:41:43.676916   10206 logs.go:282] 2 containers: [1f92512b0d23 b0ad1b935d01]
	I1204 15:41:43.676995   10206 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1204 15:41:43.688597   10206 logs.go:282] 1 containers: [e290e266aab8]
	I1204 15:41:43.688670   10206 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1204 15:41:43.700140   10206 logs.go:282] 2 containers: [4469186baff0 63473edefa8f]
	I1204 15:41:43.700224   10206 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1204 15:41:43.723670   10206 logs.go:282] 0 containers: []
	W1204 15:41:43.723683   10206 logs.go:284] No container was found matching "kindnet"
	I1204 15:41:43.723760   10206 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1204 15:41:43.734670   10206 logs.go:282] 2 containers: [4bf3c6abead9 4a54859eb1e5]
	I1204 15:41:43.734689   10206 logs.go:123] Gathering logs for dmesg ...
	I1204 15:41:43.734694   10206 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1204 15:41:43.739042   10206 logs.go:123] Gathering logs for kube-controller-manager [4469186baff0] ...
	I1204 15:41:43.739049   10206 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4469186baff0"
	I1204 15:41:43.757243   10206 logs.go:123] Gathering logs for Docker ...
	I1204 15:41:43.757257   10206 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1204 15:41:43.778920   10206 logs.go:123] Gathering logs for container status ...
	I1204 15:41:43.778931   10206 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1204 15:41:43.790738   10206 logs.go:123] Gathering logs for kubelet ...
	I1204 15:41:43.790751   10206 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1204 15:41:43.827784   10206 logs.go:123] Gathering logs for coredns [467f47d9f689] ...
	I1204 15:41:43.827794   10206 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 467f47d9f689"
	I1204 15:41:43.838881   10206 logs.go:123] Gathering logs for kube-scheduler [b0ad1b935d01] ...
	I1204 15:41:43.838897   10206 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b0ad1b935d01"
	I1204 15:41:43.857119   10206 logs.go:123] Gathering logs for storage-provisioner [4bf3c6abead9] ...
	I1204 15:41:43.857131   10206 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4bf3c6abead9"
	I1204 15:41:43.873042   10206 logs.go:123] Gathering logs for storage-provisioner [4a54859eb1e5] ...
	I1204 15:41:43.873055   10206 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4a54859eb1e5"
	I1204 15:41:43.884452   10206 logs.go:123] Gathering logs for kube-apiserver [bc235f7c7828] ...
	I1204 15:41:43.884463   10206 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bc235f7c7828"
	I1204 15:41:43.899020   10206 logs.go:123] Gathering logs for kube-apiserver [1d1ae4543cd6] ...
	I1204 15:41:43.899033   10206 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1d1ae4543cd6"
	I1204 15:41:43.938126   10206 logs.go:123] Gathering logs for kube-proxy [e290e266aab8] ...
	I1204 15:41:43.938138   10206 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e290e266aab8"
	I1204 15:41:43.949675   10206 logs.go:123] Gathering logs for kube-controller-manager [63473edefa8f] ...
	I1204 15:41:43.949686   10206 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 63473edefa8f"
	I1204 15:41:43.962160   10206 logs.go:123] Gathering logs for describe nodes ...
	I1204 15:41:43.962173   10206 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1204 15:41:44.002054   10206 logs.go:123] Gathering logs for etcd [fc48923b85fc] ...
	I1204 15:41:44.002067   10206 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fc48923b85fc"
	I1204 15:41:44.016854   10206 logs.go:123] Gathering logs for etcd [4f1790594676] ...
	I1204 15:41:44.016865   10206 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4f1790594676"
	I1204 15:41:44.032096   10206 logs.go:123] Gathering logs for kube-scheduler [1f92512b0d23] ...
	I1204 15:41:44.032106   10206 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1f92512b0d23"
	I1204 15:41:46.545809   10206 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1204 15:41:41.815546    9926 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1204 15:41:51.548261   10206 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1204 15:41:51.548750   10206 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1204 15:41:51.586966   10206 logs.go:282] 2 containers: [bc235f7c7828 1d1ae4543cd6]
	I1204 15:41:51.587123   10206 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1204 15:41:51.607849   10206 logs.go:282] 2 containers: [fc48923b85fc 4f1790594676]
	I1204 15:41:51.607957   10206 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1204 15:41:51.623675   10206 logs.go:282] 1 containers: [467f47d9f689]
	I1204 15:41:51.623780   10206 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1204 15:41:51.641374   10206 logs.go:282] 2 containers: [1f92512b0d23 b0ad1b935d01]
	I1204 15:41:51.641458   10206 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1204 15:41:51.652168   10206 logs.go:282] 1 containers: [e290e266aab8]
	I1204 15:41:51.652248   10206 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1204 15:41:51.662532   10206 logs.go:282] 2 containers: [4469186baff0 63473edefa8f]
	I1204 15:41:51.662608   10206 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1204 15:41:51.672749   10206 logs.go:282] 0 containers: []
	W1204 15:41:51.672759   10206 logs.go:284] No container was found matching "kindnet"
	I1204 15:41:51.672820   10206 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1204 15:41:51.683251   10206 logs.go:282] 2 containers: [4bf3c6abead9 4a54859eb1e5]
	I1204 15:41:51.683271   10206 logs.go:123] Gathering logs for kube-apiserver [bc235f7c7828] ...
	I1204 15:41:51.683277   10206 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bc235f7c7828"
	I1204 15:41:51.698752   10206 logs.go:123] Gathering logs for kube-scheduler [b0ad1b935d01] ...
	I1204 15:41:51.698761   10206 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b0ad1b935d01"
	I1204 15:41:51.714170   10206 logs.go:123] Gathering logs for storage-provisioner [4a54859eb1e5] ...
	I1204 15:41:51.714181   10206 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4a54859eb1e5"
	I1204 15:41:51.725646   10206 logs.go:123] Gathering logs for kubelet ...
	I1204 15:41:51.725657   10206 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1204 15:41:51.765631   10206 logs.go:123] Gathering logs for describe nodes ...
	I1204 15:41:51.765642   10206 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1204 15:41:46.817875    9926 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1204 15:41:46.817987    9926 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1204 15:41:46.835111    9926 logs.go:282] 1 containers: [f3328f94ed0d]
	I1204 15:41:46.835192    9926 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1204 15:41:46.846017    9926 logs.go:282] 1 containers: [17b0ed658f6c]
	I1204 15:41:46.846105    9926 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1204 15:41:46.857258    9926 logs.go:282] 4 containers: [11d10ac44548 722dc28f5a86 0a3178099d31 c8ddfa007847]
	I1204 15:41:46.857342    9926 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1204 15:41:46.867642    9926 logs.go:282] 1 containers: [a82022fd2242]
	I1204 15:41:46.867717    9926 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1204 15:41:46.877851    9926 logs.go:282] 1 containers: [8f7d833df5c3]
	I1204 15:41:46.877935    9926 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1204 15:41:46.888805    9926 logs.go:282] 1 containers: [bdf1070f199f]
	I1204 15:41:46.888877    9926 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1204 15:41:46.899111    9926 logs.go:282] 0 containers: []
	W1204 15:41:46.899122    9926 logs.go:284] No container was found matching "kindnet"
	I1204 15:41:46.899193    9926 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1204 15:41:46.909765    9926 logs.go:282] 1 containers: [bdb47901ddb7]
	I1204 15:41:46.909783    9926 logs.go:123] Gathering logs for etcd [17b0ed658f6c] ...
	I1204 15:41:46.909790    9926 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 17b0ed658f6c"
	I1204 15:41:46.924139    9926 logs.go:123] Gathering logs for storage-provisioner [bdb47901ddb7] ...
	I1204 15:41:46.924151    9926 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bdb47901ddb7"
	I1204 15:41:46.935907    9926 logs.go:123] Gathering logs for coredns [722dc28f5a86] ...
	I1204 15:41:46.935921    9926 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 722dc28f5a86"
	I1204 15:41:46.957974    9926 logs.go:123] Gathering logs for coredns [c8ddfa007847] ...
	I1204 15:41:46.957986    9926 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c8ddfa007847"
	I1204 15:41:46.971748    9926 logs.go:123] Gathering logs for kube-proxy [8f7d833df5c3] ...
	I1204 15:41:46.971761    9926 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8f7d833df5c3"
	I1204 15:41:46.984908    9926 logs.go:123] Gathering logs for Docker ...
	I1204 15:41:46.984921    9926 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1204 15:41:47.011282    9926 logs.go:123] Gathering logs for dmesg ...
	I1204 15:41:47.011295    9926 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1204 15:41:47.016787    9926 logs.go:123] Gathering logs for describe nodes ...
	I1204 15:41:47.016798    9926 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1204 15:41:47.054467    9926 logs.go:123] Gathering logs for coredns [11d10ac44548] ...
	I1204 15:41:47.054484    9926 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 11d10ac44548"
	I1204 15:41:47.069260    9926 logs.go:123] Gathering logs for coredns [0a3178099d31] ...
	I1204 15:41:47.069273    9926 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0a3178099d31"
	I1204 15:41:47.082134    9926 logs.go:123] Gathering logs for kube-scheduler [a82022fd2242] ...
	I1204 15:41:47.082146    9926 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a82022fd2242"
	I1204 15:41:47.099254    9926 logs.go:123] Gathering logs for kubelet ...
	I1204 15:41:47.099274    9926 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1204 15:41:47.137355    9926 logs.go:123] Gathering logs for kube-apiserver [f3328f94ed0d] ...
	I1204 15:41:47.137368    9926 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f3328f94ed0d"
	I1204 15:41:47.151662    9926 logs.go:123] Gathering logs for kube-controller-manager [bdf1070f199f] ...
	I1204 15:41:47.151673    9926 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bdf1070f199f"
	I1204 15:41:47.177399    9926 logs.go:123] Gathering logs for container status ...
	I1204 15:41:47.177411    9926 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
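	[Editor's sketch] The container-status command is worth unpacking: sudo `which crictl || echo crictl` ps -a || sudo docker ps -a resolves crictl's full path when it is installed, degrades to the bare name crictl (which then fails) when it is not, and in that case the outer || falls through to plain docker ps -a. A sketch reproducing the same fallback:

	package main

	import (
		"fmt"
		"os/exec"
	)

	func main() {
		// Same fallback as the log line: prefer crictl when `which`
		// finds it, otherwise let the bare name fail and fall through
		// to docker ps -a.
		cmd := "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
		out, err := exec.Command("/bin/bash", "-c", cmd).CombinedOutput()
		if err != nil {
			fmt.Println("neither crictl nor docker available:", err)
			return
		}
		fmt.Print(string(out))
	}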
	I1204 15:41:49.692411    9926 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1204 15:41:51.803044   10206 logs.go:123] Gathering logs for Docker ...
	I1204 15:41:51.803057   10206 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1204 15:41:51.827878   10206 logs.go:123] Gathering logs for container status ...
	I1204 15:41:51.827902   10206 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1204 15:41:51.840494   10206 logs.go:123] Gathering logs for kube-apiserver [1d1ae4543cd6] ...
	I1204 15:41:51.840514   10206 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1d1ae4543cd6"
	I1204 15:41:51.878988   10206 logs.go:123] Gathering logs for etcd [fc48923b85fc] ...
	I1204 15:41:51.879000   10206 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fc48923b85fc"
	I1204 15:41:51.893312   10206 logs.go:123] Gathering logs for kube-controller-manager [63473edefa8f] ...
	I1204 15:41:51.893322   10206 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 63473edefa8f"
	I1204 15:41:51.905707   10206 logs.go:123] Gathering logs for etcd [4f1790594676] ...
	I1204 15:41:51.905719   10206 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4f1790594676"
	I1204 15:41:51.920440   10206 logs.go:123] Gathering logs for kube-scheduler [1f92512b0d23] ...
	I1204 15:41:51.920454   10206 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1f92512b0d23"
	I1204 15:41:51.932692   10206 logs.go:123] Gathering logs for kube-proxy [e290e266aab8] ...
	I1204 15:41:51.932704   10206 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e290e266aab8"
	I1204 15:41:51.944334   10206 logs.go:123] Gathering logs for kube-controller-manager [4469186baff0] ...
	I1204 15:41:51.944346   10206 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4469186baff0"
	I1204 15:41:51.962021   10206 logs.go:123] Gathering logs for storage-provisioner [4bf3c6abead9] ...
	I1204 15:41:51.962031   10206 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4bf3c6abead9"
	I1204 15:41:51.973682   10206 logs.go:123] Gathering logs for dmesg ...
	I1204 15:41:51.973693   10206 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1204 15:41:51.978257   10206 logs.go:123] Gathering logs for coredns [467f47d9f689] ...
	I1204 15:41:51.978265   10206 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 467f47d9f689"
	I1204 15:41:54.494476   10206 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1204 15:41:54.694195    9926 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
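Each "Checking apiserver healthz" / "stopped: ... Client.Timeout exceeded" pair above is one iteration of a bounded health probe against the guest's apiserver. The sketch below shows that loop; the 5-second client timeout and 4-minute overall budget are assumptions read off the log's timestamps, and InsecureSkipVerify stands in for the CA material the real client config (dumped further down) carries.

package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
	"time"
)

func main() {
	client := &http.Client{
		// Assumed per-request timeout; the report's probes give up after a
		// few seconds with "Client.Timeout exceeded while awaiting headers".
		Timeout: 5 * time.Second,
		Transport: &http.Transport{
			// Stand-in for the cluster CA handling the real client does.
			TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
		},
	}
	deadline := time.Now().Add(4 * time.Minute) // overall budget is an assumption
	for time.Now().Before(deadline) {
		resp, err := client.Get("https://10.0.2.15:8443/healthz")
		if err != nil {
			fmt.Println("stopped:", err) // mirrors the "stopped: ..." lines above
			continue
		}
		status := resp.Status
		resp.Body.Close()
		fmt.Println("healthz:", status)
		if resp.StatusCode == http.StatusOK {
			return
		}
		time.Sleep(5 * time.Second)
	}
}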
	I1204 15:41:54.694631    9926 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1204 15:41:54.732906    9926 logs.go:282] 1 containers: [f3328f94ed0d]
	I1204 15:41:54.733054    9926 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1204 15:41:54.752823    9926 logs.go:282] 1 containers: [17b0ed658f6c]
	I1204 15:41:54.752918    9926 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1204 15:41:54.767031    9926 logs.go:282] 4 containers: [11d10ac44548 722dc28f5a86 0a3178099d31 c8ddfa007847]
	I1204 15:41:54.767123    9926 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1204 15:41:54.782051    9926 logs.go:282] 1 containers: [a82022fd2242]
	I1204 15:41:54.782124    9926 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1204 15:41:54.792857    9926 logs.go:282] 1 containers: [8f7d833df5c3]
	I1204 15:41:54.792927    9926 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1204 15:41:54.803465    9926 logs.go:282] 1 containers: [bdf1070f199f]
	I1204 15:41:54.803562    9926 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1204 15:41:54.814140    9926 logs.go:282] 0 containers: []
	W1204 15:41:54.814157    9926 logs.go:284] No container was found matching "kindnet"
	I1204 15:41:54.814242    9926 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1204 15:41:54.825098    9926 logs.go:282] 1 containers: [bdb47901ddb7]
	I1204 15:41:54.825115    9926 logs.go:123] Gathering logs for kube-controller-manager [bdf1070f199f] ...
	I1204 15:41:54.825121    9926 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bdf1070f199f"
	I1204 15:41:54.842819    9926 logs.go:123] Gathering logs for container status ...
	I1204 15:41:54.842834    9926 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1204 15:41:54.855241    9926 logs.go:123] Gathering logs for kubelet ...
	I1204 15:41:54.855253    9926 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1204 15:41:54.893320    9926 logs.go:123] Gathering logs for dmesg ...
	I1204 15:41:54.893330    9926 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1204 15:41:54.898001    9926 logs.go:123] Gathering logs for kube-apiserver [f3328f94ed0d] ...
	I1204 15:41:54.898011    9926 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f3328f94ed0d"
	I1204 15:41:54.912593    9926 logs.go:123] Gathering logs for coredns [c8ddfa007847] ...
	I1204 15:41:54.912605    9926 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c8ddfa007847"
	I1204 15:41:54.923793    9926 logs.go:123] Gathering logs for coredns [722dc28f5a86] ...
	I1204 15:41:54.923807    9926 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 722dc28f5a86"
	I1204 15:41:54.936176    9926 logs.go:123] Gathering logs for coredns [0a3178099d31] ...
	I1204 15:41:54.936187    9926 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0a3178099d31"
	I1204 15:41:54.948564    9926 logs.go:123] Gathering logs for kube-scheduler [a82022fd2242] ...
	I1204 15:41:54.948578    9926 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a82022fd2242"
	I1204 15:41:54.966576    9926 logs.go:123] Gathering logs for Docker ...
	I1204 15:41:54.966587    9926 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1204 15:41:54.991686    9926 logs.go:123] Gathering logs for storage-provisioner [bdb47901ddb7] ...
	I1204 15:41:54.991696    9926 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bdb47901ddb7"
	I1204 15:41:55.004622    9926 logs.go:123] Gathering logs for describe nodes ...
	I1204 15:41:55.004633    9926 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1204 15:41:55.039172    9926 logs.go:123] Gathering logs for etcd [17b0ed658f6c] ...
	I1204 15:41:55.039184    9926 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 17b0ed658f6c"
	I1204 15:41:55.053535    9926 logs.go:123] Gathering logs for coredns [11d10ac44548] ...
	I1204 15:41:55.053546    9926 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 11d10ac44548"
	I1204 15:41:55.065364    9926 logs.go:123] Gathering logs for kube-proxy [8f7d833df5c3] ...
	I1204 15:41:55.065375    9926 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8f7d833df5c3"
	I1204 15:41:59.497100   10206 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1204 15:41:59.497343   10206 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1204 15:41:59.516928   10206 logs.go:282] 2 containers: [bc235f7c7828 1d1ae4543cd6]
	I1204 15:41:59.517041   10206 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1204 15:41:59.533850   10206 logs.go:282] 2 containers: [fc48923b85fc 4f1790594676]
	I1204 15:41:59.533931   10206 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1204 15:41:59.549622   10206 logs.go:282] 1 containers: [467f47d9f689]
	I1204 15:41:59.549704   10206 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1204 15:41:59.560352   10206 logs.go:282] 2 containers: [1f92512b0d23 b0ad1b935d01]
	I1204 15:41:59.560435   10206 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1204 15:41:59.570844   10206 logs.go:282] 1 containers: [e290e266aab8]
	I1204 15:41:59.570917   10206 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1204 15:41:59.581469   10206 logs.go:282] 2 containers: [4469186baff0 63473edefa8f]
	I1204 15:41:59.581546   10206 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1204 15:41:59.591395   10206 logs.go:282] 0 containers: []
	W1204 15:41:59.591409   10206 logs.go:284] No container was found matching "kindnet"
	I1204 15:41:59.591469   10206 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1204 15:41:59.602676   10206 logs.go:282] 2 containers: [4bf3c6abead9 4a54859eb1e5]
	I1204 15:41:59.602700   10206 logs.go:123] Gathering logs for describe nodes ...
	I1204 15:41:59.602706   10206 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1204 15:41:59.637886   10206 logs.go:123] Gathering logs for dmesg ...
	I1204 15:41:59.637896   10206 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1204 15:41:59.642845   10206 logs.go:123] Gathering logs for etcd [fc48923b85fc] ...
	I1204 15:41:59.642855   10206 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fc48923b85fc"
	I1204 15:41:59.657177   10206 logs.go:123] Gathering logs for kube-scheduler [b0ad1b935d01] ...
	I1204 15:41:59.657186   10206 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b0ad1b935d01"
	I1204 15:41:59.672078   10206 logs.go:123] Gathering logs for kube-controller-manager [63473edefa8f] ...
	I1204 15:41:59.672090   10206 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 63473edefa8f"
	I1204 15:41:59.684584   10206 logs.go:123] Gathering logs for storage-provisioner [4a54859eb1e5] ...
	I1204 15:41:59.684594   10206 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4a54859eb1e5"
	I1204 15:41:59.696127   10206 logs.go:123] Gathering logs for kubelet ...
	I1204 15:41:59.696139   10206 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1204 15:41:59.735053   10206 logs.go:123] Gathering logs for kube-scheduler [1f92512b0d23] ...
	I1204 15:41:59.735062   10206 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1f92512b0d23"
	I1204 15:41:59.746898   10206 logs.go:123] Gathering logs for kube-proxy [e290e266aab8] ...
	I1204 15:41:59.746909   10206 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e290e266aab8"
	I1204 15:41:59.759005   10206 logs.go:123] Gathering logs for storage-provisioner [4bf3c6abead9] ...
	I1204 15:41:59.759017   10206 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4bf3c6abead9"
	I1204 15:41:59.770568   10206 logs.go:123] Gathering logs for Docker ...
	I1204 15:41:59.770581   10206 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1204 15:41:59.794183   10206 logs.go:123] Gathering logs for kube-apiserver [bc235f7c7828] ...
	I1204 15:41:59.794196   10206 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bc235f7c7828"
	I1204 15:41:59.810417   10206 logs.go:123] Gathering logs for kube-apiserver [1d1ae4543cd6] ...
	I1204 15:41:59.810430   10206 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1d1ae4543cd6"
	I1204 15:41:59.873142   10206 logs.go:123] Gathering logs for etcd [4f1790594676] ...
	I1204 15:41:59.873162   10206 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4f1790594676"
	I1204 15:41:59.887779   10206 logs.go:123] Gathering logs for coredns [467f47d9f689] ...
	I1204 15:41:59.887793   10206 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 467f47d9f689"
	I1204 15:41:59.899035   10206 logs.go:123] Gathering logs for kube-controller-manager [4469186baff0] ...
	I1204 15:41:59.899047   10206 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4469186baff0"
	I1204 15:41:59.916655   10206 logs.go:123] Gathering logs for container status ...
	I1204 15:41:59.916668   10206 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1204 15:41:57.579251    9926 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1204 15:42:02.431798   10206 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1204 15:42:02.581761    9926 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1204 15:42:02.581983    9926 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1204 15:42:02.603560    9926 logs.go:282] 1 containers: [f3328f94ed0d]
	I1204 15:42:02.603684    9926 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1204 15:42:02.619197    9926 logs.go:282] 1 containers: [17b0ed658f6c]
	I1204 15:42:02.619292    9926 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1204 15:42:02.633347    9926 logs.go:282] 4 containers: [11d10ac44548 722dc28f5a86 0a3178099d31 c8ddfa007847]
	I1204 15:42:02.633436    9926 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1204 15:42:02.649634    9926 logs.go:282] 1 containers: [a82022fd2242]
	I1204 15:42:02.649709    9926 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1204 15:42:02.662590    9926 logs.go:282] 1 containers: [8f7d833df5c3]
	I1204 15:42:02.662660    9926 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1204 15:42:02.673006    9926 logs.go:282] 1 containers: [bdf1070f199f]
	I1204 15:42:02.673071    9926 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1204 15:42:02.683545    9926 logs.go:282] 0 containers: []
	W1204 15:42:02.683557    9926 logs.go:284] No container was found matching "kindnet"
	I1204 15:42:02.683623    9926 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1204 15:42:02.693656    9926 logs.go:282] 1 containers: [bdb47901ddb7]
	I1204 15:42:02.693672    9926 logs.go:123] Gathering logs for kubelet ...
	I1204 15:42:02.693678    9926 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1204 15:42:02.730025    9926 logs.go:123] Gathering logs for etcd [17b0ed658f6c] ...
	I1204 15:42:02.730039    9926 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 17b0ed658f6c"
	I1204 15:42:02.744597    9926 logs.go:123] Gathering logs for kube-scheduler [a82022fd2242] ...
	I1204 15:42:02.744612    9926 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a82022fd2242"
	I1204 15:42:02.759892    9926 logs.go:123] Gathering logs for kube-proxy [8f7d833df5c3] ...
	I1204 15:42:02.759907    9926 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8f7d833df5c3"
	I1204 15:42:02.774384    9926 logs.go:123] Gathering logs for Docker ...
	I1204 15:42:02.774395    9926 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1204 15:42:02.799258    9926 logs.go:123] Gathering logs for coredns [722dc28f5a86] ...
	I1204 15:42:02.799272    9926 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 722dc28f5a86"
	I1204 15:42:02.812041    9926 logs.go:123] Gathering logs for coredns [0a3178099d31] ...
	I1204 15:42:02.812051    9926 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0a3178099d31"
	I1204 15:42:02.824440    9926 logs.go:123] Gathering logs for storage-provisioner [bdb47901ddb7] ...
	I1204 15:42:02.824454    9926 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bdb47901ddb7"
	I1204 15:42:02.837351    9926 logs.go:123] Gathering logs for dmesg ...
	I1204 15:42:02.837364    9926 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1204 15:42:02.842220    9926 logs.go:123] Gathering logs for coredns [11d10ac44548] ...
	I1204 15:42:02.842228    9926 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 11d10ac44548"
	I1204 15:42:02.854182    9926 logs.go:123] Gathering logs for describe nodes ...
	I1204 15:42:02.854194    9926 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1204 15:42:02.892560    9926 logs.go:123] Gathering logs for kube-apiserver [f3328f94ed0d] ...
	I1204 15:42:02.892575    9926 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f3328f94ed0d"
	I1204 15:42:02.907263    9926 logs.go:123] Gathering logs for coredns [c8ddfa007847] ...
	I1204 15:42:02.907276    9926 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c8ddfa007847"
	I1204 15:42:02.918869    9926 logs.go:123] Gathering logs for kube-controller-manager [bdf1070f199f] ...
	I1204 15:42:02.918883    9926 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bdf1070f199f"
	I1204 15:42:02.942148    9926 logs.go:123] Gathering logs for container status ...
	I1204 15:42:02.942160    9926 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1204 15:42:05.456840    9926 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1204 15:42:07.434181   10206 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1204 15:42:07.434462   10206 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1204 15:42:07.465750   10206 logs.go:282] 2 containers: [bc235f7c7828 1d1ae4543cd6]
	I1204 15:42:07.465900   10206 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1204 15:42:07.486399   10206 logs.go:282] 2 containers: [fc48923b85fc 4f1790594676]
	I1204 15:42:07.486524   10206 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1204 15:42:07.500547   10206 logs.go:282] 1 containers: [467f47d9f689]
	I1204 15:42:07.500645   10206 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1204 15:42:07.513447   10206 logs.go:282] 2 containers: [1f92512b0d23 b0ad1b935d01]
	I1204 15:42:07.513530   10206 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1204 15:42:07.524200   10206 logs.go:282] 1 containers: [e290e266aab8]
	I1204 15:42:07.524273   10206 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1204 15:42:07.535441   10206 logs.go:282] 2 containers: [4469186baff0 63473edefa8f]
	I1204 15:42:07.535524   10206 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1204 15:42:07.548230   10206 logs.go:282] 0 containers: []
	W1204 15:42:07.548244   10206 logs.go:284] No container was found matching "kindnet"
	I1204 15:42:07.548309   10206 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1204 15:42:07.558969   10206 logs.go:282] 2 containers: [4bf3c6abead9 4a54859eb1e5]
	I1204 15:42:07.558986   10206 logs.go:123] Gathering logs for etcd [4f1790594676] ...
	I1204 15:42:07.558991   10206 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4f1790594676"
	I1204 15:42:07.580511   10206 logs.go:123] Gathering logs for kube-scheduler [1f92512b0d23] ...
	I1204 15:42:07.580524   10206 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1f92512b0d23"
	I1204 15:42:07.592574   10206 logs.go:123] Gathering logs for kube-controller-manager [4469186baff0] ...
	I1204 15:42:07.592587   10206 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4469186baff0"
	I1204 15:42:07.610590   10206 logs.go:123] Gathering logs for Docker ...
	I1204 15:42:07.610600   10206 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1204 15:42:07.632876   10206 logs.go:123] Gathering logs for container status ...
	I1204 15:42:07.632886   10206 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1204 15:42:07.644781   10206 logs.go:123] Gathering logs for kube-apiserver [bc235f7c7828] ...
	I1204 15:42:07.644793   10206 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bc235f7c7828"
	I1204 15:42:07.659131   10206 logs.go:123] Gathering logs for etcd [fc48923b85fc] ...
	I1204 15:42:07.659141   10206 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fc48923b85fc"
	I1204 15:42:07.672669   10206 logs.go:123] Gathering logs for kube-scheduler [b0ad1b935d01] ...
	I1204 15:42:07.672681   10206 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b0ad1b935d01"
	I1204 15:42:07.688974   10206 logs.go:123] Gathering logs for kube-proxy [e290e266aab8] ...
	I1204 15:42:07.688986   10206 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e290e266aab8"
	I1204 15:42:07.709871   10206 logs.go:123] Gathering logs for dmesg ...
	I1204 15:42:07.709885   10206 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1204 15:42:07.714733   10206 logs.go:123] Gathering logs for kube-apiserver [1d1ae4543cd6] ...
	I1204 15:42:07.714739   10206 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1d1ae4543cd6"
	I1204 15:42:07.760376   10206 logs.go:123] Gathering logs for describe nodes ...
	I1204 15:42:07.760386   10206 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1204 15:42:07.794462   10206 logs.go:123] Gathering logs for storage-provisioner [4a54859eb1e5] ...
	I1204 15:42:07.794473   10206 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4a54859eb1e5"
	I1204 15:42:07.813366   10206 logs.go:123] Gathering logs for kube-controller-manager [63473edefa8f] ...
	I1204 15:42:07.813380   10206 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 63473edefa8f"
	I1204 15:42:07.825636   10206 logs.go:123] Gathering logs for storage-provisioner [4bf3c6abead9] ...
	I1204 15:42:07.825647   10206 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4bf3c6abead9"
	I1204 15:42:07.837301   10206 logs.go:123] Gathering logs for kubelet ...
	I1204 15:42:07.837313   10206 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1204 15:42:07.877499   10206 logs.go:123] Gathering logs for coredns [467f47d9f689] ...
	I1204 15:42:07.877514   10206 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 467f47d9f689"
	I1204 15:42:10.391164   10206 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1204 15:42:10.459673    9926 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1204 15:42:10.459991    9926 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1204 15:42:10.493955    9926 logs.go:282] 1 containers: [f3328f94ed0d]
	I1204 15:42:10.494106    9926 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1204 15:42:10.513289    9926 logs.go:282] 1 containers: [17b0ed658f6c]
	I1204 15:42:10.513406    9926 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1204 15:42:10.528639    9926 logs.go:282] 4 containers: [11d10ac44548 722dc28f5a86 0a3178099d31 c8ddfa007847]
	I1204 15:42:10.528730    9926 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1204 15:42:10.540401    9926 logs.go:282] 1 containers: [a82022fd2242]
	I1204 15:42:10.540469    9926 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1204 15:42:10.551437    9926 logs.go:282] 1 containers: [8f7d833df5c3]
	I1204 15:42:10.551514    9926 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1204 15:42:10.562025    9926 logs.go:282] 1 containers: [bdf1070f199f]
	I1204 15:42:10.562109    9926 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1204 15:42:10.572117    9926 logs.go:282] 0 containers: []
	W1204 15:42:10.572132    9926 logs.go:284] No container was found matching "kindnet"
	I1204 15:42:10.572205    9926 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1204 15:42:10.582399    9926 logs.go:282] 1 containers: [bdb47901ddb7]
	I1204 15:42:10.582415    9926 logs.go:123] Gathering logs for coredns [11d10ac44548] ...
	I1204 15:42:10.582421    9926 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 11d10ac44548"
	I1204 15:42:10.593889    9926 logs.go:123] Gathering logs for container status ...
	I1204 15:42:10.593903    9926 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1204 15:42:10.606438    9926 logs.go:123] Gathering logs for kubelet ...
	I1204 15:42:10.606452    9926 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1204 15:42:10.643661    9926 logs.go:123] Gathering logs for kube-apiserver [f3328f94ed0d] ...
	I1204 15:42:10.643671    9926 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f3328f94ed0d"
	I1204 15:42:10.658460    9926 logs.go:123] Gathering logs for coredns [722dc28f5a86] ...
	I1204 15:42:10.658472    9926 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 722dc28f5a86"
	I1204 15:42:10.670055    9926 logs.go:123] Gathering logs for coredns [c8ddfa007847] ...
	I1204 15:42:10.670067    9926 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c8ddfa007847"
	I1204 15:42:10.687496    9926 logs.go:123] Gathering logs for storage-provisioner [bdb47901ddb7] ...
	I1204 15:42:10.687507    9926 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bdb47901ddb7"
	I1204 15:42:10.699808    9926 logs.go:123] Gathering logs for describe nodes ...
	I1204 15:42:10.699819    9926 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1204 15:42:10.743490    9926 logs.go:123] Gathering logs for kube-proxy [8f7d833df5c3] ...
	I1204 15:42:10.743504    9926 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8f7d833df5c3"
	I1204 15:42:10.755445    9926 logs.go:123] Gathering logs for kube-controller-manager [bdf1070f199f] ...
	I1204 15:42:10.755460    9926 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bdf1070f199f"
	I1204 15:42:10.773430    9926 logs.go:123] Gathering logs for kube-scheduler [a82022fd2242] ...
	I1204 15:42:10.773440    9926 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a82022fd2242"
	I1204 15:42:10.788189    9926 logs.go:123] Gathering logs for etcd [17b0ed658f6c] ...
	I1204 15:42:10.788203    9926 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 17b0ed658f6c"
	I1204 15:42:10.802067    9926 logs.go:123] Gathering logs for coredns [0a3178099d31] ...
	I1204 15:42:10.802079    9926 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0a3178099d31"
	I1204 15:42:10.814089    9926 logs.go:123] Gathering logs for Docker ...
	I1204 15:42:10.814099    9926 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1204 15:42:10.838932    9926 logs.go:123] Gathering logs for dmesg ...
	I1204 15:42:10.838943    9926 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1204 15:42:15.393876   10206 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1204 15:42:15.394079   10206 kubeadm.go:597] duration metric: took 4m4.520836208s to restartPrimaryControlPlane
	W1204 15:42:15.394230   10206 out.go:270] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I1204 15:42:15.394293   10206 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm reset --cri-socket /var/run/cri-dockerd.sock --force"
	I1204 15:42:16.470838   10206 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm reset --cri-socket /var/run/cri-dockerd.sock --force": (1.076517292s)
	I1204 15:42:16.470918   10206 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1204 15:42:16.476126   10206 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1204 15:42:16.479146   10206 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1204 15:42:16.482062   10206 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1204 15:42:16.482067   10206 kubeadm.go:157] found existing configuration files:
	
	I1204 15:42:16.482098   10206 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:61834 /etc/kubernetes/admin.conf
	I1204 15:42:16.484901   10206 kubeadm.go:163] "https://control-plane.minikube.internal:61834" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:61834 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1204 15:42:16.484936   10206 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1204 15:42:16.487620   10206 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:61834 /etc/kubernetes/kubelet.conf
	I1204 15:42:16.490662   10206 kubeadm.go:163] "https://control-plane.minikube.internal:61834" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:61834 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1204 15:42:16.490698   10206 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1204 15:42:16.494318   10206 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:61834 /etc/kubernetes/controller-manager.conf
	I1204 15:42:16.497290   10206 kubeadm.go:163] "https://control-plane.minikube.internal:61834" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:61834 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1204 15:42:16.497316   10206 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1204 15:42:16.499876   10206 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:61834 /etc/kubernetes/scheduler.conf
	I1204 15:42:16.502884   10206 kubeadm.go:163] "https://control-plane.minikube.internal:61834" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:61834 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1204 15:42:16.502912   10206 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
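The grep-then-remove sequence above checks whether each kubeconfig under /etc/kubernetes still points at the expected control-plane endpoint and deletes any that do not, so that the following `kubeadm init` regenerates them. A sketch of that cleanup, assuming it runs on the guest itself (endpoint and paths are taken verbatim from the log):

package main

import (
	"bytes"
	"fmt"
	"os"
)

func main() {
	const endpoint = "https://control-plane.minikube.internal:61834"
	for _, f := range []string{
		"/etc/kubernetes/admin.conf",
		"/etc/kubernetes/kubelet.conf",
		"/etc/kubernetes/controller-manager.conf",
		"/etc/kubernetes/scheduler.conf",
	} {
		data, err := os.ReadFile(f)
		if err == nil && bytes.Contains(data, []byte(endpoint)) {
			continue // config already points at the right control plane
		}
		fmt.Printf("%q may not be in %s - will remove\n", endpoint, f)
		os.Remove(f) // ignore the error; the file may not exist, as in this run
	}
}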
	I1204 15:42:16.506199   10206 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I1204 15:42:16.524083   10206 kubeadm.go:310] [init] Using Kubernetes version: v1.24.1
	I1204 15:42:16.524115   10206 kubeadm.go:310] [preflight] Running pre-flight checks
	I1204 15:42:16.571595   10206 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1204 15:42:16.571657   10206 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1204 15:42:16.571709   10206 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I1204 15:42:16.625265   10206 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1204 15:42:16.629404   10206 out.go:235]   - Generating certificates and keys ...
	I1204 15:42:16.629445   10206 kubeadm.go:310] [certs] Using existing ca certificate authority
	I1204 15:42:16.629479   10206 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I1204 15:42:16.629516   10206 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1204 15:42:16.629545   10206 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I1204 15:42:16.629594   10206 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I1204 15:42:16.629626   10206 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I1204 15:42:16.629661   10206 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I1204 15:42:16.629695   10206 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I1204 15:42:16.629735   10206 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1204 15:42:16.629773   10206 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1204 15:42:16.629794   10206 kubeadm.go:310] [certs] Using the existing "sa" key
	I1204 15:42:16.629833   10206 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1204 15:42:16.705723   10206 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1204 15:42:13.346138    9926 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1204 15:42:16.787165   10206 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1204 15:42:16.897829   10206 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1204 15:42:16.967000   10206 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1204 15:42:16.997538   10206 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1204 15:42:16.997975   10206 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1204 15:42:16.997996   10206 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I1204 15:42:17.076203   10206 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1204 15:42:17.080484   10206 out.go:235]   - Booting up control plane ...
	I1204 15:42:17.080531   10206 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1204 15:42:17.080575   10206 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1204 15:42:17.080613   10206 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1204 15:42:17.080658   10206 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1204 15:42:17.081662   10206 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I1204 15:42:18.348528    9926 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1204 15:42:18.348641    9926 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1204 15:42:18.360037    9926 logs.go:282] 1 containers: [f3328f94ed0d]
	I1204 15:42:18.360121    9926 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1204 15:42:18.378088    9926 logs.go:282] 1 containers: [17b0ed658f6c]
	I1204 15:42:18.378173    9926 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1204 15:42:18.389330    9926 logs.go:282] 4 containers: [11d10ac44548 722dc28f5a86 0a3178099d31 c8ddfa007847]
	I1204 15:42:18.389412    9926 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1204 15:42:18.405731    9926 logs.go:282] 1 containers: [a82022fd2242]
	I1204 15:42:18.405810    9926 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1204 15:42:18.416987    9926 logs.go:282] 1 containers: [8f7d833df5c3]
	I1204 15:42:18.417070    9926 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1204 15:42:18.430891    9926 logs.go:282] 1 containers: [bdf1070f199f]
	I1204 15:42:18.430967    9926 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1204 15:42:18.441466    9926 logs.go:282] 0 containers: []
	W1204 15:42:18.441477    9926 logs.go:284] No container was found matching "kindnet"
	I1204 15:42:18.441544    9926 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1204 15:42:18.452580    9926 logs.go:282] 1 containers: [bdb47901ddb7]
	I1204 15:42:18.452602    9926 logs.go:123] Gathering logs for dmesg ...
	I1204 15:42:18.452609    9926 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1204 15:42:18.458340    9926 logs.go:123] Gathering logs for kube-apiserver [f3328f94ed0d] ...
	I1204 15:42:18.458352    9926 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f3328f94ed0d"
	I1204 15:42:18.473026    9926 logs.go:123] Gathering logs for etcd [17b0ed658f6c] ...
	I1204 15:42:18.473041    9926 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 17b0ed658f6c"
	I1204 15:42:18.493677    9926 logs.go:123] Gathering logs for coredns [722dc28f5a86] ...
	I1204 15:42:18.493697    9926 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 722dc28f5a86"
	I1204 15:42:18.506196    9926 logs.go:123] Gathering logs for kube-controller-manager [bdf1070f199f] ...
	I1204 15:42:18.506209    9926 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bdf1070f199f"
	I1204 15:42:18.524611    9926 logs.go:123] Gathering logs for kubelet ...
	I1204 15:42:18.524623    9926 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1204 15:42:18.564743    9926 logs.go:123] Gathering logs for describe nodes ...
	I1204 15:42:18.564763    9926 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1204 15:42:18.604864    9926 logs.go:123] Gathering logs for storage-provisioner [bdb47901ddb7] ...
	I1204 15:42:18.604877    9926 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bdb47901ddb7"
	I1204 15:42:18.617195    9926 logs.go:123] Gathering logs for Docker ...
	I1204 15:42:18.617208    9926 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1204 15:42:18.643270    9926 logs.go:123] Gathering logs for coredns [0a3178099d31] ...
	I1204 15:42:18.643284    9926 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0a3178099d31"
	I1204 15:42:18.655490    9926 logs.go:123] Gathering logs for coredns [c8ddfa007847] ...
	I1204 15:42:18.655502    9926 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c8ddfa007847"
	I1204 15:42:18.667635    9926 logs.go:123] Gathering logs for kube-scheduler [a82022fd2242] ...
	I1204 15:42:18.667646    9926 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a82022fd2242"
	I1204 15:42:18.683286    9926 logs.go:123] Gathering logs for coredns [11d10ac44548] ...
	I1204 15:42:18.683297    9926 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 11d10ac44548"
	I1204 15:42:18.702118    9926 logs.go:123] Gathering logs for kube-proxy [8f7d833df5c3] ...
	I1204 15:42:18.702129    9926 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8f7d833df5c3"
	I1204 15:42:18.715660    9926 logs.go:123] Gathering logs for container status ...
	I1204 15:42:18.715672    9926 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1204 15:42:21.230080    9926 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1204 15:42:22.085491   10206 kubeadm.go:310] [apiclient] All control plane components are healthy after 5.004297 seconds
	I1204 15:42:22.085579   10206 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1204 15:42:22.091230   10206 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1204 15:42:22.602866   10206 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I1204 15:42:22.603024   10206 kubeadm.go:310] [mark-control-plane] Marking the node stopped-upgrade-377000 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1204 15:42:23.110469   10206 kubeadm.go:310] [bootstrap-token] Using token: 1dn43k.o5d3nczgwbr8kvhs
	I1204 15:42:23.116508   10206 out.go:235]   - Configuring RBAC rules ...
	I1204 15:42:23.116632   10206 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1204 15:42:23.116715   10206 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1204 15:42:23.125883   10206 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1204 15:42:23.127648   10206 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1204 15:42:23.129122   10206 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1204 15:42:23.130499   10206 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1204 15:42:23.135736   10206 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1204 15:42:23.309371   10206 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I1204 15:42:23.516170   10206 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I1204 15:42:23.516723   10206 kubeadm.go:310] 
	I1204 15:42:23.516759   10206 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I1204 15:42:23.516763   10206 kubeadm.go:310] 
	I1204 15:42:23.516809   10206 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I1204 15:42:23.516816   10206 kubeadm.go:310] 
	I1204 15:42:23.516830   10206 kubeadm.go:310]   mkdir -p $HOME/.kube
	I1204 15:42:23.516870   10206 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1204 15:42:23.516899   10206 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1204 15:42:23.516902   10206 kubeadm.go:310] 
	I1204 15:42:23.516935   10206 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I1204 15:42:23.516971   10206 kubeadm.go:310] 
	I1204 15:42:23.516999   10206 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1204 15:42:23.517003   10206 kubeadm.go:310] 
	I1204 15:42:23.517050   10206 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I1204 15:42:23.517092   10206 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1204 15:42:23.517133   10206 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1204 15:42:23.517136   10206 kubeadm.go:310] 
	I1204 15:42:23.517232   10206 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I1204 15:42:23.517335   10206 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I1204 15:42:23.517371   10206 kubeadm.go:310] 
	I1204 15:42:23.517428   10206 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token 1dn43k.o5d3nczgwbr8kvhs \
	I1204 15:42:23.517541   10206 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:9ed783fc6ac587ac5303da44420d8c41896e6ac9083929196f4ee227216cf3a5 \
	I1204 15:42:23.517555   10206 kubeadm.go:310] 	--control-plane 
	I1204 15:42:23.517558   10206 kubeadm.go:310] 
	I1204 15:42:23.517612   10206 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I1204 15:42:23.517617   10206 kubeadm.go:310] 
	I1204 15:42:23.517673   10206 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token 1dn43k.o5d3nczgwbr8kvhs \
	I1204 15:42:23.517733   10206 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:9ed783fc6ac587ac5303da44420d8c41896e6ac9083929196f4ee227216cf3a5 
	I1204 15:42:23.517805   10206 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
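The --discovery-token-ca-cert-hash echoed in the join commands above is the SHA-256 digest of the DER-encoded Subject Public Key Info of the cluster CA certificate; the certificate directory is the /var/lib/minikube/certs folder named in the [certs] lines earlier. A sketch that recomputes the hash from that file:

package main

import (
	"crypto/sha256"
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
)

func main() {
	// certificateDir taken from the "[certs] Using certificateDir" line above.
	pemBytes, err := os.ReadFile("/var/lib/minikube/certs/ca.crt")
	if err != nil {
		panic(err)
	}
	block, _ := pem.Decode(pemBytes)
	if block == nil {
		panic("no PEM block in ca.crt")
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		panic(err)
	}
	// The hash kubeadm prints is over the DER-encoded SubjectPublicKeyInfo.
	spki, err := x509.MarshalPKIXPublicKey(cert.PublicKey)
	if err != nil {
		panic(err)
	}
	fmt.Printf("sha256:%x\n", sha256.Sum256(spki))
}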
	I1204 15:42:23.517818   10206 cni.go:84] Creating CNI manager for ""
	I1204 15:42:23.517828   10206 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I1204 15:42:23.524511   10206 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I1204 15:42:23.527577   10206 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1204 15:42:23.530720   10206 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
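The exact 496-byte conflist copied to /etc/cni/net.d/1-k8s.conflist is not shown in the log. The sketch below writes a generic bridge-plus-portmap configuration of the kind the "Configuring bridge CNI" step refers to; every field value, including the subnet, is an illustrative assumption rather than minikube's actual payload.

package main

import "os"

// A generic bridge CNI conflist; values are assumptions for illustration.
const conflist = `{
  "cniVersion": "0.3.1",
  "name": "bridge",
  "plugins": [
    {
      "type": "bridge",
      "bridge": "bridge",
      "isDefaultGateway": true,
      "ipMasq": true,
      "hairpinMode": true,
      "ipam": { "type": "host-local", "subnet": "10.244.0.0/16" }
    },
    { "type": "portmap", "capabilities": { "portMappings": true } }
  ]
}`

func main() {
	// Needs root, like the `sudo mkdir -p /etc/cni/net.d` step above.
	if err := os.MkdirAll("/etc/cni/net.d", 0o755); err != nil {
		panic(err)
	}
	if err := os.WriteFile("/etc/cni/net.d/1-k8s.conflist", []byte(conflist), 0o644); err != nil {
		panic(err)
	}
}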
	I1204 15:42:23.536141   10206 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1204 15:42:23.536204   10206 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1204 15:42:23.536205   10206 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes stopped-upgrade-377000 minikube.k8s.io/updated_at=2024_12_04T15_42_23_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=efbd8efc50652fe861e71899e50212cc75e3480d minikube.k8s.io/name=stopped-upgrade-377000 minikube.k8s.io/primary=true
	I1204 15:42:23.540567   10206 ops.go:34] apiserver oom_adj: -16
	I1204 15:42:23.585745   10206 kubeadm.go:1113] duration metric: took 49.59775ms to wait for elevateKubeSystemPrivileges
	I1204 15:42:23.585761   10206 kubeadm.go:394] duration metric: took 4m12.726966667s to StartCluster
	I1204 15:42:23.585772   10206 settings.go:142] acquiring lock: {Name:mkdd110867a4c47f742f3f13d7f418d838150f4f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1204 15:42:23.585873   10206 settings.go:150] Updating kubeconfig:  /Users/jenkins/minikube-integration/20045-6982/kubeconfig
	I1204 15:42:23.586285   10206 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/20045-6982/kubeconfig: {Name:mk101d59bd39dad79cc42c692d70ed55e90c94da Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1204 15:42:23.586508   10206 start.go:235] Will wait 6m0s for node &{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1204 15:42:23.586521   10206 addons.go:507] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1204 15:42:23.586556   10206 addons.go:69] Setting storage-provisioner=true in profile "stopped-upgrade-377000"
	I1204 15:42:23.586565   10206 addons.go:234] Setting addon storage-provisioner=true in "stopped-upgrade-377000"
	W1204 15:42:23.586568   10206 addons.go:243] addon storage-provisioner should already be in state true
	I1204 15:42:23.586578   10206 addons.go:69] Setting default-storageclass=true in profile "stopped-upgrade-377000"
	I1204 15:42:23.586588   10206 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "stopped-upgrade-377000"
	I1204 15:42:23.586580   10206 host.go:66] Checking if "stopped-upgrade-377000" exists ...
	I1204 15:42:23.586627   10206 config.go:182] Loaded profile config "stopped-upgrade-377000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I1204 15:42:23.590570   10206 out.go:177] * Verifying Kubernetes components...
	I1204 15:42:23.591265   10206 kapi.go:59] client config for stopped-upgrade-377000: &rest.Config{Host:"https://10.0.2.15:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/20045-6982/.minikube/profiles/stopped-upgrade-377000/client.crt", KeyFile:"/Users/jenkins/minikube-integration/20045-6982/.minikube/profiles/stopped-upgrade-377000/client.key", CAFile:"/Users/jenkins/minikube-integration/20045-6982/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x10435f6e0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1204 15:42:23.594923   10206 addons.go:234] Setting addon default-storageclass=true in "stopped-upgrade-377000"
	W1204 15:42:23.594929   10206 addons.go:243] addon default-storageclass should already be in state true
	I1204 15:42:23.594937   10206 host.go:66] Checking if "stopped-upgrade-377000" exists ...
	I1204 15:42:23.595487   10206 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I1204 15:42:23.595492   10206 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1204 15:42:23.595498   10206 sshutil.go:53] new ssh client: &{IP:localhost Port:61799 SSHKeyPath:/Users/jenkins/minikube-integration/20045-6982/.minikube/machines/stopped-upgrade-377000/id_rsa Username:docker}
	I1204 15:42:23.598548   10206 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1204 15:42:23.601539   10206 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1204 15:42:23.605571   10206 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1204 15:42:23.605578   10206 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1204 15:42:23.605584   10206 sshutil.go:53] new ssh client: &{IP:localhost Port:61799 SSHKeyPath:/Users/jenkins/minikube-integration/20045-6982/.minikube/machines/stopped-upgrade-377000/id_rsa Username:docker}
	I1204 15:42:23.693462   10206 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1204 15:42:23.698778   10206 api_server.go:52] waiting for apiserver process to appear ...
	I1204 15:42:23.698836   10206 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1204 15:42:23.703210   10206 api_server.go:72] duration metric: took 116.691042ms to wait for apiserver process to appear ...
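The 116ms "wait for apiserver process" metric above comes from polling pgrep until a kube-apiserver process exists, then timing the wait. A sketch of that step, reusing the exact pattern string from the Run line above (the polling interval is an assumption):

package main

import (
	"fmt"
	"os/exec"
	"time"
)

func main() {
	start := time.Now()
	// pgrep exits 0 once a process matching the pattern exists.
	for exec.Command("sudo", "pgrep", "-xnf", "kube-apiserver.*minikube.*").Run() != nil {
		time.Sleep(500 * time.Millisecond) // assumed polling interval
	}
	fmt.Printf("took %s to wait for apiserver process to appear\n", time.Since(start))
}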
	I1204 15:42:23.703218   10206 api_server.go:88] waiting for apiserver healthz status ...
	I1204 15:42:23.703224   10206 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1204 15:42:23.744942   10206 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1204 15:42:23.765114   10206 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1204 15:42:24.095247   10206 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I1204 15:42:24.095260   10206 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I1204 15:42:26.232493    9926 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1204 15:42:26.232615    9926 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1204 15:42:26.245330    9926 logs.go:282] 1 containers: [f3328f94ed0d]
	I1204 15:42:26.245415    9926 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1204 15:42:26.256678    9926 logs.go:282] 1 containers: [17b0ed658f6c]
	I1204 15:42:26.256768    9926 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1204 15:42:26.267542    9926 logs.go:282] 4 containers: [11d10ac44548 722dc28f5a86 0a3178099d31 c8ddfa007847]
	I1204 15:42:26.267625    9926 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1204 15:42:26.278363    9926 logs.go:282] 1 containers: [a82022fd2242]
	I1204 15:42:26.278436    9926 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1204 15:42:26.293610    9926 logs.go:282] 1 containers: [8f7d833df5c3]
	I1204 15:42:26.293687    9926 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1204 15:42:26.304329    9926 logs.go:282] 1 containers: [bdf1070f199f]
	I1204 15:42:26.304409    9926 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1204 15:42:26.314739    9926 logs.go:282] 0 containers: []
	W1204 15:42:26.314752    9926 logs.go:284] No container was found matching "kindnet"
	I1204 15:42:26.314819    9926 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1204 15:42:26.325776    9926 logs.go:282] 1 containers: [bdb47901ddb7]
	I1204 15:42:26.325792    9926 logs.go:123] Gathering logs for dmesg ...
	I1204 15:42:26.325797    9926 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1204 15:42:26.330349    9926 logs.go:123] Gathering logs for coredns [0a3178099d31] ...
	I1204 15:42:26.330357    9926 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0a3178099d31"
	I1204 15:42:26.342155    9926 logs.go:123] Gathering logs for coredns [c8ddfa007847] ...
	I1204 15:42:26.342167    9926 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c8ddfa007847"
	I1204 15:42:26.355104    9926 logs.go:123] Gathering logs for kube-scheduler [a82022fd2242] ...
	I1204 15:42:26.355115    9926 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a82022fd2242"
	I1204 15:42:26.369951    9926 logs.go:123] Gathering logs for container status ...
	I1204 15:42:26.369961    9926 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1204 15:42:26.381893    9926 logs.go:123] Gathering logs for etcd [17b0ed658f6c] ...
	I1204 15:42:26.381903    9926 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 17b0ed658f6c"
	I1204 15:42:26.395469    9926 logs.go:123] Gathering logs for coredns [11d10ac44548] ...
	I1204 15:42:26.395480    9926 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 11d10ac44548"
	I1204 15:42:26.407252    9926 logs.go:123] Gathering logs for kube-proxy [8f7d833df5c3] ...
	I1204 15:42:26.407262    9926 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8f7d833df5c3"
	I1204 15:42:26.418765    9926 logs.go:123] Gathering logs for describe nodes ...
	I1204 15:42:26.418776    9926 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1204 15:42:26.454668    9926 logs.go:123] Gathering logs for storage-provisioner [bdb47901ddb7] ...
	I1204 15:42:26.454682    9926 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bdb47901ddb7"
	I1204 15:42:26.466699    9926 logs.go:123] Gathering logs for kubelet ...
	I1204 15:42:26.466712    9926 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1204 15:42:26.503053    9926 logs.go:123] Gathering logs for kube-apiserver [f3328f94ed0d] ...
	I1204 15:42:26.503064    9926 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f3328f94ed0d"
	I1204 15:42:26.517412    9926 logs.go:123] Gathering logs for coredns [722dc28f5a86] ...
	I1204 15:42:26.517426    9926 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 722dc28f5a86"
	I1204 15:42:26.533464    9926 logs.go:123] Gathering logs for kube-controller-manager [bdf1070f199f] ...
	I1204 15:42:26.533475    9926 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bdf1070f199f"
	I1204 15:42:26.551880    9926 logs.go:123] Gathering logs for Docker ...
	I1204 15:42:26.551892    9926 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
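
Each "Gathering logs for ..." line then maps to exactly one shell command run over SSH: `docker logs --tail 400 <id>` per container, `journalctl -u <unit> -n 400` for the kubelet and Docker units, and a filtered dmesg. A sketch of that fan-out, reusing the exact commands from the log (the section headers match the report dump further down):

    package main

    import (
        "fmt"
        "os/exec"
    )

    // gather runs one of the collection commands from the log and prints its output
    // under a section header, roughly what logs.go assembles for the report below.
    func gather(name, cmd string) {
        out, _ := exec.Command("/bin/bash", "-c", cmd).CombinedOutput()
        fmt.Printf("==> %s <==\n%s\n", name, out)
    }

    func main() {
        gather("kubelet", "sudo journalctl -u kubelet -n 400")
        gather("Docker", "sudo journalctl -u docker -u cri-docker -n 400")
        gather("dmesg", "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400")
        gather("kube-apiserver [f3328f94ed0d]", "docker logs --tail 400 f3328f94ed0d")
    }
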
	I1204 15:42:28.704701   10206 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1204 15:42:28.704758   10206 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1204 15:42:29.078589    9926 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1204 15:42:33.705419   10206 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1204 15:42:33.705439   10206 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1204 15:42:34.081017    9926 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1204 15:42:34.081133    9926 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1204 15:42:34.091751    9926 logs.go:282] 1 containers: [f3328f94ed0d]
	I1204 15:42:34.091834    9926 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1204 15:42:34.106224    9926 logs.go:282] 1 containers: [17b0ed658f6c]
	I1204 15:42:34.106296    9926 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1204 15:42:34.116847    9926 logs.go:282] 4 containers: [11d10ac44548 722dc28f5a86 0a3178099d31 c8ddfa007847]
	I1204 15:42:34.116917    9926 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1204 15:42:34.128733    9926 logs.go:282] 1 containers: [a82022fd2242]
	I1204 15:42:34.128811    9926 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1204 15:42:34.139682    9926 logs.go:282] 1 containers: [8f7d833df5c3]
	I1204 15:42:34.139759    9926 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1204 15:42:34.150937    9926 logs.go:282] 1 containers: [bdf1070f199f]
	I1204 15:42:34.151007    9926 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1204 15:42:34.161129    9926 logs.go:282] 0 containers: []
	W1204 15:42:34.161142    9926 logs.go:284] No container was found matching "kindnet"
	I1204 15:42:34.161216    9926 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1204 15:42:34.171526    9926 logs.go:282] 1 containers: [bdb47901ddb7]
	I1204 15:42:34.171544    9926 logs.go:123] Gathering logs for kube-controller-manager [bdf1070f199f] ...
	I1204 15:42:34.171550    9926 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bdf1070f199f"
	I1204 15:42:34.190145    9926 logs.go:123] Gathering logs for container status ...
	I1204 15:42:34.190159    9926 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1204 15:42:34.202454    9926 logs.go:123] Gathering logs for kube-apiserver [f3328f94ed0d] ...
	I1204 15:42:34.202470    9926 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f3328f94ed0d"
	I1204 15:42:34.216602    9926 logs.go:123] Gathering logs for coredns [c8ddfa007847] ...
	I1204 15:42:34.216614    9926 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c8ddfa007847"
	I1204 15:42:34.230453    9926 logs.go:123] Gathering logs for storage-provisioner [bdb47901ddb7] ...
	I1204 15:42:34.230465    9926 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bdb47901ddb7"
	I1204 15:42:34.242363    9926 logs.go:123] Gathering logs for dmesg ...
	I1204 15:42:34.242373    9926 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1204 15:42:34.246826    9926 logs.go:123] Gathering logs for describe nodes ...
	I1204 15:42:34.246837    9926 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1204 15:42:34.283344    9926 logs.go:123] Gathering logs for etcd [17b0ed658f6c] ...
	I1204 15:42:34.283354    9926 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 17b0ed658f6c"
	I1204 15:42:34.299456    9926 logs.go:123] Gathering logs for coredns [722dc28f5a86] ...
	I1204 15:42:34.299467    9926 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 722dc28f5a86"
	I1204 15:42:34.311356    9926 logs.go:123] Gathering logs for kube-scheduler [a82022fd2242] ...
	I1204 15:42:34.311368    9926 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a82022fd2242"
	I1204 15:42:34.327141    9926 logs.go:123] Gathering logs for kube-proxy [8f7d833df5c3] ...
	I1204 15:42:34.327154    9926 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8f7d833df5c3"
	I1204 15:42:34.338921    9926 logs.go:123] Gathering logs for Docker ...
	I1204 15:42:34.338931    9926 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1204 15:42:34.362374    9926 logs.go:123] Gathering logs for kubelet ...
	I1204 15:42:34.362382    9926 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1204 15:42:34.399625    9926 logs.go:123] Gathering logs for coredns [0a3178099d31] ...
	I1204 15:42:34.399640    9926 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0a3178099d31"
	I1204 15:42:34.411489    9926 logs.go:123] Gathering logs for coredns [11d10ac44548] ...
	I1204 15:42:34.411502    9926 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 11d10ac44548"
	I1204 15:42:38.705711   10206 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1204 15:42:38.705738   10206 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1204 15:42:36.925046    9926 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1204 15:42:43.706114   10206 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1204 15:42:43.706173   10206 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1204 15:42:41.927311    9926 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1204 15:42:41.927560    9926 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1204 15:42:41.951099    9926 logs.go:282] 1 containers: [f3328f94ed0d]
	I1204 15:42:41.951218    9926 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1204 15:42:41.966378    9926 logs.go:282] 1 containers: [17b0ed658f6c]
	I1204 15:42:41.966469    9926 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1204 15:42:41.979318    9926 logs.go:282] 4 containers: [11d10ac44548 722dc28f5a86 0a3178099d31 c8ddfa007847]
	I1204 15:42:41.979406    9926 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1204 15:42:41.995352    9926 logs.go:282] 1 containers: [a82022fd2242]
	I1204 15:42:41.995433    9926 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1204 15:42:42.010160    9926 logs.go:282] 1 containers: [8f7d833df5c3]
	I1204 15:42:42.010232    9926 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1204 15:42:42.020826    9926 logs.go:282] 1 containers: [bdf1070f199f]
	I1204 15:42:42.020903    9926 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1204 15:42:42.035457    9926 logs.go:282] 0 containers: []
	W1204 15:42:42.035469    9926 logs.go:284] No container was found matching "kindnet"
	I1204 15:42:42.035540    9926 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1204 15:42:42.045989    9926 logs.go:282] 1 containers: [bdb47901ddb7]
	I1204 15:42:42.046006    9926 logs.go:123] Gathering logs for kube-apiserver [f3328f94ed0d] ...
	I1204 15:42:42.046011    9926 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f3328f94ed0d"
	I1204 15:42:42.062128    9926 logs.go:123] Gathering logs for coredns [0a3178099d31] ...
	I1204 15:42:42.062142    9926 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0a3178099d31"
	I1204 15:42:42.077367    9926 logs.go:123] Gathering logs for coredns [c8ddfa007847] ...
	I1204 15:42:42.077381    9926 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c8ddfa007847"
	I1204 15:42:42.088614    9926 logs.go:123] Gathering logs for storage-provisioner [bdb47901ddb7] ...
	I1204 15:42:42.088626    9926 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bdb47901ddb7"
	I1204 15:42:42.100066    9926 logs.go:123] Gathering logs for describe nodes ...
	I1204 15:42:42.100080    9926 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1204 15:42:42.134423    9926 logs.go:123] Gathering logs for coredns [11d10ac44548] ...
	I1204 15:42:42.134434    9926 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 11d10ac44548"
	I1204 15:42:42.145785    9926 logs.go:123] Gathering logs for kube-scheduler [a82022fd2242] ...
	I1204 15:42:42.145797    9926 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a82022fd2242"
	I1204 15:42:42.160841    9926 logs.go:123] Gathering logs for Docker ...
	I1204 15:42:42.160854    9926 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1204 15:42:42.183575    9926 logs.go:123] Gathering logs for container status ...
	I1204 15:42:42.183584    9926 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1204 15:42:42.196190    9926 logs.go:123] Gathering logs for kubelet ...
	I1204 15:42:42.196200    9926 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1204 15:42:42.232867    9926 logs.go:123] Gathering logs for kube-controller-manager [bdf1070f199f] ...
	I1204 15:42:42.232875    9926 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bdf1070f199f"
	I1204 15:42:42.250372    9926 logs.go:123] Gathering logs for coredns [722dc28f5a86] ...
	I1204 15:42:42.250385    9926 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 722dc28f5a86"
	I1204 15:42:42.266143    9926 logs.go:123] Gathering logs for etcd [17b0ed658f6c] ...
	I1204 15:42:42.266153    9926 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 17b0ed658f6c"
	I1204 15:42:42.280248    9926 logs.go:123] Gathering logs for kube-proxy [8f7d833df5c3] ...
	I1204 15:42:42.280259    9926 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8f7d833df5c3"
	I1204 15:42:42.293102    9926 logs.go:123] Gathering logs for dmesg ...
	I1204 15:42:42.293113    9926 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1204 15:42:44.798417    9926 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1204 15:42:49.800735    9926 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1204 15:42:49.805870    9926 out.go:201] 
	W1204 15:42:49.809901    9926 out.go:270] X Exiting due to GUEST_START: failed to start node: wait 6m0s for node: wait for healthy API server: apiserver healthz never reported healthy: context deadline exceeded
	W1204 15:42:49.809908    9926 out.go:270] * 
	W1204 15:42:49.810344    9926 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1204 15:42:49.823806    9926 out.go:201] 
	I1204 15:42:48.706985   10206 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1204 15:42:48.707019   10206 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1204 15:42:53.707714   10206 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1204 15:42:53.707783   10206 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	W1204 15:42:54.098004   10206 out.go:270] ! Enabling 'default-storageclass' returned an error: running callbacks: [Error making standard the default storage class: Error listing StorageClasses: Get "https://10.0.2.15:8443/apis/storage.k8s.io/v1/storageclasses": dial tcp 10.0.2.15:8443: i/o timeout]
	I1204 15:42:54.103024   10206 out.go:177] * Enabled addons: storage-provisioner
	I1204 15:42:54.109883   10206 addons.go:510] duration metric: took 30.523082916s for enable addons: enabled=[storage-provisioner]
	I1204 15:42:58.708650   10206 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1204 15:42:58.708681   10206 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	
	
	==> Docker <==
	-- Journal begins at Wed 2024-12-04 23:33:59 UTC, ends at Wed 2024-12-04 23:43:05 UTC. --
	Dec 04 23:42:50 running-upgrade-084000 cri-dockerd[3027]: time="2024-12-04T23:42:50Z" level=error msg="ContainerStats resp: {<nil> }"
	Dec 04 23:42:50 running-upgrade-084000 cri-dockerd[3027]: time="2024-12-04T23:42:50Z" level=error msg="Error response from daemon: No such container: 0a3178099d3189d7095d85fddee0236f758a884c40ec37f4bf4017b63b30a1ea Failed to get stats from container 0a3178099d3189d7095d85fddee0236f758a884c40ec37f4bf4017b63b30a1ea"
	Dec 04 23:42:50 running-upgrade-084000 cri-dockerd[3027]: time="2024-12-04T23:42:50Z" level=error msg="ContainerStats resp: {<nil> }"
	Dec 04 23:42:50 running-upgrade-084000 cri-dockerd[3027]: time="2024-12-04T23:42:50Z" level=error msg="Error response from daemon: No such container: c8ddfa0078478d3e2aa449a7ea65aaa7024bb6ca7bfa1e258abe07b254d9ce2a Failed to get stats from container c8ddfa0078478d3e2aa449a7ea65aaa7024bb6ca7bfa1e258abe07b254d9ce2a"
	Dec 04 23:42:51 running-upgrade-084000 cri-dockerd[3027]: time="2024-12-04T23:42:51Z" level=error msg="ContainerStats resp: {0x40009b2d40 linux}"
	Dec 04 23:42:51 running-upgrade-084000 cri-dockerd[3027]: time="2024-12-04T23:42:51Z" level=info msg="Using CNI configuration file /etc/cni/net.d/1-k8s.conflist"
	Dec 04 23:42:52 running-upgrade-084000 cri-dockerd[3027]: time="2024-12-04T23:42:52Z" level=error msg="ContainerStats resp: {0x40006c7980 linux}"
	Dec 04 23:42:52 running-upgrade-084000 cri-dockerd[3027]: time="2024-12-04T23:42:52Z" level=error msg="ContainerStats resp: {0x40004e6900 linux}"
	Dec 04 23:42:52 running-upgrade-084000 cri-dockerd[3027]: time="2024-12-04T23:42:52Z" level=error msg="ContainerStats resp: {0x4000840140 linux}"
	Dec 04 23:42:52 running-upgrade-084000 cri-dockerd[3027]: time="2024-12-04T23:42:52Z" level=error msg="ContainerStats resp: {0x4000840400 linux}"
	Dec 04 23:42:52 running-upgrade-084000 cri-dockerd[3027]: time="2024-12-04T23:42:52Z" level=error msg="ContainerStats resp: {0x40004e7840 linux}"
	Dec 04 23:42:52 running-upgrade-084000 cri-dockerd[3027]: time="2024-12-04T23:42:52Z" level=error msg="ContainerStats resp: {0x4000840f40 linux}"
	Dec 04 23:42:52 running-upgrade-084000 cri-dockerd[3027]: time="2024-12-04T23:42:52Z" level=error msg="ContainerStats resp: {0x4000841440 linux}"
	Dec 04 23:42:56 running-upgrade-084000 cri-dockerd[3027]: time="2024-12-04T23:42:56Z" level=info msg="Using CNI configuration file /etc/cni/net.d/1-k8s.conflist"
	Dec 04 23:43:01 running-upgrade-084000 cri-dockerd[3027]: time="2024-12-04T23:43:01Z" level=info msg="Using CNI configuration file /etc/cni/net.d/1-k8s.conflist"
	Dec 04 23:43:02 running-upgrade-084000 cri-dockerd[3027]: time="2024-12-04T23:43:02Z" level=error msg="ContainerStats resp: {0x40009b26c0 linux}"
	Dec 04 23:43:02 running-upgrade-084000 cri-dockerd[3027]: time="2024-12-04T23:43:02Z" level=error msg="ContainerStats resp: {0x40009b2800 linux}"
	Dec 04 23:43:03 running-upgrade-084000 cri-dockerd[3027]: time="2024-12-04T23:43:03Z" level=error msg="ContainerStats resp: {0x400059cf80 linux}"
	Dec 04 23:43:04 running-upgrade-084000 cri-dockerd[3027]: time="2024-12-04T23:43:04Z" level=error msg="ContainerStats resp: {0x4000770340 linux}"
	Dec 04 23:43:04 running-upgrade-084000 cri-dockerd[3027]: time="2024-12-04T23:43:04Z" level=error msg="ContainerStats resp: {0x4000934980 linux}"
	Dec 04 23:43:04 running-upgrade-084000 cri-dockerd[3027]: time="2024-12-04T23:43:04Z" level=error msg="ContainerStats resp: {0x40009341c0 linux}"
	Dec 04 23:43:04 running-upgrade-084000 cri-dockerd[3027]: time="2024-12-04T23:43:04Z" level=error msg="ContainerStats resp: {0x4000770640 linux}"
	Dec 04 23:43:04 running-upgrade-084000 cri-dockerd[3027]: time="2024-12-04T23:43:04Z" level=error msg="ContainerStats resp: {0x4000770e80 linux}"
	Dec 04 23:43:04 running-upgrade-084000 cri-dockerd[3027]: time="2024-12-04T23:43:04Z" level=error msg="ContainerStats resp: {0x4000935040 linux}"
	Dec 04 23:43:04 running-upgrade-084000 cri-dockerd[3027]: time="2024-12-04T23:43:04Z" level=error msg="ContainerStats resp: {0x40007717c0 linux}"
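
The "No such container ... Failed to get stats" pairs above are cri-dockerd asking Docker for stats on container IDs that no longer exist, most likely earlier coredns attempts removed after exiting (docker logs on the short IDs 0a3178099d31 and c8ddfa007847 still worked moments earlier in this log). A quick hedged check, with the full IDs copied from the error messages:

    package main

    import (
        "fmt"
        "os/exec"
    )

    func main() {
        // Full container IDs copied from the cri-dockerd error messages above.
        for _, id := range []string{
            "0a3178099d3189d7095d85fddee0236f758a884c40ec37f4bf4017b63b30a1ea",
            "c8ddfa0078478d3e2aa449a7ea65aaa7024bb6ca7bfa1e258abe07b254d9ce2a",
        } {
            if err := exec.Command("docker", "inspect", id).Run(); err != nil {
                fmt.Println(id[:12], "is gone; stats requests for it will keep failing")
            } else {
                fmt.Println(id[:12], "still exists")
            }
        }
    }
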
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                      ATTEMPT             POD ID
	97254bdd8f133       edaa71f2aee88       15 seconds ago      Running             coredns                   2                   63ff6980efe1c
	96577b5e04736       edaa71f2aee88       16 seconds ago      Running             coredns                   2                   d69f6098663e8
	11d10ac445489       edaa71f2aee88       2 minutes ago       Exited              coredns                   1                   d69f6098663e8
	722dc28f5a86a       edaa71f2aee88       2 minutes ago       Exited              coredns                   1                   63ff6980efe1c
	bdb47901ddb72       66749159455b3       4 minutes ago       Running             storage-provisioner       0                   a95f296206ee7
	8f7d833df5c3b       fcbd620bbac08       4 minutes ago       Running             kube-proxy                0                   e3db66a718888
	bdf1070f199f3       f61bbe9259d7c       4 minutes ago       Running             kube-controller-manager   0                   e7c597c5185b0
	17b0ed658f6c7       a9a710bb96df0       4 minutes ago       Running             etcd                      0                   8e3fafeed8213
	a82022fd22426       000c19baf6bba       4 minutes ago       Running             kube-scheduler            0                   9ef0c9b040e22
	f3328f94ed0d6       7c5896a75862a       4 minutes ago       Running             kube-apiserver            0                   a88c6bc394c55
	
	
	==> coredns [11d10ac44548] <==
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration MD5 = db32ca3650231d74073ff4cf814959a7
	CoreDNS-1.8.6
	linux/arm64, go1.17.1, 13a9191
	[ERROR] plugin/errors: 2 4999393043532388302.7849441412751245813. HINFO: read udp 10.244.0.3:45182->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 4999393043532388302.7849441412751245813. HINFO: read udp 10.244.0.3:43249->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 4999393043532388302.7849441412751245813. HINFO: read udp 10.244.0.3:48896->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 4999393043532388302.7849441412751245813. HINFO: read udp 10.244.0.3:33011->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 4999393043532388302.7849441412751245813. HINFO: read udp 10.244.0.3:52292->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 4999393043532388302.7849441412751245813. HINFO: read udp 10.244.0.3:39053->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 4999393043532388302.7849441412751245813. HINFO: read udp 10.244.0.3:36078->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 4999393043532388302.7849441412751245813. HINFO: read udp 10.244.0.3:50078->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 4999393043532388302.7849441412751245813. HINFO: read udp 10.244.0.3:40145->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 4999393043532388302.7849441412751245813. HINFO: read udp 10.244.0.3:52619->10.0.2.3:53: i/o timeout
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> coredns [722dc28f5a86] <==
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration MD5 = db32ca3650231d74073ff4cf814959a7
	CoreDNS-1.8.6
	linux/arm64, go1.17.1, 13a9191
	[ERROR] plugin/errors: 2 8891321464127364.4017413729623870957. HINFO: read udp 10.244.0.2:42324->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 8891321464127364.4017413729623870957. HINFO: read udp 10.244.0.2:51341->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 8891321464127364.4017413729623870957. HINFO: read udp 10.244.0.2:33718->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 8891321464127364.4017413729623870957. HINFO: read udp 10.244.0.2:41098->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 8891321464127364.4017413729623870957. HINFO: read udp 10.244.0.2:42926->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 8891321464127364.4017413729623870957. HINFO: read udp 10.244.0.2:42937->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 8891321464127364.4017413729623870957. HINFO: read udp 10.244.0.2:36165->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 8891321464127364.4017413729623870957. HINFO: read udp 10.244.0.2:54230->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 8891321464127364.4017413729623870957. HINFO: read udp 10.244.0.2:46508->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 8891321464127364.4017413729623870957. HINFO: read udp 10.244.0.2:60142->10.0.2.3:53: i/o timeout
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> coredns [96577b5e0473] <==
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration MD5 = db32ca3650231d74073ff4cf814959a7
	CoreDNS-1.8.6
	linux/arm64, go1.17.1, 13a9191
	[ERROR] plugin/errors: 2 6062207936328358566.7988855314094530041. HINFO: read udp 10.244.0.3:40197->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 6062207936328358566.7988855314094530041. HINFO: read udp 10.244.0.3:50166->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 6062207936328358566.7988855314094530041. HINFO: read udp 10.244.0.3:60233->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 6062207936328358566.7988855314094530041. HINFO: read udp 10.244.0.3:41881->10.0.2.3:53: i/o timeout
	
	
	==> coredns [97254bdd8f13] <==
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration MD5 = db32ca3650231d74073ff4cf814959a7
	CoreDNS-1.8.6
	linux/arm64, go1.17.1, 13a9191
	[ERROR] plugin/errors: 2 1700901048392397987.1091665135557201709. HINFO: read udp 10.244.0.2:52095->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 1700901048392397987.1091665135557201709. HINFO: read udp 10.244.0.2:41722->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 1700901048392397987.1091665135557201709. HINFO: read udp 10.244.0.2:43415->10.0.2.3:53: i/o timeout
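
All four CoreDNS instances report the same failure: the random-name HINFO queries (what CoreDNS's loop-detection plugin sends at startup) are forwarded to the upstream resolver 10.0.2.3:53, QEMU's user-mode-network DNS, and time out. A minimal hedged probe for that upstream path, intended to run inside the guest; the DNS packet is hand-built so nothing beyond the standard library is needed, and any query type would show the same timeout:

    package main

    import (
        "fmt"
        "net"
        "time"
    )

    func main() {
        conn, err := net.DialTimeout("udp", "10.0.2.3:53", 2*time.Second)
        if err != nil {
            fmt.Println("dial:", err)
            return
        }
        defer conn.Close()
        // 12-byte header (ID 0x1234, RD set, 1 question), then root name (0x00),
        // QTYPE NS (0x0002), QCLASS IN (0x0001).
        query := []byte{0x12, 0x34, 0x01, 0x00, 0x00, 0x01, 0, 0, 0, 0, 0, 0,
            0x00, 0x00, 0x02, 0x00, 0x01}
        conn.SetDeadline(time.Now().Add(2 * time.Second))
        if _, err := conn.Write(query); err != nil {
            fmt.Println("write:", err)
            return
        }
        buf := make([]byte, 512)
        n, err := conn.Read(buf)
        if err != nil {
            fmt.Println("read (matches the i/o timeouts above):", err)
            return
        }
        fmt.Printf("got %d-byte reply; the upstream resolver is reachable\n", n)
    }
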
	
	
	==> describe nodes <==
	Name:               running-upgrade-084000
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=running-upgrade-084000
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=efbd8efc50652fe861e71899e50212cc75e3480d
	                    minikube.k8s.io/name=running-upgrade-084000
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_12_04T15_38_48_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 04 Dec 2024 23:38:46 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  running-upgrade-084000
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 04 Dec 2024 23:43:04 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 04 Dec 2024 23:38:48 +0000   Wed, 04 Dec 2024 23:38:44 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 04 Dec 2024 23:38:48 +0000   Wed, 04 Dec 2024 23:38:44 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 04 Dec 2024 23:38:48 +0000   Wed, 04 Dec 2024 23:38:44 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Wed, 04 Dec 2024 23:38:48 +0000   Wed, 04 Dec 2024 23:38:48 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  10.0.2.15
	  Hostname:    running-upgrade-084000
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17784760Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             2148820Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17784760Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             2148820Ki
	  pods:               110
	System Info:
	  Machine ID:                 40f4dcb4f5df4a9f85a98db8b64d1b7d
	  System UUID:                40f4dcb4f5df4a9f85a98db8b64d1b7d
	  Boot ID:                    50e8e5ab-1be6-495f-9bb5-f8b04c4f99de
	  Kernel Version:             5.10.57
	  OS Image:                   Buildroot 2021.02.12
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  docker://20.10.16
	  Kubelet Version:            v1.24.1
	  Kube-Proxy Version:         v1.24.1
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                              CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                              ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-6d4b75cb6d-ghbr5                          100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     4m5s
	  kube-system                 coredns-6d4b75cb6d-nf9j2                          100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     4m5s
	  kube-system                 etcd-running-upgrade-084000                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         4m18s
	  kube-system                 kube-apiserver-running-upgrade-084000             250m (12%)    0 (0%)      0 (0%)           0 (0%)         4m17s
	  kube-system                 kube-controller-manager-running-upgrade-084000    200m (10%)    0 (0%)      0 (0%)           0 (0%)         4m18s
	  kube-system                 kube-proxy-zgv42                                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m5s
	  kube-system                 kube-scheduler-running-upgrade-084000             100m (5%)     0 (0%)      0 (0%)           0 (0%)         4m17s
	  kube-system                 storage-provisioner                               0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m17s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                850m (42%)   0 (0%)
	  memory             240Mi (11%)  340Mi (16%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-1Gi      0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	  hugepages-32Mi     0 (0%)       0 (0%)
	  hugepages-64Ki     0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 4m3s                   kube-proxy       
	  Normal  NodeHasSufficientMemory  4m23s (x4 over 4m23s)  kubelet          Node running-upgrade-084000 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    4m23s (x4 over 4m23s)  kubelet          Node running-upgrade-084000 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     4m23s (x3 over 4m23s)  kubelet          Node running-upgrade-084000 status is now: NodeHasSufficientPID
	  Normal  NodeHasSufficientMemory  4m18s                  kubelet          Node running-upgrade-084000 status is now: NodeHasSufficientMemory
	  Normal  NodeAllocatableEnforced  4m18s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasNoDiskPressure    4m18s                  kubelet          Node running-upgrade-084000 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     4m18s                  kubelet          Node running-upgrade-084000 status is now: NodeHasSufficientPID
	  Normal  NodeReady                4m18s                  kubelet          Node running-upgrade-084000 status is now: NodeReady
	  Normal  Starting                 4m18s                  kubelet          Starting kubelet.
	  Normal  RegisteredNode           4m6s                   node-controller  Node running-upgrade-084000 event: Registered Node running-upgrade-084000 in Controller
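
Note the contradiction this table exposes: inside the guest the node is Ready and its lease was renewed at 23:43:04, yet every healthz poll from the test host timed out. That points at host-to-guest reachability of 10.0.2.15:8443 (10.0.2.15 is the guest-side address in QEMU user-mode networking, which the host cannot dial directly) rather than at an unhealthy apiserver. A plain TCP dial from the host is one hedged way to separate the two cases:

    package main

    import (
        "fmt"
        "net"
        "time"
    )

    func main() {
        conn, err := net.DialTimeout("tcp", "10.0.2.15:8443", 3*time.Second)
        if err != nil {
            fmt.Println("host cannot reach the guest apiserver at all:", err)
            return
        }
        conn.Close()
        fmt.Println("TCP reachable; the failure would then be at the HTTP/TLS layer instead")
    }
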
	
	
	==> dmesg <==
	[  +1.765077] systemd-fstab-generator[875]: Ignoring "noauto" for root device
	[  +0.056681] systemd-fstab-generator[886]: Ignoring "noauto" for root device
	[  +0.068292] systemd-fstab-generator[897]: Ignoring "noauto" for root device
	[  +1.137963] kauditd_printk_skb: 53 callbacks suppressed
	[  +0.070221] systemd-fstab-generator[1047]: Ignoring "noauto" for root device
	[  +0.058229] systemd-fstab-generator[1058]: Ignoring "noauto" for root device
	[  +2.391606] systemd-fstab-generator[1284]: Ignoring "noauto" for root device
	[  +8.643949] systemd-fstab-generator[1922]: Ignoring "noauto" for root device
	[  +2.477334] systemd-fstab-generator[2187]: Ignoring "noauto" for root device
	[  +0.145171] systemd-fstab-generator[2224]: Ignoring "noauto" for root device
	[  +0.075745] systemd-fstab-generator[2237]: Ignoring "noauto" for root device
	[  +0.085354] systemd-fstab-generator[2250]: Ignoring "noauto" for root device
	[  +2.765252] kauditd_printk_skb: 47 callbacks suppressed
	[  +0.188691] systemd-fstab-generator[2983]: Ignoring "noauto" for root device
	[  +0.066203] systemd-fstab-generator[2995]: Ignoring "noauto" for root device
	[  +0.071050] systemd-fstab-generator[3006]: Ignoring "noauto" for root device
	[  +0.076938] systemd-fstab-generator[3020]: Ignoring "noauto" for root device
	[  +2.312881] systemd-fstab-generator[3172]: Ignoring "noauto" for root device
	[  +3.343474] systemd-fstab-generator[3549]: Ignoring "noauto" for root device
	[  +1.269517] systemd-fstab-generator[3844]: Ignoring "noauto" for root device
	[ +17.976118] kauditd_printk_skb: 68 callbacks suppressed
	[Dec 4 23:38] kauditd_printk_skb: 23 callbacks suppressed
	[  +1.192145] systemd-fstab-generator[11107]: Ignoring "noauto" for root device
	[  +5.614417] systemd-fstab-generator[11700]: Ignoring "noauto" for root device
	[  +0.462848] systemd-fstab-generator[11834]: Ignoring "noauto" for root device
	
	
	==> etcd [17b0ed658f6c] <==
	{"level":"info","ts":"2024-12-04T23:38:44.425Z","caller":"embed/etcd.go:277","msg":"now serving peer/client/metrics","local-member-id":"f074a195de705325","initial-advertise-peer-urls":["https://10.0.2.15:2380"],"listen-peer-urls":["https://10.0.2.15:2380"],"advertise-client-urls":["https://10.0.2.15:2379"],"listen-client-urls":["https://10.0.2.15:2379","https://127.0.0.1:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2024-12-04T23:38:44.429Z","caller":"embed/etcd.go:763","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2024-12-04T23:38:44.429Z","caller":"etcdserver/server.go:736","msg":"started as single-node; fast-forwarding election ticks","local-member-id":"f074a195de705325","forward-ticks":9,"forward-duration":"900ms","election-ticks":10,"election-timeout":"1s"}
	{"level":"info","ts":"2024-12-04T23:38:44.429Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f074a195de705325 switched to configuration voters=(17326651331455243045)"}
	{"level":"info","ts":"2024-12-04T23:38:44.429Z","caller":"embed/etcd.go:581","msg":"serving peer traffic","address":"10.0.2.15:2380"}
	{"level":"info","ts":"2024-12-04T23:38:44.430Z","caller":"embed/etcd.go:553","msg":"cmux::serve","address":"10.0.2.15:2380"}
	{"level":"info","ts":"2024-12-04T23:38:44.430Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"ef296cf39f5d9d66","local-member-id":"f074a195de705325","added-peer-id":"f074a195de705325","added-peer-peer-urls":["https://10.0.2.15:2380"]}
	{"level":"info","ts":"2024-12-04T23:38:44.553Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f074a195de705325 is starting a new election at term 1"}
	{"level":"info","ts":"2024-12-04T23:38:44.553Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f074a195de705325 became pre-candidate at term 1"}
	{"level":"info","ts":"2024-12-04T23:38:44.553Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f074a195de705325 received MsgPreVoteResp from f074a195de705325 at term 1"}
	{"level":"info","ts":"2024-12-04T23:38:44.553Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f074a195de705325 became candidate at term 2"}
	{"level":"info","ts":"2024-12-04T23:38:44.553Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f074a195de705325 received MsgVoteResp from f074a195de705325 at term 2"}
	{"level":"info","ts":"2024-12-04T23:38:44.553Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f074a195de705325 became leader at term 2"}
	{"level":"info","ts":"2024-12-04T23:38:44.553Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: f074a195de705325 elected leader f074a195de705325 at term 2"}
	{"level":"info","ts":"2024-12-04T23:38:44.553Z","caller":"etcdserver/server.go:2507","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2024-12-04T23:38:44.564Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"ef296cf39f5d9d66","local-member-id":"f074a195de705325","cluster-version":"3.5"}
	{"level":"info","ts":"2024-12-04T23:38:44.564Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-12-04T23:38:44.564Z","caller":"etcdserver/server.go:2531","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2024-12-04T23:38:44.564Z","caller":"etcdserver/server.go:2042","msg":"published local member to cluster through raft","local-member-id":"f074a195de705325","local-member-attributes":"{Name:running-upgrade-084000 ClientURLs:[https://10.0.2.15:2379]}","request-path":"/0/members/f074a195de705325/attributes","cluster-id":"ef296cf39f5d9d66","publish-timeout":"7s"}
	{"level":"info","ts":"2024-12-04T23:38:44.564Z","caller":"embed/serve.go:98","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-12-04T23:38:44.565Z","caller":"embed/serve.go:188","msg":"serving client traffic securely","address":"10.0.2.15:2379"}
	{"level":"info","ts":"2024-12-04T23:38:44.565Z","caller":"embed/serve.go:98","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-12-04T23:38:44.565Z","caller":"embed/serve.go:188","msg":"serving client traffic securely","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-12-04T23:38:44.569Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-12-04T23:38:44.569Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	
	
	==> kernel <==
	 23:43:06 up 9 min,  0 users,  load average: 0.25, 0.28, 0.18
	Linux running-upgrade-084000 5.10.57 #1 SMP PREEMPT Thu Jun 16 21:01:29 UTC 2022 aarch64 GNU/Linux
	PRETTY_NAME="Buildroot 2021.02.12"
	
	
	==> kube-apiserver [f3328f94ed0d] <==
	I1204 23:38:45.997487       1 apf_controller.go:322] Running API Priority and Fairness config worker
	I1204 23:38:45.997509       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I1204 23:38:45.997516       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I1204 23:38:45.997707       1 shared_informer.go:262] Caches are synced for cluster_authentication_trust_controller
	I1204 23:38:45.998086       1 cache.go:39] Caches are synced for autoregister controller
	I1204 23:38:46.017928       1 shared_informer.go:262] Caches are synced for crd-autoregister
	I1204 23:38:46.039404       1 shared_informer.go:262] Caches are synced for node_authorizer
	I1204 23:38:46.744100       1 controller.go:132] OpenAPI AggregationController: action for item k8s_internal_local_delegation_chain_0000000000: Nothing (removed from the queue).
	I1204 23:38:46.904048       1 storage_scheduling.go:95] created PriorityClass system-node-critical with value 2000001000
	I1204 23:38:46.907434       1 storage_scheduling.go:95] created PriorityClass system-cluster-critical with value 2000000000
	I1204 23:38:46.907476       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1204 23:38:47.038729       1 controller.go:611] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1204 23:38:47.053803       1 controller.go:611] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1204 23:38:47.068930       1 alloc.go:327] "allocated clusterIPs" service="default/kubernetes" clusterIPs=map[IPv4:10.96.0.1]
	W1204 23:38:47.070860       1 lease.go:234] Resetting endpoints for master service "kubernetes" to [10.0.2.15]
	I1204 23:38:47.071280       1 controller.go:611] quota admission added evaluator for: endpoints
	I1204 23:38:47.072524       1 controller.go:611] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1204 23:38:48.044720       1 controller.go:611] quota admission added evaluator for: serviceaccounts
	I1204 23:38:48.651941       1 controller.go:611] quota admission added evaluator for: deployments.apps
	I1204 23:38:48.660926       1 alloc.go:327] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs=map[IPv4:10.96.0.10]
	I1204 23:38:48.696065       1 controller.go:611] quota admission added evaluator for: daemonsets.apps
	I1204 23:38:48.704259       1 controller.go:611] quota admission added evaluator for: leases.coordination.k8s.io
	I1204 23:39:01.248699       1 controller.go:611] quota admission added evaluator for: controllerrevisions.apps
	I1204 23:39:01.800293       1 controller.go:611] quota admission added evaluator for: replicasets.apps
	I1204 23:39:02.348012       1 controller.go:611] quota admission added evaluator for: events.events.k8s.io
	
	
	==> kube-controller-manager [bdf1070f199f] <==
	I1204 23:39:00.897653       1 node_lifecycle_controller.go:1399] Initializing eviction metric for zone: 
	W1204 23:39:00.897695       1 node_lifecycle_controller.go:1014] Missing timestamp for Node running-upgrade-084000. Assuming now as a timestamp.
	I1204 23:39:00.897718       1 node_lifecycle_controller.go:1215] Controller detected that zone  is now in state Normal.
	I1204 23:39:00.897798       1 shared_informer.go:262] Caches are synced for PV protection
	I1204 23:39:00.897813       1 shared_informer.go:262] Caches are synced for certificate-csrsigning-kube-apiserver-client
	I1204 23:39:00.897830       1 taint_manager.go:187] "Starting NoExecuteTaintManager"
	I1204 23:39:00.897952       1 event.go:294] "Event occurred" object="running-upgrade-084000" fieldPath="" kind="Node" apiVersion="v1" type="Normal" reason="RegisteredNode" message="Node running-upgrade-084000 event: Registered Node running-upgrade-084000 in Controller"
	I1204 23:39:00.899300       1 shared_informer.go:262] Caches are synced for endpoint_slice_mirroring
	I1204 23:39:00.899348       1 shared_informer.go:262] Caches are synced for crt configmap
	I1204 23:39:00.922055       1 shared_informer.go:262] Caches are synced for endpoint_slice
	I1204 23:39:00.932552       1 shared_informer.go:262] Caches are synced for ReplicaSet
	I1204 23:39:00.940047       1 shared_informer.go:262] Caches are synced for disruption
	I1204 23:39:00.940053       1 disruption.go:371] Sending events to api server.
	I1204 23:39:00.999401       1 shared_informer.go:262] Caches are synced for deployment
	I1204 23:39:01.049365       1 shared_informer.go:262] Caches are synced for persistent volume
	I1204 23:39:01.051265       1 shared_informer.go:262] Caches are synced for attach detach
	I1204 23:39:01.063503       1 shared_informer.go:262] Caches are synced for resource quota
	I1204 23:39:01.102004       1 shared_informer.go:262] Caches are synced for resource quota
	I1204 23:39:01.251216       1 event.go:294] "Event occurred" object="kube-system/kube-proxy" fieldPath="" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kube-proxy-zgv42"
	I1204 23:39:01.524930       1 shared_informer.go:262] Caches are synced for garbage collector
	I1204 23:39:01.601283       1 shared_informer.go:262] Caches are synced for garbage collector
	I1204 23:39:01.601367       1 garbagecollector.go:158] Garbage collector: all resource monitors have synced. Proceeding to collect garbage
	I1204 23:39:01.801382       1 event.go:294] "Event occurred" object="kube-system/coredns" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set coredns-6d4b75cb6d to 2"
	I1204 23:39:01.900593       1 event.go:294] "Event occurred" object="kube-system/coredns-6d4b75cb6d" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: coredns-6d4b75cb6d-nf9j2"
	I1204 23:39:01.906320       1 event.go:294] "Event occurred" object="kube-system/coredns-6d4b75cb6d" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: coredns-6d4b75cb6d-ghbr5"
	
	
	==> kube-proxy [8f7d833df5c3] <==
	I1204 23:39:02.334641       1 node.go:163] Successfully retrieved node IP: 10.0.2.15
	I1204 23:39:02.334676       1 server_others.go:138] "Detected node IP" address="10.0.2.15"
	I1204 23:39:02.334710       1 server_others.go:578] "Unknown proxy mode, assuming iptables proxy" proxyMode=""
	I1204 23:39:02.345770       1 server_others.go:199] "kube-proxy running in single-stack mode, this ipFamily is not supported" ipFamily=IPv6
	I1204 23:39:02.345782       1 server_others.go:206] "Using iptables Proxier"
	I1204 23:39:02.345842       1 proxier.go:259] "Setting route_localnet=1, use nodePortAddresses to filter loopback addresses for NodePorts to skip it https://issues.k8s.io/90259"
	I1204 23:39:02.345976       1 server.go:661] "Version info" version="v1.24.1"
	I1204 23:39:02.345985       1 server.go:663] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1204 23:39:02.346333       1 config.go:317] "Starting service config controller"
	I1204 23:39:02.346351       1 shared_informer.go:255] Waiting for caches to sync for service config
	I1204 23:39:02.346364       1 config.go:226] "Starting endpoint slice config controller"
	I1204 23:39:02.346379       1 shared_informer.go:255] Waiting for caches to sync for endpoint slice config
	I1204 23:39:02.346708       1 config.go:444] "Starting node config controller"
	I1204 23:39:02.346735       1 shared_informer.go:255] Waiting for caches to sync for node config
	I1204 23:39:02.447134       1 shared_informer.go:262] Caches are synced for node config
	I1204 23:39:02.447138       1 shared_informer.go:262] Caches are synced for endpoint slice config
	I1204 23:39:02.447143       1 shared_informer.go:262] Caches are synced for service config
	
	
	==> kube-scheduler [a82022fd2242] <==
	W1204 23:38:45.962256       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E1204 23:38:45.962260       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W1204 23:38:45.962271       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E1204 23:38:45.962275       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W1204 23:38:45.962285       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E1204 23:38:45.962288       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	W1204 23:38:45.962313       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E1204 23:38:45.962320       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W1204 23:38:45.962394       1 reflector.go:324] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E1204 23:38:45.962401       1 reflector.go:138] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	W1204 23:38:46.782349       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E1204 23:38:46.782401       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	W1204 23:38:46.834341       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E1204 23:38:46.834384       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W1204 23:38:46.866221       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E1204 23:38:46.866294       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W1204 23:38:46.892497       1 reflector.go:324] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E1204 23:38:46.892694       1 reflector.go:138] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	W1204 23:38:46.907659       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E1204 23:38:46.907688       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W1204 23:38:46.916676       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E1204 23:38:46.916775       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W1204 23:38:46.970073       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E1204 23:38:46.970161       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	I1204 23:38:50.059825       1 shared_informer.go:262] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
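
The forbidden list/watch errors above are the usual scheduler startup race: it comes up before the apiserver has finished wiring RBAC for system:kube-scheduler, and the noise stops once the informer caches sync at 23:38:50. One hedged way to confirm the permissions did propagate is to replay a denied check via impersonation, using the same kubectl binary and kubeconfig paths this log already uses:

    package main

    import (
        "fmt"
        "os/exec"
    )

    func main() {
        // Replays one of the denied checks as the scheduler's identity; prints "yes"
        // once the RBAC bindings exist. Note `kubectl auth can-i` exits non-zero when
        // the answer is "no", so err alone does not mean the command itself failed.
        out, err := exec.Command("sudo", "/var/lib/minikube/binaries/v1.24.1/kubectl",
            "--kubeconfig=/var/lib/minikube/kubeconfig",
            "auth", "can-i", "list", "storageclasses.storage.k8s.io",
            "--as=system:kube-scheduler").CombinedOutput()
        fmt.Print(string(out))
        if err != nil {
            fmt.Println("can-i returned:", err)
        }
    }
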
	
	
	==> kubelet <==
	-- Journal begins at Wed 2024-12-04 23:33:59 UTC, ends at Wed 2024-12-04 23:43:06 UTC. --
	Dec 04 23:39:01 running-upgrade-084000 kubelet[11707]: I1204 23:39:01.033291   11707 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/d3fb3d00-a0af-45cd-89d6-a19b2fbb00eb-tmp\") pod \"storage-provisioner\" (UID: \"d3fb3d00-a0af-45cd-89d6-a19b2fbb00eb\") " pod="kube-system/storage-provisioner"
	Dec 04 23:39:01 running-upgrade-084000 kubelet[11707]: I1204 23:39:01.033319   11707 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-26p6h\" (UniqueName: \"kubernetes.io/projected/d3fb3d00-a0af-45cd-89d6-a19b2fbb00eb-kube-api-access-26p6h\") pod \"storage-provisioner\" (UID: \"d3fb3d00-a0af-45cd-89d6-a19b2fbb00eb\") " pod="kube-system/storage-provisioner"
	Dec 04 23:39:01 running-upgrade-084000 kubelet[11707]: E1204 23:39:01.137338   11707 projected.go:286] Couldn't get configMap kube-system/kube-root-ca.crt: configmap "kube-root-ca.crt" not found
	Dec 04 23:39:01 running-upgrade-084000 kubelet[11707]: E1204 23:39:01.137407   11707 projected.go:192] Error preparing data for projected volume kube-api-access-26p6h for pod kube-system/storage-provisioner: configmap "kube-root-ca.crt" not found
	Dec 04 23:39:01 running-upgrade-084000 kubelet[11707]: E1204 23:39:01.137447   11707 nestedpendingoperations.go:335] Operation for "{volumeName:kubernetes.io/projected/d3fb3d00-a0af-45cd-89d6-a19b2fbb00eb-kube-api-access-26p6h podName:d3fb3d00-a0af-45cd-89d6-a19b2fbb00eb nodeName:}" failed. No retries permitted until 2024-12-04 23:39:01.637433988 +0000 UTC m=+12.995557145 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-26p6h" (UniqueName: "kubernetes.io/projected/d3fb3d00-a0af-45cd-89d6-a19b2fbb00eb-kube-api-access-26p6h") pod "storage-provisioner" (UID: "d3fb3d00-a0af-45cd-89d6-a19b2fbb00eb") : configmap "kube-root-ca.crt" not found
	Dec 04 23:39:01 running-upgrade-084000 kubelet[11707]: I1204 23:39:01.255581   11707 topology_manager.go:200] "Topology Admit Handler"
	Dec 04 23:39:01 running-upgrade-084000 kubelet[11707]: I1204 23:39:01.337271   11707 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/e38c22fe-d2c4-4392-b504-608c74c72471-lib-modules\") pod \"kube-proxy-zgv42\" (UID: \"e38c22fe-d2c4-4392-b504-608c74c72471\") " pod="kube-system/kube-proxy-zgv42"
	Dec 04 23:39:01 running-upgrade-084000 kubelet[11707]: I1204 23:39:01.337301   11707 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pm2rt\" (UniqueName: \"kubernetes.io/projected/e38c22fe-d2c4-4392-b504-608c74c72471-kube-api-access-pm2rt\") pod \"kube-proxy-zgv42\" (UID: \"e38c22fe-d2c4-4392-b504-608c74c72471\") " pod="kube-system/kube-proxy-zgv42"
	Dec 04 23:39:01 running-upgrade-084000 kubelet[11707]: I1204 23:39:01.337312   11707 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/e38c22fe-d2c4-4392-b504-608c74c72471-xtables-lock\") pod \"kube-proxy-zgv42\" (UID: \"e38c22fe-d2c4-4392-b504-608c74c72471\") " pod="kube-system/kube-proxy-zgv42"
	Dec 04 23:39:01 running-upgrade-084000 kubelet[11707]: I1204 23:39:01.337334   11707 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/e38c22fe-d2c4-4392-b504-608c74c72471-kube-proxy\") pod \"kube-proxy-zgv42\" (UID: \"e38c22fe-d2c4-4392-b504-608c74c72471\") " pod="kube-system/kube-proxy-zgv42"
	Dec 04 23:39:01 running-upgrade-084000 kubelet[11707]: E1204 23:39:01.440281   11707 projected.go:286] Couldn't get configMap kube-system/kube-root-ca.crt: configmap "kube-root-ca.crt" not found
	Dec 04 23:39:01 running-upgrade-084000 kubelet[11707]: E1204 23:39:01.440303   11707 projected.go:192] Error preparing data for projected volume kube-api-access-pm2rt for pod kube-system/kube-proxy-zgv42: configmap "kube-root-ca.crt" not found
	Dec 04 23:39:01 running-upgrade-084000 kubelet[11707]: E1204 23:39:01.440328   11707 nestedpendingoperations.go:335] Operation for "{volumeName:kubernetes.io/projected/e38c22fe-d2c4-4392-b504-608c74c72471-kube-api-access-pm2rt podName:e38c22fe-d2c4-4392-b504-608c74c72471 nodeName:}" failed. No retries permitted until 2024-12-04 23:39:01.940318658 +0000 UTC m=+13.298441857 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-pm2rt" (UniqueName: "kubernetes.io/projected/e38c22fe-d2c4-4392-b504-608c74c72471-kube-api-access-pm2rt") pod "kube-proxy-zgv42" (UID: "e38c22fe-d2c4-4392-b504-608c74c72471") : configmap "kube-root-ca.crt" not found
	Dec 04 23:39:01 running-upgrade-084000 kubelet[11707]: E1204 23:39:01.638747   11707 projected.go:286] Couldn't get configMap kube-system/kube-root-ca.crt: configmap "kube-root-ca.crt" not found
	Dec 04 23:39:01 running-upgrade-084000 kubelet[11707]: E1204 23:39:01.638773   11707 projected.go:192] Error preparing data for projected volume kube-api-access-26p6h for pod kube-system/storage-provisioner: configmap "kube-root-ca.crt" not found
	Dec 04 23:39:01 running-upgrade-084000 kubelet[11707]: E1204 23:39:01.638802   11707 nestedpendingoperations.go:335] Operation for "{volumeName:kubernetes.io/projected/d3fb3d00-a0af-45cd-89d6-a19b2fbb00eb-kube-api-access-26p6h podName:d3fb3d00-a0af-45cd-89d6-a19b2fbb00eb nodeName:}" failed. No retries permitted until 2024-12-04 23:39:02.638792274 +0000 UTC m=+13.996915473 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "kube-api-access-26p6h" (UniqueName: "kubernetes.io/projected/d3fb3d00-a0af-45cd-89d6-a19b2fbb00eb-kube-api-access-26p6h") pod "storage-provisioner" (UID: "d3fb3d00-a0af-45cd-89d6-a19b2fbb00eb") : configmap "kube-root-ca.crt" not found
	Dec 04 23:39:01 running-upgrade-084000 kubelet[11707]: I1204 23:39:01.905894   11707 topology_manager.go:200] "Topology Admit Handler"
	Dec 04 23:39:01 running-upgrade-084000 kubelet[11707]: I1204 23:39:01.914448   11707 topology_manager.go:200] "Topology Admit Handler"
	Dec 04 23:39:02 running-upgrade-084000 kubelet[11707]: I1204 23:39:02.041008   11707 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/7cb8ff4e-d8a0-4ca3-8beb-8cc175bfc37e-config-volume\") pod \"coredns-6d4b75cb6d-nf9j2\" (UID: \"7cb8ff4e-d8a0-4ca3-8beb-8cc175bfc37e\") " pod="kube-system/coredns-6d4b75cb6d-nf9j2"
	Dec 04 23:39:02 running-upgrade-084000 kubelet[11707]: I1204 23:39:02.041084   11707 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g7cmq\" (UniqueName: \"kubernetes.io/projected/7474625c-2b12-45ec-8ffd-41be6a363166-kube-api-access-g7cmq\") pod \"coredns-6d4b75cb6d-ghbr5\" (UID: \"7474625c-2b12-45ec-8ffd-41be6a363166\") " pod="kube-system/coredns-6d4b75cb6d-ghbr5"
	Dec 04 23:39:02 running-upgrade-084000 kubelet[11707]: I1204 23:39:02.041105   11707 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/7474625c-2b12-45ec-8ffd-41be6a363166-config-volume\") pod \"coredns-6d4b75cb6d-ghbr5\" (UID: \"7474625c-2b12-45ec-8ffd-41be6a363166\") " pod="kube-system/coredns-6d4b75cb6d-ghbr5"
	Dec 04 23:39:02 running-upgrade-084000 kubelet[11707]: I1204 23:39:02.041138   11707 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wc6cn\" (UniqueName: \"kubernetes.io/projected/7cb8ff4e-d8a0-4ca3-8beb-8cc175bfc37e-kube-api-access-wc6cn\") pod \"coredns-6d4b75cb6d-nf9j2\" (UID: \"7cb8ff4e-d8a0-4ca3-8beb-8cc175bfc37e\") " pod="kube-system/coredns-6d4b75cb6d-nf9j2"
	Dec 04 23:39:02 running-upgrade-084000 kubelet[11707]: I1204 23:39:02.879495   11707 pod_container_deletor.go:79] "Container not found in pod's containers" containerID="a95f296206ee7c82b4d1c98b9d0ef4f5f8efeb48a26fefb21ab59cb2b08f5b0a"
	Dec 04 23:42:50 running-upgrade-084000 kubelet[11707]: I1204 23:42:50.137434   11707 scope.go:110] "RemoveContainer" containerID="c8ddfa0078478d3e2aa449a7ea65aaa7024bb6ca7bfa1e258abe07b254d9ce2a"
	Dec 04 23:42:50 running-upgrade-084000 kubelet[11707]: I1204 23:42:50.156708   11707 scope.go:110] "RemoveContainer" containerID="0a3178099d3189d7095d85fddee0236f758a884c40ec37f4bf4017b63b30a1ea"
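Note: the MountVolume.SetUp failures above are likewise transient: kube-controller-manager publishes the kube-root-ca.crt ConfigMap into each namespace shortly after boot, and the kubelet retries with backoff (500ms, then 1s, per the durationBeforeRetry fields) until it appears. A quick check once the cluster is up, assuming kubectl points at this cluster:

	kubectl -n kube-system get configmap kube-root-ca.crt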
	
	
	==> storage-provisioner [bdb47901ddb7] <==
	I1204 23:39:03.000503       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1204 23:39:03.009183       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1204 23:39:03.009216       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I1204 23:39:03.015633       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1204 23:39:03.016309       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"9eaf75c3-ca9e-40b6-b716-22d29547f2ef", APIVersion:"v1", ResourceVersion:"373", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' running-upgrade-084000_5cc86ae1-b3b0-45f8-9050-439112b5a9a2 became leader
	I1204 23:39:03.016346       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_running-upgrade-084000_5cc86ae1-b3b0-45f8-9050-439112b5a9a2!
	I1204 23:39:03.117337       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_running-upgrade-084000_5cc86ae1-b3b0-45f8-9050-439112b5a9a2!
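Note: the provisioner serializes itself through leader election on the kube-system/k8s.io-minikube-hostpath Endpoints object named in the event above. To inspect the current lease holder, assuming kubectl points at this cluster:

	kubectl -n kube-system get endpoints k8s.io-minikube-hostpath -o yaml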
	

-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.APIServer}} -p running-upgrade-084000 -n running-upgrade-084000
helpers_test.go:254: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.APIServer}} -p running-upgrade-084000 -n running-upgrade-084000: exit status 2 (15.677031083s)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:254: status error: exit status 2 (may be ok)
helpers_test.go:256: "running-upgrade-084000" apiserver is not running, skipping kubectl commands (state="Stopped")
helpers_test.go:175: Cleaning up "running-upgrade-084000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p running-upgrade-084000
--- FAIL: TestRunningBinaryUpgrade (590.27s)

TestKubernetesUpgrade (17.18s)

=== RUN   TestKubernetesUpgrade
=== PAUSE TestKubernetesUpgrade

=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:222: (dbg) Run:  out/minikube-darwin-arm64 start -p kubernetes-upgrade-989000 --memory=2200 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=qemu2 
version_upgrade_test.go:222: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p kubernetes-upgrade-989000 --memory=2200 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=qemu2 : exit status 80 (9.862494417s)

-- stdout --
	* [kubernetes-upgrade-989000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	  - MINIKUBE_LOCATION=20045
	  - KUBECONFIG=/Users/jenkins/minikube-integration/20045-6982/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/20045-6982/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "kubernetes-upgrade-989000" primary control-plane node in "kubernetes-upgrade-989000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "kubernetes-upgrade-989000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I1204 15:36:31.980371   10099 out.go:345] Setting OutFile to fd 1 ...
	I1204 15:36:31.980537   10099 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1204 15:36:31.980543   10099 out.go:358] Setting ErrFile to fd 2...
	I1204 15:36:31.980546   10099 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1204 15:36:31.980666   10099 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/20045-6982/.minikube/bin
	I1204 15:36:31.981938   10099 out.go:352] Setting JSON to false
	I1204 15:36:32.000148   10099 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":5761,"bootTime":1733349630,"procs":551,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"15.0.1","kernelVersion":"24.0.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W1204 15:36:32.000220   10099 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I1204 15:36:32.006164   10099 out.go:177] * [kubernetes-upgrade-989000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	I1204 15:36:32.014140   10099 out.go:177]   - MINIKUBE_LOCATION=20045
	I1204 15:36:32.014219   10099 notify.go:220] Checking for updates...
	I1204 15:36:32.021860   10099 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/20045-6982/kubeconfig
	I1204 15:36:32.025017   10099 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I1204 15:36:32.029040   10099 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1204 15:36:32.030327   10099 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/20045-6982/.minikube
	I1204 15:36:32.033063   10099 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1204 15:36:32.036432   10099 config.go:182] Loaded profile config "multinode-093000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.2
	I1204 15:36:32.036506   10099 config.go:182] Loaded profile config "running-upgrade-084000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I1204 15:36:32.036559   10099 driver.go:394] Setting default libvirt URI to qemu:///system
	I1204 15:36:32.037998   10099 out.go:177] * Using the qemu2 driver based on user configuration
	I1204 15:36:32.045011   10099 start.go:297] selected driver: qemu2
	I1204 15:36:32.045020   10099 start.go:901] validating driver "qemu2" against <nil>
	I1204 15:36:32.045027   10099 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1204 15:36:32.047690   10099 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I1204 15:36:32.051009   10099 out.go:177] * Automatically selected the socket_vmnet network
	I1204 15:36:32.054262   10099 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I1204 15:36:32.054288   10099 cni.go:84] Creating CNI manager for ""
	I1204 15:36:32.054312   10099 cni.go:162] CNI unnecessary in this configuration, recommending no CNI
	I1204 15:36:32.054350   10099 start.go:340] cluster config:
	{Name:kubernetes-upgrade-989000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:kubernetes-upgrade-989000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1204 15:36:32.059071   10099 iso.go:125] acquiring lock: {Name:mk798ab7885b095925d55c843b0600e7ea181045 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1204 15:36:32.069072   10099 out.go:177] * Starting "kubernetes-upgrade-989000" primary control-plane node in "kubernetes-upgrade-989000" cluster
	I1204 15:36:32.072021   10099 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime docker
	I1204 15:36:32.072034   10099 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/20045-6982/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4
	I1204 15:36:32.072039   10099 cache.go:56] Caching tarball of preloaded images
	I1204 15:36:32.072109   10099 preload.go:172] Found /Users/jenkins/minikube-integration/20045-6982/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I1204 15:36:32.072114   10099 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on docker
	I1204 15:36:32.072164   10099 profile.go:143] Saving config to /Users/jenkins/minikube-integration/20045-6982/.minikube/profiles/kubernetes-upgrade-989000/config.json ...
	I1204 15:36:32.072174   10099 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/20045-6982/.minikube/profiles/kubernetes-upgrade-989000/config.json: {Name:mk00dd448984b837f45a40560f94a45225989cc5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1204 15:36:32.072445   10099 start.go:360] acquireMachinesLock for kubernetes-upgrade-989000: {Name:mkab5eeb7049d26d12029fc21a1411c94c7c7493 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1204 15:36:32.072497   10099 start.go:364] duration metric: took 44.625µs to acquireMachinesLock for "kubernetes-upgrade-989000"
	I1204 15:36:32.072511   10099 start.go:93] Provisioning new machine with config: &{Name:kubernetes-upgrade-989000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19917/minikube-v1.34.0-1730913550-19917-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:kubernetes-upgrade-989000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1204 15:36:32.072536   10099 start.go:125] createHost starting for "" (driver="qemu2")
	I1204 15:36:32.081068   10099 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I1204 15:36:32.106838   10099 start.go:159] libmachine.API.Create for "kubernetes-upgrade-989000" (driver="qemu2")
	I1204 15:36:32.106873   10099 client.go:168] LocalClient.Create starting
	I1204 15:36:32.106962   10099 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/20045-6982/.minikube/certs/ca.pem
	I1204 15:36:32.107003   10099 main.go:141] libmachine: Decoding PEM data...
	I1204 15:36:32.107015   10099 main.go:141] libmachine: Parsing certificate...
	I1204 15:36:32.107055   10099 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/20045-6982/.minikube/certs/cert.pem
	I1204 15:36:32.107092   10099 main.go:141] libmachine: Decoding PEM data...
	I1204 15:36:32.107100   10099 main.go:141] libmachine: Parsing certificate...
	I1204 15:36:32.107448   10099 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/20045-6982/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/20045-6982/.minikube/cache/iso/arm64/minikube-v1.34.0-1730913550-19917-arm64.iso...
	I1204 15:36:32.276518   10099 main.go:141] libmachine: Creating SSH key...
	I1204 15:36:32.371703   10099 main.go:141] libmachine: Creating Disk image...
	I1204 15:36:32.371712   10099 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I1204 15:36:32.371945   10099 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/20045-6982/.minikube/machines/kubernetes-upgrade-989000/disk.qcow2.raw /Users/jenkins/minikube-integration/20045-6982/.minikube/machines/kubernetes-upgrade-989000/disk.qcow2
	I1204 15:36:32.382608   10099 main.go:141] libmachine: STDOUT: 
	I1204 15:36:32.382623   10099 main.go:141] libmachine: STDERR: 
	I1204 15:36:32.382678   10099 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/20045-6982/.minikube/machines/kubernetes-upgrade-989000/disk.qcow2 +20000M
	I1204 15:36:32.391566   10099 main.go:141] libmachine: STDOUT: Image resized.
	
	I1204 15:36:32.391582   10099 main.go:141] libmachine: STDERR: 
	I1204 15:36:32.391595   10099 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/20045-6982/.minikube/machines/kubernetes-upgrade-989000/disk.qcow2.raw and /Users/jenkins/minikube-integration/20045-6982/.minikube/machines/kubernetes-upgrade-989000/disk.qcow2
	I1204 15:36:32.391601   10099 main.go:141] libmachine: Starting QEMU VM...
	I1204 15:36:32.391616   10099 qemu.go:418] Using hvf for hardware acceleration
	I1204 15:36:32.391650   10099 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/20045-6982/.minikube/machines/kubernetes-upgrade-989000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/20045-6982/.minikube/machines/kubernetes-upgrade-989000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/20045-6982/.minikube/machines/kubernetes-upgrade-989000/qemu.pid -device virtio-net-pci,netdev=net0,mac=5e:c4:af:cb:73:e1 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/20045-6982/.minikube/machines/kubernetes-upgrade-989000/disk.qcow2
	I1204 15:36:32.393899   10099 main.go:141] libmachine: STDOUT: 
	I1204 15:36:32.393913   10099 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1204 15:36:32.393934   10099 client.go:171] duration metric: took 287.052166ms to LocalClient.Create
	I1204 15:36:34.396154   10099 start.go:128] duration metric: took 2.323567041s to createHost
	I1204 15:36:34.396315   10099 start.go:83] releasing machines lock for "kubernetes-upgrade-989000", held for 2.323763s
	W1204 15:36:34.396400   10099 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1204 15:36:34.409932   10099 out.go:177] * Deleting "kubernetes-upgrade-989000" in qemu2 ...
	W1204 15:36:34.436282   10099 out.go:270] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1204 15:36:34.436307   10099 start.go:729] Will try again in 5 seconds ...
	I1204 15:36:39.438458   10099 start.go:360] acquireMachinesLock for kubernetes-upgrade-989000: {Name:mkab5eeb7049d26d12029fc21a1411c94c7c7493 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1204 15:36:39.438693   10099 start.go:364] duration metric: took 203.083µs to acquireMachinesLock for "kubernetes-upgrade-989000"
	I1204 15:36:39.438751   10099 start.go:93] Provisioning new machine with config: &{Name:kubernetes-upgrade-989000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19917/minikube-v1.34.0-1730913550-19917-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:kubernetes-upgrade-989000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1204 15:36:39.438830   10099 start.go:125] createHost starting for "" (driver="qemu2")
	I1204 15:36:39.445206   10099 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I1204 15:36:39.465310   10099 start.go:159] libmachine.API.Create for "kubernetes-upgrade-989000" (driver="qemu2")
	I1204 15:36:39.465344   10099 client.go:168] LocalClient.Create starting
	I1204 15:36:39.465424   10099 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/20045-6982/.minikube/certs/ca.pem
	I1204 15:36:39.465478   10099 main.go:141] libmachine: Decoding PEM data...
	I1204 15:36:39.465488   10099 main.go:141] libmachine: Parsing certificate...
	I1204 15:36:39.465526   10099 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/20045-6982/.minikube/certs/cert.pem
	I1204 15:36:39.465561   10099 main.go:141] libmachine: Decoding PEM data...
	I1204 15:36:39.465568   10099 main.go:141] libmachine: Parsing certificate...
	I1204 15:36:39.465983   10099 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/20045-6982/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/20045-6982/.minikube/cache/iso/arm64/minikube-v1.34.0-1730913550-19917-arm64.iso...
	I1204 15:36:39.660002   10099 main.go:141] libmachine: Creating SSH key...
	I1204 15:36:39.734603   10099 main.go:141] libmachine: Creating Disk image...
	I1204 15:36:39.734614   10099 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I1204 15:36:39.734825   10099 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/20045-6982/.minikube/machines/kubernetes-upgrade-989000/disk.qcow2.raw /Users/jenkins/minikube-integration/20045-6982/.minikube/machines/kubernetes-upgrade-989000/disk.qcow2
	I1204 15:36:39.745196   10099 main.go:141] libmachine: STDOUT: 
	I1204 15:36:39.745213   10099 main.go:141] libmachine: STDERR: 
	I1204 15:36:39.745294   10099 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/20045-6982/.minikube/machines/kubernetes-upgrade-989000/disk.qcow2 +20000M
	I1204 15:36:39.754166   10099 main.go:141] libmachine: STDOUT: Image resized.
	
	I1204 15:36:39.754185   10099 main.go:141] libmachine: STDERR: 
	I1204 15:36:39.754203   10099 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/20045-6982/.minikube/machines/kubernetes-upgrade-989000/disk.qcow2.raw and /Users/jenkins/minikube-integration/20045-6982/.minikube/machines/kubernetes-upgrade-989000/disk.qcow2
	I1204 15:36:39.754208   10099 main.go:141] libmachine: Starting QEMU VM...
	I1204 15:36:39.754218   10099 qemu.go:418] Using hvf for hardware acceleration
	I1204 15:36:39.754245   10099 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/20045-6982/.minikube/machines/kubernetes-upgrade-989000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/20045-6982/.minikube/machines/kubernetes-upgrade-989000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/20045-6982/.minikube/machines/kubernetes-upgrade-989000/qemu.pid -device virtio-net-pci,netdev=net0,mac=8a:4c:28:b0:b1:97 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/20045-6982/.minikube/machines/kubernetes-upgrade-989000/disk.qcow2
	I1204 15:36:39.756163   10099 main.go:141] libmachine: STDOUT: 
	I1204 15:36:39.756180   10099 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1204 15:36:39.756193   10099 client.go:171] duration metric: took 290.841709ms to LocalClient.Create
	I1204 15:36:41.758414   10099 start.go:128] duration metric: took 2.319531708s to createHost
	I1204 15:36:41.758537   10099 start.go:83] releasing machines lock for "kubernetes-upgrade-989000", held for 2.319794833s
	W1204 15:36:41.758916   10099 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p kubernetes-upgrade-989000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p kubernetes-upgrade-989000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1204 15:36:41.773642   10099 out.go:201] 
	W1204 15:36:41.777771   10099 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W1204 15:36:41.777805   10099 out.go:270] * 
	* 
	W1204 15:36:41.780807   10099 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1204 15:36:41.797417   10099 out.go:201] 

** /stderr **
version_upgrade_test.go:224: failed to start minikube HEAD with oldest k8s version: out/minikube-darwin-arm64 start -p kubernetes-upgrade-989000 --memory=2200 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=qemu2 : exit status 80
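Note: every qemu2 start in this run fails at the same step: socket_vmnet_client cannot reach the /var/run/socket_vmnet socket, so the VM is never created. A minimal host-side triage, assuming the Homebrew-installed socket_vmnet that the qemu2 driver is configured to use (the Homebrew service name is an assumption):

	ls -l /var/run/socket_vmnet          # does the socket exist?
	sudo lsof /var/run/socket_vmnet      # is a daemon holding it?
	sudo brew services restart socket_vmnet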
version_upgrade_test.go:227: (dbg) Run:  out/minikube-darwin-arm64 stop -p kubernetes-upgrade-989000
version_upgrade_test.go:227: (dbg) Done: out/minikube-darwin-arm64 stop -p kubernetes-upgrade-989000: (1.869945625s)
version_upgrade_test.go:232: (dbg) Run:  out/minikube-darwin-arm64 -p kubernetes-upgrade-989000 status --format={{.Host}}
version_upgrade_test.go:232: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p kubernetes-upgrade-989000 status --format={{.Host}}: exit status 7 (55.999917ms)

-- stdout --
	Stopped

-- /stdout --
version_upgrade_test.go:234: status error: exit status 7 (may be ok)
version_upgrade_test.go:243: (dbg) Run:  out/minikube-darwin-arm64 start -p kubernetes-upgrade-989000 --memory=2200 --kubernetes-version=v1.31.2 --alsologtostderr -v=1 --driver=qemu2 
version_upgrade_test.go:243: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p kubernetes-upgrade-989000 --memory=2200 --kubernetes-version=v1.31.2 --alsologtostderr -v=1 --driver=qemu2 : exit status 80 (5.200001208s)

-- stdout --
	* [kubernetes-upgrade-989000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	  - MINIKUBE_LOCATION=20045
	  - KUBECONFIG=/Users/jenkins/minikube-integration/20045-6982/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/20045-6982/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting "kubernetes-upgrade-989000" primary control-plane node in "kubernetes-upgrade-989000" cluster
	* Restarting existing qemu2 VM for "kubernetes-upgrade-989000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "kubernetes-upgrade-989000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I1204 15:36:43.771280   10131 out.go:345] Setting OutFile to fd 1 ...
	I1204 15:36:43.771435   10131 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1204 15:36:43.771439   10131 out.go:358] Setting ErrFile to fd 2...
	I1204 15:36:43.771441   10131 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1204 15:36:43.771590   10131 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/20045-6982/.minikube/bin
	I1204 15:36:43.772742   10131 out.go:352] Setting JSON to false
	I1204 15:36:43.792042   10131 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":5773,"bootTime":1733349630,"procs":551,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"15.0.1","kernelVersion":"24.0.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W1204 15:36:43.792118   10131 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I1204 15:36:43.797440   10131 out.go:177] * [kubernetes-upgrade-989000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	I1204 15:36:43.805443   10131 out.go:177]   - MINIKUBE_LOCATION=20045
	I1204 15:36:43.805544   10131 notify.go:220] Checking for updates...
	I1204 15:36:43.813366   10131 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/20045-6982/kubeconfig
	I1204 15:36:43.817450   10131 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I1204 15:36:43.820361   10131 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1204 15:36:43.823362   10131 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/20045-6982/.minikube
	I1204 15:36:43.826418   10131 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1204 15:36:43.827986   10131 config.go:182] Loaded profile config "kubernetes-upgrade-989000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.20.0
	I1204 15:36:43.828254   10131 driver.go:394] Setting default libvirt URI to qemu:///system
	I1204 15:36:43.832397   10131 out.go:177] * Using the qemu2 driver based on existing profile
	I1204 15:36:43.839264   10131 start.go:297] selected driver: qemu2
	I1204 15:36:43.839270   10131 start.go:901] validating driver "qemu2" against &{Name:kubernetes-upgrade-989000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19917/minikube-v1.34.0-1730913550-19917-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:kubernetes-upgrade-989000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1204 15:36:43.839312   10131 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1204 15:36:43.841968   10131 cni.go:84] Creating CNI manager for ""
	I1204 15:36:43.841998   10131 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I1204 15:36:43.842016   10131 start.go:340] cluster config:
	{Name:kubernetes-upgrade-989000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19917/minikube-v1.34.0-1730913550-19917-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:kubernetes-upgrade-989000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1204 15:36:43.846117   10131 iso.go:125] acquiring lock: {Name:mk798ab7885b095925d55c843b0600e7ea181045 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1204 15:36:43.854431   10131 out.go:177] * Starting "kubernetes-upgrade-989000" primary control-plane node in "kubernetes-upgrade-989000" cluster
	I1204 15:36:43.858396   10131 preload.go:131] Checking if preload exists for k8s version v1.31.2 and runtime docker
	I1204 15:36:43.858412   10131 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/20045-6982/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-docker-overlay2-arm64.tar.lz4
	I1204 15:36:43.858418   10131 cache.go:56] Caching tarball of preloaded images
	I1204 15:36:43.858492   10131 preload.go:172] Found /Users/jenkins/minikube-integration/20045-6982/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I1204 15:36:43.858497   10131 cache.go:59] Finished verifying existence of preloaded tar for v1.31.2 on docker
	I1204 15:36:43.858555   10131 profile.go:143] Saving config to /Users/jenkins/minikube-integration/20045-6982/.minikube/profiles/kubernetes-upgrade-989000/config.json ...
	I1204 15:36:43.859019   10131 start.go:360] acquireMachinesLock for kubernetes-upgrade-989000: {Name:mkab5eeb7049d26d12029fc21a1411c94c7c7493 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1204 15:36:43.859047   10131 start.go:364] duration metric: took 22.708µs to acquireMachinesLock for "kubernetes-upgrade-989000"
	I1204 15:36:43.859057   10131 start.go:96] Skipping create...Using existing machine configuration
	I1204 15:36:43.859060   10131 fix.go:54] fixHost starting: 
	I1204 15:36:43.859165   10131 fix.go:112] recreateIfNeeded on kubernetes-upgrade-989000: state=Stopped err=<nil>
	W1204 15:36:43.859172   10131 fix.go:138] unexpected machine state, will restart: <nil>
	I1204 15:36:43.866424   10131 out.go:177] * Restarting existing qemu2 VM for "kubernetes-upgrade-989000" ...
	I1204 15:36:43.870387   10131 qemu.go:418] Using hvf for hardware acceleration
	I1204 15:36:43.870424   10131 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/20045-6982/.minikube/machines/kubernetes-upgrade-989000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/20045-6982/.minikube/machines/kubernetes-upgrade-989000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/20045-6982/.minikube/machines/kubernetes-upgrade-989000/qemu.pid -device virtio-net-pci,netdev=net0,mac=8a:4c:28:b0:b1:97 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/20045-6982/.minikube/machines/kubernetes-upgrade-989000/disk.qcow2
	I1204 15:36:43.872457   10131 main.go:141] libmachine: STDOUT: 
	I1204 15:36:43.872474   10131 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1204 15:36:43.872502   10131 fix.go:56] duration metric: took 13.438875ms for fixHost
	I1204 15:36:43.872507   10131 start.go:83] releasing machines lock for "kubernetes-upgrade-989000", held for 13.456ms
	W1204 15:36:43.872512   10131 start.go:714] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W1204 15:36:43.872551   10131 out.go:270] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1204 15:36:43.872556   10131 start.go:729] Will try again in 5 seconds ...
	I1204 15:36:48.874875   10131 start.go:360] acquireMachinesLock for kubernetes-upgrade-989000: {Name:mkab5eeb7049d26d12029fc21a1411c94c7c7493 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1204 15:36:48.875433   10131 start.go:364] duration metric: took 423.083µs to acquireMachinesLock for "kubernetes-upgrade-989000"
	I1204 15:36:48.875646   10131 start.go:96] Skipping create...Using existing machine configuration
	I1204 15:36:48.875669   10131 fix.go:54] fixHost starting: 
	I1204 15:36:48.876386   10131 fix.go:112] recreateIfNeeded on kubernetes-upgrade-989000: state=Stopped err=<nil>
	W1204 15:36:48.876413   10131 fix.go:138] unexpected machine state, will restart: <nil>
	I1204 15:36:48.886124   10131 out.go:177] * Restarting existing qemu2 VM for "kubernetes-upgrade-989000" ...
	I1204 15:36:48.890145   10131 qemu.go:418] Using hvf for hardware acceleration
	I1204 15:36:48.890444   10131 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/20045-6982/.minikube/machines/kubernetes-upgrade-989000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/20045-6982/.minikube/machines/kubernetes-upgrade-989000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/20045-6982/.minikube/machines/kubernetes-upgrade-989000/qemu.pid -device virtio-net-pci,netdev=net0,mac=8a:4c:28:b0:b1:97 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/20045-6982/.minikube/machines/kubernetes-upgrade-989000/disk.qcow2
	I1204 15:36:48.901117   10131 main.go:141] libmachine: STDOUT: 
	I1204 15:36:48.901173   10131 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1204 15:36:48.901260   10131 fix.go:56] duration metric: took 25.594666ms for fixHost
	I1204 15:36:48.901278   10131 start.go:83] releasing machines lock for "kubernetes-upgrade-989000", held for 25.776042ms
	W1204 15:36:48.901472   10131 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p kubernetes-upgrade-989000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p kubernetes-upgrade-989000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1204 15:36:48.910151   10131 out.go:201] 
	W1204 15:36:48.913255   10131 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W1204 15:36:48.913282   10131 out.go:270] * 
	* 
	W1204 15:36:48.915632   10131 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1204 15:36:48.925093   10131 out.go:201] 

** /stderr **
version_upgrade_test.go:245: failed to upgrade with newest k8s version. args: out/minikube-darwin-arm64 start -p kubernetes-upgrade-989000 --memory=2200 --kubernetes-version=v1.31.2 --alsologtostderr -v=1 --driver=qemu2  : exit status 80
version_upgrade_test.go:248: (dbg) Run:  kubectl --context kubernetes-upgrade-989000 version --output=json
version_upgrade_test.go:248: (dbg) Non-zero exit: kubectl --context kubernetes-upgrade-989000 version --output=json: exit status 1 (66.790459ms)

** stderr ** 
	error: context "kubernetes-upgrade-989000" does not exist

** /stderr **
version_upgrade_test.go:250: error running kubectl: exit status 1
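Note: both start attempts died before minikube could write cluster credentials, so no kubernetes-upgrade-989000 context exists in the kubeconfig; the kubectl failure is a downstream symptom, not a separate bug. To list the contexts that do exist, assuming the default kubeconfig:

	kubectl config get-contexts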
panic.go:629: *** TestKubernetesUpgrade FAILED at 2024-12-04 15:36:49.00819 -0800 PST m=+954.033320626
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p kubernetes-upgrade-989000 -n kubernetes-upgrade-989000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p kubernetes-upgrade-989000 -n kubernetes-upgrade-989000: exit status 7 (36.456208ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "kubernetes-upgrade-989000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:175: Cleaning up "kubernetes-upgrade-989000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p kubernetes-upgrade-989000
--- FAIL: TestKubernetesUpgrade (17.18s)

TestHyperkitDriverSkipUpgrade/upgrade-v1.11.0-to-current (0.98s)

=== RUN   TestHyperkitDriverSkipUpgrade/upgrade-v1.11.0-to-current
* minikube v1.34.0 on darwin (arm64)
- MINIKUBE_LOCATION=20045
- KUBECONFIG=/Users/jenkins/minikube-integration/20045-6982/kubeconfig
- MINIKUBE_BIN=out/minikube-darwin-arm64
- MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
- MINIKUBE_FORCE_SYSTEMD=
- MINIKUBE_HOME=/var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestHyperkitDriverSkipUpgradeupgrade-v1.11.0-to-current1912513496/001
* Using the hyperkit driver based on user configuration

X Exiting due to DRV_UNSUPPORTED_OS: The driver 'hyperkit' is not supported on darwin/arm64

driver_install_or_update_test.go:209: failed to run minikube. got: exit status 56
--- FAIL: TestHyperkitDriverSkipUpgrade/upgrade-v1.11.0-to-current (0.98s)

TestHyperkitDriverSkipUpgrade/upgrade-v1.2.0-to-current (1.14s)

=== RUN   TestHyperkitDriverSkipUpgrade/upgrade-v1.2.0-to-current
* minikube v1.34.0 on darwin (arm64)
- MINIKUBE_LOCATION=20045
- KUBECONFIG=/Users/jenkins/minikube-integration/20045-6982/kubeconfig
- MINIKUBE_BIN=out/minikube-darwin-arm64
- MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
- MINIKUBE_FORCE_SYSTEMD=
- MINIKUBE_HOME=/var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestHyperkitDriverSkipUpgradeupgrade-v1.2.0-to-current1496803345/001
* Using the hyperkit driver based on user configuration

X Exiting due to DRV_UNSUPPORTED_OS: The driver 'hyperkit' is not supported on darwin/arm64

driver_install_or_update_test.go:209: failed to run minikube. got: exit status 56
--- FAIL: TestHyperkitDriverSkipUpgrade/upgrade-v1.2.0-to-current (1.14s)

TestStoppedBinaryUpgrade/Upgrade (574.74s)

=== RUN   TestStoppedBinaryUpgrade/Upgrade
version_upgrade_test.go:183: (dbg) Run:  /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/minikube-v1.26.0.4238768832 start -p stopped-upgrade-377000 --memory=2200 --vm-driver=qemu2 
version_upgrade_test.go:183: (dbg) Done: /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/minikube-v1.26.0.4238768832 start -p stopped-upgrade-377000 --memory=2200 --vm-driver=qemu2 : (39.411782083s)
version_upgrade_test.go:192: (dbg) Run:  /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/minikube-v1.26.0.4238768832 -p stopped-upgrade-377000 stop
version_upgrade_test.go:192: (dbg) Done: /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/minikube-v1.26.0.4238768832 -p stopped-upgrade-377000 stop: (12.11099425s)
version_upgrade_test.go:198: (dbg) Run:  out/minikube-darwin-arm64 start -p stopped-upgrade-377000 --memory=2200 --alsologtostderr -v=1 --driver=qemu2 
version_upgrade_test.go:198: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p stopped-upgrade-377000 --memory=2200 --alsologtostderr -v=1 --driver=qemu2 : exit status 80 (8m43.132858625s)

-- stdout --
	* [stopped-upgrade-377000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	  - MINIKUBE_LOCATION=20045
	  - KUBECONFIG=/Users/jenkins/minikube-integration/20045-6982/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/20045-6982/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Kubernetes 1.31.2 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.31.2
	* Using the qemu2 driver based on existing profile
	* Starting "stopped-upgrade-377000" primary control-plane node in "stopped-upgrade-377000" cluster
	* Restarting existing qemu2 VM for "stopped-upgrade-377000" ...
	* Preparing Kubernetes v1.24.1 on Docker 20.10.16 ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	  - Configuring RBAC rules ...
	* Configuring bridge CNI (Container Networking Interface) ...
	* Verifying Kubernetes components...
	  - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	* Enabled addons: storage-provisioner
	
	

-- /stdout --
** stderr ** 
	I1204 15:37:41.779892   10206 out.go:345] Setting OutFile to fd 1 ...
	I1204 15:37:41.780089   10206 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1204 15:37:41.780094   10206 out.go:358] Setting ErrFile to fd 2...
	I1204 15:37:41.780096   10206 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1204 15:37:41.780250   10206 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/20045-6982/.minikube/bin
	I1204 15:37:41.781433   10206 out.go:352] Setting JSON to false
	I1204 15:37:41.801300   10206 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":5831,"bootTime":1733349630,"procs":549,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"15.0.1","kernelVersion":"24.0.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W1204 15:37:41.801407   10206 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I1204 15:37:41.805942   10206 out.go:177] * [stopped-upgrade-377000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	I1204 15:37:41.816847   10206 out.go:177]   - MINIKUBE_LOCATION=20045
	I1204 15:37:41.816851   10206 notify.go:220] Checking for updates...
	I1204 15:37:41.824813   10206 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/20045-6982/kubeconfig
	I1204 15:37:41.828794   10206 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I1204 15:37:41.832832   10206 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1204 15:37:41.835818   10206 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/20045-6982/.minikube
	I1204 15:37:41.839840   10206 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1204 15:37:41.844087   10206 config.go:182] Loaded profile config "stopped-upgrade-377000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I1204 15:37:41.848813   10206 out.go:177] * Kubernetes 1.31.2 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.31.2
	I1204 15:37:41.852814   10206 driver.go:394] Setting default libvirt URI to qemu:///system
	I1204 15:37:41.856791   10206 out.go:177] * Using the qemu2 driver based on existing profile
	I1204 15:37:41.864814   10206 start.go:297] selected driver: qemu2
	I1204 15:37:41.864820   10206 start.go:901] validating driver "qemu2" against &{Name:stopped-upgrade-377000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.26.0-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:61834 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.1 ClusterName:stopped-upgrade-377000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:0s}
	I1204 15:37:41.864863   10206 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1204 15:37:41.867855   10206 cni.go:84] Creating CNI manager for ""
	I1204 15:37:41.867885   10206 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I1204 15:37:41.867923   10206 start.go:340] cluster config:
	{Name:stopped-upgrade-377000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.26.0-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:61834 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.1 ClusterName:stopped-upgrade-377000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:0s}
	I1204 15:37:41.867986   10206 iso.go:125] acquiring lock: {Name:mk798ab7885b095925d55c843b0600e7ea181045 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1204 15:37:41.876880   10206 out.go:177] * Starting "stopped-upgrade-377000" primary control-plane node in "stopped-upgrade-377000" cluster
	I1204 15:37:41.880840   10206 preload.go:131] Checking if preload exists for k8s version v1.24.1 and runtime docker
	I1204 15:37:41.880866   10206 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/20045-6982/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.1-docker-overlay2-arm64.tar.lz4
	I1204 15:37:41.880877   10206 cache.go:56] Caching tarball of preloaded images
	I1204 15:37:41.880957   10206 preload.go:172] Found /Users/jenkins/minikube-integration/20045-6982/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I1204 15:37:41.880964   10206 cache.go:59] Finished verifying existence of preloaded tar for v1.24.1 on docker
	I1204 15:37:41.881049   10206 profile.go:143] Saving config to /Users/jenkins/minikube-integration/20045-6982/.minikube/profiles/stopped-upgrade-377000/config.json ...
	I1204 15:37:41.881610   10206 start.go:360] acquireMachinesLock for stopped-upgrade-377000: {Name:mkab5eeb7049d26d12029fc21a1411c94c7c7493 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1204 15:37:41.881642   10206 start.go:364] duration metric: took 24.75µs to acquireMachinesLock for "stopped-upgrade-377000"
	I1204 15:37:41.881653   10206 start.go:96] Skipping create...Using existing machine configuration
	I1204 15:37:41.881657   10206 fix.go:54] fixHost starting: 
	I1204 15:37:41.881772   10206 fix.go:112] recreateIfNeeded on stopped-upgrade-377000: state=Stopped err=<nil>
	W1204 15:37:41.881782   10206 fix.go:138] unexpected machine state, will restart: <nil>
	I1204 15:37:41.886815   10206 out.go:177] * Restarting existing qemu2 VM for "stopped-upgrade-377000" ...
	I1204 15:37:41.894753   10206 qemu.go:418] Using hvf for hardware acceleration
	I1204 15:37:41.894831   10206 main.go:141] libmachine: executing: qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/Cellar/qemu/9.1.2/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/20045-6982/.minikube/machines/stopped-upgrade-377000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/20045-6982/.minikube/machines/stopped-upgrade-377000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/20045-6982/.minikube/machines/stopped-upgrade-377000/qemu.pid -nic user,model=virtio,hostfwd=tcp::61799-:22,hostfwd=tcp::61800-:2376,hostname=stopped-upgrade-377000 -daemonize /Users/jenkins/minikube-integration/20045-6982/.minikube/machines/stopped-upgrade-377000/disk.qcow2
	I1204 15:37:41.941994   10206 main.go:141] libmachine: STDOUT: 
	I1204 15:37:41.942020   10206 main.go:141] libmachine: STDERR: 
	I1204 15:37:41.942030   10206 main.go:141] libmachine: Waiting for VM to start (ssh -p 61799 docker@127.0.0.1)...
	I1204 15:38:02.122757   10206 profile.go:143] Saving config to /Users/jenkins/minikube-integration/20045-6982/.minikube/profiles/stopped-upgrade-377000/config.json ...
	I1204 15:38:02.123007   10206 machine.go:93] provisionDockerMachine start ...
	I1204 15:38:02.123070   10206 main.go:141] libmachine: Using SSH client type: native
	I1204 15:38:02.123196   10206 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x102902f60] 0x1029057a0 <nil>  [] 0s} localhost 61799 <nil> <nil>}
	I1204 15:38:02.123202   10206 main.go:141] libmachine: About to run SSH command:
	hostname
	I1204 15:38:02.188194   10206 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I1204 15:38:02.188213   10206 buildroot.go:166] provisioning hostname "stopped-upgrade-377000"
	I1204 15:38:02.188304   10206 main.go:141] libmachine: Using SSH client type: native
	I1204 15:38:02.188436   10206 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x102902f60] 0x1029057a0 <nil>  [] 0s} localhost 61799 <nil> <nil>}
	I1204 15:38:02.188442   10206 main.go:141] libmachine: About to run SSH command:
	sudo hostname stopped-upgrade-377000 && echo "stopped-upgrade-377000" | sudo tee /etc/hostname
	I1204 15:38:02.259418   10206 main.go:141] libmachine: SSH cmd err, output: <nil>: stopped-upgrade-377000
	
	I1204 15:38:02.259489   10206 main.go:141] libmachine: Using SSH client type: native
	I1204 15:38:02.259607   10206 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x102902f60] 0x1029057a0 <nil>  [] 0s} localhost 61799 <nil> <nil>}
	I1204 15:38:02.259616   10206 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sstopped-upgrade-377000' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 stopped-upgrade-377000/g' /etc/hosts;
				else 
					echo '127.0.1.1 stopped-upgrade-377000' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1204 15:38:02.326616   10206 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1204 15:38:02.326632   10206 buildroot.go:172] set auth options {CertDir:/Users/jenkins/minikube-integration/20045-6982/.minikube CaCertPath:/Users/jenkins/minikube-integration/20045-6982/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/20045-6982/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/20045-6982/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/20045-6982/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/20045-6982/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/20045-6982/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/20045-6982/.minikube}
	I1204 15:38:02.326642   10206 buildroot.go:174] setting up certificates
	I1204 15:38:02.326647   10206 provision.go:84] configureAuth start
	I1204 15:38:02.326656   10206 provision.go:143] copyHostCerts
	I1204 15:38:02.326728   10206 exec_runner.go:144] found /Users/jenkins/minikube-integration/20045-6982/.minikube/ca.pem, removing ...
	I1204 15:38:02.326737   10206 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/20045-6982/.minikube/ca.pem
	I1204 15:38:02.326831   10206 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/20045-6982/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/20045-6982/.minikube/ca.pem (1078 bytes)
	I1204 15:38:02.327024   10206 exec_runner.go:144] found /Users/jenkins/minikube-integration/20045-6982/.minikube/cert.pem, removing ...
	I1204 15:38:02.327029   10206 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/20045-6982/.minikube/cert.pem
	I1204 15:38:02.327071   10206 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/20045-6982/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/20045-6982/.minikube/cert.pem (1123 bytes)
	I1204 15:38:02.327177   10206 exec_runner.go:144] found /Users/jenkins/minikube-integration/20045-6982/.minikube/key.pem, removing ...
	I1204 15:38:02.327181   10206 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/20045-6982/.minikube/key.pem
	I1204 15:38:02.327218   10206 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/20045-6982/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/20045-6982/.minikube/key.pem (1679 bytes)
	I1204 15:38:02.327312   10206 provision.go:117] generating server cert: /Users/jenkins/minikube-integration/20045-6982/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/20045-6982/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/20045-6982/.minikube/certs/ca-key.pem org=jenkins.stopped-upgrade-377000 san=[127.0.0.1 localhost minikube stopped-upgrade-377000]
	I1204 15:38:02.403905   10206 provision.go:177] copyRemoteCerts
	I1204 15:38:02.403969   10206 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1204 15:38:02.403978   10206 sshutil.go:53] new ssh client: &{IP:localhost Port:61799 SSHKeyPath:/Users/jenkins/minikube-integration/20045-6982/.minikube/machines/stopped-upgrade-377000/id_rsa Username:docker}
	I1204 15:38:02.437666   10206 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/20045-6982/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1204 15:38:02.444445   10206 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/20045-6982/.minikube/machines/server.pem --> /etc/docker/server.pem (1224 bytes)
	I1204 15:38:02.451008   10206 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/20045-6982/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1204 15:38:02.458132   10206 provision.go:87] duration metric: took 131.472458ms to configureAuth
	I1204 15:38:02.458140   10206 buildroot.go:189] setting minikube options for container-runtime
	I1204 15:38:02.458235   10206 config.go:182] Loaded profile config "stopped-upgrade-377000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I1204 15:38:02.458293   10206 main.go:141] libmachine: Using SSH client type: native
	I1204 15:38:02.458380   10206 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x102902f60] 0x1029057a0 <nil>  [] 0s} localhost 61799 <nil> <nil>}
	I1204 15:38:02.458385   10206 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I1204 15:38:02.523495   10206 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I1204 15:38:02.523503   10206 buildroot.go:70] root file system type: tmpfs
	I1204 15:38:02.523555   10206 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I1204 15:38:02.523605   10206 main.go:141] libmachine: Using SSH client type: native
	I1204 15:38:02.523706   10206 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x102902f60] 0x1029057a0 <nil>  [] 0s} localhost 61799 <nil> <nil>}
	I1204 15:38:02.523740   10206 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=qemu2 --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I1204 15:38:02.591381   10206 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=qemu2 --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I1204 15:38:02.591444   10206 main.go:141] libmachine: Using SSH client type: native
	I1204 15:38:02.591551   10206 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x102902f60] 0x1029057a0 <nil>  [] 0s} localhost 61799 <nil> <nil>}
	I1204 15:38:02.591563   10206 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I1204 15:38:02.974752   10206 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
	I1204 15:38:02.974766   10206 machine.go:96] duration metric: took 851.744583ms to provisionDockerMachine
	I1204 15:38:02.974775   10206 start.go:293] postStartSetup for "stopped-upgrade-377000" (driver="qemu2")
	I1204 15:38:02.974781   10206 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1204 15:38:02.974857   10206 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1204 15:38:02.974869   10206 sshutil.go:53] new ssh client: &{IP:localhost Port:61799 SSHKeyPath:/Users/jenkins/minikube-integration/20045-6982/.minikube/machines/stopped-upgrade-377000/id_rsa Username:docker}
	I1204 15:38:03.009529   10206 ssh_runner.go:195] Run: cat /etc/os-release
	I1204 15:38:03.010839   10206 info.go:137] Remote host: Buildroot 2021.02.12
	I1204 15:38:03.010847   10206 filesync.go:126] Scanning /Users/jenkins/minikube-integration/20045-6982/.minikube/addons for local assets ...
	I1204 15:38:03.010927   10206 filesync.go:126] Scanning /Users/jenkins/minikube-integration/20045-6982/.minikube/files for local assets ...
	I1204 15:38:03.011018   10206 filesync.go:149] local asset: /Users/jenkins/minikube-integration/20045-6982/.minikube/files/etc/ssl/certs/74952.pem -> 74952.pem in /etc/ssl/certs
	I1204 15:38:03.011123   10206 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1204 15:38:03.014156   10206 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/20045-6982/.minikube/files/etc/ssl/certs/74952.pem --> /etc/ssl/certs/74952.pem (1708 bytes)
	I1204 15:38:03.021344   10206 start.go:296] duration metric: took 46.563459ms for postStartSetup
	I1204 15:38:03.021357   10206 fix.go:56] duration metric: took 21.139505708s for fixHost
	I1204 15:38:03.021404   10206 main.go:141] libmachine: Using SSH client type: native
	I1204 15:38:03.021505   10206 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x102902f60] 0x1029057a0 <nil>  [] 0s} localhost 61799 <nil> <nil>}
	I1204 15:38:03.021509   10206 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1204 15:38:03.087300   10206 main.go:141] libmachine: SSH cmd err, output: <nil>: 1733355483.344829629
	
	I1204 15:38:03.087310   10206 fix.go:216] guest clock: 1733355483.344829629
	I1204 15:38:03.087314   10206 fix.go:229] Guest: 2024-12-04 15:38:03.344829629 -0800 PST Remote: 2024-12-04 15:38:03.021359 -0800 PST m=+21.273255293 (delta=323.470629ms)
	I1204 15:38:03.087325   10206 fix.go:200] guest clock delta is within tolerance: 323.470629ms
	I1204 15:38:03.087327   10206 start.go:83] releasing machines lock for "stopped-upgrade-377000", held for 21.205485417s
	I1204 15:38:03.087410   10206 ssh_runner.go:195] Run: cat /version.json
	I1204 15:38:03.087420   10206 sshutil.go:53] new ssh client: &{IP:localhost Port:61799 SSHKeyPath:/Users/jenkins/minikube-integration/20045-6982/.minikube/machines/stopped-upgrade-377000/id_rsa Username:docker}
	I1204 15:38:03.087410   10206 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1204 15:38:03.087451   10206 sshutil.go:53] new ssh client: &{IP:localhost Port:61799 SSHKeyPath:/Users/jenkins/minikube-integration/20045-6982/.minikube/machines/stopped-upgrade-377000/id_rsa Username:docker}
	W1204 15:38:03.087998   10206 sshutil.go:64] dial failure (will retry): dial tcp [::1]:61799: connect: connection refused
	I1204 15:38:03.088017   10206 retry.go:31] will retry after 257.8977ms: dial tcp [::1]:61799: connect: connection refused
	W1204 15:38:03.119309   10206 start.go:419] Unable to open version.json: cat /version.json: Process exited with status 1
	stdout:
	
	stderr:
	cat: /version.json: No such file or directory
	I1204 15:38:03.119364   10206 ssh_runner.go:195] Run: systemctl --version
	I1204 15:38:03.121181   10206 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1204 15:38:03.122745   10206 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1204 15:38:03.122782   10206 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *bridge* -not -name *podman* -not -name *.mk_disabled -printf "%p, " -exec sh -c "sudo sed -i -r -e '/"dst": ".*:.*"/d' -e 's|^(.*)"dst": (.*)[,*]$|\1"dst": \2|g' -e '/"subnet": ".*:.*"/d' -e 's|^(.*)"subnet": ".*"(.*)[,*]$|\1"subnet": "10.244.0.0/16"\2|g' {}" ;
	I1204 15:38:03.125607   10206 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *podman* -not -name *.mk_disabled -printf "%p, " -exec sh -c "sudo sed -i -r -e 's|^(.*)"subnet": ".*"(.*)$|\1"subnet": "10.244.0.0/16"\2|g' -e 's|^(.*)"gateway": ".*"(.*)$|\1"gateway": "10.244.0.1"\2|g' {}" ;
	I1204 15:38:03.130143   10206 cni.go:308] configured [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1204 15:38:03.130150   10206 start.go:495] detecting cgroup driver to use...
	I1204 15:38:03.130225   10206 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1204 15:38:03.137796   10206 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.7"|' /etc/containerd/config.toml"
	I1204 15:38:03.141116   10206 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I1204 15:38:03.143837   10206 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I1204 15:38:03.143862   10206 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I1204 15:38:03.146936   10206 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1204 15:38:03.150199   10206 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I1204 15:38:03.153897   10206 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1204 15:38:03.157208   10206 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1204 15:38:03.159899   10206 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I1204 15:38:03.162775   10206 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I1204 15:38:03.165871   10206 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I1204 15:38:03.168895   10206 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1204 15:38:03.171445   10206 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1204 15:38:03.174528   10206 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1204 15:38:03.254939   10206 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I1204 15:38:03.261417   10206 start.go:495] detecting cgroup driver to use...
	I1204 15:38:03.261506   10206 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I1204 15:38:03.267244   10206 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1204 15:38:03.272302   10206 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1204 15:38:03.281836   10206 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1204 15:38:03.286651   10206 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1204 15:38:03.291617   10206 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I1204 15:38:03.353039   10206 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1204 15:38:03.358033   10206 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1204 15:38:03.363706   10206 ssh_runner.go:195] Run: which cri-dockerd
	I1204 15:38:03.365151   10206 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I1204 15:38:03.367620   10206 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I1204 15:38:03.372708   10206 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I1204 15:38:03.449930   10206 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I1204 15:38:03.528168   10206 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I1204 15:38:03.528237   10206 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I1204 15:38:03.533213   10206 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1204 15:38:03.610271   10206 ssh_runner.go:195] Run: sudo systemctl restart docker
	I1204 15:38:04.768459   10206 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1.158161792s)
	I1204 15:38:04.768528   10206 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I1204 15:38:04.772758   10206 ssh_runner.go:195] Run: sudo systemctl stop cri-docker.socket
	I1204 15:38:04.777695   10206 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I1204 15:38:04.782996   10206 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I1204 15:38:04.866033   10206 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I1204 15:38:04.938167   10206 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1204 15:38:05.018856   10206 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I1204 15:38:05.025207   10206 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I1204 15:38:05.029437   10206 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1204 15:38:05.100589   10206 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I1204 15:38:05.142282   10206 start.go:542] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I1204 15:38:05.142384   10206 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I1204 15:38:05.144411   10206 start.go:563] Will wait 60s for crictl version
	I1204 15:38:05.144449   10206 ssh_runner.go:195] Run: which crictl
	I1204 15:38:05.145641   10206 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1204 15:38:05.161978   10206 start.go:579] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  20.10.16
	RuntimeApiVersion:  1.41.0
	I1204 15:38:05.162061   10206 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I1204 15:38:05.179898   10206 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I1204 15:38:05.203292   10206 out.go:235] * Preparing Kubernetes v1.24.1 on Docker 20.10.16 ...
	I1204 15:38:05.203369   10206 ssh_runner.go:195] Run: grep 10.0.2.2	host.minikube.internal$ /etc/hosts
	I1204 15:38:05.204766   10206 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "10.0.2.2	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1204 15:38:05.208173   10206 kubeadm.go:883] updating cluster {Name:stopped-upgrade-377000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19917/minikube-v1.34.0-1730913550-19917-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:61834 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.1 ClusterName:stopped-upgrade-377000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:0s} ...
	I1204 15:38:05.208216   10206 preload.go:131] Checking if preload exists for k8s version v1.24.1 and runtime docker
	I1204 15:38:05.208266   10206 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I1204 15:38:05.222021   10206 docker.go:689] Got preloaded images: -- stdout --
	k8s.gcr.io/kube-apiserver:v1.24.1
	k8s.gcr.io/kube-proxy:v1.24.1
	k8s.gcr.io/kube-controller-manager:v1.24.1
	k8s.gcr.io/kube-scheduler:v1.24.1
	k8s.gcr.io/etcd:3.5.3-0
	k8s.gcr.io/pause:3.7
	k8s.gcr.io/coredns/coredns:v1.8.6
	k8s.gcr.io/pause:3.6
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I1204 15:38:05.222028   10206 docker.go:695] registry.k8s.io/kube-apiserver:v1.24.1 wasn't preloaded
	I1204 15:38:05.222084   10206 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I1204 15:38:05.225448   10206 ssh_runner.go:195] Run: which lz4
	I1204 15:38:05.226752   10206 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I1204 15:38:05.228167   10206 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I1204 15:38:05.228177   10206 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/20045-6982/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.1-docker-overlay2-arm64.tar.lz4 --> /preloaded.tar.lz4 (359514331 bytes)
	I1204 15:38:06.198538   10206 docker.go:653] duration metric: took 971.822667ms to copy over tarball
	I1204 15:38:06.198608   10206 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I1204 15:38:07.370329   10206 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (1.171694792s)
	I1204 15:38:07.370341   10206 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I1204 15:38:07.386199   10206 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I1204 15:38:07.389817   10206 ssh_runner.go:362] scp memory --> /var/lib/docker/image/overlay2/repositories.json (2523 bytes)
	I1204 15:38:07.395164   10206 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1204 15:38:07.474623   10206 ssh_runner.go:195] Run: sudo systemctl restart docker
	I1204 15:38:09.078684   10206 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1.604013917s)
	I1204 15:38:09.078794   10206 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I1204 15:38:09.089819   10206 docker.go:689] Got preloaded images: -- stdout --
	k8s.gcr.io/kube-apiserver:v1.24.1
	k8s.gcr.io/kube-proxy:v1.24.1
	k8s.gcr.io/kube-controller-manager:v1.24.1
	k8s.gcr.io/kube-scheduler:v1.24.1
	k8s.gcr.io/etcd:3.5.3-0
	k8s.gcr.io/pause:3.7
	k8s.gcr.io/coredns/coredns:v1.8.6
	<none>:<none>
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I1204 15:38:09.089828   10206 docker.go:695] registry.k8s.io/kube-apiserver:v1.24.1 wasn't preloaded
	I1204 15:38:09.089833   10206 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.24.1 registry.k8s.io/kube-controller-manager:v1.24.1 registry.k8s.io/kube-scheduler:v1.24.1 registry.k8s.io/kube-proxy:v1.24.1 registry.k8s.io/pause:3.7 registry.k8s.io/etcd:3.5.3-0 registry.k8s.io/coredns/coredns:v1.8.6 gcr.io/k8s-minikube/storage-provisioner:v5]
	I1204 15:38:09.096418   10206 image.go:135] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1204 15:38:09.098533   10206 image.go:135] retrieving image: registry.k8s.io/kube-proxy:v1.24.1
	I1204 15:38:09.099958   10206 image.go:135] retrieving image: registry.k8s.io/kube-apiserver:v1.24.1
	I1204 15:38:09.100337   10206 image.go:178] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1204 15:38:09.102029   10206 image.go:178] daemon lookup for registry.k8s.io/kube-proxy:v1.24.1: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.24.1
	I1204 15:38:09.102128   10206 image.go:135] retrieving image: registry.k8s.io/kube-controller-manager:v1.24.1
	I1204 15:38:09.103441   10206 image.go:135] retrieving image: registry.k8s.io/kube-scheduler:v1.24.1
	I1204 15:38:09.103705   10206 image.go:178] daemon lookup for registry.k8s.io/kube-apiserver:v1.24.1: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.24.1
	I1204 15:38:09.104760   10206 image.go:135] retrieving image: registry.k8s.io/etcd:3.5.3-0
	I1204 15:38:09.104953   10206 image.go:178] daemon lookup for registry.k8s.io/kube-controller-manager:v1.24.1: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.24.1
	I1204 15:38:09.106038   10206 image.go:135] retrieving image: registry.k8s.io/pause:3.7
	I1204 15:38:09.106044   10206 image.go:178] daemon lookup for registry.k8s.io/kube-scheduler:v1.24.1: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.24.1
	I1204 15:38:09.107028   10206 image.go:178] daemon lookup for registry.k8s.io/etcd:3.5.3-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.3-0
	I1204 15:38:09.107369   10206 image.go:135] retrieving image: registry.k8s.io/coredns/coredns:v1.8.6
	I1204 15:38:09.108166   10206 image.go:178] daemon lookup for registry.k8s.io/pause:3.7: Error response from daemon: No such image: registry.k8s.io/pause:3.7
	I1204 15:38:09.109027   10206 image.go:178] daemon lookup for registry.k8s.io/coredns/coredns:v1.8.6: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.8.6
	I1204 15:38:09.658616   10206 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.24.1
	I1204 15:38:09.666630   10206 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.24.1
	I1204 15:38:09.670763   10206 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.24.1" needs transfer: "registry.k8s.io/kube-apiserver:v1.24.1" does not exist at hash "7c5896a75862a8bc252122185a929cec1393db2c525f6440137d4fbf46bbf6f9" in container runtime
	I1204 15:38:09.670797   10206 docker.go:337] Removing image: registry.k8s.io/kube-apiserver:v1.24.1
	I1204 15:38:09.670854   10206 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-apiserver:v1.24.1
	I1204 15:38:09.685703   10206 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.24.1" needs transfer: "registry.k8s.io/kube-proxy:v1.24.1" does not exist at hash "fcbd620bbac080b658602c597709b377cb2b3fec134a097a27f94cba9b2ed2fa" in container runtime
	I1204 15:38:09.685734   10206 docker.go:337] Removing image: registry.k8s.io/kube-proxy:v1.24.1
	I1204 15:38:09.685714   10206 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/20045-6982/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.24.1
	I1204 15:38:09.685769   10206 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-proxy:v1.24.1
	I1204 15:38:09.696704   10206 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/20045-6982/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.24.1
	I1204 15:38:09.710948   10206 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.24.1
	I1204 15:38:09.722387   10206 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.24.1" needs transfer: "registry.k8s.io/kube-controller-manager:v1.24.1" does not exist at hash "f61bbe9259d7caa580deb6c8e4bfd1780c7b5887efe3aa3adc7cc74f68a27c1b" in container runtime
	I1204 15:38:09.722444   10206 docker.go:337] Removing image: registry.k8s.io/kube-controller-manager:v1.24.1
	I1204 15:38:09.722498   10206 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-controller-manager:v1.24.1
	I1204 15:38:09.732708   10206 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/20045-6982/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.24.1
	I1204 15:38:09.740151   10206 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.24.1
	I1204 15:38:09.753261   10206 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.24.1" needs transfer: "registry.k8s.io/kube-scheduler:v1.24.1" does not exist at hash "000c19baf6bba51ff7ae5449f4c7a16d9190cef0263f58070dbf62cea9c4982f" in container runtime
	I1204 15:38:09.753282   10206 docker.go:337] Removing image: registry.k8s.io/kube-scheduler:v1.24.1
	I1204 15:38:09.753352   10206 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-scheduler:v1.24.1
	I1204 15:38:09.763596   10206 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/20045-6982/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.24.1
	I1204 15:38:09.865484   10206 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/etcd:3.5.3-0
	I1204 15:38:09.877867   10206 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/pause:3.7
	I1204 15:38:09.884235   10206 cache_images.go:116] "registry.k8s.io/etcd:3.5.3-0" needs transfer: "registry.k8s.io/etcd:3.5.3-0" does not exist at hash "a9a710bb96df080e6b9c720eb85dc5b832ff84abf77263548d74fedec6466a5a" in container runtime
	I1204 15:38:09.884255   10206 docker.go:337] Removing image: registry.k8s.io/etcd:3.5.3-0
	I1204 15:38:09.884319   10206 ssh_runner.go:195] Run: docker rmi registry.k8s.io/etcd:3.5.3-0
	I1204 15:38:09.897968   10206 cache_images.go:116] "registry.k8s.io/pause:3.7" needs transfer: "registry.k8s.io/pause:3.7" does not exist at hash "e5a475a0380575fb5df454b2e32bdec93e1ec0094d8a61e895b41567cb884550" in container runtime
	I1204 15:38:09.897990   10206 docker.go:337] Removing image: registry.k8s.io/pause:3.7
	I1204 15:38:09.898056   10206 ssh_runner.go:195] Run: docker rmi registry.k8s.io/pause:3.7
	I1204 15:38:09.907892   10206 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/20045-6982/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.3-0
	I1204 15:38:09.911927   10206 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/20045-6982/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7
	I1204 15:38:09.912086   10206 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/pause_3.7
	I1204 15:38:09.913724   10206 ssh_runner.go:352] existence check for /var/lib/minikube/images/pause_3.7: stat -c "%s %y" /var/lib/minikube/images/pause_3.7: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/pause_3.7': No such file or directory
	I1204 15:38:09.913740   10206 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/20045-6982/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7 --> /var/lib/minikube/images/pause_3.7 (268288 bytes)
	I1204 15:38:09.923036   10206 docker.go:304] Loading image: /var/lib/minikube/images/pause_3.7
	I1204 15:38:09.923051   10206 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/pause_3.7 | docker load"
	I1204 15:38:09.952351   10206 cache_images.go:321] Transferred and loaded /Users/jenkins/minikube-integration/20045-6982/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7 from cache
	W1204 15:38:09.963172   10206 image.go:283] image registry.k8s.io/coredns/coredns:v1.8.6 arch mismatch: want arm64 got amd64. fixing
	I1204 15:38:09.963327   10206 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/coredns/coredns:v1.8.6
	I1204 15:38:09.974372   10206 cache_images.go:116] "registry.k8s.io/coredns/coredns:v1.8.6" needs transfer: "registry.k8s.io/coredns/coredns:v1.8.6" does not exist at hash "6af7f860a8197bfa3fdb7dec2061aa33870253e87a1e91c492d55b8a4fd38d14" in container runtime
	I1204 15:38:09.974394   10206 docker.go:337] Removing image: registry.k8s.io/coredns/coredns:v1.8.6
	I1204 15:38:09.974459   10206 ssh_runner.go:195] Run: docker rmi registry.k8s.io/coredns/coredns:v1.8.6
	I1204 15:38:09.986249   10206 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/20045-6982/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6
	I1204 15:38:09.986397   10206 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.8.6
	I1204 15:38:09.988162   10206 ssh_runner.go:352] existence check for /var/lib/minikube/images/coredns_v1.8.6: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.8.6: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/coredns_v1.8.6': No such file or directory
	I1204 15:38:09.988185   10206 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/20045-6982/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6 --> /var/lib/minikube/images/coredns_v1.8.6 (12318720 bytes)
	I1204 15:38:10.031602   10206 docker.go:304] Loading image: /var/lib/minikube/images/coredns_v1.8.6
	I1204 15:38:10.031626   10206 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/coredns_v1.8.6 | docker load"
	W1204 15:38:10.033847   10206 image.go:283] image gcr.io/k8s-minikube/storage-provisioner:v5 arch mismatch: want arm64 got amd64. fixing
	I1204 15:38:10.034149   10206 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I1204 15:38:10.078489   10206 cache_images.go:321] Transferred and loaded /Users/jenkins/minikube-integration/20045-6982/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6 from cache
	I1204 15:38:10.078540   10206 cache_images.go:116] "gcr.io/k8s-minikube/storage-provisioner:v5" needs transfer: "gcr.io/k8s-minikube/storage-provisioner:v5" does not exist at hash "66749159455b3f08c8318fe0233122f54d0f5889f9c5fdfb73c3fd9d99895b51" in container runtime
	I1204 15:38:10.078566   10206 docker.go:337] Removing image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1204 15:38:10.078637   10206 ssh_runner.go:195] Run: docker rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I1204 15:38:10.092999   10206 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/20045-6982/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5
	I1204 15:38:10.093149   10206 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5
	I1204 15:38:10.094524   10206 ssh_runner.go:352] existence check for /var/lib/minikube/images/storage-provisioner_v5: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/storage-provisioner_v5': No such file or directory
	I1204 15:38:10.094536   10206 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/20045-6982/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 --> /var/lib/minikube/images/storage-provisioner_v5 (8035840 bytes)
	I1204 15:38:10.128264   10206 docker.go:304] Loading image: /var/lib/minikube/images/storage-provisioner_v5
	I1204 15:38:10.128279   10206 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/storage-provisioner_v5 | docker load"
	I1204 15:38:10.379703   10206 cache_images.go:321] Transferred and loaded /Users/jenkins/minikube-integration/20045-6982/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 from cache
	I1204 15:38:10.379735   10206 cache_images.go:92] duration metric: took 1.289883666s to LoadCachedImages
	W1204 15:38:10.379781   10206 out.go:270] X Unable to load cached images: LoadCachedImages: stat /Users/jenkins/minikube-integration/20045-6982/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.24.1: no such file or directory
	X Unable to load cached images: LoadCachedImages: stat /Users/jenkins/minikube-integration/20045-6982/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.24.1: no such file or directory
	I1204 15:38:10.379788   10206 kubeadm.go:934] updating node { 10.0.2.15 8443 v1.24.1 docker true true} ...
	I1204 15:38:10.379853   10206 kubeadm.go:946] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.24.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/cri-dockerd.sock --hostname-override=stopped-upgrade-377000 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=10.0.2.15
	
	[Install]
	 config:
	{KubernetesVersion:v1.24.1 ClusterName:stopped-upgrade-377000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1204 15:38:10.379935   10206 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I1204 15:38:10.393880   10206 cni.go:84] Creating CNI manager for ""
	I1204 15:38:10.393896   10206 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I1204 15:38:10.393909   10206 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I1204 15:38:10.393918   10206 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:10.0.2.15 APIServerPort:8443 KubernetesVersion:v1.24.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:stopped-upgrade-377000 NodeName:stopped-upgrade-377000 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "10.0.2.15"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:10.0.2.15 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1204 15:38:10.393992   10206 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 10.0.2.15
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "stopped-upgrade-377000"
	  kubeletExtraArgs:
	    node-ip: 10.0.2.15
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "10.0.2.15"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.24.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
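
The generated kubeadm config can be exercised without touching node state; assuming the staged v1.24.1 binary under /var/lib/minikube/binaries, a dry run would look roughly like this sketch:

    # --dry-run prints what kubeadm would do without writing manifests or certs.
    sudo /var/lib/minikube/binaries/v1.24.1/kubeadm init \
        --config /var/tmp/minikube/kubeadm.yaml --dry-run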
	
	I1204 15:38:10.394078   10206 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.24.1
	I1204 15:38:10.397281   10206 binaries.go:44] Found k8s binaries, skipping transfer
	I1204 15:38:10.397320   10206 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1204 15:38:10.399866   10206 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (380 bytes)
	I1204 15:38:10.404728   10206 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1204 15:38:10.409699   10206 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2096 bytes)
	I1204 15:38:10.415103   10206 ssh_runner.go:195] Run: grep 10.0.2.15	control-plane.minikube.internal$ /etc/hosts
	I1204 15:38:10.416397   10206 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "10.0.2.15	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
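
Spelled out, the one-liner above makes the hosts entry idempotent: any stale control-plane line is filtered out, the fresh mapping is appended, and the file is replaced via sudo cp. The same pattern, as a sketch:

    { grep -v $'\tcontrol-plane.minikube.internal$' /etc/hosts
      printf '10.0.2.15\tcontrol-plane.minikube.internal\n'
    } > /tmp/hosts.new && sudo cp /tmp/hosts.new /etc/hosts
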
	I1204 15:38:10.419758   10206 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1204 15:38:10.506718   10206 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1204 15:38:10.516663   10206 certs.go:68] Setting up /Users/jenkins/minikube-integration/20045-6982/.minikube/profiles/stopped-upgrade-377000 for IP: 10.0.2.15
	I1204 15:38:10.516672   10206 certs.go:194] generating shared ca certs ...
	I1204 15:38:10.516680   10206 certs.go:226] acquiring lock for ca certs: {Name:mkc3a39b491c90031583eb49eb548c7e4c1f6091 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1204 15:38:10.516853   10206 certs.go:235] skipping valid "minikubeCA" ca cert: /Users/jenkins/minikube-integration/20045-6982/.minikube/ca.key
	I1204 15:38:10.516893   10206 certs.go:235] skipping valid "proxyClientCA" ca cert: /Users/jenkins/minikube-integration/20045-6982/.minikube/proxy-client-ca.key
	I1204 15:38:10.516899   10206 certs.go:256] generating profile certs ...
	I1204 15:38:10.516960   10206 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /Users/jenkins/minikube-integration/20045-6982/.minikube/profiles/stopped-upgrade-377000/client.key
	I1204 15:38:10.516981   10206 certs.go:363] generating signed profile cert for "minikube": /Users/jenkins/minikube-integration/20045-6982/.minikube/profiles/stopped-upgrade-377000/apiserver.key.4b76073c
	I1204 15:38:10.516993   10206 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/20045-6982/.minikube/profiles/stopped-upgrade-377000/apiserver.crt.4b76073c with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 10.0.2.15]
	I1204 15:38:10.726470   10206 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/20045-6982/.minikube/profiles/stopped-upgrade-377000/apiserver.crt.4b76073c ...
	I1204 15:38:10.726487   10206 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/20045-6982/.minikube/profiles/stopped-upgrade-377000/apiserver.crt.4b76073c: {Name:mk84cbc2c89a4a537c79a32039bed9e1b6cb0cda Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1204 15:38:10.726901   10206 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/20045-6982/.minikube/profiles/stopped-upgrade-377000/apiserver.key.4b76073c ...
	I1204 15:38:10.726906   10206 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/20045-6982/.minikube/profiles/stopped-upgrade-377000/apiserver.key.4b76073c: {Name:mk0b5c8865ca5f079bc764078e2a2d884bfbc5b6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1204 15:38:10.727078   10206 certs.go:381] copying /Users/jenkins/minikube-integration/20045-6982/.minikube/profiles/stopped-upgrade-377000/apiserver.crt.4b76073c -> /Users/jenkins/minikube-integration/20045-6982/.minikube/profiles/stopped-upgrade-377000/apiserver.crt
	I1204 15:38:10.727732   10206 certs.go:385] copying /Users/jenkins/minikube-integration/20045-6982/.minikube/profiles/stopped-upgrade-377000/apiserver.key.4b76073c -> /Users/jenkins/minikube-integration/20045-6982/.minikube/profiles/stopped-upgrade-377000/apiserver.key
	I1204 15:38:10.727913   10206 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /Users/jenkins/minikube-integration/20045-6982/.minikube/profiles/stopped-upgrade-377000/proxy-client.key
	I1204 15:38:10.728070   10206 certs.go:484] found cert: /Users/jenkins/minikube-integration/20045-6982/.minikube/certs/7495.pem (1338 bytes)
	W1204 15:38:10.728099   10206 certs.go:480] ignoring /Users/jenkins/minikube-integration/20045-6982/.minikube/certs/7495_empty.pem, impossibly tiny 0 bytes
	I1204 15:38:10.728105   10206 certs.go:484] found cert: /Users/jenkins/minikube-integration/20045-6982/.minikube/certs/ca-key.pem (1675 bytes)
	I1204 15:38:10.728130   10206 certs.go:484] found cert: /Users/jenkins/minikube-integration/20045-6982/.minikube/certs/ca.pem (1078 bytes)
	I1204 15:38:10.728150   10206 certs.go:484] found cert: /Users/jenkins/minikube-integration/20045-6982/.minikube/certs/cert.pem (1123 bytes)
	I1204 15:38:10.728172   10206 certs.go:484] found cert: /Users/jenkins/minikube-integration/20045-6982/.minikube/certs/key.pem (1679 bytes)
	I1204 15:38:10.728213   10206 certs.go:484] found cert: /Users/jenkins/minikube-integration/20045-6982/.minikube/files/etc/ssl/certs/74952.pem (1708 bytes)
	I1204 15:38:10.728587   10206 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/20045-6982/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1204 15:38:10.735833   10206 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/20045-6982/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1204 15:38:10.743016   10206 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/20045-6982/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1204 15:38:10.750481   10206 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/20045-6982/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1204 15:38:10.757683   10206 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/20045-6982/.minikube/profiles/stopped-upgrade-377000/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I1204 15:38:10.764799   10206 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/20045-6982/.minikube/profiles/stopped-upgrade-377000/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1204 15:38:10.771592   10206 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/20045-6982/.minikube/profiles/stopped-upgrade-377000/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1204 15:38:10.778621   10206 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/20045-6982/.minikube/profiles/stopped-upgrade-377000/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1204 15:38:10.786255   10206 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/20045-6982/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1204 15:38:10.793483   10206 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/20045-6982/.minikube/certs/7495.pem --> /usr/share/ca-certificates/7495.pem (1338 bytes)
	I1204 15:38:10.800236   10206 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/20045-6982/.minikube/files/etc/ssl/certs/74952.pem --> /usr/share/ca-certificates/74952.pem (1708 bytes)
	I1204 15:38:10.806958   10206 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1204 15:38:10.812452   10206 ssh_runner.go:195] Run: openssl version
	I1204 15:38:10.814357   10206 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1204 15:38:10.817787   10206 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1204 15:38:10.819121   10206 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec  4 23:34 /usr/share/ca-certificates/minikubeCA.pem
	I1204 15:38:10.819161   10206 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1204 15:38:10.820847   10206 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1204 15:38:10.823738   10206 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/7495.pem && ln -fs /usr/share/ca-certificates/7495.pem /etc/ssl/certs/7495.pem"
	I1204 15:38:10.827064   10206 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/7495.pem
	I1204 15:38:10.828648   10206 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec  4 23:22 /usr/share/ca-certificates/7495.pem
	I1204 15:38:10.828680   10206 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/7495.pem
	I1204 15:38:10.830296   10206 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/7495.pem /etc/ssl/certs/51391683.0"
	I1204 15:38:10.833462   10206 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/74952.pem && ln -fs /usr/share/ca-certificates/74952.pem /etc/ssl/certs/74952.pem"
	I1204 15:38:10.836256   10206 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/74952.pem
	I1204 15:38:10.837691   10206 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec  4 23:22 /usr/share/ca-certificates/74952.pem
	I1204 15:38:10.837716   10206 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/74952.pem
	I1204 15:38:10.839443   10206 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/74952.pem /etc/ssl/certs/3ec20f2e.0"
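
The openssl x509 -hash / ln -fs pairs above exist because OpenSSL locates CA certificates through subject-hash symlinks in /etc/ssl/certs; the b5213941.0, 51391683.0 and 3ec20f2e.0 names are those hashes. The same linking, condensed into one generic step:

    # Derive the subject hash and create the <hash>.0 symlink OpenSSL expects.
    HASH=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
    sudo ln -fs /etc/ssl/certs/minikubeCA.pem "/etc/ssl/certs/${HASH}.0"
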
	I1204 15:38:10.842762   10206 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1204 15:38:10.844242   10206 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1204 15:38:10.846977   10206 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1204 15:38:10.848938   10206 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1204 15:38:10.850890   10206 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1204 15:38:10.852881   10206 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1204 15:38:10.854637   10206 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
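
The -checkend 86400 runs above are cheap expiry probes: openssl exits 0 only if the certificate is still valid 86400 seconds (24 hours) from now. For example:

    openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt \
        -checkend 86400 \
        && echo "still valid for at least 24h" \
        || echo "expires within 24h; regenerate"
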
	I1204 15:38:10.856464   10206 kubeadm.go:392] StartCluster: {Name:stopped-upgrade-377000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19917/minikube-v1.34.0-1730913550-19917-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:61834 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.1 ClusterName:stopped-upgrade-377000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:0s}
	I1204 15:38:10.856535   10206 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I1204 15:38:10.867059   10206 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1204 15:38:10.870966   10206 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I1204 15:38:10.870976   10206 kubeadm.go:593] restartPrimaryControlPlane start ...
	I1204 15:38:10.871007   10206 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1204 15:38:10.873955   10206 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1204 15:38:10.874265   10206 kubeconfig.go:47] verify endpoint returned: get endpoint: "stopped-upgrade-377000" does not appear in /Users/jenkins/minikube-integration/20045-6982/kubeconfig
	I1204 15:38:10.874363   10206 kubeconfig.go:62] /Users/jenkins/minikube-integration/20045-6982/kubeconfig needs updating (will repair): [kubeconfig missing "stopped-upgrade-377000" cluster setting kubeconfig missing "stopped-upgrade-377000" context setting]
	I1204 15:38:10.874578   10206 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/20045-6982/kubeconfig: {Name:mk101d59bd39dad79cc42c692d70ed55e90c94da Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1204 15:38:10.875024   10206 kapi.go:59] client config for stopped-upgrade-377000: &rest.Config{Host:"https://10.0.2.15:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/20045-6982/.minikube/profiles/stopped-upgrade-377000/client.crt", KeyFile:"/Users/jenkins/minikube-integration/20045-6982/.minikube/profiles/stopped-upgrade-377000/client.key", CAFile:"/Users/jenkins/minikube-integration/20045-6982/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x10435f6e0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1204 15:38:10.875373   10206 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1204 15:38:10.878139   10206 kubeadm.go:640] detected kubeadm config drift (will reconfigure cluster from new /var/tmp/minikube/kubeadm.yaml):
	-- stdout --
	--- /var/tmp/minikube/kubeadm.yaml
	+++ /var/tmp/minikube/kubeadm.yaml.new
	@@ -11,7 +11,7 @@
	       - signing
	       - authentication
	 nodeRegistration:
	-  criSocket: /var/run/cri-dockerd.sock
	+  criSocket: unix:///var/run/cri-dockerd.sock
	   name: "stopped-upgrade-377000"
	   kubeletExtraArgs:
	     node-ip: 10.0.2.15
	@@ -49,7 +49,9 @@
	 authentication:
	   x509:
	     clientCAFile: /var/lib/minikube/certs/ca.crt
	-cgroupDriver: systemd
	+cgroupDriver: cgroupfs
	+hairpinMode: hairpin-veth
	+runtimeRequestTimeout: 15m
	 clusterDomain: "cluster.local"
	 # disable disk resource management by default
	 imageGCHighThresholdPercent: 100
	
	-- /stdout --
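
Drift detection here is just diff's exit status (0 means identical, 1 means drift), after which the new file replaces the old one at 15:38:10.930193 below. Condensed as a sketch:

    if ! sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new; then
        sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
    fi
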
	I1204 15:38:10.878144   10206 kubeadm.go:1160] stopping kube-system containers ...
	I1204 15:38:10.878191   10206 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I1204 15:38:10.888489   10206 docker.go:483] Stopping containers: [4f1790594676 63473edefa8f 1d1ae4543cd6 9b5d6b3a7511 b0ad1b935d01 7e96315d0637 93b18643529f 931f0e7873ab]
	I1204 15:38:10.888567   10206 ssh_runner.go:195] Run: docker stop 4f1790594676 63473edefa8f 1d1ae4543cd6 9b5d6b3a7511 b0ad1b935d01 7e96315d0637 93b18643529f 931f0e7873ab
	I1204 15:38:10.899263   10206 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I1204 15:38:10.905195   10206 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1204 15:38:10.907895   10206 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1204 15:38:10.907901   10206 kubeadm.go:157] found existing configuration files:
	
	I1204 15:38:10.907929   10206 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:61834 /etc/kubernetes/admin.conf
	I1204 15:38:10.910675   10206 kubeadm.go:163] "https://control-plane.minikube.internal:61834" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:61834 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1204 15:38:10.910710   10206 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1204 15:38:10.913756   10206 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:61834 /etc/kubernetes/kubelet.conf
	I1204 15:38:10.916322   10206 kubeadm.go:163] "https://control-plane.minikube.internal:61834" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:61834 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1204 15:38:10.916354   10206 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1204 15:38:10.918961   10206 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:61834 /etc/kubernetes/controller-manager.conf
	I1204 15:38:10.921958   10206 kubeadm.go:163] "https://control-plane.minikube.internal:61834" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:61834 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1204 15:38:10.921988   10206 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1204 15:38:10.924813   10206 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:61834 /etc/kubernetes/scheduler.conf
	I1204 15:38:10.927203   10206 kubeadm.go:163] "https://control-plane.minikube.internal:61834" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:61834 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1204 15:38:10.927232   10206 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
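
The four grep/rm pairs above reduce to one loop: a kubeconfig is kept only if it already points at the expected control-plane endpoint, otherwise it is removed so kubeadm regenerates it. A condensed sketch:

    for f in admin kubelet controller-manager scheduler; do
        sudo grep -q 'https://control-plane.minikube.internal:61834' \
            "/etc/kubernetes/${f}.conf" || sudo rm -f "/etc/kubernetes/${f}.conf"
    done
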
	I1204 15:38:10.930193   10206 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1204 15:38:10.933341   10206 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I1204 15:38:10.955661   10206 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I1204 15:38:11.743883   10206 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I1204 15:38:11.877024   10206 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I1204 15:38:11.897710   10206 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
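
The restart path drives kubeadm phase by phase (certs, kubeconfig, kubelet-start, control-plane, etcd) rather than running a full init; the complete phase tree for this binary is listed by:

    sudo /var/lib/minikube/binaries/v1.24.1/kubeadm init --help
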
	I1204 15:38:11.924404   10206 api_server.go:52] waiting for apiserver process to appear ...
	I1204 15:38:11.924502   10206 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1204 15:38:12.426570   10206 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1204 15:38:12.926591   10206 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1204 15:38:12.930746   10206 api_server.go:72] duration metric: took 1.006335417s to wait for apiserver process to appear ...
	I1204 15:38:12.930760   10206 api_server.go:88] waiting for apiserver healthz status ...
	I1204 15:38:12.930776   10206 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1204 15:38:17.932872   10206 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1204 15:38:17.932889   10206 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1204 15:38:22.933178   10206 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1204 15:38:22.933228   10206 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1204 15:38:27.933685   10206 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1204 15:38:27.933709   10206 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1204 15:38:32.934191   10206 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1204 15:38:32.934243   10206 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1204 15:38:37.935055   10206 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1204 15:38:37.935101   10206 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1204 15:38:42.936012   10206 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1204 15:38:42.936038   10206 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1204 15:38:47.937523   10206 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1204 15:38:47.937573   10206 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1204 15:38:52.939106   10206 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1204 15:38:52.939186   10206 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1204 15:38:57.941427   10206 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1204 15:38:57.941468   10206 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1204 15:39:02.943781   10206 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1204 15:39:02.943800   10206 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1204 15:39:07.944086   10206 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1204 15:39:07.944131   10206 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1204 15:39:12.946468   10206 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
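
Each probe above gives the apiserver roughly five seconds before reporting Client.Timeout. The manual equivalent is a sketch like the following (-k skips verification, since the apiserver certificate is not in the local trust store):

    curl -k --max-time 5 https://10.0.2.15:8443/healthz
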
	I1204 15:39:12.946606   10206 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1204 15:39:12.957822   10206 logs.go:282] 2 containers: [bc235f7c7828 1d1ae4543cd6]
	I1204 15:39:12.957897   10206 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1204 15:39:12.968095   10206 logs.go:282] 2 containers: [fc48923b85fc 4f1790594676]
	I1204 15:39:12.968171   10206 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1204 15:39:12.978698   10206 logs.go:282] 1 containers: [467f47d9f689]
	I1204 15:39:12.978773   10206 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1204 15:39:12.995946   10206 logs.go:282] 2 containers: [1f92512b0d23 b0ad1b935d01]
	I1204 15:39:12.996022   10206 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1204 15:39:13.006533   10206 logs.go:282] 1 containers: [e290e266aab8]
	I1204 15:39:13.006612   10206 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1204 15:39:13.017101   10206 logs.go:282] 2 containers: [4469186baff0 63473edefa8f]
	I1204 15:39:13.017178   10206 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1204 15:39:13.027229   10206 logs.go:282] 0 containers: []
	W1204 15:39:13.027240   10206 logs.go:284] No container was found matching "kindnet"
	I1204 15:39:13.027303   10206 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1204 15:39:13.038194   10206 logs.go:282] 2 containers: [4bf3c6abead9 4a54859eb1e5]
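
The name= filters above rely on cri-dockerd's container naming scheme, k8s_<container>_<pod>_<namespace>_<uid>_<attempt>; to see the full names behind the short IDs:

    docker ps -a --filter=name=k8s_kube-apiserver --format '{{.ID}} {{.Names}}'
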
	I1204 15:39:13.038214   10206 logs.go:123] Gathering logs for Docker ...
	I1204 15:39:13.038220   10206 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1204 15:39:13.062220   10206 logs.go:123] Gathering logs for kube-apiserver [1d1ae4543cd6] ...
	I1204 15:39:13.062227   10206 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1d1ae4543cd6"
	I1204 15:39:13.104561   10206 logs.go:123] Gathering logs for etcd [fc48923b85fc] ...
	I1204 15:39:13.104578   10206 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fc48923b85fc"
	I1204 15:39:13.119469   10206 logs.go:123] Gathering logs for etcd [4f1790594676] ...
	I1204 15:39:13.119482   10206 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4f1790594676"
	I1204 15:39:13.137942   10206 logs.go:123] Gathering logs for storage-provisioner [4bf3c6abead9] ...
	I1204 15:39:13.137954   10206 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4bf3c6abead9"
	I1204 15:39:13.149289   10206 logs.go:123] Gathering logs for storage-provisioner [4a54859eb1e5] ...
	I1204 15:39:13.149299   10206 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4a54859eb1e5"
	I1204 15:39:13.160461   10206 logs.go:123] Gathering logs for container status ...
	I1204 15:39:13.160474   10206 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1204 15:39:13.172856   10206 logs.go:123] Gathering logs for kubelet ...
	I1204 15:39:13.172866   10206 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1204 15:39:13.212457   10206 logs.go:123] Gathering logs for coredns [467f47d9f689] ...
	I1204 15:39:13.212472   10206 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 467f47d9f689"
	I1204 15:39:13.226665   10206 logs.go:123] Gathering logs for kube-scheduler [b0ad1b935d01] ...
	I1204 15:39:13.226678   10206 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b0ad1b935d01"
	I1204 15:39:13.243888   10206 logs.go:123] Gathering logs for dmesg ...
	I1204 15:39:13.243898   10206 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1204 15:39:13.248428   10206 logs.go:123] Gathering logs for kube-scheduler [1f92512b0d23] ...
	I1204 15:39:13.248434   10206 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1f92512b0d23"
	I1204 15:39:13.260493   10206 logs.go:123] Gathering logs for kube-controller-manager [63473edefa8f] ...
	I1204 15:39:13.260505   10206 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 63473edefa8f"
	I1204 15:39:13.273014   10206 logs.go:123] Gathering logs for kube-controller-manager [4469186baff0] ...
	I1204 15:39:13.273025   10206 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4469186baff0"
	I1204 15:39:13.290465   10206 logs.go:123] Gathering logs for describe nodes ...
	I1204 15:39:13.290475   10206 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1204 15:39:13.379546   10206 logs.go:123] Gathering logs for kube-apiserver [bc235f7c7828] ...
	I1204 15:39:13.379560   10206 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bc235f7c7828"
	I1204 15:39:13.393814   10206 logs.go:123] Gathering logs for kube-proxy [e290e266aab8] ...
	I1204 15:39:13.393824   10206 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e290e266aab8"
	I1204 15:39:15.907759   10206 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1204 15:39:20.910122   10206 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1204 15:39:20.910308   10206 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1204 15:39:20.926432   10206 logs.go:282] 2 containers: [bc235f7c7828 1d1ae4543cd6]
	I1204 15:39:20.926532   10206 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1204 15:39:20.939165   10206 logs.go:282] 2 containers: [fc48923b85fc 4f1790594676]
	I1204 15:39:20.939252   10206 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1204 15:39:20.950183   10206 logs.go:282] 1 containers: [467f47d9f689]
	I1204 15:39:20.950255   10206 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1204 15:39:20.961160   10206 logs.go:282] 2 containers: [1f92512b0d23 b0ad1b935d01]
	I1204 15:39:20.961239   10206 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1204 15:39:20.971968   10206 logs.go:282] 1 containers: [e290e266aab8]
	I1204 15:39:20.972051   10206 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1204 15:39:20.983312   10206 logs.go:282] 2 containers: [4469186baff0 63473edefa8f]
	I1204 15:39:20.983395   10206 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1204 15:39:20.994716   10206 logs.go:282] 0 containers: []
	W1204 15:39:20.994726   10206 logs.go:284] No container was found matching "kindnet"
	I1204 15:39:20.994801   10206 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1204 15:39:21.005186   10206 logs.go:282] 2 containers: [4bf3c6abead9 4a54859eb1e5]
	I1204 15:39:21.005205   10206 logs.go:123] Gathering logs for describe nodes ...
	I1204 15:39:21.005211   10206 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1204 15:39:21.042226   10206 logs.go:123] Gathering logs for kube-apiserver [1d1ae4543cd6] ...
	I1204 15:39:21.042237   10206 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1d1ae4543cd6"
	I1204 15:39:21.080701   10206 logs.go:123] Gathering logs for etcd [4f1790594676] ...
	I1204 15:39:21.080715   10206 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4f1790594676"
	I1204 15:39:21.096028   10206 logs.go:123] Gathering logs for kube-proxy [e290e266aab8] ...
	I1204 15:39:21.096040   10206 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e290e266aab8"
	I1204 15:39:21.107958   10206 logs.go:123] Gathering logs for kube-apiserver [bc235f7c7828] ...
	I1204 15:39:21.107969   10206 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bc235f7c7828"
	I1204 15:39:21.122372   10206 logs.go:123] Gathering logs for etcd [fc48923b85fc] ...
	I1204 15:39:21.122382   10206 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fc48923b85fc"
	I1204 15:39:21.136589   10206 logs.go:123] Gathering logs for kube-scheduler [1f92512b0d23] ...
	I1204 15:39:21.136599   10206 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1f92512b0d23"
	I1204 15:39:21.148319   10206 logs.go:123] Gathering logs for kube-controller-manager [4469186baff0] ...
	I1204 15:39:21.148330   10206 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4469186baff0"
	I1204 15:39:21.166149   10206 logs.go:123] Gathering logs for storage-provisioner [4bf3c6abead9] ...
	I1204 15:39:21.166160   10206 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4bf3c6abead9"
	I1204 15:39:21.177453   10206 logs.go:123] Gathering logs for Docker ...
	I1204 15:39:21.177463   10206 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1204 15:39:21.201477   10206 logs.go:123] Gathering logs for dmesg ...
	I1204 15:39:21.201487   10206 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1204 15:39:21.205337   10206 logs.go:123] Gathering logs for coredns [467f47d9f689] ...
	I1204 15:39:21.205343   10206 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 467f47d9f689"
	I1204 15:39:21.216802   10206 logs.go:123] Gathering logs for kube-scheduler [b0ad1b935d01] ...
	I1204 15:39:21.216813   10206 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b0ad1b935d01"
	I1204 15:39:21.231595   10206 logs.go:123] Gathering logs for storage-provisioner [4a54859eb1e5] ...
	I1204 15:39:21.231606   10206 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4a54859eb1e5"
	I1204 15:39:21.242630   10206 logs.go:123] Gathering logs for container status ...
	I1204 15:39:21.242640   10206 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1204 15:39:21.254957   10206 logs.go:123] Gathering logs for kubelet ...
	I1204 15:39:21.254971   10206 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1204 15:39:21.294164   10206 logs.go:123] Gathering logs for kube-controller-manager [63473edefa8f] ...
	I1204 15:39:21.294175   10206 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 63473edefa8f"
	I1204 15:39:23.809518   10206 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1204 15:39:28.811905   10206 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1204 15:39:28.812079   10206 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1204 15:39:28.823534   10206 logs.go:282] 2 containers: [bc235f7c7828 1d1ae4543cd6]
	I1204 15:39:28.823613   10206 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1204 15:39:28.834203   10206 logs.go:282] 2 containers: [fc48923b85fc 4f1790594676]
	I1204 15:39:28.834278   10206 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1204 15:39:28.848874   10206 logs.go:282] 1 containers: [467f47d9f689]
	I1204 15:39:28.848954   10206 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1204 15:39:28.859197   10206 logs.go:282] 2 containers: [1f92512b0d23 b0ad1b935d01]
	I1204 15:39:28.859277   10206 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1204 15:39:28.869625   10206 logs.go:282] 1 containers: [e290e266aab8]
	I1204 15:39:28.869696   10206 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1204 15:39:28.880352   10206 logs.go:282] 2 containers: [4469186baff0 63473edefa8f]
	I1204 15:39:28.880419   10206 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1204 15:39:28.895338   10206 logs.go:282] 0 containers: []
	W1204 15:39:28.895352   10206 logs.go:284] No container was found matching "kindnet"
	I1204 15:39:28.895425   10206 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1204 15:39:28.907916   10206 logs.go:282] 2 containers: [4bf3c6abead9 4a54859eb1e5]
	I1204 15:39:28.907938   10206 logs.go:123] Gathering logs for kube-apiserver [bc235f7c7828] ...
	I1204 15:39:28.907943   10206 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bc235f7c7828"
	I1204 15:39:28.921889   10206 logs.go:123] Gathering logs for etcd [fc48923b85fc] ...
	I1204 15:39:28.921901   10206 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fc48923b85fc"
	I1204 15:39:28.935415   10206 logs.go:123] Gathering logs for etcd [4f1790594676] ...
	I1204 15:39:28.935426   10206 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4f1790594676"
	I1204 15:39:28.949703   10206 logs.go:123] Gathering logs for kube-scheduler [1f92512b0d23] ...
	I1204 15:39:28.949713   10206 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1f92512b0d23"
	I1204 15:39:28.961404   10206 logs.go:123] Gathering logs for kube-controller-manager [4469186baff0] ...
	I1204 15:39:28.961415   10206 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4469186baff0"
	I1204 15:39:28.978334   10206 logs.go:123] Gathering logs for kube-controller-manager [63473edefa8f] ...
	I1204 15:39:28.978344   10206 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 63473edefa8f"
	I1204 15:39:28.993521   10206 logs.go:123] Gathering logs for kubelet ...
	I1204 15:39:28.993533   10206 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1204 15:39:29.032810   10206 logs.go:123] Gathering logs for dmesg ...
	I1204 15:39:29.032821   10206 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1204 15:39:29.037480   10206 logs.go:123] Gathering logs for kube-proxy [e290e266aab8] ...
	I1204 15:39:29.037489   10206 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e290e266aab8"
	I1204 15:39:29.051951   10206 logs.go:123] Gathering logs for storage-provisioner [4bf3c6abead9] ...
	I1204 15:39:29.051961   10206 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4bf3c6abead9"
	I1204 15:39:29.064183   10206 logs.go:123] Gathering logs for container status ...
	I1204 15:39:29.064194   10206 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1204 15:39:29.076357   10206 logs.go:123] Gathering logs for kube-apiserver [1d1ae4543cd6] ...
	I1204 15:39:29.076370   10206 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1d1ae4543cd6"
	I1204 15:39:29.119194   10206 logs.go:123] Gathering logs for coredns [467f47d9f689] ...
	I1204 15:39:29.119207   10206 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 467f47d9f689"
	I1204 15:39:29.130964   10206 logs.go:123] Gathering logs for kube-scheduler [b0ad1b935d01] ...
	I1204 15:39:29.130976   10206 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b0ad1b935d01"
	I1204 15:39:29.150521   10206 logs.go:123] Gathering logs for describe nodes ...
	I1204 15:39:29.150532   10206 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1204 15:39:29.185584   10206 logs.go:123] Gathering logs for storage-provisioner [4a54859eb1e5] ...
	I1204 15:39:29.185596   10206 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4a54859eb1e5"
	I1204 15:39:29.197369   10206 logs.go:123] Gathering logs for Docker ...
	I1204 15:39:29.197382   10206 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1204 15:39:31.724167   10206 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1204 15:39:36.726618   10206 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1204 15:39:36.726839   10206 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1204 15:39:36.746903   10206 logs.go:282] 2 containers: [bc235f7c7828 1d1ae4543cd6]
	I1204 15:39:36.747013   10206 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1204 15:39:36.760964   10206 logs.go:282] 2 containers: [fc48923b85fc 4f1790594676]
	I1204 15:39:36.761049   10206 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1204 15:39:36.772469   10206 logs.go:282] 1 containers: [467f47d9f689]
	I1204 15:39:36.772560   10206 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1204 15:39:36.782811   10206 logs.go:282] 2 containers: [1f92512b0d23 b0ad1b935d01]
	I1204 15:39:36.782911   10206 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1204 15:39:36.793334   10206 logs.go:282] 1 containers: [e290e266aab8]
	I1204 15:39:36.793401   10206 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1204 15:39:36.804992   10206 logs.go:282] 2 containers: [4469186baff0 63473edefa8f]
	I1204 15:39:36.805067   10206 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1204 15:39:36.815849   10206 logs.go:282] 0 containers: []
	W1204 15:39:36.815863   10206 logs.go:284] No container was found matching "kindnet"
	I1204 15:39:36.815928   10206 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1204 15:39:36.831130   10206 logs.go:282] 2 containers: [4bf3c6abead9 4a54859eb1e5]
	I1204 15:39:36.831150   10206 logs.go:123] Gathering logs for dmesg ...
	I1204 15:39:36.831155   10206 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1204 15:39:36.845862   10206 logs.go:123] Gathering logs for etcd [4f1790594676] ...
	I1204 15:39:36.845873   10206 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4f1790594676"
	I1204 15:39:36.866037   10206 logs.go:123] Gathering logs for kube-controller-manager [63473edefa8f] ...
	I1204 15:39:36.866047   10206 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 63473edefa8f"
	I1204 15:39:36.890945   10206 logs.go:123] Gathering logs for Docker ...
	I1204 15:39:36.890956   10206 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1204 15:39:36.916905   10206 logs.go:123] Gathering logs for kubelet ...
	I1204 15:39:36.916914   10206 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1204 15:39:36.956697   10206 logs.go:123] Gathering logs for describe nodes ...
	I1204 15:39:36.956709   10206 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1204 15:39:36.997177   10206 logs.go:123] Gathering logs for kube-apiserver [bc235f7c7828] ...
	I1204 15:39:36.997192   10206 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bc235f7c7828"
	I1204 15:39:37.011335   10206 logs.go:123] Gathering logs for coredns [467f47d9f689] ...
	I1204 15:39:37.011345   10206 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 467f47d9f689"
	I1204 15:39:37.026486   10206 logs.go:123] Gathering logs for kube-controller-manager [4469186baff0] ...
	I1204 15:39:37.026497   10206 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4469186baff0"
	I1204 15:39:37.048729   10206 logs.go:123] Gathering logs for kube-apiserver [1d1ae4543cd6] ...
	I1204 15:39:37.048745   10206 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1d1ae4543cd6"
	I1204 15:39:37.087842   10206 logs.go:123] Gathering logs for storage-provisioner [4a54859eb1e5] ...
	I1204 15:39:37.087854   10206 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4a54859eb1e5"
	I1204 15:39:37.099142   10206 logs.go:123] Gathering logs for container status ...
	I1204 15:39:37.099156   10206 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1204 15:39:37.111866   10206 logs.go:123] Gathering logs for etcd [fc48923b85fc] ...
	I1204 15:39:37.111878   10206 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fc48923b85fc"
	I1204 15:39:37.126949   10206 logs.go:123] Gathering logs for kube-scheduler [1f92512b0d23] ...
	I1204 15:39:37.126991   10206 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1f92512b0d23"
	I1204 15:39:37.140836   10206 logs.go:123] Gathering logs for kube-scheduler [b0ad1b935d01] ...
	I1204 15:39:37.140844   10206 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b0ad1b935d01"
	I1204 15:39:37.156020   10206 logs.go:123] Gathering logs for kube-proxy [e290e266aab8] ...
	I1204 15:39:37.156036   10206 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e290e266aab8"
	I1204 15:39:37.167543   10206 logs.go:123] Gathering logs for storage-provisioner [4bf3c6abead9] ...
	I1204 15:39:37.167553   10206 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4bf3c6abead9"
	I1204 15:39:39.680799   10206 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1204 15:39:44.683156   10206 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1204 15:39:44.683406   10206 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1204 15:39:44.717417   10206 logs.go:282] 2 containers: [bc235f7c7828 1d1ae4543cd6]
	I1204 15:39:44.717532   10206 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1204 15:39:44.733342   10206 logs.go:282] 2 containers: [fc48923b85fc 4f1790594676]
	I1204 15:39:44.733430   10206 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1204 15:39:44.745288   10206 logs.go:282] 1 containers: [467f47d9f689]
	I1204 15:39:44.745371   10206 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1204 15:39:44.756495   10206 logs.go:282] 2 containers: [1f92512b0d23 b0ad1b935d01]
	I1204 15:39:44.756577   10206 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1204 15:39:44.766907   10206 logs.go:282] 1 containers: [e290e266aab8]
	I1204 15:39:44.766985   10206 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1204 15:39:44.782194   10206 logs.go:282] 2 containers: [4469186baff0 63473edefa8f]
	I1204 15:39:44.782274   10206 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1204 15:39:44.792720   10206 logs.go:282] 0 containers: []
	W1204 15:39:44.792732   10206 logs.go:284] No container was found matching "kindnet"
	I1204 15:39:44.792802   10206 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1204 15:39:44.805885   10206 logs.go:282] 2 containers: [4bf3c6abead9 4a54859eb1e5]
	I1204 15:39:44.805903   10206 logs.go:123] Gathering logs for describe nodes ...
	I1204 15:39:44.805909   10206 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1204 15:39:44.841033   10206 logs.go:123] Gathering logs for etcd [fc48923b85fc] ...
	I1204 15:39:44.841046   10206 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fc48923b85fc"
	I1204 15:39:44.860926   10206 logs.go:123] Gathering logs for storage-provisioner [4bf3c6abead9] ...
	I1204 15:39:44.860939   10206 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4bf3c6abead9"
	I1204 15:39:44.872657   10206 logs.go:123] Gathering logs for Docker ...
	I1204 15:39:44.872669   10206 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1204 15:39:44.897654   10206 logs.go:123] Gathering logs for dmesg ...
	I1204 15:39:44.897664   10206 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1204 15:39:44.901779   10206 logs.go:123] Gathering logs for kube-apiserver [bc235f7c7828] ...
	I1204 15:39:44.901786   10206 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bc235f7c7828"
	I1204 15:39:44.919543   10206 logs.go:123] Gathering logs for kube-scheduler [1f92512b0d23] ...
	I1204 15:39:44.919557   10206 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1f92512b0d23"
	I1204 15:39:44.935044   10206 logs.go:123] Gathering logs for kube-controller-manager [63473edefa8f] ...
	I1204 15:39:44.935057   10206 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 63473edefa8f"
	I1204 15:39:44.947749   10206 logs.go:123] Gathering logs for kube-apiserver [1d1ae4543cd6] ...
	I1204 15:39:44.947761   10206 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1d1ae4543cd6"
	I1204 15:39:44.985918   10206 logs.go:123] Gathering logs for etcd [4f1790594676] ...
	I1204 15:39:44.985931   10206 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4f1790594676"
	I1204 15:39:45.001073   10206 logs.go:123] Gathering logs for storage-provisioner [4a54859eb1e5] ...
	I1204 15:39:45.001083   10206 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4a54859eb1e5"
	I1204 15:39:45.012770   10206 logs.go:123] Gathering logs for container status ...
	I1204 15:39:45.012781   10206 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1204 15:39:45.025513   10206 logs.go:123] Gathering logs for kube-controller-manager [4469186baff0] ...
	I1204 15:39:45.025523   10206 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4469186baff0"
	I1204 15:39:45.043229   10206 logs.go:123] Gathering logs for kubelet ...
	I1204 15:39:45.043242   10206 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1204 15:39:45.080926   10206 logs.go:123] Gathering logs for coredns [467f47d9f689] ...
	I1204 15:39:45.080936   10206 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 467f47d9f689"
	I1204 15:39:45.092246   10206 logs.go:123] Gathering logs for kube-scheduler [b0ad1b935d01] ...
	I1204 15:39:45.092257   10206 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b0ad1b935d01"
	I1204 15:39:45.107800   10206 logs.go:123] Gathering logs for kube-proxy [e290e266aab8] ...
	I1204 15:39:45.107811   10206 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e290e266aab8"
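	The cycle above is minikube's diagnostic fan-out: for each control-plane component it asks Docker for matching container IDs, then tails the last 400 lines of each. Below is a minimal sketch of that discovery-then-collect pattern, assuming plain os/exec run locally in place of minikube's ssh_runner (which executes the same commands inside the guest VM); the function names are illustrative, not minikube's.

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// listContainers mirrors the first step in each cycle: ask Docker for
// every container ID (running or exited) whose name matches k8s_<component>.
func listContainers(component string) ([]string, error) {
	out, err := exec.Command("docker", "ps", "-a",
		"--filter", "name=k8s_"+component,
		"--format", "{{.ID}}").Output()
	if err != nil {
		return nil, err
	}
	return strings.Fields(string(out)), nil
}

// tailLogs mirrors the follow-up step: fetch the last 400 log lines
// for one container ID.
func tailLogs(id string) (string, error) {
	out, err := exec.Command("docker", "logs", "--tail", "400", id).CombinedOutput()
	return string(out), err
}

func main() {
	for _, component := range []string{"kube-apiserver", "etcd", "coredns"} {
		ids, err := listContainers(component)
		if err != nil {
			fmt.Println("list failed:", err)
			continue
		}
		fmt.Printf("%d containers for %s: %v\n", len(ids), component, ids)
		for _, id := range ids {
			logs, _ := tailLogs(id)
			_ = logs // in the real flow these are bundled into the report
		}
	}
}

	Listing with docker ps -a rather than docker ps is what lets the loop pick up exited containers as well, which is why most components report two IDs above (a crashed instance plus its restart).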
	I1204 15:39:47.621948   10206 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1204 15:39:52.624374   10206 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1204 15:39:52.624545   10206 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1204 15:39:52.636847   10206 logs.go:282] 2 containers: [bc235f7c7828 1d1ae4543cd6]
	I1204 15:39:52.636934   10206 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1204 15:39:52.648061   10206 logs.go:282] 2 containers: [fc48923b85fc 4f1790594676]
	I1204 15:39:52.648138   10206 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1204 15:39:52.658851   10206 logs.go:282] 1 containers: [467f47d9f689]
	I1204 15:39:52.658932   10206 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1204 15:39:52.673000   10206 logs.go:282] 2 containers: [1f92512b0d23 b0ad1b935d01]
	I1204 15:39:52.673080   10206 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1204 15:39:52.683192   10206 logs.go:282] 1 containers: [e290e266aab8]
	I1204 15:39:52.683274   10206 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1204 15:39:52.700660   10206 logs.go:282] 2 containers: [4469186baff0 63473edefa8f]
	I1204 15:39:52.700733   10206 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1204 15:39:52.713316   10206 logs.go:282] 0 containers: []
	W1204 15:39:52.713327   10206 logs.go:284] No container was found matching "kindnet"
	I1204 15:39:52.713401   10206 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1204 15:39:52.723617   10206 logs.go:282] 2 containers: [4bf3c6abead9 4a54859eb1e5]
	I1204 15:39:52.723643   10206 logs.go:123] Gathering logs for etcd [4f1790594676] ...
	I1204 15:39:52.723655   10206 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4f1790594676"
	I1204 15:39:52.738330   10206 logs.go:123] Gathering logs for coredns [467f47d9f689] ...
	I1204 15:39:52.738344   10206 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 467f47d9f689"
	I1204 15:39:52.751263   10206 logs.go:123] Gathering logs for kube-scheduler [b0ad1b935d01] ...
	I1204 15:39:52.751276   10206 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b0ad1b935d01"
	I1204 15:39:52.771471   10206 logs.go:123] Gathering logs for kube-controller-manager [4469186baff0] ...
	I1204 15:39:52.771484   10206 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4469186baff0"
	I1204 15:39:52.788985   10206 logs.go:123] Gathering logs for kube-controller-manager [63473edefa8f] ...
	I1204 15:39:52.788995   10206 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 63473edefa8f"
	I1204 15:39:52.802441   10206 logs.go:123] Gathering logs for kubelet ...
	I1204 15:39:52.802456   10206 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1204 15:39:52.839973   10206 logs.go:123] Gathering logs for kube-scheduler [1f92512b0d23] ...
	I1204 15:39:52.839987   10206 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1f92512b0d23"
	I1204 15:39:52.851565   10206 logs.go:123] Gathering logs for describe nodes ...
	I1204 15:39:52.851576   10206 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1204 15:39:52.888837   10206 logs.go:123] Gathering logs for kube-proxy [e290e266aab8] ...
	I1204 15:39:52.888848   10206 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e290e266aab8"
	I1204 15:39:52.900697   10206 logs.go:123] Gathering logs for Docker ...
	I1204 15:39:52.900707   10206 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1204 15:39:52.924250   10206 logs.go:123] Gathering logs for dmesg ...
	I1204 15:39:52.924260   10206 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1204 15:39:52.929349   10206 logs.go:123] Gathering logs for kube-apiserver [1d1ae4543cd6] ...
	I1204 15:39:52.929357   10206 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1d1ae4543cd6"
	I1204 15:39:52.972535   10206 logs.go:123] Gathering logs for etcd [fc48923b85fc] ...
	I1204 15:39:52.972545   10206 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fc48923b85fc"
	I1204 15:39:52.986453   10206 logs.go:123] Gathering logs for storage-provisioner [4bf3c6abead9] ...
	I1204 15:39:52.986466   10206 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4bf3c6abead9"
	I1204 15:39:52.998801   10206 logs.go:123] Gathering logs for storage-provisioner [4a54859eb1e5] ...
	I1204 15:39:52.998813   10206 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4a54859eb1e5"
	I1204 15:39:53.009826   10206 logs.go:123] Gathering logs for container status ...
	I1204 15:39:53.009837   10206 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1204 15:39:53.022167   10206 logs.go:123] Gathering logs for kube-apiserver [bc235f7c7828] ...
	I1204 15:39:53.022179   10206 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bc235f7c7828"
	I1204 15:39:55.538830   10206 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1204 15:40:00.541284   10206 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1204 15:40:00.541547   10206 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1204 15:40:00.565963   10206 logs.go:282] 2 containers: [bc235f7c7828 1d1ae4543cd6]
	I1204 15:40:00.566095   10206 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1204 15:40:00.582174   10206 logs.go:282] 2 containers: [fc48923b85fc 4f1790594676]
	I1204 15:40:00.582276   10206 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1204 15:40:00.595008   10206 logs.go:282] 1 containers: [467f47d9f689]
	I1204 15:40:00.595083   10206 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1204 15:40:00.606117   10206 logs.go:282] 2 containers: [1f92512b0d23 b0ad1b935d01]
	I1204 15:40:00.606201   10206 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1204 15:40:00.616277   10206 logs.go:282] 1 containers: [e290e266aab8]
	I1204 15:40:00.616352   10206 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1204 15:40:00.627122   10206 logs.go:282] 2 containers: [4469186baff0 63473edefa8f]
	I1204 15:40:00.627204   10206 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1204 15:40:00.637742   10206 logs.go:282] 0 containers: []
	W1204 15:40:00.637755   10206 logs.go:284] No container was found matching "kindnet"
	I1204 15:40:00.637821   10206 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1204 15:40:00.648926   10206 logs.go:282] 2 containers: [4bf3c6abead9 4a54859eb1e5]
	I1204 15:40:00.648944   10206 logs.go:123] Gathering logs for etcd [4f1790594676] ...
	I1204 15:40:00.648950   10206 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4f1790594676"
	I1204 15:40:00.663760   10206 logs.go:123] Gathering logs for kube-proxy [e290e266aab8] ...
	I1204 15:40:00.663774   10206 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e290e266aab8"
	I1204 15:40:00.676074   10206 logs.go:123] Gathering logs for dmesg ...
	I1204 15:40:00.676085   10206 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1204 15:40:00.680857   10206 logs.go:123] Gathering logs for kube-apiserver [1d1ae4543cd6] ...
	I1204 15:40:00.680866   10206 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1d1ae4543cd6"
	I1204 15:40:00.718318   10206 logs.go:123] Gathering logs for kube-controller-manager [63473edefa8f] ...
	I1204 15:40:00.718330   10206 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 63473edefa8f"
	I1204 15:40:00.739084   10206 logs.go:123] Gathering logs for container status ...
	I1204 15:40:00.739096   10206 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1204 15:40:00.751307   10206 logs.go:123] Gathering logs for kubelet ...
	I1204 15:40:00.751317   10206 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1204 15:40:00.791121   10206 logs.go:123] Gathering logs for kube-scheduler [1f92512b0d23] ...
	I1204 15:40:00.791131   10206 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1f92512b0d23"
	I1204 15:40:00.802764   10206 logs.go:123] Gathering logs for coredns [467f47d9f689] ...
	I1204 15:40:00.802773   10206 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 467f47d9f689"
	I1204 15:40:00.813862   10206 logs.go:123] Gathering logs for storage-provisioner [4bf3c6abead9] ...
	I1204 15:40:00.813872   10206 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4bf3c6abead9"
	I1204 15:40:00.825626   10206 logs.go:123] Gathering logs for etcd [fc48923b85fc] ...
	I1204 15:40:00.825639   10206 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fc48923b85fc"
	I1204 15:40:00.839573   10206 logs.go:123] Gathering logs for kube-scheduler [b0ad1b935d01] ...
	I1204 15:40:00.839584   10206 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b0ad1b935d01"
	I1204 15:40:00.854781   10206 logs.go:123] Gathering logs for kube-controller-manager [4469186baff0] ...
	I1204 15:40:00.854791   10206 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4469186baff0"
	I1204 15:40:00.871924   10206 logs.go:123] Gathering logs for storage-provisioner [4a54859eb1e5] ...
	I1204 15:40:00.871934   10206 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4a54859eb1e5"
	I1204 15:40:00.886356   10206 logs.go:123] Gathering logs for Docker ...
	I1204 15:40:00.886366   10206 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1204 15:40:00.910877   10206 logs.go:123] Gathering logs for describe nodes ...
	I1204 15:40:00.910887   10206 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1204 15:40:00.946768   10206 logs.go:123] Gathering logs for kube-apiserver [bc235f7c7828] ...
	I1204 15:40:00.946785   10206 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bc235f7c7828"
	I1204 15:40:03.462719   10206 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1204 15:40:08.465166   10206 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1204 15:40:08.465548   10206 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1204 15:40:08.496457   10206 logs.go:282] 2 containers: [bc235f7c7828 1d1ae4543cd6]
	I1204 15:40:08.496599   10206 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1204 15:40:08.514955   10206 logs.go:282] 2 containers: [fc48923b85fc 4f1790594676]
	I1204 15:40:08.515064   10206 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1204 15:40:08.528919   10206 logs.go:282] 1 containers: [467f47d9f689]
	I1204 15:40:08.529004   10206 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1204 15:40:08.541262   10206 logs.go:282] 2 containers: [1f92512b0d23 b0ad1b935d01]
	I1204 15:40:08.541370   10206 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1204 15:40:08.551749   10206 logs.go:282] 1 containers: [e290e266aab8]
	I1204 15:40:08.551823   10206 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1204 15:40:08.575004   10206 logs.go:282] 2 containers: [4469186baff0 63473edefa8f]
	I1204 15:40:08.575083   10206 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1204 15:40:08.585601   10206 logs.go:282] 0 containers: []
	W1204 15:40:08.585612   10206 logs.go:284] No container was found matching "kindnet"
	I1204 15:40:08.585672   10206 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1204 15:40:08.596468   10206 logs.go:282] 2 containers: [4bf3c6abead9 4a54859eb1e5]
	I1204 15:40:08.596487   10206 logs.go:123] Gathering logs for kube-apiserver [bc235f7c7828] ...
	I1204 15:40:08.596492   10206 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bc235f7c7828"
	I1204 15:40:08.610780   10206 logs.go:123] Gathering logs for kube-controller-manager [63473edefa8f] ...
	I1204 15:40:08.610793   10206 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 63473edefa8f"
	I1204 15:40:08.627049   10206 logs.go:123] Gathering logs for storage-provisioner [4a54859eb1e5] ...
	I1204 15:40:08.627062   10206 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4a54859eb1e5"
	I1204 15:40:08.638681   10206 logs.go:123] Gathering logs for container status ...
	I1204 15:40:08.638692   10206 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1204 15:40:08.650893   10206 logs.go:123] Gathering logs for kube-scheduler [1f92512b0d23] ...
	I1204 15:40:08.650907   10206 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1f92512b0d23"
	I1204 15:40:08.662722   10206 logs.go:123] Gathering logs for kube-scheduler [b0ad1b935d01] ...
	I1204 15:40:08.662732   10206 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b0ad1b935d01"
	I1204 15:40:08.678207   10206 logs.go:123] Gathering logs for Docker ...
	I1204 15:40:08.678217   10206 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1204 15:40:08.702861   10206 logs.go:123] Gathering logs for storage-provisioner [4bf3c6abead9] ...
	I1204 15:40:08.702868   10206 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4bf3c6abead9"
	I1204 15:40:08.726088   10206 logs.go:123] Gathering logs for describe nodes ...
	I1204 15:40:08.726101   10206 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1204 15:40:08.760789   10206 logs.go:123] Gathering logs for kube-apiserver [1d1ae4543cd6] ...
	I1204 15:40:08.760803   10206 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1d1ae4543cd6"
	I1204 15:40:08.798173   10206 logs.go:123] Gathering logs for coredns [467f47d9f689] ...
	I1204 15:40:08.798184   10206 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 467f47d9f689"
	I1204 15:40:08.809252   10206 logs.go:123] Gathering logs for etcd [4f1790594676] ...
	I1204 15:40:08.809265   10206 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4f1790594676"
	I1204 15:40:08.825480   10206 logs.go:123] Gathering logs for kube-proxy [e290e266aab8] ...
	I1204 15:40:08.825491   10206 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e290e266aab8"
	I1204 15:40:08.837823   10206 logs.go:123] Gathering logs for kube-controller-manager [4469186baff0] ...
	I1204 15:40:08.837836   10206 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4469186baff0"
	I1204 15:40:08.856047   10206 logs.go:123] Gathering logs for kubelet ...
	I1204 15:40:08.856058   10206 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1204 15:40:08.893515   10206 logs.go:123] Gathering logs for dmesg ...
	I1204 15:40:08.893526   10206 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1204 15:40:08.897407   10206 logs.go:123] Gathering logs for etcd [fc48923b85fc] ...
	I1204 15:40:08.897413   10206 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fc48923b85fc"
	I1204 15:40:11.413758   10206 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1204 15:40:16.416065   10206 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1204 15:40:16.416172   10206 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1204 15:40:16.428721   10206 logs.go:282] 2 containers: [bc235f7c7828 1d1ae4543cd6]
	I1204 15:40:16.428806   10206 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1204 15:40:16.439292   10206 logs.go:282] 2 containers: [fc48923b85fc 4f1790594676]
	I1204 15:40:16.439370   10206 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1204 15:40:16.449554   10206 logs.go:282] 1 containers: [467f47d9f689]
	I1204 15:40:16.449633   10206 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1204 15:40:16.459909   10206 logs.go:282] 2 containers: [1f92512b0d23 b0ad1b935d01]
	I1204 15:40:16.459984   10206 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1204 15:40:16.470092   10206 logs.go:282] 1 containers: [e290e266aab8]
	I1204 15:40:16.470173   10206 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1204 15:40:16.480922   10206 logs.go:282] 2 containers: [4469186baff0 63473edefa8f]
	I1204 15:40:16.481000   10206 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1204 15:40:16.496130   10206 logs.go:282] 0 containers: []
	W1204 15:40:16.496140   10206 logs.go:284] No container was found matching "kindnet"
	I1204 15:40:16.496209   10206 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1204 15:40:16.506437   10206 logs.go:282] 2 containers: [4bf3c6abead9 4a54859eb1e5]
	I1204 15:40:16.506455   10206 logs.go:123] Gathering logs for coredns [467f47d9f689] ...
	I1204 15:40:16.506459   10206 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 467f47d9f689"
	I1204 15:40:16.517611   10206 logs.go:123] Gathering logs for kube-proxy [e290e266aab8] ...
	I1204 15:40:16.517622   10206 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e290e266aab8"
	I1204 15:40:16.534633   10206 logs.go:123] Gathering logs for kube-apiserver [bc235f7c7828] ...
	I1204 15:40:16.534646   10206 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bc235f7c7828"
	I1204 15:40:16.548947   10206 logs.go:123] Gathering logs for kube-apiserver [1d1ae4543cd6] ...
	I1204 15:40:16.548957   10206 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1d1ae4543cd6"
	I1204 15:40:16.586084   10206 logs.go:123] Gathering logs for dmesg ...
	I1204 15:40:16.586095   10206 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1204 15:40:16.590214   10206 logs.go:123] Gathering logs for kube-scheduler [1f92512b0d23] ...
	I1204 15:40:16.590222   10206 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1f92512b0d23"
	I1204 15:40:16.601951   10206 logs.go:123] Gathering logs for kube-scheduler [b0ad1b935d01] ...
	I1204 15:40:16.601961   10206 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b0ad1b935d01"
	I1204 15:40:16.617763   10206 logs.go:123] Gathering logs for kube-controller-manager [63473edefa8f] ...
	I1204 15:40:16.617774   10206 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 63473edefa8f"
	I1204 15:40:16.631266   10206 logs.go:123] Gathering logs for Docker ...
	I1204 15:40:16.631277   10206 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1204 15:40:16.656193   10206 logs.go:123] Gathering logs for container status ...
	I1204 15:40:16.656200   10206 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1204 15:40:16.668121   10206 logs.go:123] Gathering logs for kubelet ...
	I1204 15:40:16.668131   10206 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1204 15:40:16.707274   10206 logs.go:123] Gathering logs for etcd [4f1790594676] ...
	I1204 15:40:16.707285   10206 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4f1790594676"
	I1204 15:40:16.722248   10206 logs.go:123] Gathering logs for kube-controller-manager [4469186baff0] ...
	I1204 15:40:16.722259   10206 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4469186baff0"
	I1204 15:40:16.739247   10206 logs.go:123] Gathering logs for storage-provisioner [4bf3c6abead9] ...
	I1204 15:40:16.739260   10206 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4bf3c6abead9"
	I1204 15:40:16.750608   10206 logs.go:123] Gathering logs for storage-provisioner [4a54859eb1e5] ...
	I1204 15:40:16.750618   10206 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4a54859eb1e5"
	I1204 15:40:16.761841   10206 logs.go:123] Gathering logs for describe nodes ...
	I1204 15:40:16.761852   10206 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1204 15:40:16.797813   10206 logs.go:123] Gathering logs for etcd [fc48923b85fc] ...
	I1204 15:40:16.797824   10206 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fc48923b85fc"
	I1204 15:40:19.313748   10206 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1204 15:40:24.315479   10206 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1204 15:40:24.315649   10206 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1204 15:40:24.332046   10206 logs.go:282] 2 containers: [bc235f7c7828 1d1ae4543cd6]
	I1204 15:40:24.332148   10206 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1204 15:40:24.346645   10206 logs.go:282] 2 containers: [fc48923b85fc 4f1790594676]
	I1204 15:40:24.346722   10206 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1204 15:40:24.357300   10206 logs.go:282] 1 containers: [467f47d9f689]
	I1204 15:40:24.357384   10206 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1204 15:40:24.367715   10206 logs.go:282] 2 containers: [1f92512b0d23 b0ad1b935d01]
	I1204 15:40:24.367799   10206 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1204 15:40:24.378308   10206 logs.go:282] 1 containers: [e290e266aab8]
	I1204 15:40:24.378384   10206 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1204 15:40:24.388970   10206 logs.go:282] 2 containers: [4469186baff0 63473edefa8f]
	I1204 15:40:24.389050   10206 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1204 15:40:24.399912   10206 logs.go:282] 0 containers: []
	W1204 15:40:24.399923   10206 logs.go:284] No container was found matching "kindnet"
	I1204 15:40:24.399992   10206 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1204 15:40:24.410658   10206 logs.go:282] 2 containers: [4bf3c6abead9 4a54859eb1e5]
	I1204 15:40:24.410678   10206 logs.go:123] Gathering logs for coredns [467f47d9f689] ...
	I1204 15:40:24.410683   10206 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 467f47d9f689"
	I1204 15:40:24.422482   10206 logs.go:123] Gathering logs for kube-scheduler [1f92512b0d23] ...
	I1204 15:40:24.422493   10206 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1f92512b0d23"
	I1204 15:40:24.434317   10206 logs.go:123] Gathering logs for kube-scheduler [b0ad1b935d01] ...
	I1204 15:40:24.434328   10206 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b0ad1b935d01"
	I1204 15:40:24.449407   10206 logs.go:123] Gathering logs for kube-proxy [e290e266aab8] ...
	I1204 15:40:24.449418   10206 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e290e266aab8"
	I1204 15:40:24.460770   10206 logs.go:123] Gathering logs for kubelet ...
	I1204 15:40:24.460779   10206 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1204 15:40:24.497732   10206 logs.go:123] Gathering logs for describe nodes ...
	I1204 15:40:24.497742   10206 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1204 15:40:24.532685   10206 logs.go:123] Gathering logs for etcd [fc48923b85fc] ...
	I1204 15:40:24.532695   10206 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fc48923b85fc"
	I1204 15:40:24.546916   10206 logs.go:123] Gathering logs for kube-controller-manager [4469186baff0] ...
	I1204 15:40:24.546928   10206 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4469186baff0"
	I1204 15:40:24.564311   10206 logs.go:123] Gathering logs for storage-provisioner [4a54859eb1e5] ...
	I1204 15:40:24.564325   10206 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4a54859eb1e5"
	I1204 15:40:24.578354   10206 logs.go:123] Gathering logs for dmesg ...
	I1204 15:40:24.578367   10206 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1204 15:40:24.582541   10206 logs.go:123] Gathering logs for kube-apiserver [1d1ae4543cd6] ...
	I1204 15:40:24.582549   10206 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1d1ae4543cd6"
	I1204 15:40:24.620056   10206 logs.go:123] Gathering logs for etcd [4f1790594676] ...
	I1204 15:40:24.620068   10206 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4f1790594676"
	I1204 15:40:24.634711   10206 logs.go:123] Gathering logs for container status ...
	I1204 15:40:24.634721   10206 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1204 15:40:24.647114   10206 logs.go:123] Gathering logs for kube-apiserver [bc235f7c7828] ...
	I1204 15:40:24.647126   10206 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bc235f7c7828"
	I1204 15:40:24.661446   10206 logs.go:123] Gathering logs for kube-controller-manager [63473edefa8f] ...
	I1204 15:40:24.661457   10206 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 63473edefa8f"
	I1204 15:40:24.674271   10206 logs.go:123] Gathering logs for storage-provisioner [4bf3c6abead9] ...
	I1204 15:40:24.674282   10206 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4bf3c6abead9"
	I1204 15:40:24.685899   10206 logs.go:123] Gathering logs for Docker ...
	I1204 15:40:24.685910   10206 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1204 15:40:27.213232   10206 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1204 15:40:32.215568   10206 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1204 15:40:32.215690   10206 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1204 15:40:32.228059   10206 logs.go:282] 2 containers: [bc235f7c7828 1d1ae4543cd6]
	I1204 15:40:32.228141   10206 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1204 15:40:32.241256   10206 logs.go:282] 2 containers: [fc48923b85fc 4f1790594676]
	I1204 15:40:32.241333   10206 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1204 15:40:32.252162   10206 logs.go:282] 1 containers: [467f47d9f689]
	I1204 15:40:32.252248   10206 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1204 15:40:32.263299   10206 logs.go:282] 2 containers: [1f92512b0d23 b0ad1b935d01]
	I1204 15:40:32.263383   10206 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1204 15:40:32.274742   10206 logs.go:282] 1 containers: [e290e266aab8]
	I1204 15:40:32.274817   10206 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1204 15:40:32.285280   10206 logs.go:282] 2 containers: [4469186baff0 63473edefa8f]
	I1204 15:40:32.285353   10206 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1204 15:40:32.296138   10206 logs.go:282] 0 containers: []
	W1204 15:40:32.296149   10206 logs.go:284] No container was found matching "kindnet"
	I1204 15:40:32.296210   10206 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1204 15:40:32.306682   10206 logs.go:282] 2 containers: [4bf3c6abead9 4a54859eb1e5]
	I1204 15:40:32.306702   10206 logs.go:123] Gathering logs for kube-proxy [e290e266aab8] ...
	I1204 15:40:32.306707   10206 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e290e266aab8"
	I1204 15:40:32.318867   10206 logs.go:123] Gathering logs for kube-controller-manager [63473edefa8f] ...
	I1204 15:40:32.318877   10206 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 63473edefa8f"
	I1204 15:40:32.332966   10206 logs.go:123] Gathering logs for storage-provisioner [4bf3c6abead9] ...
	I1204 15:40:32.332977   10206 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4bf3c6abead9"
	I1204 15:40:32.344593   10206 logs.go:123] Gathering logs for kube-scheduler [1f92512b0d23] ...
	I1204 15:40:32.344604   10206 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1f92512b0d23"
	I1204 15:40:32.355970   10206 logs.go:123] Gathering logs for kube-apiserver [1d1ae4543cd6] ...
	I1204 15:40:32.355983   10206 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1d1ae4543cd6"
	I1204 15:40:32.404514   10206 logs.go:123] Gathering logs for storage-provisioner [4a54859eb1e5] ...
	I1204 15:40:32.404524   10206 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4a54859eb1e5"
	I1204 15:40:32.416474   10206 logs.go:123] Gathering logs for container status ...
	I1204 15:40:32.416486   10206 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1204 15:40:32.428483   10206 logs.go:123] Gathering logs for kubelet ...
	I1204 15:40:32.428493   10206 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1204 15:40:32.466557   10206 logs.go:123] Gathering logs for kube-apiserver [bc235f7c7828] ...
	I1204 15:40:32.466567   10206 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bc235f7c7828"
	I1204 15:40:32.480595   10206 logs.go:123] Gathering logs for etcd [fc48923b85fc] ...
	I1204 15:40:32.480607   10206 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fc48923b85fc"
	I1204 15:40:32.494656   10206 logs.go:123] Gathering logs for etcd [4f1790594676] ...
	I1204 15:40:32.494665   10206 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4f1790594676"
	I1204 15:40:32.508996   10206 logs.go:123] Gathering logs for coredns [467f47d9f689] ...
	I1204 15:40:32.509009   10206 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 467f47d9f689"
	I1204 15:40:32.520411   10206 logs.go:123] Gathering logs for kube-scheduler [b0ad1b935d01] ...
	I1204 15:40:32.520421   10206 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b0ad1b935d01"
	I1204 15:40:32.536901   10206 logs.go:123] Gathering logs for kube-controller-manager [4469186baff0] ...
	I1204 15:40:32.536912   10206 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4469186baff0"
	I1204 15:40:32.554249   10206 logs.go:123] Gathering logs for dmesg ...
	I1204 15:40:32.554259   10206 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1204 15:40:32.558350   10206 logs.go:123] Gathering logs for Docker ...
	I1204 15:40:32.558356   10206 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1204 15:40:32.580969   10206 logs.go:123] Gathering logs for describe nodes ...
	I1204 15:40:32.580979   10206 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1204 15:40:35.119051   10206 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1204 15:40:40.121563   10206 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1204 15:40:40.121771   10206 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1204 15:40:40.148592   10206 logs.go:282] 2 containers: [bc235f7c7828 1d1ae4543cd6]
	I1204 15:40:40.148725   10206 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1204 15:40:40.170784   10206 logs.go:282] 2 containers: [fc48923b85fc 4f1790594676]
	I1204 15:40:40.170884   10206 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1204 15:40:40.183166   10206 logs.go:282] 1 containers: [467f47d9f689]
	I1204 15:40:40.183248   10206 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1204 15:40:40.196430   10206 logs.go:282] 2 containers: [1f92512b0d23 b0ad1b935d01]
	I1204 15:40:40.196506   10206 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1204 15:40:40.206930   10206 logs.go:282] 1 containers: [e290e266aab8]
	I1204 15:40:40.207022   10206 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1204 15:40:40.219611   10206 logs.go:282] 2 containers: [4469186baff0 63473edefa8f]
	I1204 15:40:40.219689   10206 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1204 15:40:40.229847   10206 logs.go:282] 0 containers: []
	W1204 15:40:40.229884   10206 logs.go:284] No container was found matching "kindnet"
	I1204 15:40:40.229949   10206 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1204 15:40:40.243850   10206 logs.go:282] 2 containers: [4bf3c6abead9 4a54859eb1e5]
	I1204 15:40:40.243866   10206 logs.go:123] Gathering logs for kubelet ...
	I1204 15:40:40.243871   10206 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1204 15:40:40.282291   10206 logs.go:123] Gathering logs for etcd [4f1790594676] ...
	I1204 15:40:40.282301   10206 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4f1790594676"
	I1204 15:40:40.297904   10206 logs.go:123] Gathering logs for coredns [467f47d9f689] ...
	I1204 15:40:40.297914   10206 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 467f47d9f689"
	I1204 15:40:40.309880   10206 logs.go:123] Gathering logs for kube-controller-manager [4469186baff0] ...
	I1204 15:40:40.309893   10206 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4469186baff0"
	I1204 15:40:40.328593   10206 logs.go:123] Gathering logs for kube-controller-manager [63473edefa8f] ...
	I1204 15:40:40.328607   10206 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 63473edefa8f"
	I1204 15:40:40.341506   10206 logs.go:123] Gathering logs for storage-provisioner [4a54859eb1e5] ...
	I1204 15:40:40.341518   10206 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4a54859eb1e5"
	I1204 15:40:40.353369   10206 logs.go:123] Gathering logs for container status ...
	I1204 15:40:40.353381   10206 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1204 15:40:40.365161   10206 logs.go:123] Gathering logs for dmesg ...
	I1204 15:40:40.365171   10206 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1204 15:40:40.369766   10206 logs.go:123] Gathering logs for kube-apiserver [bc235f7c7828] ...
	I1204 15:40:40.369771   10206 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bc235f7c7828"
	I1204 15:40:40.390388   10206 logs.go:123] Gathering logs for kube-scheduler [1f92512b0d23] ...
	I1204 15:40:40.390399   10206 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1f92512b0d23"
	I1204 15:40:40.406423   10206 logs.go:123] Gathering logs for kube-scheduler [b0ad1b935d01] ...
	I1204 15:40:40.406437   10206 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b0ad1b935d01"
	I1204 15:40:40.421764   10206 logs.go:123] Gathering logs for storage-provisioner [4bf3c6abead9] ...
	I1204 15:40:40.421775   10206 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4bf3c6abead9"
	I1204 15:40:40.433834   10206 logs.go:123] Gathering logs for Docker ...
	I1204 15:40:40.433844   10206 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1204 15:40:40.458451   10206 logs.go:123] Gathering logs for kube-apiserver [1d1ae4543cd6] ...
	I1204 15:40:40.458459   10206 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1d1ae4543cd6"
	I1204 15:40:40.496002   10206 logs.go:123] Gathering logs for describe nodes ...
	I1204 15:40:40.496013   10206 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1204 15:40:40.530671   10206 logs.go:123] Gathering logs for etcd [fc48923b85fc] ...
	I1204 15:40:40.530682   10206 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fc48923b85fc"
	I1204 15:40:40.544772   10206 logs.go:123] Gathering logs for kube-proxy [e290e266aab8] ...
	I1204 15:40:40.544784   10206 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e290e266aab8"
	I1204 15:40:43.058763   10206 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1204 15:40:48.061518   10206 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1204 15:40:48.061765   10206 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1204 15:40:48.079209   10206 logs.go:282] 2 containers: [bc235f7c7828 1d1ae4543cd6]
	I1204 15:40:48.079311   10206 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1204 15:40:48.092216   10206 logs.go:282] 2 containers: [fc48923b85fc 4f1790594676]
	I1204 15:40:48.092298   10206 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1204 15:40:48.105436   10206 logs.go:282] 1 containers: [467f47d9f689]
	I1204 15:40:48.105510   10206 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1204 15:40:48.116124   10206 logs.go:282] 2 containers: [1f92512b0d23 b0ad1b935d01]
	I1204 15:40:48.116208   10206 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1204 15:40:48.126936   10206 logs.go:282] 1 containers: [e290e266aab8]
	I1204 15:40:48.127012   10206 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1204 15:40:48.137487   10206 logs.go:282] 2 containers: [4469186baff0 63473edefa8f]
	I1204 15:40:48.137580   10206 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1204 15:40:48.151326   10206 logs.go:282] 0 containers: []
	W1204 15:40:48.151338   10206 logs.go:284] No container was found matching "kindnet"
	I1204 15:40:48.151401   10206 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1204 15:40:48.162105   10206 logs.go:282] 2 containers: [4bf3c6abead9 4a54859eb1e5]
	I1204 15:40:48.162124   10206 logs.go:123] Gathering logs for storage-provisioner [4a54859eb1e5] ...
	I1204 15:40:48.162128   10206 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4a54859eb1e5"
	I1204 15:40:48.173549   10206 logs.go:123] Gathering logs for kubelet ...
	I1204 15:40:48.173560   10206 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1204 15:40:48.211307   10206 logs.go:123] Gathering logs for describe nodes ...
	I1204 15:40:48.211319   10206 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1204 15:40:48.246158   10206 logs.go:123] Gathering logs for etcd [fc48923b85fc] ...
	I1204 15:40:48.246172   10206 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fc48923b85fc"
	I1204 15:40:48.260318   10206 logs.go:123] Gathering logs for coredns [467f47d9f689] ...
	I1204 15:40:48.260330   10206 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 467f47d9f689"
	I1204 15:40:48.271391   10206 logs.go:123] Gathering logs for kube-proxy [e290e266aab8] ...
	I1204 15:40:48.271404   10206 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e290e266aab8"
	I1204 15:40:48.286652   10206 logs.go:123] Gathering logs for storage-provisioner [4bf3c6abead9] ...
	I1204 15:40:48.286661   10206 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4bf3c6abead9"
	I1204 15:40:48.298901   10206 logs.go:123] Gathering logs for Docker ...
	I1204 15:40:48.298911   10206 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1204 15:40:48.322014   10206 logs.go:123] Gathering logs for dmesg ...
	I1204 15:40:48.322024   10206 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1204 15:40:48.326444   10206 logs.go:123] Gathering logs for kube-apiserver [bc235f7c7828] ...
	I1204 15:40:48.326451   10206 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bc235f7c7828"
	I1204 15:40:48.341300   10206 logs.go:123] Gathering logs for kube-apiserver [1d1ae4543cd6] ...
	I1204 15:40:48.341312   10206 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1d1ae4543cd6"
	I1204 15:40:48.380512   10206 logs.go:123] Gathering logs for etcd [4f1790594676] ...
	I1204 15:40:48.380523   10206 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4f1790594676"
	I1204 15:40:48.394480   10206 logs.go:123] Gathering logs for kube-scheduler [b0ad1b935d01] ...
	I1204 15:40:48.394493   10206 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b0ad1b935d01"
	I1204 15:40:48.409577   10206 logs.go:123] Gathering logs for container status ...
	I1204 15:40:48.409586   10206 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1204 15:40:48.421697   10206 logs.go:123] Gathering logs for kube-scheduler [1f92512b0d23] ...
	I1204 15:40:48.421708   10206 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1f92512b0d23"
	I1204 15:40:48.434104   10206 logs.go:123] Gathering logs for kube-controller-manager [4469186baff0] ...
	I1204 15:40:48.434114   10206 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4469186baff0"
	I1204 15:40:48.455743   10206 logs.go:123] Gathering logs for kube-controller-manager [63473edefa8f] ...
	I1204 15:40:48.455755   10206 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 63473edefa8f"
	I1204 15:40:50.971411   10206 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1204 15:40:55.973879   10206 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1204 15:40:55.974138   10206 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1204 15:40:56.009503   10206 logs.go:282] 2 containers: [bc235f7c7828 1d1ae4543cd6]
	I1204 15:40:56.009617   10206 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1204 15:40:56.025920   10206 logs.go:282] 2 containers: [fc48923b85fc 4f1790594676]
	I1204 15:40:56.026009   10206 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1204 15:40:56.038550   10206 logs.go:282] 1 containers: [467f47d9f689]
	I1204 15:40:56.038623   10206 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1204 15:40:56.049309   10206 logs.go:282] 2 containers: [1f92512b0d23 b0ad1b935d01]
	I1204 15:40:56.049394   10206 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1204 15:40:56.059526   10206 logs.go:282] 1 containers: [e290e266aab8]
	I1204 15:40:56.059604   10206 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1204 15:40:56.070125   10206 logs.go:282] 2 containers: [4469186baff0 63473edefa8f]
	I1204 15:40:56.070204   10206 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1204 15:40:56.080445   10206 logs.go:282] 0 containers: []
	W1204 15:40:56.080458   10206 logs.go:284] No container was found matching "kindnet"
	I1204 15:40:56.080526   10206 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1204 15:40:56.091036   10206 logs.go:282] 2 containers: [4bf3c6abead9 4a54859eb1e5]
	I1204 15:40:56.091054   10206 logs.go:123] Gathering logs for storage-provisioner [4a54859eb1e5] ...
	I1204 15:40:56.091059   10206 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4a54859eb1e5"
	I1204 15:40:56.102520   10206 logs.go:123] Gathering logs for kube-apiserver [bc235f7c7828] ...
	I1204 15:40:56.102531   10206 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bc235f7c7828"
	I1204 15:40:56.116475   10206 logs.go:123] Gathering logs for kube-apiserver [1d1ae4543cd6] ...
	I1204 15:40:56.116486   10206 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1d1ae4543cd6"
	I1204 15:40:56.155582   10206 logs.go:123] Gathering logs for etcd [fc48923b85fc] ...
	I1204 15:40:56.155594   10206 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fc48923b85fc"
	I1204 15:40:56.170185   10206 logs.go:123] Gathering logs for coredns [467f47d9f689] ...
	I1204 15:40:56.170198   10206 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 467f47d9f689"
	I1204 15:40:56.184430   10206 logs.go:123] Gathering logs for kube-proxy [e290e266aab8] ...
	I1204 15:40:56.184440   10206 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e290e266aab8"
	I1204 15:40:56.197582   10206 logs.go:123] Gathering logs for kube-controller-manager [4469186baff0] ...
	I1204 15:40:56.197593   10206 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4469186baff0"
	I1204 15:40:56.215256   10206 logs.go:123] Gathering logs for dmesg ...
	I1204 15:40:56.215269   10206 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1204 15:40:56.219550   10206 logs.go:123] Gathering logs for etcd [4f1790594676] ...
	I1204 15:40:56.219559   10206 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4f1790594676"
	I1204 15:40:56.234454   10206 logs.go:123] Gathering logs for kube-scheduler [1f92512b0d23] ...
	I1204 15:40:56.234464   10206 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1f92512b0d23"
	I1204 15:40:56.246281   10206 logs.go:123] Gathering logs for kubelet ...
	I1204 15:40:56.246294   10206 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1204 15:40:56.283210   10206 logs.go:123] Gathering logs for kube-scheduler [b0ad1b935d01] ...
	I1204 15:40:56.283219   10206 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b0ad1b935d01"
	I1204 15:40:56.298744   10206 logs.go:123] Gathering logs for kube-controller-manager [63473edefa8f] ...
	I1204 15:40:56.298759   10206 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 63473edefa8f"
	I1204 15:40:56.311180   10206 logs.go:123] Gathering logs for container status ...
	I1204 15:40:56.311190   10206 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1204 15:40:56.327022   10206 logs.go:123] Gathering logs for describe nodes ...
	I1204 15:40:56.327034   10206 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1204 15:40:56.364034   10206 logs.go:123] Gathering logs for storage-provisioner [4bf3c6abead9] ...
	I1204 15:40:56.364045   10206 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4bf3c6abead9"
	I1204 15:40:56.375945   10206 logs.go:123] Gathering logs for Docker ...
	I1204 15:40:56.375956   10206 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1204 15:40:58.901458   10206 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1204 15:41:03.903875   10206 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1204 15:41:03.904128   10206 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1204 15:41:03.932347   10206 logs.go:282] 2 containers: [bc235f7c7828 1d1ae4543cd6]
	I1204 15:41:03.932471   10206 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1204 15:41:03.950474   10206 logs.go:282] 2 containers: [fc48923b85fc 4f1790594676]
	I1204 15:41:03.950571   10206 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1204 15:41:03.964100   10206 logs.go:282] 1 containers: [467f47d9f689]
	I1204 15:41:03.964171   10206 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1204 15:41:03.976453   10206 logs.go:282] 2 containers: [1f92512b0d23 b0ad1b935d01]
	I1204 15:41:03.976535   10206 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1204 15:41:03.986968   10206 logs.go:282] 1 containers: [e290e266aab8]
	I1204 15:41:03.987039   10206 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1204 15:41:03.998885   10206 logs.go:282] 2 containers: [4469186baff0 63473edefa8f]
	I1204 15:41:03.998963   10206 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1204 15:41:04.013564   10206 logs.go:282] 0 containers: []
	W1204 15:41:04.013580   10206 logs.go:284] No container was found matching "kindnet"
	I1204 15:41:04.013647   10206 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1204 15:41:04.024222   10206 logs.go:282] 2 containers: [4bf3c6abead9 4a54859eb1e5]
	I1204 15:41:04.024241   10206 logs.go:123] Gathering logs for etcd [4f1790594676] ...
	I1204 15:41:04.024246   10206 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4f1790594676"
	I1204 15:41:04.039808   10206 logs.go:123] Gathering logs for describe nodes ...
	I1204 15:41:04.039821   10206 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1204 15:41:04.076335   10206 logs.go:123] Gathering logs for kube-proxy [e290e266aab8] ...
	I1204 15:41:04.076347   10206 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e290e266aab8"
	I1204 15:41:04.088598   10206 logs.go:123] Gathering logs for Docker ...
	I1204 15:41:04.088611   10206 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1204 15:41:04.112116   10206 logs.go:123] Gathering logs for container status ...
	I1204 15:41:04.112128   10206 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1204 15:41:04.124419   10206 logs.go:123] Gathering logs for kube-controller-manager [63473edefa8f] ...
	I1204 15:41:04.124430   10206 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 63473edefa8f"
	I1204 15:41:04.136600   10206 logs.go:123] Gathering logs for kubelet ...
	I1204 15:41:04.136610   10206 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1204 15:41:04.173909   10206 logs.go:123] Gathering logs for dmesg ...
	I1204 15:41:04.173917   10206 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1204 15:41:04.178260   10206 logs.go:123] Gathering logs for kube-apiserver [bc235f7c7828] ...
	I1204 15:41:04.178269   10206 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bc235f7c7828"
	I1204 15:41:04.192034   10206 logs.go:123] Gathering logs for kube-apiserver [1d1ae4543cd6] ...
	I1204 15:41:04.192044   10206 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1d1ae4543cd6"
	I1204 15:41:04.231430   10206 logs.go:123] Gathering logs for coredns [467f47d9f689] ...
	I1204 15:41:04.231440   10206 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 467f47d9f689"
	I1204 15:41:04.242611   10206 logs.go:123] Gathering logs for kube-scheduler [1f92512b0d23] ...
	I1204 15:41:04.242623   10206 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1f92512b0d23"
	I1204 15:41:04.254247   10206 logs.go:123] Gathering logs for etcd [fc48923b85fc] ...
	I1204 15:41:04.254261   10206 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fc48923b85fc"
	I1204 15:41:04.267893   10206 logs.go:123] Gathering logs for kube-scheduler [b0ad1b935d01] ...
	I1204 15:41:04.267906   10206 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b0ad1b935d01"
	I1204 15:41:04.283177   10206 logs.go:123] Gathering logs for kube-controller-manager [4469186baff0] ...
	I1204 15:41:04.283190   10206 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4469186baff0"
	I1204 15:41:04.306921   10206 logs.go:123] Gathering logs for storage-provisioner [4bf3c6abead9] ...
	I1204 15:41:04.306931   10206 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4bf3c6abead9"
	I1204 15:41:04.323758   10206 logs.go:123] Gathering logs for storage-provisioner [4a54859eb1e5] ...
	I1204 15:41:04.323771   10206 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4a54859eb1e5"
	I1204 15:41:06.837076   10206 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1204 15:41:11.839439   10206 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1204 15:41:11.839653   10206 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1204 15:41:11.860842   10206 logs.go:282] 2 containers: [bc235f7c7828 1d1ae4543cd6]
	I1204 15:41:11.860951   10206 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1204 15:41:11.875923   10206 logs.go:282] 2 containers: [fc48923b85fc 4f1790594676]
	I1204 15:41:11.876016   10206 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1204 15:41:11.888157   10206 logs.go:282] 1 containers: [467f47d9f689]
	I1204 15:41:11.888238   10206 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1204 15:41:11.899595   10206 logs.go:282] 2 containers: [1f92512b0d23 b0ad1b935d01]
	I1204 15:41:11.899677   10206 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1204 15:41:11.910289   10206 logs.go:282] 1 containers: [e290e266aab8]
	I1204 15:41:11.910362   10206 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1204 15:41:11.920782   10206 logs.go:282] 2 containers: [4469186baff0 63473edefa8f]
	I1204 15:41:11.920862   10206 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1204 15:41:11.930814   10206 logs.go:282] 0 containers: []
	W1204 15:41:11.930824   10206 logs.go:284] No container was found matching "kindnet"
	I1204 15:41:11.930881   10206 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1204 15:41:11.941242   10206 logs.go:282] 2 containers: [4bf3c6abead9 4a54859eb1e5]
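Container discovery in each of these cycles is a Docker name filter plus a Go-template format string. A sketch of the same pattern, runnable anywhere the Docker CLI is available:

    # Print only the IDs of all containers (running or exited) whose name
    # carries the kubeadm-style k8s_ prefix for a given component.
    docker ps -a --filter name=k8s_kube-apiserver --format '{{.ID}}'
    # Two IDs come back in this run (bc235f7c7828, 1d1ae4543cd6): one
    # container per apiserver restart.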
	I1204 15:41:11.941268   10206 logs.go:123] Gathering logs for etcd [fc48923b85fc] ...
	I1204 15:41:11.941274   10206 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fc48923b85fc"
	I1204 15:41:11.955336   10206 logs.go:123] Gathering logs for kube-scheduler [b0ad1b935d01] ...
	I1204 15:41:11.955346   10206 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b0ad1b935d01"
	I1204 15:41:11.970691   10206 logs.go:123] Gathering logs for kube-controller-manager [4469186baff0] ...
	I1204 15:41:11.970701   10206 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4469186baff0"
	I1204 15:41:11.989516   10206 logs.go:123] Gathering logs for Docker ...
	I1204 15:41:11.989536   10206 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1204 15:41:12.014256   10206 logs.go:123] Gathering logs for kubelet ...
	I1204 15:41:12.014267   10206 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1204 15:41:12.053787   10206 logs.go:123] Gathering logs for dmesg ...
	I1204 15:41:12.053796   10206 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1204 15:41:12.058467   10206 logs.go:123] Gathering logs for describe nodes ...
	I1204 15:41:12.058476   10206 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1204 15:41:12.092499   10206 logs.go:123] Gathering logs for kube-apiserver [bc235f7c7828] ...
	I1204 15:41:12.092509   10206 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bc235f7c7828"
	I1204 15:41:12.106904   10206 logs.go:123] Gathering logs for kube-apiserver [1d1ae4543cd6] ...
	I1204 15:41:12.106915   10206 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1d1ae4543cd6"
	I1204 15:41:12.146397   10206 logs.go:123] Gathering logs for coredns [467f47d9f689] ...
	I1204 15:41:12.146410   10206 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 467f47d9f689"
	I1204 15:41:12.164469   10206 logs.go:123] Gathering logs for storage-provisioner [4bf3c6abead9] ...
	I1204 15:41:12.164483   10206 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4bf3c6abead9"
	I1204 15:41:12.175727   10206 logs.go:123] Gathering logs for kube-scheduler [1f92512b0d23] ...
	I1204 15:41:12.175741   10206 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1f92512b0d23"
	I1204 15:41:12.187731   10206 logs.go:123] Gathering logs for kube-proxy [e290e266aab8] ...
	I1204 15:41:12.187741   10206 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e290e266aab8"
	I1204 15:41:12.199472   10206 logs.go:123] Gathering logs for kube-controller-manager [63473edefa8f] ...
	I1204 15:41:12.199485   10206 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 63473edefa8f"
	I1204 15:41:12.214817   10206 logs.go:123] Gathering logs for container status ...
	I1204 15:41:12.214827   10206 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1204 15:41:12.227777   10206 logs.go:123] Gathering logs for etcd [4f1790594676] ...
	I1204 15:41:12.227788   10206 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4f1790594676"
	I1204 15:41:12.245935   10206 logs.go:123] Gathering logs for storage-provisioner [4a54859eb1e5] ...
	I1204 15:41:12.245948   10206 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4a54859eb1e5"
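Each per-component capture is plain docker logs bounded to the newest 400 lines. The equivalent manual invocation, using one of the container IDs from this run:

    # Tail the last 400 log lines of the newer kube-apiserver container.
    docker logs --tail 400 bc235f7c7828
    # Add -t (--timestamps) when correlating these lines with journalctl.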
	I1204 15:41:14.759368   10206 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1204 15:41:19.761785   10206 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1204 15:41:19.762055   10206 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1204 15:41:19.789441   10206 logs.go:282] 2 containers: [bc235f7c7828 1d1ae4543cd6]
	I1204 15:41:19.789570   10206 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1204 15:41:19.806013   10206 logs.go:282] 2 containers: [fc48923b85fc 4f1790594676]
	I1204 15:41:19.806111   10206 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1204 15:41:19.819318   10206 logs.go:282] 1 containers: [467f47d9f689]
	I1204 15:41:19.819401   10206 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1204 15:41:19.839752   10206 logs.go:282] 2 containers: [1f92512b0d23 b0ad1b935d01]
	I1204 15:41:19.839832   10206 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1204 15:41:19.850803   10206 logs.go:282] 1 containers: [e290e266aab8]
	I1204 15:41:19.850879   10206 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1204 15:41:19.867852   10206 logs.go:282] 2 containers: [4469186baff0 63473edefa8f]
	I1204 15:41:19.867923   10206 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1204 15:41:19.878228   10206 logs.go:282] 0 containers: []
	W1204 15:41:19.878241   10206 logs.go:284] No container was found matching "kindnet"
	I1204 15:41:19.878301   10206 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1204 15:41:19.888782   10206 logs.go:282] 2 containers: [4bf3c6abead9 4a54859eb1e5]
	I1204 15:41:19.888800   10206 logs.go:123] Gathering logs for dmesg ...
	I1204 15:41:19.888806   10206 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1204 15:41:19.893678   10206 logs.go:123] Gathering logs for kube-apiserver [1d1ae4543cd6] ...
	I1204 15:41:19.893684   10206 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1d1ae4543cd6"
	I1204 15:41:19.932569   10206 logs.go:123] Gathering logs for etcd [fc48923b85fc] ...
	I1204 15:41:19.932581   10206 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fc48923b85fc"
	I1204 15:41:19.946304   10206 logs.go:123] Gathering logs for etcd [4f1790594676] ...
	I1204 15:41:19.946314   10206 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4f1790594676"
	I1204 15:41:19.960840   10206 logs.go:123] Gathering logs for kube-scheduler [1f92512b0d23] ...
	I1204 15:41:19.960852   10206 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1f92512b0d23"
	I1204 15:41:19.972521   10206 logs.go:123] Gathering logs for kube-scheduler [b0ad1b935d01] ...
	I1204 15:41:19.972533   10206 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b0ad1b935d01"
	I1204 15:41:19.992248   10206 logs.go:123] Gathering logs for kube-controller-manager [63473edefa8f] ...
	I1204 15:41:19.992260   10206 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 63473edefa8f"
	I1204 15:41:20.004704   10206 logs.go:123] Gathering logs for Docker ...
	I1204 15:41:20.004717   10206 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1204 15:41:20.027448   10206 logs.go:123] Gathering logs for describe nodes ...
	I1204 15:41:20.027457   10206 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1204 15:41:20.062516   10206 logs.go:123] Gathering logs for kube-proxy [e290e266aab8] ...
	I1204 15:41:20.062526   10206 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e290e266aab8"
	I1204 15:41:20.074818   10206 logs.go:123] Gathering logs for storage-provisioner [4bf3c6abead9] ...
	I1204 15:41:20.074827   10206 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4bf3c6abead9"
	I1204 15:41:20.086706   10206 logs.go:123] Gathering logs for kubelet ...
	I1204 15:41:20.086715   10206 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1204 15:41:20.124592   10206 logs.go:123] Gathering logs for kube-apiserver [bc235f7c7828] ...
	I1204 15:41:20.124602   10206 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bc235f7c7828"
	I1204 15:41:20.138194   10206 logs.go:123] Gathering logs for coredns [467f47d9f689] ...
	I1204 15:41:20.138210   10206 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 467f47d9f689"
	I1204 15:41:20.149771   10206 logs.go:123] Gathering logs for storage-provisioner [4a54859eb1e5] ...
	I1204 15:41:20.149784   10206 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4a54859eb1e5"
	I1204 15:41:20.172433   10206 logs.go:123] Gathering logs for container status ...
	I1204 15:41:20.172445   10206 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1204 15:41:20.185399   10206 logs.go:123] Gathering logs for kube-controller-manager [4469186baff0] ...
	I1204 15:41:20.185409   10206 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4469186baff0"
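The kubelet and Docker sections come from the systemd journal rather than from containers. A minimal sketch of the same queries, assuming the unit names used in the log (docker, cri-docker, kubelet):

    # Newest 400 journal entries across both container-runtime units.
    sudo journalctl -u docker -u cri-docker -n 400
    # Newest 400 entries for the kubelet; -f would follow new entries live.
    sudo journalctl -u kubelet -n 400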
	I1204 15:41:22.710798   10206 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1204 15:41:27.711904   10206 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1204 15:41:27.712039   10206 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1204 15:41:27.723839   10206 logs.go:282] 2 containers: [bc235f7c7828 1d1ae4543cd6]
	I1204 15:41:27.723922   10206 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1204 15:41:27.734092   10206 logs.go:282] 2 containers: [fc48923b85fc 4f1790594676]
	I1204 15:41:27.734173   10206 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1204 15:41:27.745781   10206 logs.go:282] 1 containers: [467f47d9f689]
	I1204 15:41:27.745858   10206 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1204 15:41:27.756200   10206 logs.go:282] 2 containers: [1f92512b0d23 b0ad1b935d01]
	I1204 15:41:27.756281   10206 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1204 15:41:27.767062   10206 logs.go:282] 1 containers: [e290e266aab8]
	I1204 15:41:27.767147   10206 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1204 15:41:27.778107   10206 logs.go:282] 2 containers: [4469186baff0 63473edefa8f]
	I1204 15:41:27.778182   10206 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1204 15:41:27.788167   10206 logs.go:282] 0 containers: []
	W1204 15:41:27.788181   10206 logs.go:284] No container was found matching "kindnet"
	I1204 15:41:27.788254   10206 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1204 15:41:27.800007   10206 logs.go:282] 2 containers: [4bf3c6abead9 4a54859eb1e5]
	I1204 15:41:27.800027   10206 logs.go:123] Gathering logs for kube-scheduler [1f92512b0d23] ...
	I1204 15:41:27.800032   10206 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1f92512b0d23"
	I1204 15:41:27.811798   10206 logs.go:123] Gathering logs for kube-scheduler [b0ad1b935d01] ...
	I1204 15:41:27.811810   10206 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b0ad1b935d01"
	I1204 15:41:27.826433   10206 logs.go:123] Gathering logs for storage-provisioner [4bf3c6abead9] ...
	I1204 15:41:27.826442   10206 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4bf3c6abead9"
	I1204 15:41:27.837920   10206 logs.go:123] Gathering logs for storage-provisioner [4a54859eb1e5] ...
	I1204 15:41:27.837931   10206 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4a54859eb1e5"
	I1204 15:41:27.849494   10206 logs.go:123] Gathering logs for Docker ...
	I1204 15:41:27.849505   10206 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1204 15:41:27.873788   10206 logs.go:123] Gathering logs for describe nodes ...
	I1204 15:41:27.873796   10206 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1204 15:41:27.908412   10206 logs.go:123] Gathering logs for coredns [467f47d9f689] ...
	I1204 15:41:27.908425   10206 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 467f47d9f689"
	I1204 15:41:27.921722   10206 logs.go:123] Gathering logs for etcd [fc48923b85fc] ...
	I1204 15:41:27.921733   10206 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fc48923b85fc"
	I1204 15:41:27.935540   10206 logs.go:123] Gathering logs for etcd [4f1790594676] ...
	I1204 15:41:27.935553   10206 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4f1790594676"
	I1204 15:41:27.950340   10206 logs.go:123] Gathering logs for kube-controller-manager [4469186baff0] ...
	I1204 15:41:27.950349   10206 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4469186baff0"
	I1204 15:41:27.968478   10206 logs.go:123] Gathering logs for kube-controller-manager [63473edefa8f] ...
	I1204 15:41:27.968488   10206 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 63473edefa8f"
	I1204 15:41:27.980969   10206 logs.go:123] Gathering logs for container status ...
	I1204 15:41:27.980979   10206 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1204 15:41:27.996076   10206 logs.go:123] Gathering logs for dmesg ...
	I1204 15:41:27.996087   10206 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1204 15:41:28.000274   10206 logs.go:123] Gathering logs for kube-apiserver [bc235f7c7828] ...
	I1204 15:41:28.000283   10206 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bc235f7c7828"
	I1204 15:41:28.014781   10206 logs.go:123] Gathering logs for kube-proxy [e290e266aab8] ...
	I1204 15:41:28.014790   10206 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e290e266aab8"
	I1204 15:41:28.026911   10206 logs.go:123] Gathering logs for kubelet ...
	I1204 15:41:28.026924   10206 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1204 15:41:28.064434   10206 logs.go:123] Gathering logs for kube-apiserver [1d1ae4543cd6] ...
	I1204 15:41:28.064442   10206 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1d1ae4543cd6"
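The dmesg capture narrows the kernel ring buffer to warnings and worse. The same command with its flags spelled out:

    # -P: no pager; -H: human-readable timestamps; -L=never: no color codes;
    # --level keeps only the listed severities; tail keeps the newest 400.
    sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400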
	I1204 15:41:30.603906   10206 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1204 15:41:35.606334   10206 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1204 15:41:35.606785   10206 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1204 15:41:35.669138   10206 logs.go:282] 2 containers: [bc235f7c7828 1d1ae4543cd6]
	I1204 15:41:35.669252   10206 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1204 15:41:35.697918   10206 logs.go:282] 2 containers: [fc48923b85fc 4f1790594676]
	I1204 15:41:35.698088   10206 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1204 15:41:35.719009   10206 logs.go:282] 1 containers: [467f47d9f689]
	I1204 15:41:35.719093   10206 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1204 15:41:35.730086   10206 logs.go:282] 2 containers: [1f92512b0d23 b0ad1b935d01]
	I1204 15:41:35.730169   10206 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1204 15:41:35.741175   10206 logs.go:282] 1 containers: [e290e266aab8]
	I1204 15:41:35.741259   10206 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1204 15:41:35.751528   10206 logs.go:282] 2 containers: [4469186baff0 63473edefa8f]
	I1204 15:41:35.751606   10206 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1204 15:41:35.761769   10206 logs.go:282] 0 containers: []
	W1204 15:41:35.761782   10206 logs.go:284] No container was found matching "kindnet"
	I1204 15:41:35.761846   10206 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1204 15:41:35.772561   10206 logs.go:282] 2 containers: [4bf3c6abead9 4a54859eb1e5]
	I1204 15:41:35.772578   10206 logs.go:123] Gathering logs for storage-provisioner [4a54859eb1e5] ...
	I1204 15:41:35.772582   10206 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4a54859eb1e5"
	I1204 15:41:35.783924   10206 logs.go:123] Gathering logs for Docker ...
	I1204 15:41:35.783936   10206 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1204 15:41:35.807224   10206 logs.go:123] Gathering logs for kubelet ...
	I1204 15:41:35.807233   10206 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1204 15:41:35.846496   10206 logs.go:123] Gathering logs for dmesg ...
	I1204 15:41:35.846506   10206 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1204 15:41:35.850923   10206 logs.go:123] Gathering logs for etcd [4f1790594676] ...
	I1204 15:41:35.850932   10206 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4f1790594676"
	I1204 15:41:35.865352   10206 logs.go:123] Gathering logs for kube-proxy [e290e266aab8] ...
	I1204 15:41:35.865362   10206 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e290e266aab8"
	I1204 15:41:35.876963   10206 logs.go:123] Gathering logs for storage-provisioner [4bf3c6abead9] ...
	I1204 15:41:35.876977   10206 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4bf3c6abead9"
	I1204 15:41:35.888334   10206 logs.go:123] Gathering logs for kube-apiserver [1d1ae4543cd6] ...
	I1204 15:41:35.888345   10206 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1d1ae4543cd6"
	I1204 15:41:35.927698   10206 logs.go:123] Gathering logs for kube-scheduler [1f92512b0d23] ...
	I1204 15:41:35.927710   10206 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1f92512b0d23"
	I1204 15:41:35.940052   10206 logs.go:123] Gathering logs for kube-controller-manager [4469186baff0] ...
	I1204 15:41:35.940066   10206 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4469186baff0"
	I1204 15:41:35.956780   10206 logs.go:123] Gathering logs for describe nodes ...
	I1204 15:41:35.956790   10206 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1204 15:41:35.997729   10206 logs.go:123] Gathering logs for kube-apiserver [bc235f7c7828] ...
	I1204 15:41:35.997741   10206 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bc235f7c7828"
	I1204 15:41:36.011714   10206 logs.go:123] Gathering logs for etcd [fc48923b85fc] ...
	I1204 15:41:36.011726   10206 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fc48923b85fc"
	I1204 15:41:36.025460   10206 logs.go:123] Gathering logs for coredns [467f47d9f689] ...
	I1204 15:41:36.025470   10206 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 467f47d9f689"
	I1204 15:41:36.037074   10206 logs.go:123] Gathering logs for kube-scheduler [b0ad1b935d01] ...
	I1204 15:41:36.037085   10206 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b0ad1b935d01"
	I1204 15:41:36.052673   10206 logs.go:123] Gathering logs for kube-controller-manager [63473edefa8f] ...
	I1204 15:41:36.052685   10206 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 63473edefa8f"
	I1204 15:41:36.067130   10206 logs.go:123] Gathering logs for container status ...
	I1204 15:41:36.067142   10206 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
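The container-status command is a two-stage fallback: prefer crictl when it is installed, otherwise fall back to the Docker CLI. The same pattern in isolation:

    # `which crictl` prints the binary path when installed; `|| echo crictl`
    # keeps the substitution non-empty, so when crictl is absent the sudo
    # call fails cleanly and the outer || falls through to docker.
    sudo $(which crictl || echo crictl) ps -a || sudo docker ps -a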
	I1204 15:41:38.581602   10206 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1204 15:41:43.584425   10206 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1204 15:41:43.584962   10206 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1204 15:41:43.625945   10206 logs.go:282] 2 containers: [bc235f7c7828 1d1ae4543cd6]
	I1204 15:41:43.626097   10206 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1204 15:41:43.648177   10206 logs.go:282] 2 containers: [fc48923b85fc 4f1790594676]
	I1204 15:41:43.648312   10206 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1204 15:41:43.664415   10206 logs.go:282] 1 containers: [467f47d9f689]
	I1204 15:41:43.664495   10206 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1204 15:41:43.676916   10206 logs.go:282] 2 containers: [1f92512b0d23 b0ad1b935d01]
	I1204 15:41:43.676995   10206 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1204 15:41:43.688597   10206 logs.go:282] 1 containers: [e290e266aab8]
	I1204 15:41:43.688670   10206 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1204 15:41:43.700140   10206 logs.go:282] 2 containers: [4469186baff0 63473edefa8f]
	I1204 15:41:43.700224   10206 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1204 15:41:43.723670   10206 logs.go:282] 0 containers: []
	W1204 15:41:43.723683   10206 logs.go:284] No container was found matching "kindnet"
	I1204 15:41:43.723760   10206 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1204 15:41:43.734670   10206 logs.go:282] 2 containers: [4bf3c6abead9 4a54859eb1e5]
	I1204 15:41:43.734689   10206 logs.go:123] Gathering logs for dmesg ...
	I1204 15:41:43.734694   10206 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1204 15:41:43.739042   10206 logs.go:123] Gathering logs for kube-controller-manager [4469186baff0] ...
	I1204 15:41:43.739049   10206 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4469186baff0"
	I1204 15:41:43.757243   10206 logs.go:123] Gathering logs for Docker ...
	I1204 15:41:43.757257   10206 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1204 15:41:43.778920   10206 logs.go:123] Gathering logs for container status ...
	I1204 15:41:43.778931   10206 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1204 15:41:43.790738   10206 logs.go:123] Gathering logs for kubelet ...
	I1204 15:41:43.790751   10206 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1204 15:41:43.827784   10206 logs.go:123] Gathering logs for coredns [467f47d9f689] ...
	I1204 15:41:43.827794   10206 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 467f47d9f689"
	I1204 15:41:43.838881   10206 logs.go:123] Gathering logs for kube-scheduler [b0ad1b935d01] ...
	I1204 15:41:43.838897   10206 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b0ad1b935d01"
	I1204 15:41:43.857119   10206 logs.go:123] Gathering logs for storage-provisioner [4bf3c6abead9] ...
	I1204 15:41:43.857131   10206 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4bf3c6abead9"
	I1204 15:41:43.873042   10206 logs.go:123] Gathering logs for storage-provisioner [4a54859eb1e5] ...
	I1204 15:41:43.873055   10206 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4a54859eb1e5"
	I1204 15:41:43.884452   10206 logs.go:123] Gathering logs for kube-apiserver [bc235f7c7828] ...
	I1204 15:41:43.884463   10206 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bc235f7c7828"
	I1204 15:41:43.899020   10206 logs.go:123] Gathering logs for kube-apiserver [1d1ae4543cd6] ...
	I1204 15:41:43.899033   10206 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1d1ae4543cd6"
	I1204 15:41:43.938126   10206 logs.go:123] Gathering logs for kube-proxy [e290e266aab8] ...
	I1204 15:41:43.938138   10206 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e290e266aab8"
	I1204 15:41:43.949675   10206 logs.go:123] Gathering logs for kube-controller-manager [63473edefa8f] ...
	I1204 15:41:43.949686   10206 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 63473edefa8f"
	I1204 15:41:43.962160   10206 logs.go:123] Gathering logs for describe nodes ...
	I1204 15:41:43.962173   10206 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1204 15:41:44.002054   10206 logs.go:123] Gathering logs for etcd [fc48923b85fc] ...
	I1204 15:41:44.002067   10206 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fc48923b85fc"
	I1204 15:41:44.016854   10206 logs.go:123] Gathering logs for etcd [4f1790594676] ...
	I1204 15:41:44.016865   10206 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4f1790594676"
	I1204 15:41:44.032096   10206 logs.go:123] Gathering logs for kube-scheduler [1f92512b0d23] ...
	I1204 15:41:44.032106   10206 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1f92512b0d23"
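The "describe nodes" capture a few lines up uses the kubectl binary minikube stages inside the VM, pointed at the cluster's own kubeconfig rather than the host's:

    sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes \
      --kubeconfig=/var/lib/minikube/kubeconfig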
	I1204 15:41:46.545809   10206 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1204 15:41:51.548261   10206 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1204 15:41:51.548750   10206 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1204 15:41:51.586966   10206 logs.go:282] 2 containers: [bc235f7c7828 1d1ae4543cd6]
	I1204 15:41:51.587123   10206 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1204 15:41:51.607849   10206 logs.go:282] 2 containers: [fc48923b85fc 4f1790594676]
	I1204 15:41:51.607957   10206 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1204 15:41:51.623675   10206 logs.go:282] 1 containers: [467f47d9f689]
	I1204 15:41:51.623780   10206 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1204 15:41:51.641374   10206 logs.go:282] 2 containers: [1f92512b0d23 b0ad1b935d01]
	I1204 15:41:51.641458   10206 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1204 15:41:51.652168   10206 logs.go:282] 1 containers: [e290e266aab8]
	I1204 15:41:51.652248   10206 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1204 15:41:51.662532   10206 logs.go:282] 2 containers: [4469186baff0 63473edefa8f]
	I1204 15:41:51.662608   10206 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1204 15:41:51.672749   10206 logs.go:282] 0 containers: []
	W1204 15:41:51.672759   10206 logs.go:284] No container was found matching "kindnet"
	I1204 15:41:51.672820   10206 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1204 15:41:51.683251   10206 logs.go:282] 2 containers: [4bf3c6abead9 4a54859eb1e5]
	I1204 15:41:51.683271   10206 logs.go:123] Gathering logs for kube-apiserver [bc235f7c7828] ...
	I1204 15:41:51.683277   10206 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bc235f7c7828"
	I1204 15:41:51.698752   10206 logs.go:123] Gathering logs for kube-scheduler [b0ad1b935d01] ...
	I1204 15:41:51.698761   10206 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b0ad1b935d01"
	I1204 15:41:51.714170   10206 logs.go:123] Gathering logs for storage-provisioner [4a54859eb1e5] ...
	I1204 15:41:51.714181   10206 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4a54859eb1e5"
	I1204 15:41:51.725646   10206 logs.go:123] Gathering logs for kubelet ...
	I1204 15:41:51.725657   10206 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1204 15:41:51.765631   10206 logs.go:123] Gathering logs for describe nodes ...
	I1204 15:41:51.765642   10206 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1204 15:41:51.803044   10206 logs.go:123] Gathering logs for Docker ...
	I1204 15:41:51.803057   10206 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1204 15:41:51.827878   10206 logs.go:123] Gathering logs for container status ...
	I1204 15:41:51.827902   10206 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1204 15:41:51.840494   10206 logs.go:123] Gathering logs for kube-apiserver [1d1ae4543cd6] ...
	I1204 15:41:51.840514   10206 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1d1ae4543cd6"
	I1204 15:41:51.878988   10206 logs.go:123] Gathering logs for etcd [fc48923b85fc] ...
	I1204 15:41:51.879000   10206 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fc48923b85fc"
	I1204 15:41:51.893312   10206 logs.go:123] Gathering logs for kube-controller-manager [63473edefa8f] ...
	I1204 15:41:51.893322   10206 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 63473edefa8f"
	I1204 15:41:51.905707   10206 logs.go:123] Gathering logs for etcd [4f1790594676] ...
	I1204 15:41:51.905719   10206 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4f1790594676"
	I1204 15:41:51.920440   10206 logs.go:123] Gathering logs for kube-scheduler [1f92512b0d23] ...
	I1204 15:41:51.920454   10206 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1f92512b0d23"
	I1204 15:41:51.932692   10206 logs.go:123] Gathering logs for kube-proxy [e290e266aab8] ...
	I1204 15:41:51.932704   10206 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e290e266aab8"
	I1204 15:41:51.944334   10206 logs.go:123] Gathering logs for kube-controller-manager [4469186baff0] ...
	I1204 15:41:51.944346   10206 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4469186baff0"
	I1204 15:41:51.962021   10206 logs.go:123] Gathering logs for storage-provisioner [4bf3c6abead9] ...
	I1204 15:41:51.962031   10206 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4bf3c6abead9"
	I1204 15:41:51.973682   10206 logs.go:123] Gathering logs for dmesg ...
	I1204 15:41:51.973693   10206 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1204 15:41:51.978257   10206 logs.go:123] Gathering logs for coredns [467f47d9f689] ...
	I1204 15:41:51.978265   10206 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 467f47d9f689"
	I1204 15:41:54.494476   10206 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1204 15:41:59.497100   10206 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1204 15:41:59.497343   10206 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1204 15:41:59.516928   10206 logs.go:282] 2 containers: [bc235f7c7828 1d1ae4543cd6]
	I1204 15:41:59.517041   10206 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1204 15:41:59.533850   10206 logs.go:282] 2 containers: [fc48923b85fc 4f1790594676]
	I1204 15:41:59.533931   10206 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1204 15:41:59.549622   10206 logs.go:282] 1 containers: [467f47d9f689]
	I1204 15:41:59.549704   10206 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1204 15:41:59.560352   10206 logs.go:282] 2 containers: [1f92512b0d23 b0ad1b935d01]
	I1204 15:41:59.560435   10206 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1204 15:41:59.570844   10206 logs.go:282] 1 containers: [e290e266aab8]
	I1204 15:41:59.570917   10206 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1204 15:41:59.581469   10206 logs.go:282] 2 containers: [4469186baff0 63473edefa8f]
	I1204 15:41:59.581546   10206 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1204 15:41:59.591395   10206 logs.go:282] 0 containers: []
	W1204 15:41:59.591409   10206 logs.go:284] No container was found matching "kindnet"
	I1204 15:41:59.591469   10206 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1204 15:41:59.602676   10206 logs.go:282] 2 containers: [4bf3c6abead9 4a54859eb1e5]
	I1204 15:41:59.602700   10206 logs.go:123] Gathering logs for describe nodes ...
	I1204 15:41:59.602706   10206 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1204 15:41:59.637886   10206 logs.go:123] Gathering logs for dmesg ...
	I1204 15:41:59.637896   10206 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1204 15:41:59.642845   10206 logs.go:123] Gathering logs for etcd [fc48923b85fc] ...
	I1204 15:41:59.642855   10206 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fc48923b85fc"
	I1204 15:41:59.657177   10206 logs.go:123] Gathering logs for kube-scheduler [b0ad1b935d01] ...
	I1204 15:41:59.657186   10206 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b0ad1b935d01"
	I1204 15:41:59.672078   10206 logs.go:123] Gathering logs for kube-controller-manager [63473edefa8f] ...
	I1204 15:41:59.672090   10206 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 63473edefa8f"
	I1204 15:41:59.684584   10206 logs.go:123] Gathering logs for storage-provisioner [4a54859eb1e5] ...
	I1204 15:41:59.684594   10206 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4a54859eb1e5"
	I1204 15:41:59.696127   10206 logs.go:123] Gathering logs for kubelet ...
	I1204 15:41:59.696139   10206 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1204 15:41:59.735053   10206 logs.go:123] Gathering logs for kube-scheduler [1f92512b0d23] ...
	I1204 15:41:59.735062   10206 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1f92512b0d23"
	I1204 15:41:59.746898   10206 logs.go:123] Gathering logs for kube-proxy [e290e266aab8] ...
	I1204 15:41:59.746909   10206 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e290e266aab8"
	I1204 15:41:59.759005   10206 logs.go:123] Gathering logs for storage-provisioner [4bf3c6abead9] ...
	I1204 15:41:59.759017   10206 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4bf3c6abead9"
	I1204 15:41:59.770568   10206 logs.go:123] Gathering logs for Docker ...
	I1204 15:41:59.770581   10206 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1204 15:41:59.794183   10206 logs.go:123] Gathering logs for kube-apiserver [bc235f7c7828] ...
	I1204 15:41:59.794196   10206 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bc235f7c7828"
	I1204 15:41:59.810417   10206 logs.go:123] Gathering logs for kube-apiserver [1d1ae4543cd6] ...
	I1204 15:41:59.810430   10206 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1d1ae4543cd6"
	I1204 15:41:59.873142   10206 logs.go:123] Gathering logs for etcd [4f1790594676] ...
	I1204 15:41:59.873162   10206 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4f1790594676"
	I1204 15:41:59.887779   10206 logs.go:123] Gathering logs for coredns [467f47d9f689] ...
	I1204 15:41:59.887793   10206 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 467f47d9f689"
	I1204 15:41:59.899035   10206 logs.go:123] Gathering logs for kube-controller-manager [4469186baff0] ...
	I1204 15:41:59.899047   10206 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4469186baff0"
	I1204 15:41:59.916655   10206 logs.go:123] Gathering logs for container status ...
	I1204 15:41:59.916668   10206 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1204 15:42:02.431798   10206 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1204 15:42:07.434181   10206 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1204 15:42:07.434462   10206 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1204 15:42:07.465750   10206 logs.go:282] 2 containers: [bc235f7c7828 1d1ae4543cd6]
	I1204 15:42:07.465900   10206 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1204 15:42:07.486399   10206 logs.go:282] 2 containers: [fc48923b85fc 4f1790594676]
	I1204 15:42:07.486524   10206 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1204 15:42:07.500547   10206 logs.go:282] 1 containers: [467f47d9f689]
	I1204 15:42:07.500645   10206 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1204 15:42:07.513447   10206 logs.go:282] 2 containers: [1f92512b0d23 b0ad1b935d01]
	I1204 15:42:07.513530   10206 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1204 15:42:07.524200   10206 logs.go:282] 1 containers: [e290e266aab8]
	I1204 15:42:07.524273   10206 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1204 15:42:07.535441   10206 logs.go:282] 2 containers: [4469186baff0 63473edefa8f]
	I1204 15:42:07.535524   10206 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1204 15:42:07.548230   10206 logs.go:282] 0 containers: []
	W1204 15:42:07.548244   10206 logs.go:284] No container was found matching "kindnet"
	I1204 15:42:07.548309   10206 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1204 15:42:07.558969   10206 logs.go:282] 2 containers: [4bf3c6abead9 4a54859eb1e5]
	I1204 15:42:07.558986   10206 logs.go:123] Gathering logs for etcd [4f1790594676] ...
	I1204 15:42:07.558991   10206 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4f1790594676"
	I1204 15:42:07.580511   10206 logs.go:123] Gathering logs for kube-scheduler [1f92512b0d23] ...
	I1204 15:42:07.580524   10206 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1f92512b0d23"
	I1204 15:42:07.592574   10206 logs.go:123] Gathering logs for kube-controller-manager [4469186baff0] ...
	I1204 15:42:07.592587   10206 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4469186baff0"
	I1204 15:42:07.610590   10206 logs.go:123] Gathering logs for Docker ...
	I1204 15:42:07.610600   10206 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1204 15:42:07.632876   10206 logs.go:123] Gathering logs for container status ...
	I1204 15:42:07.632886   10206 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1204 15:42:07.644781   10206 logs.go:123] Gathering logs for kube-apiserver [bc235f7c7828] ...
	I1204 15:42:07.644793   10206 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bc235f7c7828"
	I1204 15:42:07.659131   10206 logs.go:123] Gathering logs for etcd [fc48923b85fc] ...
	I1204 15:42:07.659141   10206 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fc48923b85fc"
	I1204 15:42:07.672669   10206 logs.go:123] Gathering logs for kube-scheduler [b0ad1b935d01] ...
	I1204 15:42:07.672681   10206 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b0ad1b935d01"
	I1204 15:42:07.688974   10206 logs.go:123] Gathering logs for kube-proxy [e290e266aab8] ...
	I1204 15:42:07.688986   10206 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e290e266aab8"
	I1204 15:42:07.709871   10206 logs.go:123] Gathering logs for dmesg ...
	I1204 15:42:07.709885   10206 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1204 15:42:07.714733   10206 logs.go:123] Gathering logs for kube-apiserver [1d1ae4543cd6] ...
	I1204 15:42:07.714739   10206 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1d1ae4543cd6"
	I1204 15:42:07.760376   10206 logs.go:123] Gathering logs for describe nodes ...
	I1204 15:42:07.760386   10206 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1204 15:42:07.794462   10206 logs.go:123] Gathering logs for storage-provisioner [4a54859eb1e5] ...
	I1204 15:42:07.794473   10206 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4a54859eb1e5"
	I1204 15:42:07.813366   10206 logs.go:123] Gathering logs for kube-controller-manager [63473edefa8f] ...
	I1204 15:42:07.813380   10206 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 63473edefa8f"
	I1204 15:42:07.825636   10206 logs.go:123] Gathering logs for storage-provisioner [4bf3c6abead9] ...
	I1204 15:42:07.825647   10206 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4bf3c6abead9"
	I1204 15:42:07.837301   10206 logs.go:123] Gathering logs for kubelet ...
	I1204 15:42:07.837313   10206 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1204 15:42:07.877499   10206 logs.go:123] Gathering logs for coredns [467f47d9f689] ...
	I1204 15:42:07.877514   10206 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 467f47d9f689"
	I1204 15:42:10.391164   10206 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1204 15:42:15.393876   10206 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1204 15:42:15.394079   10206 kubeadm.go:597] duration metric: took 4m4.520836208s to restartPrimaryControlPlane
	W1204 15:42:15.394230   10206 out.go:270] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I1204 15:42:15.394293   10206 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm reset --cri-socket /var/run/cri-dockerd.sock --force"
	I1204 15:42:16.470838   10206 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm reset --cri-socket /var/run/cri-dockerd.sock --force": (1.076517292s)
	I1204 15:42:16.470918   10206 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1204 15:42:16.476126   10206 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1204 15:42:16.479146   10206 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1204 15:42:16.482062   10206 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1204 15:42:16.482067   10206 kubeadm.go:157] found existing configuration files:
	
	I1204 15:42:16.482098   10206 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:61834 /etc/kubernetes/admin.conf
	I1204 15:42:16.484901   10206 kubeadm.go:163] "https://control-plane.minikube.internal:61834" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:61834 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1204 15:42:16.484936   10206 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1204 15:42:16.487620   10206 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:61834 /etc/kubernetes/kubelet.conf
	I1204 15:42:16.490662   10206 kubeadm.go:163] "https://control-plane.minikube.internal:61834" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:61834 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1204 15:42:16.490698   10206 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1204 15:42:16.494318   10206 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:61834 /etc/kubernetes/controller-manager.conf
	I1204 15:42:16.497290   10206 kubeadm.go:163] "https://control-plane.minikube.internal:61834" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:61834 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1204 15:42:16.497316   10206 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1204 15:42:16.499876   10206 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:61834 /etc/kubernetes/scheduler.conf
	I1204 15:42:16.502884   10206 kubeadm.go:163] "https://control-plane.minikube.internal:61834" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:61834 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1204 15:42:16.502912   10206 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
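The stale-config sweep above is a per-file grep-then-delete: any kubeconfig that no longer names the expected control-plane endpoint is removed so kubeadm init can regenerate it. The four checks collapse into one loop (endpoint as in this run):

    for f in admin kubelet controller-manager scheduler; do
      sudo grep -q https://control-plane.minikube.internal:61834 \
        /etc/kubernetes/$f.conf || sudo rm -f /etc/kubernetes/$f.conf
    done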
	I1204 15:42:16.506199   10206 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I1204 15:42:16.524083   10206 kubeadm.go:310] [init] Using Kubernetes version: v1.24.1
	I1204 15:42:16.524115   10206 kubeadm.go:310] [preflight] Running pre-flight checks
	I1204 15:42:16.571595   10206 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1204 15:42:16.571657   10206 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1204 15:42:16.571709   10206 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1204 15:42:16.625265   10206 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1204 15:42:16.629404   10206 out.go:235]   - Generating certificates and keys ...
	I1204 15:42:16.629445   10206 kubeadm.go:310] [certs] Using existing ca certificate authority
	I1204 15:42:16.629479   10206 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I1204 15:42:16.629516   10206 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1204 15:42:16.629545   10206 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I1204 15:42:16.629594   10206 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I1204 15:42:16.629626   10206 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I1204 15:42:16.629661   10206 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I1204 15:42:16.629695   10206 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I1204 15:42:16.629735   10206 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1204 15:42:16.629773   10206 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1204 15:42:16.629794   10206 kubeadm.go:310] [certs] Using the existing "sa" key
	I1204 15:42:16.629833   10206 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1204 15:42:16.705723   10206 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1204 15:42:16.787165   10206 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1204 15:42:16.897829   10206 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1204 15:42:16.967000   10206 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1204 15:42:16.997538   10206 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1204 15:42:16.997975   10206 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1204 15:42:16.997996   10206 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I1204 15:42:17.076203   10206 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1204 15:42:17.080484   10206 out.go:235]   - Booting up control plane ...
	I1204 15:42:17.080531   10206 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1204 15:42:17.080575   10206 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1204 15:42:17.080613   10206 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1204 15:42:17.080658   10206 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1204 15:42:17.081662   10206 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I1204 15:42:22.085491   10206 kubeadm.go:310] [apiclient] All control plane components are healthy after 5.004297 seconds
	I1204 15:42:22.085579   10206 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1204 15:42:22.091230   10206 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1204 15:42:22.602866   10206 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I1204 15:42:22.603024   10206 kubeadm.go:310] [mark-control-plane] Marking the node stopped-upgrade-377000 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1204 15:42:23.110469   10206 kubeadm.go:310] [bootstrap-token] Using token: 1dn43k.o5d3nczgwbr8kvhs
	I1204 15:42:23.116508   10206 out.go:235]   - Configuring RBAC rules ...
	I1204 15:42:23.116632   10206 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1204 15:42:23.116715   10206 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1204 15:42:23.125883   10206 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1204 15:42:23.127648   10206 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1204 15:42:23.129122   10206 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1204 15:42:23.130499   10206 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1204 15:42:23.135736   10206 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1204 15:42:23.309371   10206 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I1204 15:42:23.516170   10206 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I1204 15:42:23.516723   10206 kubeadm.go:310] 
	I1204 15:42:23.516759   10206 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I1204 15:42:23.516763   10206 kubeadm.go:310] 
	I1204 15:42:23.516809   10206 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I1204 15:42:23.516816   10206 kubeadm.go:310] 
	I1204 15:42:23.516830   10206 kubeadm.go:310]   mkdir -p $HOME/.kube
	I1204 15:42:23.516870   10206 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1204 15:42:23.516899   10206 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1204 15:42:23.516902   10206 kubeadm.go:310] 
	I1204 15:42:23.516935   10206 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I1204 15:42:23.516971   10206 kubeadm.go:310] 
	I1204 15:42:23.516999   10206 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1204 15:42:23.517003   10206 kubeadm.go:310] 
	I1204 15:42:23.517050   10206 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I1204 15:42:23.517092   10206 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1204 15:42:23.517133   10206 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1204 15:42:23.517136   10206 kubeadm.go:310] 
	I1204 15:42:23.517232   10206 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I1204 15:42:23.517335   10206 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I1204 15:42:23.517371   10206 kubeadm.go:310] 
	I1204 15:42:23.517428   10206 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token 1dn43k.o5d3nczgwbr8kvhs \
	I1204 15:42:23.517541   10206 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:9ed783fc6ac587ac5303da44420d8c41896e6ac9083929196f4ee227216cf3a5 \
	I1204 15:42:23.517555   10206 kubeadm.go:310] 	--control-plane 
	I1204 15:42:23.517558   10206 kubeadm.go:310] 
	I1204 15:42:23.517612   10206 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I1204 15:42:23.517617   10206 kubeadm.go:310] 
	I1204 15:42:23.517673   10206 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token 1dn43k.o5d3nczgwbr8kvhs \
	I1204 15:42:23.517733   10206 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:9ed783fc6ac587ac5303da44420d8c41896e6ac9083929196f4ee227216cf3a5 
	I1204 15:42:23.517805   10206 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
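The only remaining preflight warning is the disabled kubelet unit; the fix kubeadm itself suggests is a one-liner:

    # Start the kubelet on boot, as [WARNING Service-Kubelet] advises.
    sudo systemctl enable kubelet.service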
	I1204 15:42:23.517818   10206 cni.go:84] Creating CNI manager for ""
	I1204 15:42:23.517828   10206 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I1204 15:42:23.524511   10206 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I1204 15:42:23.527577   10206 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1204 15:42:23.530720   10206 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
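The 496-byte payload written to /etc/cni/net.d/1-k8s.conflist is minikube's bridge CNI config; its exact contents are not shown in the log. A hypothetical minimal bridge conflist of the same shape, for illustration only:

    # Illustrative only -- the real file minikube writes may differ.
    sudo tee /etc/cni/net.d/1-k8s.conflist >/dev/null <<'EOF'
    {
      "cniVersion": "0.3.1",
      "name": "bridge",
      "plugins": [
        {
          "type": "bridge",
          "bridge": "bridge",
          "isDefaultGateway": true,
          "ipMasq": true,
          "hairpinMode": true,
          "ipam": { "type": "host-local", "subnet": "10.244.0.0/16" }
        },
        { "type": "portmap", "capabilities": { "portMappings": true } }
      ]
    }
    EOF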
	I1204 15:42:23.536141   10206 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1204 15:42:23.536204   10206 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1204 15:42:23.536205   10206 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes stopped-upgrade-377000 minikube.k8s.io/updated_at=2024_12_04T15_42_23_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=efbd8efc50652fe861e71899e50212cc75e3480d minikube.k8s.io/name=stopped-upgrade-377000 minikube.k8s.io/primary=true
	I1204 15:42:23.540567   10206 ops.go:34] apiserver oom_adj: -16
	I1204 15:42:23.585745   10206 kubeadm.go:1113] duration metric: took 49.59775ms to wait for elevateKubeSystemPrivileges
	I1204 15:42:23.585761   10206 kubeadm.go:394] duration metric: took 4m12.726966667s to StartCluster
	I1204 15:42:23.585772   10206 settings.go:142] acquiring lock: {Name:mkdd110867a4c47f742f3f13d7f418d838150f4f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1204 15:42:23.585873   10206 settings.go:150] Updating kubeconfig:  /Users/jenkins/minikube-integration/20045-6982/kubeconfig
	I1204 15:42:23.586285   10206 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/20045-6982/kubeconfig: {Name:mk101d59bd39dad79cc42c692d70ed55e90c94da Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1204 15:42:23.586508   10206 start.go:235] Will wait 6m0s for node &{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1204 15:42:23.586521   10206 addons.go:507] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1204 15:42:23.586556   10206 addons.go:69] Setting storage-provisioner=true in profile "stopped-upgrade-377000"
	I1204 15:42:23.586565   10206 addons.go:234] Setting addon storage-provisioner=true in "stopped-upgrade-377000"
	W1204 15:42:23.586568   10206 addons.go:243] addon storage-provisioner should already be in state true
	I1204 15:42:23.586578   10206 addons.go:69] Setting default-storageclass=true in profile "stopped-upgrade-377000"
	I1204 15:42:23.586588   10206 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "stopped-upgrade-377000"
	I1204 15:42:23.586580   10206 host.go:66] Checking if "stopped-upgrade-377000" exists ...
	I1204 15:42:23.586627   10206 config.go:182] Loaded profile config "stopped-upgrade-377000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I1204 15:42:23.590570   10206 out.go:177] * Verifying Kubernetes components...
	I1204 15:42:23.591265   10206 kapi.go:59] client config for stopped-upgrade-377000: &rest.Config{Host:"https://10.0.2.15:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/20045-6982/.minikube/profiles/stopped-upgrade-377000/client.crt", KeyFile:"/Users/jenkins/minikube-integration/20045-6982/.minikube/profiles/stopped-upgrade-377000/client.key", CAFile:"/Users/jenkins/minikube-integration/20045-6982/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x10435f6e0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
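
The rest.Config dump above maps directly onto a client-go configuration: host https://10.0.2.15:8443 plus the profile's client certificate, key, and CA. A minimal sketch of building an equivalent clientset (assuming client-go; this is not minikube's own kapi helper):

    package main

    import (
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/rest"
    )

    func newClient() (*kubernetes.Clientset, error) {
        cfg := &rest.Config{
            Host: "https://10.0.2.15:8443",
            TLSClientConfig: rest.TLSClientConfig{
                // Paths as logged for this profile.
                CertFile: "/Users/jenkins/minikube-integration/20045-6982/.minikube/profiles/stopped-upgrade-377000/client.crt",
                KeyFile:  "/Users/jenkins/minikube-integration/20045-6982/.minikube/profiles/stopped-upgrade-377000/client.key",
                CAFile:   "/Users/jenkins/minikube-integration/20045-6982/.minikube/ca.crt",
            },
        }
        return kubernetes.NewForConfig(cfg)
    }

    func main() {
        if _, err := newClient(); err != nil {
            panic(err)
        }
    }
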
	I1204 15:42:23.594923   10206 addons.go:234] Setting addon default-storageclass=true in "stopped-upgrade-377000"
	W1204 15:42:23.594929   10206 addons.go:243] addon default-storageclass should already be in state true
	I1204 15:42:23.594937   10206 host.go:66] Checking if "stopped-upgrade-377000" exists ...
	I1204 15:42:23.595487   10206 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I1204 15:42:23.595492   10206 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1204 15:42:23.595498   10206 sshutil.go:53] new ssh client: &{IP:localhost Port:61799 SSHKeyPath:/Users/jenkins/minikube-integration/20045-6982/.minikube/machines/stopped-upgrade-377000/id_rsa Username:docker}
	I1204 15:42:23.598548   10206 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1204 15:42:23.601539   10206 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1204 15:42:23.605571   10206 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1204 15:42:23.605578   10206 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1204 15:42:23.605584   10206 sshutil.go:53] new ssh client: &{IP:localhost Port:61799 SSHKeyPath:/Users/jenkins/minikube-integration/20045-6982/.minikube/machines/stopped-upgrade-377000/id_rsa Username:docker}
	I1204 15:42:23.693462   10206 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1204 15:42:23.698778   10206 api_server.go:52] waiting for apiserver process to appear ...
	I1204 15:42:23.698836   10206 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1204 15:42:23.703210   10206 api_server.go:72] duration metric: took 116.691042ms to wait for apiserver process to appear ...
	I1204 15:42:23.703218   10206 api_server.go:88] waiting for apiserver healthz status ...
	I1204 15:42:23.703224   10206 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1204 15:42:23.744942   10206 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1204 15:42:23.765114   10206 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1204 15:42:24.095247   10206 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I1204 15:42:24.095260   10206 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I1204 15:42:28.704701   10206 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1204 15:42:28.704758   10206 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1204 15:42:33.705419   10206 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1204 15:42:33.705439   10206 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1204 15:42:38.705711   10206 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1204 15:42:38.705738   10206 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1204 15:42:43.706114   10206 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1204 15:42:43.706173   10206 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1204 15:42:48.706985   10206 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1204 15:42:48.707019   10206 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1204 15:42:53.707714   10206 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1204 15:42:53.707783   10206 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	W1204 15:42:54.098004   10206 out.go:270] ! Enabling 'default-storageclass' returned an error: running callbacks: [Error making standard the default storage class: Error listing StorageClasses: Get "https://10.0.2.15:8443/apis/storage.k8s.io/v1/storageclasses": dial tcp 10.0.2.15:8443: i/o timeout]
	I1204 15:42:54.103024   10206 out.go:177] * Enabled addons: storage-provisioner
	I1204 15:42:54.109883   10206 addons.go:510] duration metric: took 30.523082916s for enable addons: enabled=[storage-provisioner]
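
The repeated "stopped: ... context deadline exceeded (Client.Timeout exceeded while awaiting headers)" lines are the HTTP client's per-probe timeout firing roughly every five seconds while the overall 6m0s node wait set above keeps the loop alive. A stripped-down sketch of that probe loop (plain net/http; minikube's real version in api_server.go also verifies TLS against the cluster CA and inspects the healthz body):

    package main

    import (
        "crypto/tls"
        "fmt"
        "net/http"
        "time"
    )

    func main() {
        client := &http.Client{
            Timeout: 5 * time.Second, // matches the ~5s spacing of the failed checks
            // Sketch only: skip verification here; the real check trusts the cluster CA.
            Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
        }
        deadline := time.Now().Add(6 * time.Minute) // "Will wait 6m0s for node"
        for time.Now().Before(deadline) {
            resp, err := client.Get("https://10.0.2.15:8443/healthz")
            if err != nil {
                fmt.Println("stopped:", err) // the lines seen throughout this log
                continue                     // the 5s client timeout already paced this iteration
            }
            resp.Body.Close()
            if resp.StatusCode == http.StatusOK {
                fmt.Println("apiserver healthy")
                return
            }
            time.Sleep(time.Second)
        }
    }
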
	I1204 15:42:58.708650   10206 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1204 15:42:58.708681   10206 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1204 15:43:03.709778   10206 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1204 15:43:03.709801   10206 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1204 15:43:08.711121   10206 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1204 15:43:08.711153   10206 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1204 15:43:13.712799   10206 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1204 15:43:13.712846   10206 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1204 15:43:18.715073   10206 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1204 15:43:18.715100   10206 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1204 15:43:23.716597   10206 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1204 15:43:23.716884   10206 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1204 15:43:23.758400   10206 logs.go:282] 1 containers: [7c4247944179]
	I1204 15:43:23.758507   10206 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1204 15:43:23.774220   10206 logs.go:282] 1 containers: [fa33aed93a73]
	I1204 15:43:23.774312   10206 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1204 15:43:23.789333   10206 logs.go:282] 2 containers: [872c1193c45e 3333718a6099]
	I1204 15:43:23.789422   10206 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1204 15:43:23.799938   10206 logs.go:282] 1 containers: [3d43cf7548a9]
	I1204 15:43:23.800018   10206 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1204 15:43:23.810014   10206 logs.go:282] 1 containers: [070b054a3a11]
	I1204 15:43:23.810090   10206 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1204 15:43:23.820173   10206 logs.go:282] 1 containers: [7fd5007ee78f]
	I1204 15:43:23.820264   10206 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1204 15:43:23.830198   10206 logs.go:282] 0 containers: []
	W1204 15:43:23.830211   10206 logs.go:284] No container was found matching "kindnet"
	I1204 15:43:23.830273   10206 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1204 15:43:23.841194   10206 logs.go:282] 1 containers: [8ee1740c2cc4]
	I1204 15:43:23.841208   10206 logs.go:123] Gathering logs for kube-controller-manager [7fd5007ee78f] ...
	I1204 15:43:23.841214   10206 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7fd5007ee78f"
	I1204 15:43:23.858943   10206 logs.go:123] Gathering logs for storage-provisioner [8ee1740c2cc4] ...
	I1204 15:43:23.858955   10206 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8ee1740c2cc4"
	I1204 15:43:23.870021   10206 logs.go:123] Gathering logs for kubelet ...
	I1204 15:43:23.870032   10206 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1204 15:43:23.906042   10206 logs.go:123] Gathering logs for dmesg ...
	I1204 15:43:23.906054   10206 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1204 15:43:23.909892   10206 logs.go:123] Gathering logs for kube-apiserver [7c4247944179] ...
	I1204 15:43:23.909900   10206 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7c4247944179"
	I1204 15:43:23.924201   10206 logs.go:123] Gathering logs for etcd [fa33aed93a73] ...
	I1204 15:43:23.924213   10206 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fa33aed93a73"
	I1204 15:43:23.938058   10206 logs.go:123] Gathering logs for kube-scheduler [3d43cf7548a9] ...
	I1204 15:43:23.938068   10206 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3d43cf7548a9"
	I1204 15:43:23.952743   10206 logs.go:123] Gathering logs for container status ...
	I1204 15:43:23.952754   10206 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1204 15:43:23.966032   10206 logs.go:123] Gathering logs for describe nodes ...
	I1204 15:43:23.966042   10206 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1204 15:43:24.007134   10206 logs.go:123] Gathering logs for coredns [872c1193c45e] ...
	I1204 15:43:24.007147   10206 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 872c1193c45e"
	I1204 15:43:24.018567   10206 logs.go:123] Gathering logs for coredns [3333718a6099] ...
	I1204 15:43:24.018580   10206 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3333718a6099"
	I1204 15:43:24.030157   10206 logs.go:123] Gathering logs for kube-proxy [070b054a3a11] ...
	I1204 15:43:24.030171   10206 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 070b054a3a11"
	I1204 15:43:24.041665   10206 logs.go:123] Gathering logs for Docker ...
	I1204 15:43:24.041680   10206 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
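
Each diagnostic pass above applies one pattern per component: docker ps -a with a k8s_<name> filter to collect container IDs, then docker logs --tail 400 on each hit, plus journalctl for the kubelet and Docker units. A compact sketch of the container half of that loop (assuming a local docker CLI; minikube actually runs these commands on the node over SSH via ssh_runner.go):

    package main

    import (
        "fmt"
        "os/exec"
        "strings"
    )

    // containerIDs lists docker container IDs whose name matches k8s_<component>.
    func containerIDs(component string) ([]string, error) {
        out, err := exec.Command("docker", "ps", "-a",
            "--filter=name=k8s_"+component, "--format={{.ID}}").Output()
        if err != nil {
            return nil, err
        }
        return strings.Fields(string(out)), nil
    }

    func main() {
        for _, c := range []string{"kube-apiserver", "etcd", "coredns", "kube-scheduler"} {
            ids, err := containerIDs(c)
            if err != nil {
                continue
            }
            for _, id := range ids {
                logs, _ := exec.Command("docker", "logs", "--tail", "400", id).CombinedOutput()
                fmt.Printf("== %s [%s] ==\n%s", c, id, logs)
            }
        }
    }
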
	I1204 15:43:26.568097   10206 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1204 15:43:31.570547   10206 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1204 15:43:31.570857   10206 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1204 15:43:31.600772   10206 logs.go:282] 1 containers: [7c4247944179]
	I1204 15:43:31.600905   10206 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1204 15:43:31.618740   10206 logs.go:282] 1 containers: [fa33aed93a73]
	I1204 15:43:31.618851   10206 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1204 15:43:31.632054   10206 logs.go:282] 2 containers: [872c1193c45e 3333718a6099]
	I1204 15:43:31.632147   10206 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1204 15:43:31.643475   10206 logs.go:282] 1 containers: [3d43cf7548a9]
	I1204 15:43:31.643544   10206 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1204 15:43:31.654275   10206 logs.go:282] 1 containers: [070b054a3a11]
	I1204 15:43:31.654364   10206 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1204 15:43:31.664962   10206 logs.go:282] 1 containers: [7fd5007ee78f]
	I1204 15:43:31.665040   10206 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1204 15:43:31.674898   10206 logs.go:282] 0 containers: []
	W1204 15:43:31.674911   10206 logs.go:284] No container was found matching "kindnet"
	I1204 15:43:31.674973   10206 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1204 15:43:31.685054   10206 logs.go:282] 1 containers: [8ee1740c2cc4]
	I1204 15:43:31.685071   10206 logs.go:123] Gathering logs for kubelet ...
	I1204 15:43:31.685077   10206 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1204 15:43:31.720653   10206 logs.go:123] Gathering logs for kube-controller-manager [7fd5007ee78f] ...
	I1204 15:43:31.720661   10206 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7fd5007ee78f"
	I1204 15:43:31.738079   10206 logs.go:123] Gathering logs for Docker ...
	I1204 15:43:31.738090   10206 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1204 15:43:31.763554   10206 logs.go:123] Gathering logs for kube-proxy [070b054a3a11] ...
	I1204 15:43:31.763565   10206 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 070b054a3a11"
	I1204 15:43:31.775366   10206 logs.go:123] Gathering logs for dmesg ...
	I1204 15:43:31.775380   10206 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1204 15:43:31.780343   10206 logs.go:123] Gathering logs for describe nodes ...
	I1204 15:43:31.780350   10206 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1204 15:43:31.814908   10206 logs.go:123] Gathering logs for kube-apiserver [7c4247944179] ...
	I1204 15:43:31.814924   10206 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7c4247944179"
	I1204 15:43:31.829648   10206 logs.go:123] Gathering logs for etcd [fa33aed93a73] ...
	I1204 15:43:31.829660   10206 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fa33aed93a73"
	I1204 15:43:31.843351   10206 logs.go:123] Gathering logs for coredns [872c1193c45e] ...
	I1204 15:43:31.843364   10206 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 872c1193c45e"
	I1204 15:43:31.855323   10206 logs.go:123] Gathering logs for coredns [3333718a6099] ...
	I1204 15:43:31.855335   10206 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3333718a6099"
	I1204 15:43:31.870306   10206 logs.go:123] Gathering logs for kube-scheduler [3d43cf7548a9] ...
	I1204 15:43:31.870317   10206 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3d43cf7548a9"
	I1204 15:43:31.885057   10206 logs.go:123] Gathering logs for storage-provisioner [8ee1740c2cc4] ...
	I1204 15:43:31.885069   10206 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8ee1740c2cc4"
	I1204 15:43:31.899665   10206 logs.go:123] Gathering logs for container status ...
	I1204 15:43:31.899675   10206 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1204 15:43:34.413175   10206 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1204 15:43:39.416107   10206 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1204 15:43:39.416432   10206 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1204 15:43:39.441999   10206 logs.go:282] 1 containers: [7c4247944179]
	I1204 15:43:39.442128   10206 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1204 15:43:39.459001   10206 logs.go:282] 1 containers: [fa33aed93a73]
	I1204 15:43:39.459102   10206 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1204 15:43:39.472589   10206 logs.go:282] 2 containers: [872c1193c45e 3333718a6099]
	I1204 15:43:39.472667   10206 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1204 15:43:39.483677   10206 logs.go:282] 1 containers: [3d43cf7548a9]
	I1204 15:43:39.483749   10206 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1204 15:43:39.494093   10206 logs.go:282] 1 containers: [070b054a3a11]
	I1204 15:43:39.494167   10206 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1204 15:43:39.504470   10206 logs.go:282] 1 containers: [7fd5007ee78f]
	I1204 15:43:39.504544   10206 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1204 15:43:39.514613   10206 logs.go:282] 0 containers: []
	W1204 15:43:39.514624   10206 logs.go:284] No container was found matching "kindnet"
	I1204 15:43:39.514695   10206 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1204 15:43:39.525077   10206 logs.go:282] 1 containers: [8ee1740c2cc4]
	I1204 15:43:39.525093   10206 logs.go:123] Gathering logs for kubelet ...
	I1204 15:43:39.525098   10206 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1204 15:43:39.564130   10206 logs.go:123] Gathering logs for describe nodes ...
	I1204 15:43:39.564142   10206 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1204 15:43:39.599444   10206 logs.go:123] Gathering logs for kube-apiserver [7c4247944179] ...
	I1204 15:43:39.599457   10206 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7c4247944179"
	I1204 15:43:39.614142   10206 logs.go:123] Gathering logs for coredns [3333718a6099] ...
	I1204 15:43:39.614154   10206 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3333718a6099"
	I1204 15:43:39.626258   10206 logs.go:123] Gathering logs for kube-scheduler [3d43cf7548a9] ...
	I1204 15:43:39.626271   10206 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3d43cf7548a9"
	I1204 15:43:39.641334   10206 logs.go:123] Gathering logs for kube-proxy [070b054a3a11] ...
	I1204 15:43:39.641347   10206 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 070b054a3a11"
	I1204 15:43:39.660308   10206 logs.go:123] Gathering logs for kube-controller-manager [7fd5007ee78f] ...
	I1204 15:43:39.660318   10206 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7fd5007ee78f"
	I1204 15:43:39.678369   10206 logs.go:123] Gathering logs for Docker ...
	I1204 15:43:39.678378   10206 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1204 15:43:39.702919   10206 logs.go:123] Gathering logs for container status ...
	I1204 15:43:39.702928   10206 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1204 15:43:39.714524   10206 logs.go:123] Gathering logs for dmesg ...
	I1204 15:43:39.714534   10206 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1204 15:43:39.718837   10206 logs.go:123] Gathering logs for etcd [fa33aed93a73] ...
	I1204 15:43:39.718843   10206 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fa33aed93a73"
	I1204 15:43:39.732898   10206 logs.go:123] Gathering logs for coredns [872c1193c45e] ...
	I1204 15:43:39.732910   10206 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 872c1193c45e"
	I1204 15:43:39.745010   10206 logs.go:123] Gathering logs for storage-provisioner [8ee1740c2cc4] ...
	I1204 15:43:39.745024   10206 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8ee1740c2cc4"
	I1204 15:43:42.258949   10206 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1204 15:43:47.261652   10206 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1204 15:43:47.261951   10206 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1204 15:43:47.289518   10206 logs.go:282] 1 containers: [7c4247944179]
	I1204 15:43:47.289668   10206 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1204 15:43:47.310370   10206 logs.go:282] 1 containers: [fa33aed93a73]
	I1204 15:43:47.310462   10206 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1204 15:43:47.323691   10206 logs.go:282] 2 containers: [872c1193c45e 3333718a6099]
	I1204 15:43:47.323775   10206 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1204 15:43:47.335248   10206 logs.go:282] 1 containers: [3d43cf7548a9]
	I1204 15:43:47.335325   10206 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1204 15:43:47.346284   10206 logs.go:282] 1 containers: [070b054a3a11]
	I1204 15:43:47.346358   10206 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1204 15:43:47.357160   10206 logs.go:282] 1 containers: [7fd5007ee78f]
	I1204 15:43:47.357238   10206 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1204 15:43:47.367595   10206 logs.go:282] 0 containers: []
	W1204 15:43:47.367605   10206 logs.go:284] No container was found matching "kindnet"
	I1204 15:43:47.367664   10206 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1204 15:43:47.381841   10206 logs.go:282] 1 containers: [8ee1740c2cc4]
	I1204 15:43:47.381856   10206 logs.go:123] Gathering logs for kube-controller-manager [7fd5007ee78f] ...
	I1204 15:43:47.381861   10206 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7fd5007ee78f"
	I1204 15:43:47.401316   10206 logs.go:123] Gathering logs for storage-provisioner [8ee1740c2cc4] ...
	I1204 15:43:47.401326   10206 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8ee1740c2cc4"
	I1204 15:43:47.412727   10206 logs.go:123] Gathering logs for container status ...
	I1204 15:43:47.412741   10206 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1204 15:43:47.425293   10206 logs.go:123] Gathering logs for kubelet ...
	I1204 15:43:47.425304   10206 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1204 15:43:47.461324   10206 logs.go:123] Gathering logs for dmesg ...
	I1204 15:43:47.461333   10206 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1204 15:43:47.465382   10206 logs.go:123] Gathering logs for etcd [fa33aed93a73] ...
	I1204 15:43:47.465390   10206 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fa33aed93a73"
	I1204 15:43:47.479485   10206 logs.go:123] Gathering logs for coredns [3333718a6099] ...
	I1204 15:43:47.479496   10206 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3333718a6099"
	I1204 15:43:47.491141   10206 logs.go:123] Gathering logs for kube-proxy [070b054a3a11] ...
	I1204 15:43:47.491153   10206 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 070b054a3a11"
	I1204 15:43:47.502620   10206 logs.go:123] Gathering logs for Docker ...
	I1204 15:43:47.502632   10206 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1204 15:43:47.525526   10206 logs.go:123] Gathering logs for describe nodes ...
	I1204 15:43:47.525533   10206 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1204 15:43:47.560926   10206 logs.go:123] Gathering logs for kube-apiserver [7c4247944179] ...
	I1204 15:43:47.560937   10206 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7c4247944179"
	I1204 15:43:47.577857   10206 logs.go:123] Gathering logs for coredns [872c1193c45e] ...
	I1204 15:43:47.577865   10206 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 872c1193c45e"
	I1204 15:43:47.590133   10206 logs.go:123] Gathering logs for kube-scheduler [3d43cf7548a9] ...
	I1204 15:43:47.590143   10206 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3d43cf7548a9"
	I1204 15:43:50.107830   10206 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1204 15:43:55.110374   10206 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1204 15:43:55.110908   10206 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1204 15:43:55.148248   10206 logs.go:282] 1 containers: [7c4247944179]
	I1204 15:43:55.148403   10206 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1204 15:43:55.169094   10206 logs.go:282] 1 containers: [fa33aed93a73]
	I1204 15:43:55.169220   10206 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1204 15:43:55.185891   10206 logs.go:282] 2 containers: [872c1193c45e 3333718a6099]
	I1204 15:43:55.185960   10206 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1204 15:43:55.197949   10206 logs.go:282] 1 containers: [3d43cf7548a9]
	I1204 15:43:55.198023   10206 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1204 15:43:55.210195   10206 logs.go:282] 1 containers: [070b054a3a11]
	I1204 15:43:55.210280   10206 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1204 15:43:55.220894   10206 logs.go:282] 1 containers: [7fd5007ee78f]
	I1204 15:43:55.220972   10206 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1204 15:43:55.230935   10206 logs.go:282] 0 containers: []
	W1204 15:43:55.230946   10206 logs.go:284] No container was found matching "kindnet"
	I1204 15:43:55.231007   10206 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1204 15:43:55.241275   10206 logs.go:282] 1 containers: [8ee1740c2cc4]
	I1204 15:43:55.241290   10206 logs.go:123] Gathering logs for coredns [872c1193c45e] ...
	I1204 15:43:55.241295   10206 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 872c1193c45e"
	I1204 15:43:55.252618   10206 logs.go:123] Gathering logs for coredns [3333718a6099] ...
	I1204 15:43:55.252629   10206 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3333718a6099"
	I1204 15:43:55.263835   10206 logs.go:123] Gathering logs for kube-controller-manager [7fd5007ee78f] ...
	I1204 15:43:55.263845   10206 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7fd5007ee78f"
	I1204 15:43:55.280696   10206 logs.go:123] Gathering logs for storage-provisioner [8ee1740c2cc4] ...
	I1204 15:43:55.280708   10206 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8ee1740c2cc4"
	I1204 15:43:55.291975   10206 logs.go:123] Gathering logs for Docker ...
	I1204 15:43:55.291987   10206 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1204 15:43:55.317115   10206 logs.go:123] Gathering logs for dmesg ...
	I1204 15:43:55.317122   10206 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1204 15:43:55.321276   10206 logs.go:123] Gathering logs for describe nodes ...
	I1204 15:43:55.321284   10206 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1204 15:43:55.359490   10206 logs.go:123] Gathering logs for kube-apiserver [7c4247944179] ...
	I1204 15:43:55.359503   10206 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7c4247944179"
	I1204 15:43:55.374727   10206 logs.go:123] Gathering logs for etcd [fa33aed93a73] ...
	I1204 15:43:55.374742   10206 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fa33aed93a73"
	I1204 15:43:55.389237   10206 logs.go:123] Gathering logs for kube-scheduler [3d43cf7548a9] ...
	I1204 15:43:55.389248   10206 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3d43cf7548a9"
	I1204 15:43:55.404085   10206 logs.go:123] Gathering logs for kube-proxy [070b054a3a11] ...
	I1204 15:43:55.404097   10206 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 070b054a3a11"
	I1204 15:43:55.415605   10206 logs.go:123] Gathering logs for container status ...
	I1204 15:43:55.415618   10206 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1204 15:43:55.427150   10206 logs.go:123] Gathering logs for kubelet ...
	I1204 15:43:55.427163   10206 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1204 15:43:57.964888   10206 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1204 15:44:02.967766   10206 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1204 15:44:02.968311   10206 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1204 15:44:03.018994   10206 logs.go:282] 1 containers: [7c4247944179]
	I1204 15:44:03.019129   10206 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1204 15:44:03.037025   10206 logs.go:282] 1 containers: [fa33aed93a73]
	I1204 15:44:03.037124   10206 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1204 15:44:03.050383   10206 logs.go:282] 2 containers: [872c1193c45e 3333718a6099]
	I1204 15:44:03.050465   10206 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1204 15:44:03.061768   10206 logs.go:282] 1 containers: [3d43cf7548a9]
	I1204 15:44:03.061848   10206 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1204 15:44:03.072073   10206 logs.go:282] 1 containers: [070b054a3a11]
	I1204 15:44:03.072151   10206 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1204 15:44:03.082807   10206 logs.go:282] 1 containers: [7fd5007ee78f]
	I1204 15:44:03.082883   10206 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1204 15:44:03.095097   10206 logs.go:282] 0 containers: []
	W1204 15:44:03.095115   10206 logs.go:284] No container was found matching "kindnet"
	I1204 15:44:03.095178   10206 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1204 15:44:03.105467   10206 logs.go:282] 1 containers: [8ee1740c2cc4]
	I1204 15:44:03.105482   10206 logs.go:123] Gathering logs for kubelet ...
	I1204 15:44:03.105487   10206 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1204 15:44:03.143348   10206 logs.go:123] Gathering logs for dmesg ...
	I1204 15:44:03.143358   10206 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1204 15:44:03.147822   10206 logs.go:123] Gathering logs for kube-apiserver [7c4247944179] ...
	I1204 15:44:03.147830   10206 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7c4247944179"
	I1204 15:44:03.162314   10206 logs.go:123] Gathering logs for coredns [872c1193c45e] ...
	I1204 15:44:03.162326   10206 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 872c1193c45e"
	I1204 15:44:03.173468   10206 logs.go:123] Gathering logs for kube-proxy [070b054a3a11] ...
	I1204 15:44:03.173479   10206 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 070b054a3a11"
	I1204 15:44:03.184702   10206 logs.go:123] Gathering logs for kube-controller-manager [7fd5007ee78f] ...
	I1204 15:44:03.184716   10206 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7fd5007ee78f"
	I1204 15:44:03.202038   10206 logs.go:123] Gathering logs for storage-provisioner [8ee1740c2cc4] ...
	I1204 15:44:03.202051   10206 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8ee1740c2cc4"
	I1204 15:44:03.214289   10206 logs.go:123] Gathering logs for Docker ...
	I1204 15:44:03.214302   10206 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1204 15:44:03.238650   10206 logs.go:123] Gathering logs for describe nodes ...
	I1204 15:44:03.238657   10206 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1204 15:44:03.272308   10206 logs.go:123] Gathering logs for etcd [fa33aed93a73] ...
	I1204 15:44:03.272321   10206 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fa33aed93a73"
	I1204 15:44:03.289969   10206 logs.go:123] Gathering logs for coredns [3333718a6099] ...
	I1204 15:44:03.289978   10206 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3333718a6099"
	I1204 15:44:03.301475   10206 logs.go:123] Gathering logs for kube-scheduler [3d43cf7548a9] ...
	I1204 15:44:03.301485   10206 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3d43cf7548a9"
	I1204 15:44:03.315620   10206 logs.go:123] Gathering logs for container status ...
	I1204 15:44:03.315631   10206 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1204 15:44:05.828226   10206 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1204 15:44:10.830974   10206 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1204 15:44:10.831579   10206 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1204 15:44:10.871076   10206 logs.go:282] 1 containers: [7c4247944179]
	I1204 15:44:10.871234   10206 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1204 15:44:10.894361   10206 logs.go:282] 1 containers: [fa33aed93a73]
	I1204 15:44:10.894491   10206 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1204 15:44:10.911811   10206 logs.go:282] 2 containers: [872c1193c45e 3333718a6099]
	I1204 15:44:10.911901   10206 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1204 15:44:10.925263   10206 logs.go:282] 1 containers: [3d43cf7548a9]
	I1204 15:44:10.925342   10206 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1204 15:44:10.936067   10206 logs.go:282] 1 containers: [070b054a3a11]
	I1204 15:44:10.936144   10206 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1204 15:44:10.946574   10206 logs.go:282] 1 containers: [7fd5007ee78f]
	I1204 15:44:10.946656   10206 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1204 15:44:10.956644   10206 logs.go:282] 0 containers: []
	W1204 15:44:10.956654   10206 logs.go:284] No container was found matching "kindnet"
	I1204 15:44:10.956720   10206 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1204 15:44:10.968796   10206 logs.go:282] 1 containers: [8ee1740c2cc4]
	I1204 15:44:10.968809   10206 logs.go:123] Gathering logs for kubelet ...
	I1204 15:44:10.968815   10206 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1204 15:44:11.006476   10206 logs.go:123] Gathering logs for dmesg ...
	I1204 15:44:11.006486   10206 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1204 15:44:11.010513   10206 logs.go:123] Gathering logs for coredns [872c1193c45e] ...
	I1204 15:44:11.010521   10206 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 872c1193c45e"
	I1204 15:44:11.022452   10206 logs.go:123] Gathering logs for coredns [3333718a6099] ...
	I1204 15:44:11.022465   10206 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3333718a6099"
	I1204 15:44:11.034228   10206 logs.go:123] Gathering logs for kube-proxy [070b054a3a11] ...
	I1204 15:44:11.034239   10206 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 070b054a3a11"
	I1204 15:44:11.045888   10206 logs.go:123] Gathering logs for kube-controller-manager [7fd5007ee78f] ...
	I1204 15:44:11.045900   10206 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7fd5007ee78f"
	I1204 15:44:11.063183   10206 logs.go:123] Gathering logs for Docker ...
	I1204 15:44:11.063196   10206 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1204 15:44:11.088015   10206 logs.go:123] Gathering logs for describe nodes ...
	I1204 15:44:11.088023   10206 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1204 15:44:11.126509   10206 logs.go:123] Gathering logs for kube-apiserver [7c4247944179] ...
	I1204 15:44:11.126524   10206 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7c4247944179"
	I1204 15:44:11.140822   10206 logs.go:123] Gathering logs for etcd [fa33aed93a73] ...
	I1204 15:44:11.140834   10206 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fa33aed93a73"
	I1204 15:44:11.154803   10206 logs.go:123] Gathering logs for kube-scheduler [3d43cf7548a9] ...
	I1204 15:44:11.154817   10206 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3d43cf7548a9"
	I1204 15:44:11.189583   10206 logs.go:123] Gathering logs for storage-provisioner [8ee1740c2cc4] ...
	I1204 15:44:11.189594   10206 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8ee1740c2cc4"
	I1204 15:44:11.203986   10206 logs.go:123] Gathering logs for container status ...
	I1204 15:44:11.203997   10206 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1204 15:44:13.716736   10206 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1204 15:44:18.718620   10206 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1204 15:44:18.719045   10206 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1204 15:44:18.754713   10206 logs.go:282] 1 containers: [7c4247944179]
	I1204 15:44:18.754865   10206 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1204 15:44:18.775877   10206 logs.go:282] 1 containers: [fa33aed93a73]
	I1204 15:44:18.776009   10206 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1204 15:44:18.791140   10206 logs.go:282] 2 containers: [872c1193c45e 3333718a6099]
	I1204 15:44:18.791223   10206 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1204 15:44:18.803588   10206 logs.go:282] 1 containers: [3d43cf7548a9]
	I1204 15:44:18.803668   10206 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1204 15:44:18.814609   10206 logs.go:282] 1 containers: [070b054a3a11]
	I1204 15:44:18.814686   10206 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1204 15:44:18.824798   10206 logs.go:282] 1 containers: [7fd5007ee78f]
	I1204 15:44:18.824871   10206 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1204 15:44:18.842684   10206 logs.go:282] 0 containers: []
	W1204 15:44:18.842696   10206 logs.go:284] No container was found matching "kindnet"
	I1204 15:44:18.842762   10206 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1204 15:44:18.852832   10206 logs.go:282] 1 containers: [8ee1740c2cc4]
	I1204 15:44:18.852850   10206 logs.go:123] Gathering logs for kube-proxy [070b054a3a11] ...
	I1204 15:44:18.852855   10206 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 070b054a3a11"
	I1204 15:44:18.864462   10206 logs.go:123] Gathering logs for dmesg ...
	I1204 15:44:18.864475   10206 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1204 15:44:18.869238   10206 logs.go:123] Gathering logs for describe nodes ...
	I1204 15:44:18.869246   10206 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1204 15:44:18.908526   10206 logs.go:123] Gathering logs for coredns [872c1193c45e] ...
	I1204 15:44:18.908539   10206 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 872c1193c45e"
	I1204 15:44:18.919720   10206 logs.go:123] Gathering logs for kube-scheduler [3d43cf7548a9] ...
	I1204 15:44:18.919731   10206 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3d43cf7548a9"
	I1204 15:44:18.935629   10206 logs.go:123] Gathering logs for kube-controller-manager [7fd5007ee78f] ...
	I1204 15:44:18.935640   10206 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7fd5007ee78f"
	I1204 15:44:18.953543   10206 logs.go:123] Gathering logs for storage-provisioner [8ee1740c2cc4] ...
	I1204 15:44:18.953553   10206 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8ee1740c2cc4"
	I1204 15:44:18.965013   10206 logs.go:123] Gathering logs for Docker ...
	I1204 15:44:18.965025   10206 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1204 15:44:18.989573   10206 logs.go:123] Gathering logs for container status ...
	I1204 15:44:18.989580   10206 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1204 15:44:19.000842   10206 logs.go:123] Gathering logs for kubelet ...
	I1204 15:44:19.000854   10206 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1204 15:44:19.036973   10206 logs.go:123] Gathering logs for kube-apiserver [7c4247944179] ...
	I1204 15:44:19.036982   10206 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7c4247944179"
	I1204 15:44:19.051056   10206 logs.go:123] Gathering logs for etcd [fa33aed93a73] ...
	I1204 15:44:19.051069   10206 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fa33aed93a73"
	I1204 15:44:19.064861   10206 logs.go:123] Gathering logs for coredns [3333718a6099] ...
	I1204 15:44:19.064871   10206 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3333718a6099"
	I1204 15:44:21.578258   10206 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1204 15:44:26.580687   10206 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1204 15:44:26.580896   10206 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1204 15:44:26.597682   10206 logs.go:282] 1 containers: [7c4247944179]
	I1204 15:44:26.597775   10206 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1204 15:44:26.611109   10206 logs.go:282] 1 containers: [fa33aed93a73]
	I1204 15:44:26.611187   10206 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1204 15:44:26.622047   10206 logs.go:282] 2 containers: [872c1193c45e 3333718a6099]
	I1204 15:44:26.622120   10206 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1204 15:44:26.632495   10206 logs.go:282] 1 containers: [3d43cf7548a9]
	I1204 15:44:26.632569   10206 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1204 15:44:26.642938   10206 logs.go:282] 1 containers: [070b054a3a11]
	I1204 15:44:26.643009   10206 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1204 15:44:26.653745   10206 logs.go:282] 1 containers: [7fd5007ee78f]
	I1204 15:44:26.653813   10206 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1204 15:44:26.663533   10206 logs.go:282] 0 containers: []
	W1204 15:44:26.663545   10206 logs.go:284] No container was found matching "kindnet"
	I1204 15:44:26.663612   10206 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1204 15:44:26.673939   10206 logs.go:282] 1 containers: [8ee1740c2cc4]
	I1204 15:44:26.673953   10206 logs.go:123] Gathering logs for describe nodes ...
	I1204 15:44:26.673958   10206 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1204 15:44:26.709934   10206 logs.go:123] Gathering logs for etcd [fa33aed93a73] ...
	I1204 15:44:26.709945   10206 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fa33aed93a73"
	I1204 15:44:26.724225   10206 logs.go:123] Gathering logs for kube-proxy [070b054a3a11] ...
	I1204 15:44:26.724235   10206 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 070b054a3a11"
	I1204 15:44:26.736392   10206 logs.go:123] Gathering logs for kube-controller-manager [7fd5007ee78f] ...
	I1204 15:44:26.736405   10206 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7fd5007ee78f"
	I1204 15:44:26.754946   10206 logs.go:123] Gathering logs for storage-provisioner [8ee1740c2cc4] ...
	I1204 15:44:26.754959   10206 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8ee1740c2cc4"
	I1204 15:44:26.766309   10206 logs.go:123] Gathering logs for kubelet ...
	I1204 15:44:26.766320   10206 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1204 15:44:26.804027   10206 logs.go:123] Gathering logs for dmesg ...
	I1204 15:44:26.804035   10206 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1204 15:44:26.808737   10206 logs.go:123] Gathering logs for kube-apiserver [7c4247944179] ...
	I1204 15:44:26.808743   10206 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7c4247944179"
	I1204 15:44:26.823150   10206 logs.go:123] Gathering logs for coredns [872c1193c45e] ...
	I1204 15:44:26.823160   10206 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 872c1193c45e"
	I1204 15:44:26.834614   10206 logs.go:123] Gathering logs for coredns [3333718a6099] ...
	I1204 15:44:26.834626   10206 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3333718a6099"
	I1204 15:44:26.847583   10206 logs.go:123] Gathering logs for kube-scheduler [3d43cf7548a9] ...
	I1204 15:44:26.847603   10206 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3d43cf7548a9"
	I1204 15:44:26.862089   10206 logs.go:123] Gathering logs for Docker ...
	I1204 15:44:26.862101   10206 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1204 15:44:26.885209   10206 logs.go:123] Gathering logs for container status ...
	I1204 15:44:26.885217   10206 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1204 15:44:29.398701   10206 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1204 15:44:34.400326   10206 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1204 15:44:34.400859   10206 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1204 15:44:34.444049   10206 logs.go:282] 1 containers: [7c4247944179]
	I1204 15:44:34.444200   10206 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1204 15:44:34.467616   10206 logs.go:282] 1 containers: [fa33aed93a73]
	I1204 15:44:34.467723   10206 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1204 15:44:34.481450   10206 logs.go:282] 2 containers: [872c1193c45e 3333718a6099]
	I1204 15:44:34.481537   10206 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1204 15:44:34.493687   10206 logs.go:282] 1 containers: [3d43cf7548a9]
	I1204 15:44:34.493758   10206 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1204 15:44:34.504290   10206 logs.go:282] 1 containers: [070b054a3a11]
	I1204 15:44:34.504362   10206 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1204 15:44:34.518597   10206 logs.go:282] 1 containers: [7fd5007ee78f]
	I1204 15:44:34.518671   10206 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1204 15:44:34.530859   10206 logs.go:282] 0 containers: []
	W1204 15:44:34.530872   10206 logs.go:284] No container was found matching "kindnet"
	I1204 15:44:34.530942   10206 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1204 15:44:34.542162   10206 logs.go:282] 1 containers: [8ee1740c2cc4]
	I1204 15:44:34.542178   10206 logs.go:123] Gathering logs for Docker ...
	I1204 15:44:34.542183   10206 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1204 15:44:34.567216   10206 logs.go:123] Gathering logs for kubelet ...
	I1204 15:44:34.567226   10206 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1204 15:44:34.602918   10206 logs.go:123] Gathering logs for describe nodes ...
	I1204 15:44:34.602925   10206 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1204 15:44:34.637163   10206 logs.go:123] Gathering logs for kube-apiserver [7c4247944179] ...
	I1204 15:44:34.637177   10206 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7c4247944179"
	I1204 15:44:34.651954   10206 logs.go:123] Gathering logs for etcd [fa33aed93a73] ...
	I1204 15:44:34.651966   10206 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fa33aed93a73"
	I1204 15:44:34.665958   10206 logs.go:123] Gathering logs for coredns [872c1193c45e] ...
	I1204 15:44:34.665970   10206 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 872c1193c45e"
	I1204 15:44:34.677774   10206 logs.go:123] Gathering logs for coredns [3333718a6099] ...
	I1204 15:44:34.677788   10206 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3333718a6099"
	I1204 15:44:34.689694   10206 logs.go:123] Gathering logs for kube-proxy [070b054a3a11] ...
	I1204 15:44:34.689705   10206 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 070b054a3a11"
	I1204 15:44:34.700741   10206 logs.go:123] Gathering logs for container status ...
	I1204 15:44:34.700751   10206 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1204 15:44:34.713169   10206 logs.go:123] Gathering logs for dmesg ...
	I1204 15:44:34.713184   10206 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1204 15:44:34.717615   10206 logs.go:123] Gathering logs for kube-scheduler [3d43cf7548a9] ...
	I1204 15:44:34.717622   10206 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3d43cf7548a9"
	I1204 15:44:34.733235   10206 logs.go:123] Gathering logs for kube-controller-manager [7fd5007ee78f] ...
	I1204 15:44:34.733246   10206 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7fd5007ee78f"
	I1204 15:44:34.753702   10206 logs.go:123] Gathering logs for storage-provisioner [8ee1740c2cc4] ...
	I1204 15:44:34.753714   10206 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8ee1740c2cc4"
	I1204 15:44:37.267088   10206 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1204 15:44:42.269009   10206 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1204 15:44:42.269290   10206 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1204 15:44:42.300368   10206 logs.go:282] 1 containers: [7c4247944179]
	I1204 15:44:42.300504   10206 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1204 15:44:42.322717   10206 logs.go:282] 1 containers: [fa33aed93a73]
	I1204 15:44:42.322859   10206 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1204 15:44:42.343417   10206 logs.go:282] 4 containers: [6ce348ceaedd 464f9e7e003f 872c1193c45e 3333718a6099]
	I1204 15:44:42.343518   10206 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1204 15:44:42.360256   10206 logs.go:282] 1 containers: [3d43cf7548a9]
	I1204 15:44:42.360339   10206 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1204 15:44:42.374588   10206 logs.go:282] 1 containers: [070b054a3a11]
	I1204 15:44:42.374656   10206 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1204 15:44:42.385575   10206 logs.go:282] 1 containers: [7fd5007ee78f]
	I1204 15:44:42.385651   10206 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1204 15:44:42.396020   10206 logs.go:282] 0 containers: []
	W1204 15:44:42.396032   10206 logs.go:284] No container was found matching "kindnet"
	I1204 15:44:42.396097   10206 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1204 15:44:42.406595   10206 logs.go:282] 1 containers: [8ee1740c2cc4]
	I1204 15:44:42.406615   10206 logs.go:123] Gathering logs for kubelet ...
	I1204 15:44:42.406620   10206 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1204 15:44:42.444211   10206 logs.go:123] Gathering logs for describe nodes ...
	I1204 15:44:42.444219   10206 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1204 15:44:42.478858   10206 logs.go:123] Gathering logs for etcd [fa33aed93a73] ...
	I1204 15:44:42.478870   10206 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fa33aed93a73"
	I1204 15:44:42.492965   10206 logs.go:123] Gathering logs for kube-controller-manager [7fd5007ee78f] ...
	I1204 15:44:42.492974   10206 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7fd5007ee78f"
	I1204 15:44:42.510760   10206 logs.go:123] Gathering logs for dmesg ...
	I1204 15:44:42.510768   10206 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1204 15:44:42.515021   10206 logs.go:123] Gathering logs for kube-scheduler [3d43cf7548a9] ...
	I1204 15:44:42.515029   10206 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3d43cf7548a9"
	I1204 15:44:42.529672   10206 logs.go:123] Gathering logs for Docker ...
	I1204 15:44:42.529685   10206 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1204 15:44:42.553581   10206 logs.go:123] Gathering logs for coredns [6ce348ceaedd] ...
	I1204 15:44:42.553590   10206 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6ce348ceaedd"
	I1204 15:44:42.564603   10206 logs.go:123] Gathering logs for coredns [464f9e7e003f] ...
	I1204 15:44:42.564613   10206 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 464f9e7e003f"
	I1204 15:44:42.575974   10206 logs.go:123] Gathering logs for kube-apiserver [7c4247944179] ...
	I1204 15:44:42.575986   10206 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7c4247944179"
	I1204 15:44:42.590733   10206 logs.go:123] Gathering logs for coredns [872c1193c45e] ...
	I1204 15:44:42.590743   10206 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 872c1193c45e"
	I1204 15:44:42.611894   10206 logs.go:123] Gathering logs for coredns [3333718a6099] ...
	I1204 15:44:42.611905   10206 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3333718a6099"
	I1204 15:44:42.623690   10206 logs.go:123] Gathering logs for kube-proxy [070b054a3a11] ...
	I1204 15:44:42.623700   10206 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 070b054a3a11"
	I1204 15:44:42.637156   10206 logs.go:123] Gathering logs for storage-provisioner [8ee1740c2cc4] ...
	I1204 15:44:42.637169   10206 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8ee1740c2cc4"
	I1204 15:44:42.649054   10206 logs.go:123] Gathering logs for container status ...
	I1204 15:44:42.649065   10206 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
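
Each iteration begins by enumerating containers per component with docker ps name filters, which produces the "1 containers: [...]" and "4 containers: [...]" lines above. A sketch of that enumeration follows, run locally via os/exec rather than over SSH as ssh_runner does; it assumes only that docker is on PATH.

```go
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// containerIDs lists all containers whose name matches the k8s_ prefix for
// a given component, the same query the log shows. Illustrative only;
// minikube runs this inside the guest VM over SSH.
func containerIDs(component string) ([]string, error) {
	out, err := exec.Command("docker", "ps", "-a",
		"--filter", "name=k8s_"+component,
		"--format", "{{.ID}}").Output()
	if err != nil {
		return nil, err
	}
	return strings.Fields(string(out)), nil
}

func main() {
	for _, c := range []string{"kube-apiserver", "etcd", "coredns", "kube-scheduler"} {
		ids, err := containerIDs(c)
		fmt.Printf("%s: %v (err=%v)\n", c, ids, err)
	}
}
```
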
	I1204 15:44:45.162587   10206 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1204 15:44:50.164946   10206 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1204 15:44:50.165550   10206 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1204 15:44:50.206721   10206 logs.go:282] 1 containers: [7c4247944179]
	I1204 15:44:50.206890   10206 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1204 15:44:50.232810   10206 logs.go:282] 1 containers: [fa33aed93a73]
	I1204 15:44:50.232940   10206 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1204 15:44:50.248814   10206 logs.go:282] 4 containers: [6ce348ceaedd 464f9e7e003f 872c1193c45e 3333718a6099]
	I1204 15:44:50.248896   10206 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1204 15:44:50.264259   10206 logs.go:282] 1 containers: [3d43cf7548a9]
	I1204 15:44:50.264339   10206 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1204 15:44:50.274527   10206 logs.go:282] 1 containers: [070b054a3a11]
	I1204 15:44:50.274607   10206 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1204 15:44:50.284815   10206 logs.go:282] 1 containers: [7fd5007ee78f]
	I1204 15:44:50.284892   10206 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1204 15:44:50.296625   10206 logs.go:282] 0 containers: []
	W1204 15:44:50.296635   10206 logs.go:284] No container was found matching "kindnet"
	I1204 15:44:50.296693   10206 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1204 15:44:50.307356   10206 logs.go:282] 1 containers: [8ee1740c2cc4]
	I1204 15:44:50.307372   10206 logs.go:123] Gathering logs for describe nodes ...
	I1204 15:44:50.307377   10206 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1204 15:44:50.345736   10206 logs.go:123] Gathering logs for coredns [464f9e7e003f] ...
	I1204 15:44:50.345751   10206 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 464f9e7e003f"
	I1204 15:44:50.357726   10206 logs.go:123] Gathering logs for coredns [6ce348ceaedd] ...
	I1204 15:44:50.357740   10206 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6ce348ceaedd"
	I1204 15:44:50.370461   10206 logs.go:123] Gathering logs for coredns [3333718a6099] ...
	I1204 15:44:50.370474   10206 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3333718a6099"
	I1204 15:44:50.382611   10206 logs.go:123] Gathering logs for kube-proxy [070b054a3a11] ...
	I1204 15:44:50.382623   10206 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 070b054a3a11"
	I1204 15:44:50.394663   10206 logs.go:123] Gathering logs for storage-provisioner [8ee1740c2cc4] ...
	I1204 15:44:50.394676   10206 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8ee1740c2cc4"
	I1204 15:44:50.415045   10206 logs.go:123] Gathering logs for container status ...
	I1204 15:44:50.415055   10206 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1204 15:44:50.426910   10206 logs.go:123] Gathering logs for kubelet ...
	I1204 15:44:50.426923   10206 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1204 15:44:50.463276   10206 logs.go:123] Gathering logs for dmesg ...
	I1204 15:44:50.463287   10206 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1204 15:44:50.467295   10206 logs.go:123] Gathering logs for coredns [872c1193c45e] ...
	I1204 15:44:50.467300   10206 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 872c1193c45e"
	I1204 15:44:50.479669   10206 logs.go:123] Gathering logs for kube-scheduler [3d43cf7548a9] ...
	I1204 15:44:50.479682   10206 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3d43cf7548a9"
	I1204 15:44:50.493936   10206 logs.go:123] Gathering logs for Docker ...
	I1204 15:44:50.493947   10206 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1204 15:44:50.518793   10206 logs.go:123] Gathering logs for kube-apiserver [7c4247944179] ...
	I1204 15:44:50.518805   10206 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7c4247944179"
	I1204 15:44:50.534762   10206 logs.go:123] Gathering logs for etcd [fa33aed93a73] ...
	I1204 15:44:50.534775   10206 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fa33aed93a73"
	I1204 15:44:50.551579   10206 logs.go:123] Gathering logs for kube-controller-manager [7fd5007ee78f] ...
	I1204 15:44:50.551591   10206 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7fd5007ee78f"
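
After enumeration, the loop fans out over every discovered ID with `docker logs --tail 400`, which is why the same container IDs recur in each pass. A hypothetical sketch of that fan-out is below; the hard-coded IDs are the ones from this run, and in practice they would come from the enumeration step.

```go
package main

import (
	"fmt"
	"os/exec"
)

// gatherLogs fans out over discovered container IDs the way the repeated
// "Gathering logs for <component> [<id>] ..." entries do, capping each
// container's output at its last 400 lines. Illustrative sketch only.
func gatherLogs(containers map[string]string) {
	for component, id := range containers {
		fmt.Printf("Gathering logs for %s [%s] ...\n", component, id)
		out, err := exec.Command("docker", "logs", "--tail", "400", id).CombinedOutput()
		if err != nil {
			fmt.Printf("  failed: %v\n", err)
			continue
		}
		fmt.Printf("  %d bytes of log output\n", len(out))
	}
}

func main() {
	gatherLogs(map[string]string{
		"kube-apiserver": "7c4247944179",
		"etcd":           "fa33aed93a73",
		"kube-proxy":     "070b054a3a11",
	})
}
```
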
	I1204 15:44:53.070679   10206 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1204 15:44:58.072934   10206 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1204 15:44:58.073049   10206 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1204 15:44:58.086311   10206 logs.go:282] 1 containers: [7c4247944179]
	I1204 15:44:58.086398   10206 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1204 15:44:58.099390   10206 logs.go:282] 1 containers: [fa33aed93a73]
	I1204 15:44:58.099484   10206 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1204 15:44:58.111633   10206 logs.go:282] 4 containers: [6ce348ceaedd 464f9e7e003f 872c1193c45e 3333718a6099]
	I1204 15:44:58.111729   10206 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1204 15:44:58.131068   10206 logs.go:282] 1 containers: [3d43cf7548a9]
	I1204 15:44:58.131161   10206 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1204 15:44:58.143930   10206 logs.go:282] 1 containers: [070b054a3a11]
	I1204 15:44:58.144006   10206 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1204 15:44:58.154754   10206 logs.go:282] 1 containers: [7fd5007ee78f]
	I1204 15:44:58.154834   10206 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1204 15:44:58.165257   10206 logs.go:282] 0 containers: []
	W1204 15:44:58.165268   10206 logs.go:284] No container was found matching "kindnet"
	I1204 15:44:58.165336   10206 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1204 15:44:58.176811   10206 logs.go:282] 1 containers: [8ee1740c2cc4]
	I1204 15:44:58.176829   10206 logs.go:123] Gathering logs for kube-apiserver [7c4247944179] ...
	I1204 15:44:58.176835   10206 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7c4247944179"
	I1204 15:44:58.191411   10206 logs.go:123] Gathering logs for coredns [464f9e7e003f] ...
	I1204 15:44:58.191421   10206 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 464f9e7e003f"
	I1204 15:44:58.207421   10206 logs.go:123] Gathering logs for Docker ...
	I1204 15:44:58.207432   10206 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1204 15:44:58.231977   10206 logs.go:123] Gathering logs for container status ...
	I1204 15:44:58.231985   10206 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1204 15:44:58.248215   10206 logs.go:123] Gathering logs for coredns [6ce348ceaedd] ...
	I1204 15:44:58.248228   10206 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6ce348ceaedd"
	I1204 15:44:58.259413   10206 logs.go:123] Gathering logs for coredns [872c1193c45e] ...
	I1204 15:44:58.259426   10206 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 872c1193c45e"
	I1204 15:44:58.271811   10206 logs.go:123] Gathering logs for kube-proxy [070b054a3a11] ...
	I1204 15:44:58.271819   10206 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 070b054a3a11"
	I1204 15:44:58.290975   10206 logs.go:123] Gathering logs for kubelet ...
	I1204 15:44:58.290984   10206 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1204 15:44:58.328406   10206 logs.go:123] Gathering logs for etcd [fa33aed93a73] ...
	I1204 15:44:58.328415   10206 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fa33aed93a73"
	I1204 15:44:58.342701   10206 logs.go:123] Gathering logs for coredns [3333718a6099] ...
	I1204 15:44:58.342714   10206 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3333718a6099"
	I1204 15:44:58.358466   10206 logs.go:123] Gathering logs for kube-scheduler [3d43cf7548a9] ...
	I1204 15:44:58.358476   10206 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3d43cf7548a9"
	I1204 15:44:58.383003   10206 logs.go:123] Gathering logs for storage-provisioner [8ee1740c2cc4] ...
	I1204 15:44:58.383018   10206 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8ee1740c2cc4"
	I1204 15:44:58.395120   10206 logs.go:123] Gathering logs for dmesg ...
	I1204 15:44:58.395130   10206 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1204 15:44:58.399430   10206 logs.go:123] Gathering logs for describe nodes ...
	I1204 15:44:58.399440   10206 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1204 15:44:58.434372   10206 logs.go:123] Gathering logs for kube-controller-manager [7fd5007ee78f] ...
	I1204 15:44:58.434384   10206 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7fd5007ee78f"
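
The "container status" step is the one shell fallback in the loop: prefer crictl when it is installed, otherwise fall back to docker ps -a. The same preference order can be expressed in Go, as sketched below; this version runs the tools directly instead of through `bash -c` with sudo as the log does.

```go
package main

import (
	"fmt"
	"os/exec"
)

// containerStatus reproduces the fallback in the "container status" step:
// try crictl first, and use docker if crictl is missing or fails.
// Sketch only, under the assumption that one of the two tools is on PATH.
func containerStatus() (string, error) {
	if path, err := exec.LookPath("crictl"); err == nil {
		if out, err := exec.Command(path, "ps", "-a").Output(); err == nil {
			return string(out), nil
		}
	}
	out, err := exec.Command("docker", "ps", "-a").Output()
	return string(out), err
}

func main() {
	out, err := containerStatus()
	fmt.Println(out, err)
}
```
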
	I1204 15:45:00.959903   10206 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1204 15:45:05.962392   10206 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1204 15:45:05.962649   10206 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1204 15:45:05.993730   10206 logs.go:282] 1 containers: [7c4247944179]
	I1204 15:45:05.993869   10206 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1204 15:45:06.011442   10206 logs.go:282] 1 containers: [fa33aed93a73]
	I1204 15:45:06.011548   10206 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1204 15:45:06.024788   10206 logs.go:282] 4 containers: [6ce348ceaedd 464f9e7e003f 872c1193c45e 3333718a6099]
	I1204 15:45:06.024869   10206 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1204 15:45:06.039122   10206 logs.go:282] 1 containers: [3d43cf7548a9]
	I1204 15:45:06.039193   10206 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1204 15:45:06.049173   10206 logs.go:282] 1 containers: [070b054a3a11]
	I1204 15:45:06.049252   10206 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1204 15:45:06.059958   10206 logs.go:282] 1 containers: [7fd5007ee78f]
	I1204 15:45:06.060031   10206 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1204 15:45:06.069737   10206 logs.go:282] 0 containers: []
	W1204 15:45:06.069749   10206 logs.go:284] No container was found matching "kindnet"
	I1204 15:45:06.069811   10206 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1204 15:45:06.080199   10206 logs.go:282] 1 containers: [8ee1740c2cc4]
	I1204 15:45:06.080213   10206 logs.go:123] Gathering logs for dmesg ...
	I1204 15:45:06.080221   10206 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1204 15:45:06.085040   10206 logs.go:123] Gathering logs for coredns [872c1193c45e] ...
	I1204 15:45:06.085047   10206 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 872c1193c45e"
	I1204 15:45:06.096804   10206 logs.go:123] Gathering logs for kube-controller-manager [7fd5007ee78f] ...
	I1204 15:45:06.096814   10206 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7fd5007ee78f"
	I1204 15:45:06.113984   10206 logs.go:123] Gathering logs for etcd [fa33aed93a73] ...
	I1204 15:45:06.113994   10206 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fa33aed93a73"
	I1204 15:45:06.128454   10206 logs.go:123] Gathering logs for coredns [3333718a6099] ...
	I1204 15:45:06.128466   10206 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3333718a6099"
	I1204 15:45:06.140055   10206 logs.go:123] Gathering logs for Docker ...
	I1204 15:45:06.140067   10206 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1204 15:45:06.163978   10206 logs.go:123] Gathering logs for kubelet ...
	I1204 15:45:06.163988   10206 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1204 15:45:06.199765   10206 logs.go:123] Gathering logs for kube-apiserver [7c4247944179] ...
	I1204 15:45:06.199774   10206 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7c4247944179"
	I1204 15:45:06.213893   10206 logs.go:123] Gathering logs for coredns [6ce348ceaedd] ...
	I1204 15:45:06.213904   10206 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6ce348ceaedd"
	I1204 15:45:06.225824   10206 logs.go:123] Gathering logs for storage-provisioner [8ee1740c2cc4] ...
	I1204 15:45:06.225836   10206 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8ee1740c2cc4"
	I1204 15:45:06.237609   10206 logs.go:123] Gathering logs for container status ...
	I1204 15:45:06.237619   10206 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1204 15:45:06.249141   10206 logs.go:123] Gathering logs for describe nodes ...
	I1204 15:45:06.249155   10206 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1204 15:45:06.283659   10206 logs.go:123] Gathering logs for coredns [464f9e7e003f] ...
	I1204 15:45:06.283670   10206 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 464f9e7e003f"
	I1204 15:45:06.295416   10206 logs.go:123] Gathering logs for kube-scheduler [3d43cf7548a9] ...
	I1204 15:45:06.295429   10206 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3d43cf7548a9"
	I1204 15:45:06.309754   10206 logs.go:123] Gathering logs for kube-proxy [070b054a3a11] ...
	I1204 15:45:06.309766   10206 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 070b054a3a11"
	I1204 15:45:08.824106   10206 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1204 15:45:13.826420   10206 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1204 15:45:13.826877   10206 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1204 15:45:13.858682   10206 logs.go:282] 1 containers: [7c4247944179]
	I1204 15:45:13.858827   10206 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1204 15:45:13.877560   10206 logs.go:282] 1 containers: [fa33aed93a73]
	I1204 15:45:13.877665   10206 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1204 15:45:13.891320   10206 logs.go:282] 4 containers: [6ce348ceaedd 464f9e7e003f 872c1193c45e 3333718a6099]
	I1204 15:45:13.891394   10206 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1204 15:45:13.902775   10206 logs.go:282] 1 containers: [3d43cf7548a9]
	I1204 15:45:13.902852   10206 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1204 15:45:13.919234   10206 logs.go:282] 1 containers: [070b054a3a11]
	I1204 15:45:13.919305   10206 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1204 15:45:13.931729   10206 logs.go:282] 1 containers: [7fd5007ee78f]
	I1204 15:45:13.931801   10206 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1204 15:45:13.942945   10206 logs.go:282] 0 containers: []
	W1204 15:45:13.942954   10206 logs.go:284] No container was found matching "kindnet"
	I1204 15:45:13.943013   10206 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1204 15:45:13.953928   10206 logs.go:282] 1 containers: [8ee1740c2cc4]
	I1204 15:45:13.953944   10206 logs.go:123] Gathering logs for storage-provisioner [8ee1740c2cc4] ...
	I1204 15:45:13.953949   10206 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8ee1740c2cc4"
	I1204 15:45:13.966221   10206 logs.go:123] Gathering logs for coredns [464f9e7e003f] ...
	I1204 15:45:13.966234   10206 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 464f9e7e003f"
	I1204 15:45:13.977832   10206 logs.go:123] Gathering logs for etcd [fa33aed93a73] ...
	I1204 15:45:13.977842   10206 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fa33aed93a73"
	I1204 15:45:13.991762   10206 logs.go:123] Gathering logs for coredns [6ce348ceaedd] ...
	I1204 15:45:13.991775   10206 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6ce348ceaedd"
	I1204 15:45:14.003611   10206 logs.go:123] Gathering logs for kube-controller-manager [7fd5007ee78f] ...
	I1204 15:45:14.003625   10206 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7fd5007ee78f"
	I1204 15:45:14.020885   10206 logs.go:123] Gathering logs for Docker ...
	I1204 15:45:14.020893   10206 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1204 15:45:14.047561   10206 logs.go:123] Gathering logs for kube-apiserver [7c4247944179] ...
	I1204 15:45:14.047572   10206 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7c4247944179"
	I1204 15:45:14.061868   10206 logs.go:123] Gathering logs for coredns [872c1193c45e] ...
	I1204 15:45:14.061878   10206 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 872c1193c45e"
	I1204 15:45:14.073608   10206 logs.go:123] Gathering logs for coredns [3333718a6099] ...
	I1204 15:45:14.073618   10206 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3333718a6099"
	I1204 15:45:14.085190   10206 logs.go:123] Gathering logs for describe nodes ...
	I1204 15:45:14.085201   10206 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1204 15:45:14.126357   10206 logs.go:123] Gathering logs for dmesg ...
	I1204 15:45:14.126371   10206 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1204 15:45:14.130820   10206 logs.go:123] Gathering logs for kube-scheduler [3d43cf7548a9] ...
	I1204 15:45:14.130828   10206 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3d43cf7548a9"
	I1204 15:45:14.145318   10206 logs.go:123] Gathering logs for kube-proxy [070b054a3a11] ...
	I1204 15:45:14.145329   10206 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 070b054a3a11"
	I1204 15:45:14.157356   10206 logs.go:123] Gathering logs for container status ...
	I1204 15:45:14.157365   10206 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1204 15:45:14.169152   10206 logs.go:123] Gathering logs for kubelet ...
	I1204 15:45:14.169166   10206 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1204 15:45:16.708535   10206 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1204 15:45:21.709295   10206 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1204 15:45:21.709390   10206 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1204 15:45:21.726188   10206 logs.go:282] 1 containers: [7c4247944179]
	I1204 15:45:21.726253   10206 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1204 15:45:21.737308   10206 logs.go:282] 1 containers: [fa33aed93a73]
	I1204 15:45:21.737380   10206 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1204 15:45:21.754265   10206 logs.go:282] 4 containers: [6ce348ceaedd 464f9e7e003f 872c1193c45e 3333718a6099]
	I1204 15:45:21.754347   10206 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1204 15:45:21.766490   10206 logs.go:282] 1 containers: [3d43cf7548a9]
	I1204 15:45:21.766557   10206 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1204 15:45:21.782424   10206 logs.go:282] 1 containers: [070b054a3a11]
	I1204 15:45:21.782488   10206 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1204 15:45:21.793875   10206 logs.go:282] 1 containers: [7fd5007ee78f]
	I1204 15:45:21.793971   10206 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1204 15:45:21.805552   10206 logs.go:282] 0 containers: []
	W1204 15:45:21.805564   10206 logs.go:284] No container was found matching "kindnet"
	I1204 15:45:21.805635   10206 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1204 15:45:21.817325   10206 logs.go:282] 1 containers: [8ee1740c2cc4]
	I1204 15:45:21.817345   10206 logs.go:123] Gathering logs for kube-apiserver [7c4247944179] ...
	I1204 15:45:21.817352   10206 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7c4247944179"
	I1204 15:45:21.833153   10206 logs.go:123] Gathering logs for Docker ...
	I1204 15:45:21.833167   10206 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1204 15:45:21.859009   10206 logs.go:123] Gathering logs for container status ...
	I1204 15:45:21.859023   10206 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1204 15:45:21.872707   10206 logs.go:123] Gathering logs for kubelet ...
	I1204 15:45:21.872719   10206 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1204 15:45:21.910524   10206 logs.go:123] Gathering logs for coredns [3333718a6099] ...
	I1204 15:45:21.910539   10206 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3333718a6099"
	I1204 15:45:21.923747   10206 logs.go:123] Gathering logs for kube-scheduler [3d43cf7548a9] ...
	I1204 15:45:21.923759   10206 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3d43cf7548a9"
	I1204 15:45:21.938885   10206 logs.go:123] Gathering logs for dmesg ...
	I1204 15:45:21.938895   10206 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1204 15:45:21.943506   10206 logs.go:123] Gathering logs for describe nodes ...
	I1204 15:45:21.943515   10206 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1204 15:45:21.985327   10206 logs.go:123] Gathering logs for etcd [fa33aed93a73] ...
	I1204 15:45:21.985340   10206 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fa33aed93a73"
	I1204 15:45:22.000011   10206 logs.go:123] Gathering logs for coredns [464f9e7e003f] ...
	I1204 15:45:22.000023   10206 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 464f9e7e003f"
	I1204 15:45:22.013698   10206 logs.go:123] Gathering logs for coredns [6ce348ceaedd] ...
	I1204 15:45:22.013708   10206 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6ce348ceaedd"
	I1204 15:45:22.027265   10206 logs.go:123] Gathering logs for coredns [872c1193c45e] ...
	I1204 15:45:22.027277   10206 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 872c1193c45e"
	I1204 15:45:22.042832   10206 logs.go:123] Gathering logs for kube-proxy [070b054a3a11] ...
	I1204 15:45:22.042845   10206 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 070b054a3a11"
	I1204 15:45:22.057112   10206 logs.go:123] Gathering logs for kube-controller-manager [7fd5007ee78f] ...
	I1204 15:45:22.057122   10206 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7fd5007ee78f"
	I1204 15:45:22.076605   10206 logs.go:123] Gathering logs for storage-provisioner [8ee1740c2cc4] ...
	I1204 15:45:22.076614   10206 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8ee1740c2cc4"
	I1204 15:45:24.590405   10206 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1204 15:45:29.593076   10206 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1204 15:45:29.593535   10206 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1204 15:45:29.625766   10206 logs.go:282] 1 containers: [7c4247944179]
	I1204 15:45:29.625913   10206 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1204 15:45:29.645639   10206 logs.go:282] 1 containers: [fa33aed93a73]
	I1204 15:45:29.645766   10206 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1204 15:45:29.660566   10206 logs.go:282] 4 containers: [6ce348ceaedd 464f9e7e003f 872c1193c45e 3333718a6099]
	I1204 15:45:29.660652   10206 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1204 15:45:29.673014   10206 logs.go:282] 1 containers: [3d43cf7548a9]
	I1204 15:45:29.673083   10206 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1204 15:45:29.683679   10206 logs.go:282] 1 containers: [070b054a3a11]
	I1204 15:45:29.683756   10206 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1204 15:45:29.694289   10206 logs.go:282] 1 containers: [7fd5007ee78f]
	I1204 15:45:29.694360   10206 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1204 15:45:29.707334   10206 logs.go:282] 0 containers: []
	W1204 15:45:29.707344   10206 logs.go:284] No container was found matching "kindnet"
	I1204 15:45:29.707405   10206 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1204 15:45:29.717861   10206 logs.go:282] 1 containers: [8ee1740c2cc4]
	I1204 15:45:29.717875   10206 logs.go:123] Gathering logs for etcd [fa33aed93a73] ...
	I1204 15:45:29.717880   10206 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fa33aed93a73"
	I1204 15:45:29.735173   10206 logs.go:123] Gathering logs for storage-provisioner [8ee1740c2cc4] ...
	I1204 15:45:29.735185   10206 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8ee1740c2cc4"
	I1204 15:45:29.750245   10206 logs.go:123] Gathering logs for kubelet ...
	I1204 15:45:29.750254   10206 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1204 15:45:29.788921   10206 logs.go:123] Gathering logs for dmesg ...
	I1204 15:45:29.788934   10206 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1204 15:45:29.793812   10206 logs.go:123] Gathering logs for kube-controller-manager [7fd5007ee78f] ...
	I1204 15:45:29.793821   10206 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7fd5007ee78f"
	I1204 15:45:29.812196   10206 logs.go:123] Gathering logs for kube-apiserver [7c4247944179] ...
	I1204 15:45:29.812207   10206 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7c4247944179"
	I1204 15:45:29.827785   10206 logs.go:123] Gathering logs for kube-proxy [070b054a3a11] ...
	I1204 15:45:29.827796   10206 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 070b054a3a11"
	I1204 15:45:29.839338   10206 logs.go:123] Gathering logs for coredns [464f9e7e003f] ...
	I1204 15:45:29.839349   10206 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 464f9e7e003f"
	I1204 15:45:29.851429   10206 logs.go:123] Gathering logs for kube-scheduler [3d43cf7548a9] ...
	I1204 15:45:29.851440   10206 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3d43cf7548a9"
	I1204 15:45:29.865533   10206 logs.go:123] Gathering logs for coredns [872c1193c45e] ...
	I1204 15:45:29.865544   10206 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 872c1193c45e"
	I1204 15:45:29.877508   10206 logs.go:123] Gathering logs for coredns [3333718a6099] ...
	I1204 15:45:29.877519   10206 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3333718a6099"
	I1204 15:45:29.892223   10206 logs.go:123] Gathering logs for Docker ...
	I1204 15:45:29.892236   10206 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1204 15:45:29.916976   10206 logs.go:123] Gathering logs for container status ...
	I1204 15:45:29.916984   10206 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1204 15:45:29.928585   10206 logs.go:123] Gathering logs for describe nodes ...
	I1204 15:45:29.928597   10206 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1204 15:45:29.964986   10206 logs.go:123] Gathering logs for coredns [6ce348ceaedd] ...
	I1204 15:45:29.964998   10206 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6ce348ceaedd"
	I1204 15:45:32.482791   10206 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1204 15:45:37.485714   10206 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1204 15:45:37.485928   10206 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1204 15:45:37.513262   10206 logs.go:282] 1 containers: [7c4247944179]
	I1204 15:45:37.513390   10206 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1204 15:45:37.531486   10206 logs.go:282] 1 containers: [fa33aed93a73]
	I1204 15:45:37.531576   10206 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1204 15:45:37.543933   10206 logs.go:282] 4 containers: [6ce348ceaedd 464f9e7e003f 872c1193c45e 3333718a6099]
	I1204 15:45:37.544038   10206 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1204 15:45:37.555572   10206 logs.go:282] 1 containers: [3d43cf7548a9]
	I1204 15:45:37.555648   10206 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1204 15:45:37.573697   10206 logs.go:282] 1 containers: [070b054a3a11]
	I1204 15:45:37.573775   10206 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1204 15:45:37.587757   10206 logs.go:282] 1 containers: [7fd5007ee78f]
	I1204 15:45:37.587830   10206 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1204 15:45:37.599219   10206 logs.go:282] 0 containers: []
	W1204 15:45:37.599232   10206 logs.go:284] No container was found matching "kindnet"
	I1204 15:45:37.599317   10206 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1204 15:45:37.611949   10206 logs.go:282] 1 containers: [8ee1740c2cc4]
	I1204 15:45:37.611969   10206 logs.go:123] Gathering logs for kube-proxy [070b054a3a11] ...
	I1204 15:45:37.611975   10206 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 070b054a3a11"
	I1204 15:45:37.624209   10206 logs.go:123] Gathering logs for describe nodes ...
	I1204 15:45:37.624222   10206 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1204 15:45:37.660563   10206 logs.go:123] Gathering logs for coredns [6ce348ceaedd] ...
	I1204 15:45:37.660576   10206 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6ce348ceaedd"
	I1204 15:45:37.677627   10206 logs.go:123] Gathering logs for storage-provisioner [8ee1740c2cc4] ...
	I1204 15:45:37.677640   10206 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8ee1740c2cc4"
	I1204 15:45:37.689579   10206 logs.go:123] Gathering logs for Docker ...
	I1204 15:45:37.689592   10206 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1204 15:45:37.713293   10206 logs.go:123] Gathering logs for container status ...
	I1204 15:45:37.713300   10206 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1204 15:45:37.725203   10206 logs.go:123] Gathering logs for kubelet ...
	I1204 15:45:37.725216   10206 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1204 15:45:37.762149   10206 logs.go:123] Gathering logs for etcd [fa33aed93a73] ...
	I1204 15:45:37.762156   10206 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fa33aed93a73"
	I1204 15:45:37.775873   10206 logs.go:123] Gathering logs for kube-scheduler [3d43cf7548a9] ...
	I1204 15:45:37.775884   10206 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3d43cf7548a9"
	I1204 15:45:37.790496   10206 logs.go:123] Gathering logs for coredns [464f9e7e003f] ...
	I1204 15:45:37.790508   10206 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 464f9e7e003f"
	I1204 15:45:37.803737   10206 logs.go:123] Gathering logs for coredns [3333718a6099] ...
	I1204 15:45:37.803750   10206 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3333718a6099"
	I1204 15:45:37.815040   10206 logs.go:123] Gathering logs for coredns [872c1193c45e] ...
	I1204 15:45:37.815052   10206 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 872c1193c45e"
	I1204 15:45:37.826472   10206 logs.go:123] Gathering logs for kube-controller-manager [7fd5007ee78f] ...
	I1204 15:45:37.826481   10206 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7fd5007ee78f"
	I1204 15:45:37.843825   10206 logs.go:123] Gathering logs for dmesg ...
	I1204 15:45:37.843837   10206 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1204 15:45:37.848003   10206 logs.go:123] Gathering logs for kube-apiserver [7c4247944179] ...
	I1204 15:45:37.848009   10206 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7c4247944179"
	I1204 15:45:40.373008   10206 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1204 15:45:45.375826   10206 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1204 15:45:45.375898   10206 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1204 15:45:45.389115   10206 logs.go:282] 1 containers: [7c4247944179]
	I1204 15:45:45.389189   10206 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1204 15:45:45.403230   10206 logs.go:282] 1 containers: [fa33aed93a73]
	I1204 15:45:45.403292   10206 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1204 15:45:45.415215   10206 logs.go:282] 4 containers: [6ce348ceaedd 464f9e7e003f 872c1193c45e 3333718a6099]
	I1204 15:45:45.415281   10206 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1204 15:45:45.426336   10206 logs.go:282] 1 containers: [3d43cf7548a9]
	I1204 15:45:45.426407   10206 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1204 15:45:45.438315   10206 logs.go:282] 1 containers: [070b054a3a11]
	I1204 15:45:45.438406   10206 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1204 15:45:45.454497   10206 logs.go:282] 1 containers: [7fd5007ee78f]
	I1204 15:45:45.454579   10206 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1204 15:45:45.465703   10206 logs.go:282] 0 containers: []
	W1204 15:45:45.465713   10206 logs.go:284] No container was found matching "kindnet"
	I1204 15:45:45.465786   10206 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1204 15:45:45.483819   10206 logs.go:282] 1 containers: [8ee1740c2cc4]
	I1204 15:45:45.483843   10206 logs.go:123] Gathering logs for coredns [6ce348ceaedd] ...
	I1204 15:45:45.483849   10206 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6ce348ceaedd"
	I1204 15:45:45.500110   10206 logs.go:123] Gathering logs for coredns [872c1193c45e] ...
	I1204 15:45:45.500119   10206 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 872c1193c45e"
	I1204 15:45:45.515959   10206 logs.go:123] Gathering logs for kubelet ...
	I1204 15:45:45.515974   10206 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1204 15:45:45.554789   10206 logs.go:123] Gathering logs for etcd [fa33aed93a73] ...
	I1204 15:45:45.554800   10206 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fa33aed93a73"
	I1204 15:45:45.570552   10206 logs.go:123] Gathering logs for kube-proxy [070b054a3a11] ...
	I1204 15:45:45.570563   10206 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 070b054a3a11"
	I1204 15:45:45.583310   10206 logs.go:123] Gathering logs for Docker ...
	I1204 15:45:45.583323   10206 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1204 15:45:45.607893   10206 logs.go:123] Gathering logs for dmesg ...
	I1204 15:45:45.607909   10206 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1204 15:45:45.612886   10206 logs.go:123] Gathering logs for kube-apiserver [7c4247944179] ...
	I1204 15:45:45.612894   10206 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7c4247944179"
	I1204 15:45:45.628269   10206 logs.go:123] Gathering logs for kube-controller-manager [7fd5007ee78f] ...
	I1204 15:45:45.628282   10206 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7fd5007ee78f"
	I1204 15:45:45.646805   10206 logs.go:123] Gathering logs for container status ...
	I1204 15:45:45.646816   10206 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1204 15:45:45.659754   10206 logs.go:123] Gathering logs for describe nodes ...
	I1204 15:45:45.659767   10206 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1204 15:45:45.697292   10206 logs.go:123] Gathering logs for kube-scheduler [3d43cf7548a9] ...
	I1204 15:45:45.697307   10206 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3d43cf7548a9"
	I1204 15:45:45.712889   10206 logs.go:123] Gathering logs for storage-provisioner [8ee1740c2cc4] ...
	I1204 15:45:45.712903   10206 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8ee1740c2cc4"
	I1204 15:45:45.729090   10206 logs.go:123] Gathering logs for coredns [464f9e7e003f] ...
	I1204 15:45:45.729103   10206 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 464f9e7e003f"
	I1204 15:45:45.742673   10206 logs.go:123] Gathering logs for coredns [3333718a6099] ...
	I1204 15:45:45.742683   10206 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3333718a6099"
	I1204 15:45:48.257982   10206 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1204 15:45:53.260332   10206 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1204 15:45:53.260620   10206 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1204 15:45:53.287362   10206 logs.go:282] 1 containers: [7c4247944179]
	I1204 15:45:53.287498   10206 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1204 15:45:53.305344   10206 logs.go:282] 1 containers: [fa33aed93a73]
	I1204 15:45:53.305436   10206 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1204 15:45:53.318294   10206 logs.go:282] 4 containers: [6ce348ceaedd 464f9e7e003f 872c1193c45e 3333718a6099]
	I1204 15:45:53.318383   10206 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1204 15:45:53.330004   10206 logs.go:282] 1 containers: [3d43cf7548a9]
	I1204 15:45:53.330083   10206 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1204 15:45:53.340655   10206 logs.go:282] 1 containers: [070b054a3a11]
	I1204 15:45:53.340730   10206 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1204 15:45:53.351151   10206 logs.go:282] 1 containers: [7fd5007ee78f]
	I1204 15:45:53.351232   10206 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1204 15:45:53.361494   10206 logs.go:282] 0 containers: []
	W1204 15:45:53.361504   10206 logs.go:284] No container was found matching "kindnet"
	I1204 15:45:53.361580   10206 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1204 15:45:53.372515   10206 logs.go:282] 1 containers: [8ee1740c2cc4]
	I1204 15:45:53.372533   10206 logs.go:123] Gathering logs for dmesg ...
	I1204 15:45:53.372538   10206 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1204 15:45:53.377435   10206 logs.go:123] Gathering logs for etcd [fa33aed93a73] ...
	I1204 15:45:53.377442   10206 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fa33aed93a73"
	I1204 15:45:53.393271   10206 logs.go:123] Gathering logs for coredns [3333718a6099] ...
	I1204 15:45:53.393283   10206 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3333718a6099"
	I1204 15:45:53.405278   10206 logs.go:123] Gathering logs for kube-scheduler [3d43cf7548a9] ...
	I1204 15:45:53.405288   10206 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3d43cf7548a9"
	I1204 15:45:53.419525   10206 logs.go:123] Gathering logs for kube-proxy [070b054a3a11] ...
	I1204 15:45:53.419535   10206 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 070b054a3a11"
	I1204 15:45:53.431486   10206 logs.go:123] Gathering logs for storage-provisioner [8ee1740c2cc4] ...
	I1204 15:45:53.431497   10206 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8ee1740c2cc4"
	I1204 15:45:53.442783   10206 logs.go:123] Gathering logs for Docker ...
	I1204 15:45:53.442795   10206 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1204 15:45:53.465720   10206 logs.go:123] Gathering logs for describe nodes ...
	I1204 15:45:53.465728   10206 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1204 15:45:53.500893   10206 logs.go:123] Gathering logs for kubelet ...
	I1204 15:45:53.500906   10206 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1204 15:45:53.537658   10206 logs.go:123] Gathering logs for coredns [872c1193c45e] ...
	I1204 15:45:53.537669   10206 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 872c1193c45e"
	I1204 15:45:53.549520   10206 logs.go:123] Gathering logs for kube-controller-manager [7fd5007ee78f] ...
	I1204 15:45:53.549531   10206 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7fd5007ee78f"
	I1204 15:45:53.567195   10206 logs.go:123] Gathering logs for container status ...
	I1204 15:45:53.567208   10206 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1204 15:45:53.578694   10206 logs.go:123] Gathering logs for kube-apiserver [7c4247944179] ...
	I1204 15:45:53.578707   10206 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7c4247944179"
	I1204 15:45:53.593087   10206 logs.go:123] Gathering logs for coredns [6ce348ceaedd] ...
	I1204 15:45:53.593100   10206 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6ce348ceaedd"
	I1204 15:45:53.605142   10206 logs.go:123] Gathering logs for coredns [464f9e7e003f] ...
	I1204 15:45:53.605154   10206 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 464f9e7e003f"
	I1204 15:45:56.118578   10206 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1204 15:46:01.121464   10206 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1204 15:46:01.121968   10206 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1204 15:46:01.154477   10206 logs.go:282] 1 containers: [7c4247944179]
	I1204 15:46:01.154611   10206 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1204 15:46:01.174447   10206 logs.go:282] 1 containers: [fa33aed93a73]
	I1204 15:46:01.174556   10206 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1204 15:46:01.189216   10206 logs.go:282] 4 containers: [6ce348ceaedd 464f9e7e003f 872c1193c45e 3333718a6099]
	I1204 15:46:01.189300   10206 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1204 15:46:01.211686   10206 logs.go:282] 1 containers: [3d43cf7548a9]
	I1204 15:46:01.211765   10206 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1204 15:46:01.222110   10206 logs.go:282] 1 containers: [070b054a3a11]
	I1204 15:46:01.222188   10206 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1204 15:46:01.233018   10206 logs.go:282] 1 containers: [7fd5007ee78f]
	I1204 15:46:01.233082   10206 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1204 15:46:01.247575   10206 logs.go:282] 0 containers: []
	W1204 15:46:01.247588   10206 logs.go:284] No container was found matching "kindnet"
	I1204 15:46:01.247653   10206 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1204 15:46:01.258204   10206 logs.go:282] 1 containers: [8ee1740c2cc4]
	I1204 15:46:01.258224   10206 logs.go:123] Gathering logs for describe nodes ...
	I1204 15:46:01.258228   10206 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1204 15:46:01.292596   10206 logs.go:123] Gathering logs for coredns [464f9e7e003f] ...
	I1204 15:46:01.292609   10206 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 464f9e7e003f"
	I1204 15:46:01.305307   10206 logs.go:123] Gathering logs for coredns [872c1193c45e] ...
	I1204 15:46:01.305318   10206 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 872c1193c45e"
	I1204 15:46:01.322448   10206 logs.go:123] Gathering logs for kube-proxy [070b054a3a11] ...
	I1204 15:46:01.322460   10206 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 070b054a3a11"
	I1204 15:46:01.334434   10206 logs.go:123] Gathering logs for kube-controller-manager [7fd5007ee78f] ...
	I1204 15:46:01.334446   10206 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7fd5007ee78f"
	I1204 15:46:01.356113   10206 logs.go:123] Gathering logs for etcd [fa33aed93a73] ...
	I1204 15:46:01.356130   10206 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fa33aed93a73"
	I1204 15:46:01.383516   10206 logs.go:123] Gathering logs for coredns [6ce348ceaedd] ...
	I1204 15:46:01.383531   10206 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6ce348ceaedd"
	I1204 15:46:01.397512   10206 logs.go:123] Gathering logs for coredns [3333718a6099] ...
	I1204 15:46:01.397527   10206 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3333718a6099"
	I1204 15:46:01.409632   10206 logs.go:123] Gathering logs for Docker ...
	I1204 15:46:01.409646   10206 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1204 15:46:01.434330   10206 logs.go:123] Gathering logs for container status ...
	I1204 15:46:01.434341   10206 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1204 15:46:01.446054   10206 logs.go:123] Gathering logs for kubelet ...
	I1204 15:46:01.446069   10206 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1204 15:46:01.483691   10206 logs.go:123] Gathering logs for kube-apiserver [7c4247944179] ...
	I1204 15:46:01.483699   10206 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7c4247944179"
	I1204 15:46:01.497578   10206 logs.go:123] Gathering logs for kube-scheduler [3d43cf7548a9] ...
	I1204 15:46:01.497591   10206 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3d43cf7548a9"
	I1204 15:46:01.512071   10206 logs.go:123] Gathering logs for dmesg ...
	I1204 15:46:01.512081   10206 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1204 15:46:01.516332   10206 logs.go:123] Gathering logs for storage-provisioner [8ee1740c2cc4] ...
	I1204 15:46:01.516341   10206 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8ee1740c2cc4"
	I1204 15:46:04.030545   10206 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1204 15:46:09.033621   10206 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1204 15:46:09.034191   10206 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1204 15:46:09.074254   10206 logs.go:282] 1 containers: [7c4247944179]
	I1204 15:46:09.074422   10206 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1204 15:46:09.096801   10206 logs.go:282] 1 containers: [fa33aed93a73]
	I1204 15:46:09.096932   10206 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1204 15:46:09.117336   10206 logs.go:282] 4 containers: [6ce348ceaedd 464f9e7e003f 872c1193c45e 3333718a6099]
	I1204 15:46:09.117426   10206 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1204 15:46:09.129602   10206 logs.go:282] 1 containers: [3d43cf7548a9]
	I1204 15:46:09.129687   10206 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1204 15:46:09.142093   10206 logs.go:282] 1 containers: [070b054a3a11]
	I1204 15:46:09.142162   10206 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1204 15:46:09.152629   10206 logs.go:282] 1 containers: [7fd5007ee78f]
	I1204 15:46:09.152705   10206 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1204 15:46:09.163307   10206 logs.go:282] 0 containers: []
	W1204 15:46:09.163318   10206 logs.go:284] No container was found matching "kindnet"
	I1204 15:46:09.163383   10206 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1204 15:46:09.173932   10206 logs.go:282] 1 containers: [8ee1740c2cc4]
	I1204 15:46:09.173954   10206 logs.go:123] Gathering logs for kubelet ...
	I1204 15:46:09.173959   10206 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1204 15:46:09.210685   10206 logs.go:123] Gathering logs for etcd [fa33aed93a73] ...
	I1204 15:46:09.210692   10206 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fa33aed93a73"
	I1204 15:46:09.228185   10206 logs.go:123] Gathering logs for coredns [464f9e7e003f] ...
	I1204 15:46:09.228197   10206 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 464f9e7e003f"
	I1204 15:46:09.239746   10206 logs.go:123] Gathering logs for kube-proxy [070b054a3a11] ...
	I1204 15:46:09.239760   10206 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 070b054a3a11"
	I1204 15:46:09.251398   10206 logs.go:123] Gathering logs for Docker ...
	I1204 15:46:09.251410   10206 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1204 15:46:09.275690   10206 logs.go:123] Gathering logs for describe nodes ...
	I1204 15:46:09.275697   10206 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1204 15:46:09.312643   10206 logs.go:123] Gathering logs for kube-apiserver [7c4247944179] ...
	I1204 15:46:09.312657   10206 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7c4247944179"
	I1204 15:46:09.327105   10206 logs.go:123] Gathering logs for coredns [6ce348ceaedd] ...
	I1204 15:46:09.327118   10206 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6ce348ceaedd"
	I1204 15:46:09.338858   10206 logs.go:123] Gathering logs for container status ...
	I1204 15:46:09.338868   10206 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1204 15:46:09.350313   10206 logs.go:123] Gathering logs for coredns [872c1193c45e] ...
	I1204 15:46:09.350326   10206 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 872c1193c45e"
	I1204 15:46:09.366161   10206 logs.go:123] Gathering logs for kube-scheduler [3d43cf7548a9] ...
	I1204 15:46:09.366174   10206 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3d43cf7548a9"
	I1204 15:46:09.381262   10206 logs.go:123] Gathering logs for storage-provisioner [8ee1740c2cc4] ...
	I1204 15:46:09.381275   10206 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8ee1740c2cc4"
	I1204 15:46:09.400716   10206 logs.go:123] Gathering logs for dmesg ...
	I1204 15:46:09.400729   10206 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1204 15:46:09.405187   10206 logs.go:123] Gathering logs for coredns [3333718a6099] ...
	I1204 15:46:09.405195   10206 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3333718a6099"
	I1204 15:46:09.424032   10206 logs.go:123] Gathering logs for kube-controller-manager [7fd5007ee78f] ...
	I1204 15:46:09.424043   10206 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7fd5007ee78f"
	I1204 15:46:11.943368   10206 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1204 15:46:16.943848   10206 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1204 15:46:16.944284   10206 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1204 15:46:16.976350   10206 logs.go:282] 1 containers: [7c4247944179]
	I1204 15:46:16.976493   10206 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1204 15:46:16.996381   10206 logs.go:282] 1 containers: [fa33aed93a73]
	I1204 15:46:16.996470   10206 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1204 15:46:17.011020   10206 logs.go:282] 4 containers: [6ce348ceaedd 464f9e7e003f 872c1193c45e 3333718a6099]
	I1204 15:46:17.011114   10206 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1204 15:46:17.022974   10206 logs.go:282] 1 containers: [3d43cf7548a9]
	I1204 15:46:17.023041   10206 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1204 15:46:17.033627   10206 logs.go:282] 1 containers: [070b054a3a11]
	I1204 15:46:17.033714   10206 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1204 15:46:17.044574   10206 logs.go:282] 1 containers: [7fd5007ee78f]
	I1204 15:46:17.044656   10206 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1204 15:46:17.056472   10206 logs.go:282] 0 containers: []
	W1204 15:46:17.056484   10206 logs.go:284] No container was found matching "kindnet"
	I1204 15:46:17.056554   10206 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1204 15:46:17.066914   10206 logs.go:282] 1 containers: [8ee1740c2cc4]
	I1204 15:46:17.066932   10206 logs.go:123] Gathering logs for kubelet ...
	I1204 15:46:17.066936   10206 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1204 15:46:17.104824   10206 logs.go:123] Gathering logs for describe nodes ...
	I1204 15:46:17.104831   10206 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1204 15:46:17.139463   10206 logs.go:123] Gathering logs for coredns [6ce348ceaedd] ...
	I1204 15:46:17.139477   10206 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6ce348ceaedd"
	I1204 15:46:17.152193   10206 logs.go:123] Gathering logs for coredns [464f9e7e003f] ...
	I1204 15:46:17.152206   10206 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 464f9e7e003f"
	I1204 15:46:17.164046   10206 logs.go:123] Gathering logs for kube-scheduler [3d43cf7548a9] ...
	I1204 15:46:17.164060   10206 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3d43cf7548a9"
	I1204 15:46:17.181237   10206 logs.go:123] Gathering logs for dmesg ...
	I1204 15:46:17.181248   10206 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1204 15:46:17.185887   10206 logs.go:123] Gathering logs for storage-provisioner [8ee1740c2cc4] ...
	I1204 15:46:17.185894   10206 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8ee1740c2cc4"
	I1204 15:46:17.197735   10206 logs.go:123] Gathering logs for container status ...
	I1204 15:46:17.197745   10206 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1204 15:46:17.209788   10206 logs.go:123] Gathering logs for coredns [872c1193c45e] ...
	I1204 15:46:17.209799   10206 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 872c1193c45e"
	I1204 15:46:17.225818   10206 logs.go:123] Gathering logs for coredns [3333718a6099] ...
	I1204 15:46:17.225831   10206 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3333718a6099"
	I1204 15:46:17.237647   10206 logs.go:123] Gathering logs for kube-controller-manager [7fd5007ee78f] ...
	I1204 15:46:17.237657   10206 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7fd5007ee78f"
	I1204 15:46:17.255214   10206 logs.go:123] Gathering logs for kube-apiserver [7c4247944179] ...
	I1204 15:46:17.255225   10206 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7c4247944179"
	I1204 15:46:17.269666   10206 logs.go:123] Gathering logs for etcd [fa33aed93a73] ...
	I1204 15:46:17.269679   10206 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fa33aed93a73"
	I1204 15:46:17.283604   10206 logs.go:123] Gathering logs for kube-proxy [070b054a3a11] ...
	I1204 15:46:17.283616   10206 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 070b054a3a11"
	I1204 15:46:17.295424   10206 logs.go:123] Gathering logs for Docker ...
	I1204 15:46:17.295434   10206 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1204 15:46:19.821217   10206 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1204 15:46:24.823449   10206 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1204 15:46:24.835138   10206 out.go:201] 
	W1204 15:46:24.839224   10206 out.go:270] X Exiting due to GUEST_START: failed to start node: wait 6m0s for node: wait for healthy API server: apiserver healthz never reported healthy: context deadline exceeded
	W1204 15:46:24.839230   10206 out.go:270] * 
	W1204 15:46:24.839701   10206 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1204 15:46:24.858157   10206 out.go:201] 

** /stderr **
version_upgrade_test.go:200: upgrade from v1.26.0 to HEAD failed: out/minikube-darwin-arm64 start -p stopped-upgrade-377000 --memory=2200 --alsologtostderr -v=1 --driver=qemu2 : exit status 80
--- FAIL: TestStoppedBinaryUpgrade/Upgrade (574.74s)
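
The upgraded binary started, but the run then spent the entire 6m0s node wait in the poll loop shown above: probe https://10.0.2.15:8443/healthz, time out after ~5s, re-enumerate the control-plane containers with docker ps, dump their logs, retry. A by-hand version of that probe, using only values from the log (10.0.2.15 is the guest-side address, so depending on the driver's network mode the commands may need to run inside the VM, e.g. via minikube ssh; this is a hedged sketch, not part of the test run):

	curl -k --max-time 5 https://10.0.2.15:8443/healthz
	docker logs --tail 50 7c4247944179   # the kube-apiserver container the loop kept dumping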

TestPause/serial/Start (9.98s)

=== RUN   TestPause/serial/Start
pause_test.go:80: (dbg) Run:  out/minikube-darwin-arm64 start -p pause-219000 --memory=2048 --install-addons=false --wait=all --driver=qemu2 
pause_test.go:80: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p pause-219000 --memory=2048 --install-addons=false --wait=all --driver=qemu2 : exit status 80 (9.923033875s)

-- stdout --
	* [pause-219000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	  - MINIKUBE_LOCATION=20045
	  - KUBECONFIG=/Users/jenkins/minikube-integration/20045-6982/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/20045-6982/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "pause-219000" primary control-plane node in "pause-219000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "pause-219000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p pause-219000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
pause_test.go:82: failed to start minikube with args: "out/minikube-darwin-arm64 start -p pause-219000 --memory=2048 --install-addons=false --wait=all --driver=qemu2 " : exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p pause-219000 -n pause-219000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p pause-219000 -n pause-219000: exit status 7 (59.563542ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "pause-219000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestPause/serial/Start (9.98s)
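
Both create attempts here fail at the same host-side step, before any guest exists: QEMU is launched through socket_vmnet, and the daemon's unix socket at /var/run/socket_vmnet is refusing connections. A quick host-side check (the socket path is taken from the log; querying launchd for the service is an assumption about how socket_vmnet was installed on this agent):

	ls -l /var/run/socket_vmnet          # does the unix socket exist at all?
	sudo launchctl list | grep -i vmnet  # is any socket_vmnet service loaded?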

TestNoKubernetes/serial/StartWithK8s (9.9s)

=== RUN   TestNoKubernetes/serial/StartWithK8s
no_kubernetes_test.go:95: (dbg) Run:  out/minikube-darwin-arm64 start -p NoKubernetes-750000 --driver=qemu2 
no_kubernetes_test.go:95: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p NoKubernetes-750000 --driver=qemu2 : exit status 80 (9.833152167s)

-- stdout --
	* [NoKubernetes-750000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	  - MINIKUBE_LOCATION=20045
	  - KUBECONFIG=/Users/jenkins/minikube-integration/20045-6982/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/20045-6982/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "NoKubernetes-750000" primary control-plane node in "NoKubernetes-750000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "NoKubernetes-750000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p NoKubernetes-750000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
no_kubernetes_test.go:97: failed to start minikube with args: "out/minikube-darwin-arm64 start -p NoKubernetes-750000 --driver=qemu2 " : exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p NoKubernetes-750000 -n NoKubernetes-750000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p NoKubernetes-750000 -n NoKubernetes-750000: exit status 7 (67.44075ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "NoKubernetes-750000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestNoKubernetes/serial/StartWithK8s (9.90s)
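
The whole NoKubernetes group fails with this same socket_vmnet refusal, so the recovery path suggested in the output above is the relevant manual step once the daemon is healthy again; a sketch using the binary and profile names from this run:

	out/minikube-darwin-arm64 delete -p NoKubernetes-750000
	out/minikube-darwin-arm64 start -p NoKubernetes-750000 --driver=qemu2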

TestNoKubernetes/serial/StartWithStopK8s (5.29s)

=== RUN   TestNoKubernetes/serial/StartWithStopK8s
no_kubernetes_test.go:112: (dbg) Run:  out/minikube-darwin-arm64 start -p NoKubernetes-750000 --no-kubernetes --driver=qemu2 
no_kubernetes_test.go:112: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p NoKubernetes-750000 --no-kubernetes --driver=qemu2 : exit status 80 (5.255380333s)

-- stdout --
	* [NoKubernetes-750000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	  - MINIKUBE_LOCATION=20045
	  - KUBECONFIG=/Users/jenkins/minikube-integration/20045-6982/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/20045-6982/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting minikube without Kubernetes in cluster NoKubernetes-750000
	* Restarting existing qemu2 VM for "NoKubernetes-750000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "NoKubernetes-750000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p NoKubernetes-750000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
no_kubernetes_test.go:114: failed to start minikube with args: "out/minikube-darwin-arm64 start -p NoKubernetes-750000 --no-kubernetes --driver=qemu2 " : exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p NoKubernetes-750000 -n NoKubernetes-750000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p NoKubernetes-750000 -n NoKubernetes-750000: exit status 7 (34.786667ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "NoKubernetes-750000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestNoKubernetes/serial/StartWithStopK8s (5.29s)

TestNoKubernetes/serial/Start (5.32s)

=== RUN   TestNoKubernetes/serial/Start
no_kubernetes_test.go:136: (dbg) Run:  out/minikube-darwin-arm64 start -p NoKubernetes-750000 --no-kubernetes --driver=qemu2 
no_kubernetes_test.go:136: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p NoKubernetes-750000 --no-kubernetes --driver=qemu2 : exit status 80 (5.245457875s)

-- stdout --
	* [NoKubernetes-750000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	  - MINIKUBE_LOCATION=20045
	  - KUBECONFIG=/Users/jenkins/minikube-integration/20045-6982/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/20045-6982/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting minikube without Kubernetes in cluster NoKubernetes-750000
	* Restarting existing qemu2 VM for "NoKubernetes-750000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "NoKubernetes-750000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p NoKubernetes-750000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
no_kubernetes_test.go:138: failed to start minikube with args: "out/minikube-darwin-arm64 start -p NoKubernetes-750000 --no-kubernetes --driver=qemu2 " : exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p NoKubernetes-750000 -n NoKubernetes-750000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p NoKubernetes-750000 -n NoKubernetes-750000: exit status 7 (71.091333ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "NoKubernetes-750000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestNoKubernetes/serial/Start (5.32s)

TestNoKubernetes/serial/StartNoArgs (5.36s)

=== RUN   TestNoKubernetes/serial/StartNoArgs
no_kubernetes_test.go:191: (dbg) Run:  out/minikube-darwin-arm64 start -p NoKubernetes-750000 --driver=qemu2 
no_kubernetes_test.go:191: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p NoKubernetes-750000 --driver=qemu2 : exit status 80 (5.297811209s)

-- stdout --
	* [NoKubernetes-750000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	  - MINIKUBE_LOCATION=20045
	  - KUBECONFIG=/Users/jenkins/minikube-integration/20045-6982/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/20045-6982/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting minikube without Kubernetes in cluster NoKubernetes-750000
	* Restarting existing qemu2 VM for "NoKubernetes-750000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "NoKubernetes-750000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p NoKubernetes-750000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
no_kubernetes_test.go:193: failed to start minikube with args: "out/minikube-darwin-arm64 start -p NoKubernetes-750000 --driver=qemu2 " : exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p NoKubernetes-750000 -n NoKubernetes-750000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p NoKubernetes-750000 -n NoKubernetes-750000: exit status 7 (62.37ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "NoKubernetes-750000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestNoKubernetes/serial/StartNoArgs (5.36s)

TestNetworkPlugins/group/auto/Start (9.9s)

=== RUN   TestNetworkPlugins/group/auto/Start
net_test.go:112: (dbg) Run:  out/minikube-darwin-arm64 start -p auto-667000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=qemu2 
net_test.go:112: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p auto-667000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=qemu2 : exit status 80 (9.90274075s)

-- stdout --
	* [auto-667000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	  - MINIKUBE_LOCATION=20045
	  - KUBECONFIG=/Users/jenkins/minikube-integration/20045-6982/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/20045-6982/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "auto-667000" primary control-plane node in "auto-667000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "auto-667000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I1204 15:44:33.177252   10717 out.go:345] Setting OutFile to fd 1 ...
	I1204 15:44:33.177424   10717 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1204 15:44:33.177427   10717 out.go:358] Setting ErrFile to fd 2...
	I1204 15:44:33.177429   10717 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1204 15:44:33.177560   10717 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/20045-6982/.minikube/bin
	I1204 15:44:33.178761   10717 out.go:352] Setting JSON to false
	I1204 15:44:33.196948   10717 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":6243,"bootTime":1733349630,"procs":549,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"15.0.1","kernelVersion":"24.0.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W1204 15:44:33.197015   10717 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I1204 15:44:33.203779   10717 out.go:177] * [auto-667000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	I1204 15:44:33.211725   10717 out.go:177]   - MINIKUBE_LOCATION=20045
	I1204 15:44:33.211768   10717 notify.go:220] Checking for updates...
	I1204 15:44:33.219701   10717 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/20045-6982/kubeconfig
	I1204 15:44:33.222702   10717 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I1204 15:44:33.225588   10717 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1204 15:44:33.228697   10717 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/20045-6982/.minikube
	I1204 15:44:33.231733   10717 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1204 15:44:33.233451   10717 config.go:182] Loaded profile config "multinode-093000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.2
	I1204 15:44:33.233523   10717 config.go:182] Loaded profile config "stopped-upgrade-377000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I1204 15:44:33.233574   10717 driver.go:394] Setting default libvirt URI to qemu:///system
	I1204 15:44:33.236713   10717 out.go:177] * Using the qemu2 driver based on user configuration
	I1204 15:44:33.243565   10717 start.go:297] selected driver: qemu2
	I1204 15:44:33.243571   10717 start.go:901] validating driver "qemu2" against <nil>
	I1204 15:44:33.243583   10717 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1204 15:44:33.246091   10717 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I1204 15:44:33.249734   10717 out.go:177] * Automatically selected the socket_vmnet network
	I1204 15:44:33.252803   10717 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1204 15:44:33.252826   10717 cni.go:84] Creating CNI manager for ""
	I1204 15:44:33.252855   10717 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I1204 15:44:33.252861   10717 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I1204 15:44:33.252905   10717 start.go:340] cluster config:
	{Name:auto-667000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:auto-667000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1204 15:44:33.257726   10717 iso.go:125] acquiring lock: {Name:mk798ab7885b095925d55c843b0600e7ea181045 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1204 15:44:33.265718   10717 out.go:177] * Starting "auto-667000" primary control-plane node in "auto-667000" cluster
	I1204 15:44:33.269578   10717 preload.go:131] Checking if preload exists for k8s version v1.31.2 and runtime docker
	I1204 15:44:33.269594   10717 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/20045-6982/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-docker-overlay2-arm64.tar.lz4
	I1204 15:44:33.269600   10717 cache.go:56] Caching tarball of preloaded images
	I1204 15:44:33.269684   10717 preload.go:172] Found /Users/jenkins/minikube-integration/20045-6982/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I1204 15:44:33.269696   10717 cache.go:59] Finished verifying existence of preloaded tar for v1.31.2 on docker
	I1204 15:44:33.269758   10717 profile.go:143] Saving config to /Users/jenkins/minikube-integration/20045-6982/.minikube/profiles/auto-667000/config.json ...
	I1204 15:44:33.269775   10717 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/20045-6982/.minikube/profiles/auto-667000/config.json: {Name:mkf4fcdc3373a7e8bf17f292bc0b87bedf42a991 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1204 15:44:33.270238   10717 start.go:360] acquireMachinesLock for auto-667000: {Name:mkab5eeb7049d26d12029fc21a1411c94c7c7493 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1204 15:44:33.270285   10717 start.go:364] duration metric: took 41.584µs to acquireMachinesLock for "auto-667000"
	I1204 15:44:33.270297   10717 start.go:93] Provisioning new machine with config: &{Name:auto-667000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19917/minikube-v1.34.0-1730913550-19917-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:auto-667000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1204 15:44:33.270324   10717 start.go:125] createHost starting for "" (driver="qemu2")
	I1204 15:44:33.278643   10717 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I1204 15:44:33.295779   10717 start.go:159] libmachine.API.Create for "auto-667000" (driver="qemu2")
	I1204 15:44:33.295814   10717 client.go:168] LocalClient.Create starting
	I1204 15:44:33.295889   10717 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/20045-6982/.minikube/certs/ca.pem
	I1204 15:44:33.295934   10717 main.go:141] libmachine: Decoding PEM data...
	I1204 15:44:33.295950   10717 main.go:141] libmachine: Parsing certificate...
	I1204 15:44:33.295987   10717 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/20045-6982/.minikube/certs/cert.pem
	I1204 15:44:33.296018   10717 main.go:141] libmachine: Decoding PEM data...
	I1204 15:44:33.296029   10717 main.go:141] libmachine: Parsing certificate...
	I1204 15:44:33.296561   10717 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/20045-6982/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/20045-6982/.minikube/cache/iso/arm64/minikube-v1.34.0-1730913550-19917-arm64.iso...
	I1204 15:44:33.458180   10717 main.go:141] libmachine: Creating SSH key...
	I1204 15:44:33.564820   10717 main.go:141] libmachine: Creating Disk image...
	I1204 15:44:33.564832   10717 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I1204 15:44:33.565041   10717 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/20045-6982/.minikube/machines/auto-667000/disk.qcow2.raw /Users/jenkins/minikube-integration/20045-6982/.minikube/machines/auto-667000/disk.qcow2
	I1204 15:44:33.575031   10717 main.go:141] libmachine: STDOUT: 
	I1204 15:44:33.575054   10717 main.go:141] libmachine: STDERR: 
	I1204 15:44:33.575110   10717 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/20045-6982/.minikube/machines/auto-667000/disk.qcow2 +20000M
	I1204 15:44:33.584295   10717 main.go:141] libmachine: STDOUT: Image resized.
	
	I1204 15:44:33.584308   10717 main.go:141] libmachine: STDERR: 
	I1204 15:44:33.584326   10717 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/20045-6982/.minikube/machines/auto-667000/disk.qcow2.raw and /Users/jenkins/minikube-integration/20045-6982/.minikube/machines/auto-667000/disk.qcow2
	I1204 15:44:33.584330   10717 main.go:141] libmachine: Starting QEMU VM...
	I1204 15:44:33.584343   10717 qemu.go:418] Using hvf for hardware acceleration
	I1204 15:44:33.584367   10717 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/20045-6982/.minikube/machines/auto-667000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/20045-6982/.minikube/machines/auto-667000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/20045-6982/.minikube/machines/auto-667000/qemu.pid -device virtio-net-pci,netdev=net0,mac=6a:8c:d0:48:7b:f7 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/20045-6982/.minikube/machines/auto-667000/disk.qcow2
	I1204 15:44:33.586267   10717 main.go:141] libmachine: STDOUT: 
	I1204 15:44:33.586282   10717 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1204 15:44:33.586302   10717 client.go:171] duration metric: took 290.478541ms to LocalClient.Create
	I1204 15:44:35.588525   10717 start.go:128] duration metric: took 2.318149041s to createHost
	I1204 15:44:35.588621   10717 start.go:83] releasing machines lock for "auto-667000", held for 2.318307458s
	W1204 15:44:35.588708   10717 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1204 15:44:35.601371   10717 out.go:177] * Deleting "auto-667000" in qemu2 ...
	W1204 15:44:35.625352   10717 out.go:270] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1204 15:44:35.625386   10717 start.go:729] Will try again in 5 seconds ...
	I1204 15:44:40.627564   10717 start.go:360] acquireMachinesLock for auto-667000: {Name:mkab5eeb7049d26d12029fc21a1411c94c7c7493 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1204 15:44:40.627840   10717 start.go:364] duration metric: took 235.25µs to acquireMachinesLock for "auto-667000"
	I1204 15:44:40.627915   10717 start.go:93] Provisioning new machine with config: &{Name:auto-667000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19917/minikube-v1.34.0-1730913550-19917-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:auto-667000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1204 15:44:40.628056   10717 start.go:125] createHost starting for "" (driver="qemu2")
	I1204 15:44:40.639567   10717 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I1204 15:44:40.668995   10717 start.go:159] libmachine.API.Create for "auto-667000" (driver="qemu2")
	I1204 15:44:40.669030   10717 client.go:168] LocalClient.Create starting
	I1204 15:44:40.669135   10717 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/20045-6982/.minikube/certs/ca.pem
	I1204 15:44:40.669198   10717 main.go:141] libmachine: Decoding PEM data...
	I1204 15:44:40.669219   10717 main.go:141] libmachine: Parsing certificate...
	I1204 15:44:40.669274   10717 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/20045-6982/.minikube/certs/cert.pem
	I1204 15:44:40.669318   10717 main.go:141] libmachine: Decoding PEM data...
	I1204 15:44:40.669328   10717 main.go:141] libmachine: Parsing certificate...
	I1204 15:44:40.670414   10717 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/20045-6982/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/20045-6982/.minikube/cache/iso/arm64/minikube-v1.34.0-1730913550-19917-arm64.iso...
	I1204 15:44:40.834617   10717 main.go:141] libmachine: Creating SSH key...
	I1204 15:44:40.979851   10717 main.go:141] libmachine: Creating Disk image...
	I1204 15:44:40.979862   10717 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I1204 15:44:40.980076   10717 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/20045-6982/.minikube/machines/auto-667000/disk.qcow2.raw /Users/jenkins/minikube-integration/20045-6982/.minikube/machines/auto-667000/disk.qcow2
	I1204 15:44:40.990658   10717 main.go:141] libmachine: STDOUT: 
	I1204 15:44:40.990684   10717 main.go:141] libmachine: STDERR: 
	I1204 15:44:40.990744   10717 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/20045-6982/.minikube/machines/auto-667000/disk.qcow2 +20000M
	I1204 15:44:40.999523   10717 main.go:141] libmachine: STDOUT: Image resized.
	
	I1204 15:44:40.999538   10717 main.go:141] libmachine: STDERR: 
	I1204 15:44:40.999551   10717 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/20045-6982/.minikube/machines/auto-667000/disk.qcow2.raw and /Users/jenkins/minikube-integration/20045-6982/.minikube/machines/auto-667000/disk.qcow2
	I1204 15:44:40.999554   10717 main.go:141] libmachine: Starting QEMU VM...
	I1204 15:44:40.999565   10717 qemu.go:418] Using hvf for hardware acceleration
	I1204 15:44:40.999590   10717 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/20045-6982/.minikube/machines/auto-667000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/20045-6982/.minikube/machines/auto-667000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/20045-6982/.minikube/machines/auto-667000/qemu.pid -device virtio-net-pci,netdev=net0,mac=be:7e:48:a9:2a:de -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/20045-6982/.minikube/machines/auto-667000/disk.qcow2
	I1204 15:44:41.001533   10717 main.go:141] libmachine: STDOUT: 
	I1204 15:44:41.001546   10717 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1204 15:44:41.001556   10717 client.go:171] duration metric: took 332.517833ms to LocalClient.Create
	I1204 15:44:43.003779   10717 start.go:128] duration metric: took 2.375667875s to createHost
	I1204 15:44:43.003897   10717 start.go:83] releasing machines lock for "auto-667000", held for 2.376008375s
	W1204 15:44:43.004236   10717 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p auto-667000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1204 15:44:43.014046   10717 out.go:201] 
	W1204 15:44:43.022097   10717 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W1204 15:44:43.022131   10717 out.go:270] * 
	W1204 15:44:43.025308   10717 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1204 15:44:43.035001   10717 out.go:201] 

** /stderr **
net_test.go:114: failed start: exit status 80
--- FAIL: TestNetworkPlugins/group/auto/Start (9.90s)
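
The verbose trace pinpoints the failing step: libmachine execs /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 ... and the client cannot connect to the socket. The refusal can be reproduced without involving QEMU by handing the client a trivial command instead (a hedged sketch; `true` is an arbitrary placeholder, not something the test runs):

	/opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet true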

TestNetworkPlugins/group/kindnet/Start (9.95s)

=== RUN   TestNetworkPlugins/group/kindnet/Start
net_test.go:112: (dbg) Run:  out/minikube-darwin-arm64 start -p kindnet-667000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=qemu2 
net_test.go:112: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p kindnet-667000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=qemu2 : exit status 80 (9.951449125s)

-- stdout --
	* [kindnet-667000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	  - MINIKUBE_LOCATION=20045
	  - KUBECONFIG=/Users/jenkins/minikube-integration/20045-6982/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/20045-6982/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "kindnet-667000" primary control-plane node in "kindnet-667000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "kindnet-667000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I1204 15:44:45.451451   10834 out.go:345] Setting OutFile to fd 1 ...
	I1204 15:44:45.451627   10834 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1204 15:44:45.451630   10834 out.go:358] Setting ErrFile to fd 2...
	I1204 15:44:45.451632   10834 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1204 15:44:45.451781   10834 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/20045-6982/.minikube/bin
	I1204 15:44:45.453031   10834 out.go:352] Setting JSON to false
	I1204 15:44:45.471103   10834 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":6255,"bootTime":1733349630,"procs":553,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"15.0.1","kernelVersion":"24.0.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W1204 15:44:45.471177   10834 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I1204 15:44:45.477723   10834 out.go:177] * [kindnet-667000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	I1204 15:44:45.486649   10834 out.go:177]   - MINIKUBE_LOCATION=20045
	I1204 15:44:45.486668   10834 notify.go:220] Checking for updates...
	I1204 15:44:45.493583   10834 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/20045-6982/kubeconfig
	I1204 15:44:45.496638   10834 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I1204 15:44:45.499612   10834 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1204 15:44:45.502603   10834 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/20045-6982/.minikube
	I1204 15:44:45.505637   10834 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1204 15:44:45.507522   10834 config.go:182] Loaded profile config "multinode-093000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.2
	I1204 15:44:45.507605   10834 config.go:182] Loaded profile config "stopped-upgrade-377000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I1204 15:44:45.507652   10834 driver.go:394] Setting default libvirt URI to qemu:///system
	I1204 15:44:45.510600   10834 out.go:177] * Using the qemu2 driver based on user configuration
	I1204 15:44:45.517521   10834 start.go:297] selected driver: qemu2
	I1204 15:44:45.517533   10834 start.go:901] validating driver "qemu2" against <nil>
	I1204 15:44:45.517540   10834 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1204 15:44:45.520020   10834 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I1204 15:44:45.523624   10834 out.go:177] * Automatically selected the socket_vmnet network
	I1204 15:44:45.526687   10834 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1204 15:44:45.526707   10834 cni.go:84] Creating CNI manager for "kindnet"
	I1204 15:44:45.526712   10834 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I1204 15:44:45.526751   10834 start.go:340] cluster config:
	{Name:kindnet-667000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:kindnet-667000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:kindnet} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1204 15:44:45.531533   10834 iso.go:125] acquiring lock: {Name:mk798ab7885b095925d55c843b0600e7ea181045 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1204 15:44:45.539584   10834 out.go:177] * Starting "kindnet-667000" primary control-plane node in "kindnet-667000" cluster
	I1204 15:44:45.543684   10834 preload.go:131] Checking if preload exists for k8s version v1.31.2 and runtime docker
	I1204 15:44:45.543697   10834 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/20045-6982/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-docker-overlay2-arm64.tar.lz4
	I1204 15:44:45.543702   10834 cache.go:56] Caching tarball of preloaded images
	I1204 15:44:45.543773   10834 preload.go:172] Found /Users/jenkins/minikube-integration/20045-6982/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I1204 15:44:45.543778   10834 cache.go:59] Finished verifying existence of preloaded tar for v1.31.2 on docker
	I1204 15:44:45.543839   10834 profile.go:143] Saving config to /Users/jenkins/minikube-integration/20045-6982/.minikube/profiles/kindnet-667000/config.json ...
	I1204 15:44:45.543850   10834 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/20045-6982/.minikube/profiles/kindnet-667000/config.json: {Name:mk7c0877be32c34d517e48c13251ceda2bc0a168 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1204 15:44:45.544093   10834 start.go:360] acquireMachinesLock for kindnet-667000: {Name:mkab5eeb7049d26d12029fc21a1411c94c7c7493 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1204 15:44:45.544138   10834 start.go:364] duration metric: took 37.625µs to acquireMachinesLock for "kindnet-667000"
	I1204 15:44:45.544150   10834 start.go:93] Provisioning new machine with config: &{Name:kindnet-667000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19917/minikube-v1.34.0-1730913550-19917-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:kindnet-667000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:kindnet} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1204 15:44:45.544181   10834 start.go:125] createHost starting for "" (driver="qemu2")
	I1204 15:44:45.551611   10834 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I1204 15:44:45.566212   10834 start.go:159] libmachine.API.Create for "kindnet-667000" (driver="qemu2")
	I1204 15:44:45.566241   10834 client.go:168] LocalClient.Create starting
	I1204 15:44:45.566316   10834 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/20045-6982/.minikube/certs/ca.pem
	I1204 15:44:45.566360   10834 main.go:141] libmachine: Decoding PEM data...
	I1204 15:44:45.566374   10834 main.go:141] libmachine: Parsing certificate...
	I1204 15:44:45.566409   10834 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/20045-6982/.minikube/certs/cert.pem
	I1204 15:44:45.566437   10834 main.go:141] libmachine: Decoding PEM data...
	I1204 15:44:45.566444   10834 main.go:141] libmachine: Parsing certificate...
	I1204 15:44:45.566792   10834 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/20045-6982/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/20045-6982/.minikube/cache/iso/arm64/minikube-v1.34.0-1730913550-19917-arm64.iso...
	I1204 15:44:45.726084   10834 main.go:141] libmachine: Creating SSH key...
	I1204 15:44:45.886370   10834 main.go:141] libmachine: Creating Disk image...
	I1204 15:44:45.886384   10834 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I1204 15:44:45.886627   10834 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/20045-6982/.minikube/machines/kindnet-667000/disk.qcow2.raw /Users/jenkins/minikube-integration/20045-6982/.minikube/machines/kindnet-667000/disk.qcow2
	I1204 15:44:45.896670   10834 main.go:141] libmachine: STDOUT: 
	I1204 15:44:45.896691   10834 main.go:141] libmachine: STDERR: 
	I1204 15:44:45.896745   10834 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/20045-6982/.minikube/machines/kindnet-667000/disk.qcow2 +20000M
	I1204 15:44:45.905317   10834 main.go:141] libmachine: STDOUT: Image resized.
	
	I1204 15:44:45.905333   10834 main.go:141] libmachine: STDERR: 
	I1204 15:44:45.905349   10834 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/20045-6982/.minikube/machines/kindnet-667000/disk.qcow2.raw and /Users/jenkins/minikube-integration/20045-6982/.minikube/machines/kindnet-667000/disk.qcow2
	I1204 15:44:45.905355   10834 main.go:141] libmachine: Starting QEMU VM...
	I1204 15:44:45.905369   10834 qemu.go:418] Using hvf for hardware acceleration
	I1204 15:44:45.905418   10834 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/20045-6982/.minikube/machines/kindnet-667000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/20045-6982/.minikube/machines/kindnet-667000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/20045-6982/.minikube/machines/kindnet-667000/qemu.pid -device virtio-net-pci,netdev=net0,mac=ca:a4:43:e8:68:7d -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/20045-6982/.minikube/machines/kindnet-667000/disk.qcow2
	I1204 15:44:45.907251   10834 main.go:141] libmachine: STDOUT: 
	I1204 15:44:45.907264   10834 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1204 15:44:45.907285   10834 client.go:171] duration metric: took 341.036417ms to LocalClient.Create
	I1204 15:44:47.909521   10834 start.go:128] duration metric: took 2.36528375s to createHost
	I1204 15:44:47.909619   10834 start.go:83] releasing machines lock for "kindnet-667000", held for 2.365449167s
	W1204 15:44:47.909726   10834 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1204 15:44:47.916186   10834 out.go:177] * Deleting "kindnet-667000" in qemu2 ...
	W1204 15:44:47.944389   10834 out.go:270] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1204 15:44:47.944427   10834 start.go:729] Will try again in 5 seconds ...
	I1204 15:44:52.946756   10834 start.go:360] acquireMachinesLock for kindnet-667000: {Name:mkab5eeb7049d26d12029fc21a1411c94c7c7493 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1204 15:44:52.947370   10834 start.go:364] duration metric: took 465.417µs to acquireMachinesLock for "kindnet-667000"
	I1204 15:44:52.947506   10834 start.go:93] Provisioning new machine with config: &{Name:kindnet-667000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19917/minikube-v1.34.0-1730913550-19917-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:kindnet-667000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:kindnet} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1204 15:44:52.947874   10834 start.go:125] createHost starting for "" (driver="qemu2")
	I1204 15:44:52.953670   10834 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I1204 15:44:53.001169   10834 start.go:159] libmachine.API.Create for "kindnet-667000" (driver="qemu2")
	I1204 15:44:53.001224   10834 client.go:168] LocalClient.Create starting
	I1204 15:44:53.001385   10834 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/20045-6982/.minikube/certs/ca.pem
	I1204 15:44:53.001462   10834 main.go:141] libmachine: Decoding PEM data...
	I1204 15:44:53.001481   10834 main.go:141] libmachine: Parsing certificate...
	I1204 15:44:53.001566   10834 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/20045-6982/.minikube/certs/cert.pem
	I1204 15:44:53.001624   10834 main.go:141] libmachine: Decoding PEM data...
	I1204 15:44:53.001637   10834 main.go:141] libmachine: Parsing certificate...
	I1204 15:44:53.002546   10834 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/20045-6982/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/20045-6982/.minikube/cache/iso/arm64/minikube-v1.34.0-1730913550-19917-arm64.iso...
	I1204 15:44:53.177072   10834 main.go:141] libmachine: Creating SSH key...
	I1204 15:44:53.300539   10834 main.go:141] libmachine: Creating Disk image...
	I1204 15:44:53.300546   10834 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I1204 15:44:53.300744   10834 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/20045-6982/.minikube/machines/kindnet-667000/disk.qcow2.raw /Users/jenkins/minikube-integration/20045-6982/.minikube/machines/kindnet-667000/disk.qcow2
	I1204 15:44:53.310908   10834 main.go:141] libmachine: STDOUT: 
	I1204 15:44:53.310925   10834 main.go:141] libmachine: STDERR: 
	I1204 15:44:53.310990   10834 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/20045-6982/.minikube/machines/kindnet-667000/disk.qcow2 +20000M
	I1204 15:44:53.319555   10834 main.go:141] libmachine: STDOUT: Image resized.
	
	I1204 15:44:53.319572   10834 main.go:141] libmachine: STDERR: 
	I1204 15:44:53.319586   10834 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/20045-6982/.minikube/machines/kindnet-667000/disk.qcow2.raw and /Users/jenkins/minikube-integration/20045-6982/.minikube/machines/kindnet-667000/disk.qcow2
	I1204 15:44:53.319591   10834 main.go:141] libmachine: Starting QEMU VM...
	I1204 15:44:53.319604   10834 qemu.go:418] Using hvf for hardware acceleration
	I1204 15:44:53.319629   10834 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/20045-6982/.minikube/machines/kindnet-667000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/20045-6982/.minikube/machines/kindnet-667000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/20045-6982/.minikube/machines/kindnet-667000/qemu.pid -device virtio-net-pci,netdev=net0,mac=42:09:61:a9:29:a3 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/20045-6982/.minikube/machines/kindnet-667000/disk.qcow2
	I1204 15:44:53.321489   10834 main.go:141] libmachine: STDOUT: 
	I1204 15:44:53.321504   10834 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1204 15:44:53.321516   10834 client.go:171] duration metric: took 320.285625ms to LocalClient.Create
	I1204 15:44:55.323868   10834 start.go:128] duration metric: took 2.375915042s to createHost
	I1204 15:44:55.323991   10834 start.go:83] releasing machines lock for "kindnet-667000", held for 2.37657s
	W1204 15:44:55.324365   10834 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p kindnet-667000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p kindnet-667000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1204 15:44:55.340071   10834 out.go:201] 
	W1204 15:44:55.344109   10834 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W1204 15:44:55.344138   10834 out.go:270] * 
	* 
	W1204 15:44:55.346725   10834 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1204 15:44:55.356065   10834 out.go:201] 

** /stderr **
net_test.go:114: failed start: exit status 80
--- FAIL: TestNetworkPlugins/group/kindnet/Start (9.95s)
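The kindnet run also shows the driver's recovery path end to end: createHost fails, the profile is deleted, one retry runs five seconds later ("Will try again in 5 seconds ..."), and the command finally exits with status 80 (GUEST_PROVISION). The sketch below condenses that observable flow; createHost and deleteHost here are hypothetical stand-ins for minikube's internals, not its actual API:

// retrysketch.go: a simplified, hypothetical condensation of the retry flow
// visible in the log above (delete profile, wait 5s, retry once, then exit 80).
// The real logic lives inside minikube's start path and differs in detail.
package main

import (
	"errors"
	"fmt"
	"os"
	"time"
)

// createHost stands in for libmachine.API.Create; here it always fails the
// way the logs do while the socket_vmnet daemon is unreachable.
func createHost(name string) error {
	return errors.New(`Failed to connect to "/var/run/socket_vmnet": Connection refused`)
}

// deleteHost stands in for the "* Deleting <profile> in qemu2 ..." step.
func deleteHost(name string) {
	fmt.Printf("* Deleting %q in qemu2 ...\n", name)
}

func main() {
	name := "kindnet-667000"
	if err := createHost(name); err != nil {
		deleteHost(name)
		fmt.Printf("! StartHost failed, but will try again: %v\n", err)
		time.Sleep(5 * time.Second) // "Will try again in 5 seconds ..."
		if err := createHost(name); err != nil {
			fmt.Printf("X Exiting due to GUEST_PROVISION: %v\n", err)
			os.Exit(80) // matches the observed "exit status 80"
		}
	}
}

Because both attempts hit the same refused socket, every test in this group burns roughly ten seconds (two ~2.4s create attempts plus the five-second pause) before failing.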

TestNetworkPlugins/group/calico/Start (9.9s)

=== RUN   TestNetworkPlugins/group/calico/Start
net_test.go:112: (dbg) Run:  out/minikube-darwin-arm64 start -p calico-667000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=qemu2 
net_test.go:112: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p calico-667000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=qemu2 : exit status 80 (9.900876709s)

-- stdout --
	* [calico-667000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	  - MINIKUBE_LOCATION=20045
	  - KUBECONFIG=/Users/jenkins/minikube-integration/20045-6982/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/20045-6982/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "calico-667000" primary control-plane node in "calico-667000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "calico-667000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I1204 15:44:57.862496   10949 out.go:345] Setting OutFile to fd 1 ...
	I1204 15:44:57.862650   10949 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1204 15:44:57.862653   10949 out.go:358] Setting ErrFile to fd 2...
	I1204 15:44:57.862656   10949 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1204 15:44:57.862809   10949 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/20045-6982/.minikube/bin
	I1204 15:44:57.863976   10949 out.go:352] Setting JSON to false
	I1204 15:44:57.881761   10949 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":6267,"bootTime":1733349630,"procs":549,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"15.0.1","kernelVersion":"24.0.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W1204 15:44:57.881835   10949 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I1204 15:44:57.888028   10949 out.go:177] * [calico-667000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	I1204 15:44:57.895956   10949 out.go:177]   - MINIKUBE_LOCATION=20045
	I1204 15:44:57.896044   10949 notify.go:220] Checking for updates...
	I1204 15:44:57.902950   10949 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/20045-6982/kubeconfig
	I1204 15:44:57.905910   10949 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I1204 15:44:57.908991   10949 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1204 15:44:57.911882   10949 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/20045-6982/.minikube
	I1204 15:44:57.914986   10949 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1204 15:44:57.918252   10949 config.go:182] Loaded profile config "multinode-093000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.2
	I1204 15:44:57.918340   10949 config.go:182] Loaded profile config "stopped-upgrade-377000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I1204 15:44:57.918390   10949 driver.go:394] Setting default libvirt URI to qemu:///system
	I1204 15:44:57.921916   10949 out.go:177] * Using the qemu2 driver based on user configuration
	I1204 15:44:57.928942   10949 start.go:297] selected driver: qemu2
	I1204 15:44:57.928948   10949 start.go:901] validating driver "qemu2" against <nil>
	I1204 15:44:57.928956   10949 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1204 15:44:57.931346   10949 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I1204 15:44:57.935000   10949 out.go:177] * Automatically selected the socket_vmnet network
	I1204 15:44:57.937989   10949 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1204 15:44:57.938011   10949 cni.go:84] Creating CNI manager for "calico"
	I1204 15:44:57.938022   10949 start_flags.go:319] Found "Calico" CNI - setting NetworkPlugin=cni
	I1204 15:44:57.938072   10949 start.go:340] cluster config:
	{Name:calico-667000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:calico-667000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:calico} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1204 15:44:57.942375   10949 iso.go:125] acquiring lock: {Name:mk798ab7885b095925d55c843b0600e7ea181045 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1204 15:44:57.951002   10949 out.go:177] * Starting "calico-667000" primary control-plane node in "calico-667000" cluster
	I1204 15:44:57.954957   10949 preload.go:131] Checking if preload exists for k8s version v1.31.2 and runtime docker
	I1204 15:44:57.954974   10949 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/20045-6982/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-docker-overlay2-arm64.tar.lz4
	I1204 15:44:57.954985   10949 cache.go:56] Caching tarball of preloaded images
	I1204 15:44:57.955059   10949 preload.go:172] Found /Users/jenkins/minikube-integration/20045-6982/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I1204 15:44:57.955064   10949 cache.go:59] Finished verifying existence of preloaded tar for v1.31.2 on docker
	I1204 15:44:57.955115   10949 profile.go:143] Saving config to /Users/jenkins/minikube-integration/20045-6982/.minikube/profiles/calico-667000/config.json ...
	I1204 15:44:57.955131   10949 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/20045-6982/.minikube/profiles/calico-667000/config.json: {Name:mk0755d2b4c95ba4b1c580d8fa47b877a5026005 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1204 15:44:57.955451   10949 start.go:360] acquireMachinesLock for calico-667000: {Name:mkab5eeb7049d26d12029fc21a1411c94c7c7493 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1204 15:44:57.955493   10949 start.go:364] duration metric: took 37.667µs to acquireMachinesLock for "calico-667000"
	I1204 15:44:57.955504   10949 start.go:93] Provisioning new machine with config: &{Name:calico-667000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19917/minikube-v1.34.0-1730913550-19917-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:calico-667000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:calico} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1204 15:44:57.955530   10949 start.go:125] createHost starting for "" (driver="qemu2")
	I1204 15:44:57.963923   10949 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I1204 15:44:57.978637   10949 start.go:159] libmachine.API.Create for "calico-667000" (driver="qemu2")
	I1204 15:44:57.978661   10949 client.go:168] LocalClient.Create starting
	I1204 15:44:57.978732   10949 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/20045-6982/.minikube/certs/ca.pem
	I1204 15:44:57.978782   10949 main.go:141] libmachine: Decoding PEM data...
	I1204 15:44:57.978795   10949 main.go:141] libmachine: Parsing certificate...
	I1204 15:44:57.978835   10949 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/20045-6982/.minikube/certs/cert.pem
	I1204 15:44:57.978866   10949 main.go:141] libmachine: Decoding PEM data...
	I1204 15:44:57.978878   10949 main.go:141] libmachine: Parsing certificate...
	I1204 15:44:57.979353   10949 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/20045-6982/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/20045-6982/.minikube/cache/iso/arm64/minikube-v1.34.0-1730913550-19917-arm64.iso...
	I1204 15:44:58.140238   10949 main.go:141] libmachine: Creating SSH key...
	I1204 15:44:58.268008   10949 main.go:141] libmachine: Creating Disk image...
	I1204 15:44:58.268023   10949 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I1204 15:44:58.268259   10949 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/20045-6982/.minikube/machines/calico-667000/disk.qcow2.raw /Users/jenkins/minikube-integration/20045-6982/.minikube/machines/calico-667000/disk.qcow2
	I1204 15:44:58.279574   10949 main.go:141] libmachine: STDOUT: 
	I1204 15:44:58.279594   10949 main.go:141] libmachine: STDERR: 
	I1204 15:44:58.279669   10949 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/20045-6982/.minikube/machines/calico-667000/disk.qcow2 +20000M
	I1204 15:44:58.289216   10949 main.go:141] libmachine: STDOUT: Image resized.
	
	I1204 15:44:58.289239   10949 main.go:141] libmachine: STDERR: 
	I1204 15:44:58.289255   10949 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/20045-6982/.minikube/machines/calico-667000/disk.qcow2.raw and /Users/jenkins/minikube-integration/20045-6982/.minikube/machines/calico-667000/disk.qcow2
	I1204 15:44:58.289263   10949 main.go:141] libmachine: Starting QEMU VM...
	I1204 15:44:58.289278   10949 qemu.go:418] Using hvf for hardware acceleration
	I1204 15:44:58.289310   10949 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/20045-6982/.minikube/machines/calico-667000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/20045-6982/.minikube/machines/calico-667000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/20045-6982/.minikube/machines/calico-667000/qemu.pid -device virtio-net-pci,netdev=net0,mac=32:e5:e7:c9:08:35 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/20045-6982/.minikube/machines/calico-667000/disk.qcow2
	I1204 15:44:58.291525   10949 main.go:141] libmachine: STDOUT: 
	I1204 15:44:58.291542   10949 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1204 15:44:58.291569   10949 client.go:171] duration metric: took 312.899125ms to LocalClient.Create
	I1204 15:45:00.293794   10949 start.go:128] duration metric: took 2.338213334s to createHost
	I1204 15:45:00.293892   10949 start.go:83] releasing machines lock for "calico-667000", held for 2.338368084s
	W1204 15:45:00.294008   10949 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1204 15:45:00.307425   10949 out.go:177] * Deleting "calico-667000" in qemu2 ...
	W1204 15:45:00.334382   10949 out.go:270] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1204 15:45:00.334414   10949 start.go:729] Will try again in 5 seconds ...
	I1204 15:45:05.336697   10949 start.go:360] acquireMachinesLock for calico-667000: {Name:mkab5eeb7049d26d12029fc21a1411c94c7c7493 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1204 15:45:05.337273   10949 start.go:364] duration metric: took 469.417µs to acquireMachinesLock for "calico-667000"
	I1204 15:45:05.337427   10949 start.go:93] Provisioning new machine with config: &{Name:calico-667000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19917/minikube-v1.34.0-1730913550-19917-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:calico-667000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:calico} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1204 15:45:05.337666   10949 start.go:125] createHost starting for "" (driver="qemu2")
	I1204 15:45:05.346219   10949 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I1204 15:45:05.396276   10949 start.go:159] libmachine.API.Create for "calico-667000" (driver="qemu2")
	I1204 15:45:05.396333   10949 client.go:168] LocalClient.Create starting
	I1204 15:45:05.396535   10949 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/20045-6982/.minikube/certs/ca.pem
	I1204 15:45:05.396622   10949 main.go:141] libmachine: Decoding PEM data...
	I1204 15:45:05.396641   10949 main.go:141] libmachine: Parsing certificate...
	I1204 15:45:05.396702   10949 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/20045-6982/.minikube/certs/cert.pem
	I1204 15:45:05.396763   10949 main.go:141] libmachine: Decoding PEM data...
	I1204 15:45:05.396778   10949 main.go:141] libmachine: Parsing certificate...
	I1204 15:45:05.397387   10949 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/20045-6982/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/20045-6982/.minikube/cache/iso/arm64/minikube-v1.34.0-1730913550-19917-arm64.iso...
	I1204 15:45:05.567808   10949 main.go:141] libmachine: Creating SSH key...
	I1204 15:45:05.665146   10949 main.go:141] libmachine: Creating Disk image...
	I1204 15:45:05.665153   10949 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I1204 15:45:05.665359   10949 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/20045-6982/.minikube/machines/calico-667000/disk.qcow2.raw /Users/jenkins/minikube-integration/20045-6982/.minikube/machines/calico-667000/disk.qcow2
	I1204 15:45:05.675586   10949 main.go:141] libmachine: STDOUT: 
	I1204 15:45:05.675609   10949 main.go:141] libmachine: STDERR: 
	I1204 15:45:05.675675   10949 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/20045-6982/.minikube/machines/calico-667000/disk.qcow2 +20000M
	I1204 15:45:05.684362   10949 main.go:141] libmachine: STDOUT: Image resized.
	
	I1204 15:45:05.684380   10949 main.go:141] libmachine: STDERR: 
	I1204 15:45:05.684395   10949 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/20045-6982/.minikube/machines/calico-667000/disk.qcow2.raw and /Users/jenkins/minikube-integration/20045-6982/.minikube/machines/calico-667000/disk.qcow2
	I1204 15:45:05.684402   10949 main.go:141] libmachine: Starting QEMU VM...
	I1204 15:45:05.684412   10949 qemu.go:418] Using hvf for hardware acceleration
	I1204 15:45:05.684448   10949 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/20045-6982/.minikube/machines/calico-667000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/20045-6982/.minikube/machines/calico-667000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/20045-6982/.minikube/machines/calico-667000/qemu.pid -device virtio-net-pci,netdev=net0,mac=8e:b6:16:a5:45:eb -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/20045-6982/.minikube/machines/calico-667000/disk.qcow2
	I1204 15:45:05.686286   10949 main.go:141] libmachine: STDOUT: 
	I1204 15:45:05.686310   10949 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1204 15:45:05.686324   10949 client.go:171] duration metric: took 289.979209ms to LocalClient.Create
	I1204 15:45:07.688616   10949 start.go:128] duration metric: took 2.350877792s to createHost
	I1204 15:45:07.688685   10949 start.go:83] releasing machines lock for "calico-667000", held for 2.351367666s
	W1204 15:45:07.689032   10949 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p calico-667000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p calico-667000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1204 15:45:07.698705   10949 out.go:201] 
	W1204 15:45:07.709689   10949 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W1204 15:45:07.709709   10949 out.go:270] * 
	* 
	W1204 15:45:07.711296   10949 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1204 15:45:07.721616   10949 out.go:201] 

** /stderr **
net_test.go:114: failed start: exit status 80
--- FAIL: TestNetworkPlugins/group/calico/Start (9.90s)
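Note what the failing command line implies: socket_vmnet_client connects to the socket_vmnet daemon and then launches qemu-system-aarch64 with that connection inherited as file descriptor 3, which is what -netdev socket,id=net0,fd=3 refers to. Once the dial is refused there is no descriptor to pass, so QEMU never starts. The sketch below illustrates the same descriptor-passing pattern in Go, with a placeholder child process standing in for QEMU (an illustration of the pattern only, not socket_vmnet's source):

// fdpass_sketch.go: hand a connected unix socket to a child process as fd 3,
// the pattern behind "-netdev socket,id=net0,fd=3" in the commands above.
package main

import (
	"log"
	"net"
	"os"
	"os/exec"
)

func main() {
	conn, err := net.Dial("unix", "/var/run/socket_vmnet")
	if err != nil {
		log.Fatalf("Failed to connect to %q: %v", "/var/run/socket_vmnet", err)
	}
	defer conn.Close()

	f, err := conn.(*net.UnixConn).File() // duplicate the socket as an *os.File
	if err != nil {
		log.Fatal(err)
	}
	defer f.Close()

	cmd := exec.Command("/usr/bin/true") // placeholder child; the real one is qemu
	cmd.ExtraFiles = []*os.File{f}       // ExtraFiles[0] becomes fd 3 in the child
	if err := cmd.Run(); err != nil {
		log.Fatal(err)
	}
}

This also explains why deleting the profile cannot help here: the refused connection happens on the host side, before any VM state is involved.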

TestNetworkPlugins/group/custom-flannel/Start (9.99s)

=== RUN   TestNetworkPlugins/group/custom-flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-darwin-arm64 start -p custom-flannel-667000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=qemu2 
net_test.go:112: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p custom-flannel-667000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=qemu2 : exit status 80 (9.984825959s)

-- stdout --
	* [custom-flannel-667000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	  - MINIKUBE_LOCATION=20045
	  - KUBECONFIG=/Users/jenkins/minikube-integration/20045-6982/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/20045-6982/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "custom-flannel-667000" primary control-plane node in "custom-flannel-667000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "custom-flannel-667000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I1204 15:45:10.294691   11075 out.go:345] Setting OutFile to fd 1 ...
	I1204 15:45:10.294852   11075 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1204 15:45:10.294855   11075 out.go:358] Setting ErrFile to fd 2...
	I1204 15:45:10.294858   11075 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1204 15:45:10.294973   11075 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/20045-6982/.minikube/bin
	I1204 15:45:10.296195   11075 out.go:352] Setting JSON to false
	I1204 15:45:10.314414   11075 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":6280,"bootTime":1733349630,"procs":551,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"15.0.1","kernelVersion":"24.0.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W1204 15:45:10.314479   11075 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I1204 15:45:10.321752   11075 out.go:177] * [custom-flannel-667000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	I1204 15:45:10.329830   11075 out.go:177]   - MINIKUBE_LOCATION=20045
	I1204 15:45:10.329911   11075 notify.go:220] Checking for updates...
	I1204 15:45:10.336703   11075 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/20045-6982/kubeconfig
	I1204 15:45:10.339681   11075 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I1204 15:45:10.342731   11075 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1204 15:45:10.345736   11075 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/20045-6982/.minikube
	I1204 15:45:10.348697   11075 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1204 15:45:10.352091   11075 config.go:182] Loaded profile config "multinode-093000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.2
	I1204 15:45:10.352168   11075 config.go:182] Loaded profile config "stopped-upgrade-377000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I1204 15:45:10.352224   11075 driver.go:394] Setting default libvirt URI to qemu:///system
	I1204 15:45:10.355691   11075 out.go:177] * Using the qemu2 driver based on user configuration
	I1204 15:45:10.362672   11075 start.go:297] selected driver: qemu2
	I1204 15:45:10.362678   11075 start.go:901] validating driver "qemu2" against <nil>
	I1204 15:45:10.362683   11075 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1204 15:45:10.365080   11075 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I1204 15:45:10.369741   11075 out.go:177] * Automatically selected the socket_vmnet network
	I1204 15:45:10.372813   11075 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1204 15:45:10.372833   11075 cni.go:84] Creating CNI manager for "testdata/kube-flannel.yaml"
	I1204 15:45:10.372843   11075 start_flags.go:319] Found "testdata/kube-flannel.yaml" CNI - setting NetworkPlugin=cni
	I1204 15:45:10.372883   11075 start.go:340] cluster config:
	{Name:custom-flannel-667000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:custom-flannel-667000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:testdata/kube-flannel.yaml} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1204 15:45:10.377352   11075 iso.go:125] acquiring lock: {Name:mk798ab7885b095925d55c843b0600e7ea181045 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1204 15:45:10.385696   11075 out.go:177] * Starting "custom-flannel-667000" primary control-plane node in "custom-flannel-667000" cluster
	I1204 15:45:10.389680   11075 preload.go:131] Checking if preload exists for k8s version v1.31.2 and runtime docker
	I1204 15:45:10.389693   11075 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/20045-6982/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-docker-overlay2-arm64.tar.lz4
	I1204 15:45:10.389702   11075 cache.go:56] Caching tarball of preloaded images
	I1204 15:45:10.389768   11075 preload.go:172] Found /Users/jenkins/minikube-integration/20045-6982/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I1204 15:45:10.389773   11075 cache.go:59] Finished verifying existence of preloaded tar for v1.31.2 on docker
	I1204 15:45:10.389827   11075 profile.go:143] Saving config to /Users/jenkins/minikube-integration/20045-6982/.minikube/profiles/custom-flannel-667000/config.json ...
	I1204 15:45:10.389838   11075 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/20045-6982/.minikube/profiles/custom-flannel-667000/config.json: {Name:mkbde7c81427418b3da9fad9ac43c3f5c906d327 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1204 15:45:10.390297   11075 start.go:360] acquireMachinesLock for custom-flannel-667000: {Name:mkab5eeb7049d26d12029fc21a1411c94c7c7493 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1204 15:45:10.390347   11075 start.go:364] duration metric: took 40µs to acquireMachinesLock for "custom-flannel-667000"
	I1204 15:45:10.390359   11075 start.go:93] Provisioning new machine with config: &{Name:custom-flannel-667000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19917/minikube-v1.34.0-1730913550-19917-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:custom-flannel-667000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:testdata/kube-flannel.yaml} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1204 15:45:10.390383   11075 start.go:125] createHost starting for "" (driver="qemu2")
	I1204 15:45:10.398669   11075 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I1204 15:45:10.415002   11075 start.go:159] libmachine.API.Create for "custom-flannel-667000" (driver="qemu2")
	I1204 15:45:10.415031   11075 client.go:168] LocalClient.Create starting
	I1204 15:45:10.415106   11075 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/20045-6982/.minikube/certs/ca.pem
	I1204 15:45:10.415149   11075 main.go:141] libmachine: Decoding PEM data...
	I1204 15:45:10.415162   11075 main.go:141] libmachine: Parsing certificate...
	I1204 15:45:10.415201   11075 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/20045-6982/.minikube/certs/cert.pem
	I1204 15:45:10.415230   11075 main.go:141] libmachine: Decoding PEM data...
	I1204 15:45:10.415240   11075 main.go:141] libmachine: Parsing certificate...
	I1204 15:45:10.415685   11075 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/20045-6982/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/20045-6982/.minikube/cache/iso/arm64/minikube-v1.34.0-1730913550-19917-arm64.iso...
	I1204 15:45:10.576999   11075 main.go:141] libmachine: Creating SSH key...
	I1204 15:45:10.656769   11075 main.go:141] libmachine: Creating Disk image...
	I1204 15:45:10.656775   11075 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I1204 15:45:10.656972   11075 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/20045-6982/.minikube/machines/custom-flannel-667000/disk.qcow2.raw /Users/jenkins/minikube-integration/20045-6982/.minikube/machines/custom-flannel-667000/disk.qcow2
	I1204 15:45:10.666867   11075 main.go:141] libmachine: STDOUT: 
	I1204 15:45:10.666885   11075 main.go:141] libmachine: STDERR: 
	I1204 15:45:10.666961   11075 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/20045-6982/.minikube/machines/custom-flannel-667000/disk.qcow2 +20000M
	I1204 15:45:10.675392   11075 main.go:141] libmachine: STDOUT: Image resized.
	
	I1204 15:45:10.675407   11075 main.go:141] libmachine: STDERR: 
	I1204 15:45:10.675428   11075 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/20045-6982/.minikube/machines/custom-flannel-667000/disk.qcow2.raw and /Users/jenkins/minikube-integration/20045-6982/.minikube/machines/custom-flannel-667000/disk.qcow2
	I1204 15:45:10.675436   11075 main.go:141] libmachine: Starting QEMU VM...
	I1204 15:45:10.675447   11075 qemu.go:418] Using hvf for hardware acceleration
	I1204 15:45:10.675476   11075 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/20045-6982/.minikube/machines/custom-flannel-667000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/20045-6982/.minikube/machines/custom-flannel-667000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/20045-6982/.minikube/machines/custom-flannel-667000/qemu.pid -device virtio-net-pci,netdev=net0,mac=ea:39:2c:bb:89:55 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/20045-6982/.minikube/machines/custom-flannel-667000/disk.qcow2
	I1204 15:45:10.677333   11075 main.go:141] libmachine: STDOUT: 
	I1204 15:45:10.677345   11075 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1204 15:45:10.677364   11075 client.go:171] duration metric: took 262.32475ms to LocalClient.Create
	I1204 15:45:12.679603   11075 start.go:128] duration metric: took 2.289168291s to createHost
	I1204 15:45:12.679718   11075 start.go:83] releasing machines lock for "custom-flannel-667000", held for 2.289339792s
	W1204 15:45:12.679808   11075 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1204 15:45:12.691057   11075 out.go:177] * Deleting "custom-flannel-667000" in qemu2 ...
	W1204 15:45:12.724066   11075 out.go:270] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1204 15:45:12.724093   11075 start.go:729] Will try again in 5 seconds ...
	I1204 15:45:17.726479   11075 start.go:360] acquireMachinesLock for custom-flannel-667000: {Name:mkab5eeb7049d26d12029fc21a1411c94c7c7493 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1204 15:45:17.727077   11075 start.go:364] duration metric: took 451.708µs to acquireMachinesLock for "custom-flannel-667000"
	I1204 15:45:17.727210   11075 start.go:93] Provisioning new machine with config: &{Name:custom-flannel-667000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19917/minikube-v1.34.0-1730913550-19917-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:custom-flannel-667000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:testdata/kube-flannel.yaml} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1204 15:45:17.727510   11075 start.go:125] createHost starting for "" (driver="qemu2")
	I1204 15:45:17.738138   11075 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I1204 15:45:17.783492   11075 start.go:159] libmachine.API.Create for "custom-flannel-667000" (driver="qemu2")
	I1204 15:45:17.783542   11075 client.go:168] LocalClient.Create starting
	I1204 15:45:17.783690   11075 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/20045-6982/.minikube/certs/ca.pem
	I1204 15:45:17.783781   11075 main.go:141] libmachine: Decoding PEM data...
	I1204 15:45:17.783807   11075 main.go:141] libmachine: Parsing certificate...
	I1204 15:45:17.783870   11075 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/20045-6982/.minikube/certs/cert.pem
	I1204 15:45:17.783936   11075 main.go:141] libmachine: Decoding PEM data...
	I1204 15:45:17.783950   11075 main.go:141] libmachine: Parsing certificate...
	I1204 15:45:17.784561   11075 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/20045-6982/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/20045-6982/.minikube/cache/iso/arm64/minikube-v1.34.0-1730913550-19917-arm64.iso...
	I1204 15:45:17.954953   11075 main.go:141] libmachine: Creating SSH key...
	I1204 15:45:18.174839   11075 main.go:141] libmachine: Creating Disk image...
	I1204 15:45:18.174858   11075 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I1204 15:45:18.175128   11075 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/20045-6982/.minikube/machines/custom-flannel-667000/disk.qcow2.raw /Users/jenkins/minikube-integration/20045-6982/.minikube/machines/custom-flannel-667000/disk.qcow2
	I1204 15:45:18.185759   11075 main.go:141] libmachine: STDOUT: 
	I1204 15:45:18.185776   11075 main.go:141] libmachine: STDERR: 
	I1204 15:45:18.185843   11075 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/20045-6982/.minikube/machines/custom-flannel-667000/disk.qcow2 +20000M
	I1204 15:45:18.194886   11075 main.go:141] libmachine: STDOUT: Image resized.
	
	I1204 15:45:18.194903   11075 main.go:141] libmachine: STDERR: 
	I1204 15:45:18.194914   11075 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/20045-6982/.minikube/machines/custom-flannel-667000/disk.qcow2.raw and /Users/jenkins/minikube-integration/20045-6982/.minikube/machines/custom-flannel-667000/disk.qcow2
	I1204 15:45:18.194920   11075 main.go:141] libmachine: Starting QEMU VM...
	I1204 15:45:18.194931   11075 qemu.go:418] Using hvf for hardware acceleration
	I1204 15:45:18.194961   11075 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/20045-6982/.minikube/machines/custom-flannel-667000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/20045-6982/.minikube/machines/custom-flannel-667000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/20045-6982/.minikube/machines/custom-flannel-667000/qemu.pid -device virtio-net-pci,netdev=net0,mac=5a:5c:c8:de:2e:3c -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/20045-6982/.minikube/machines/custom-flannel-667000/disk.qcow2
	I1204 15:45:18.196885   11075 main.go:141] libmachine: STDOUT: 
	I1204 15:45:18.196898   11075 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1204 15:45:18.196910   11075 client.go:171] duration metric: took 413.359083ms to LocalClient.Create
	I1204 15:45:20.199152   11075 start.go:128] duration metric: took 2.471578708s to createHost
	I1204 15:45:20.199255   11075 start.go:83] releasing machines lock for "custom-flannel-667000", held for 2.472133958s
	W1204 15:45:20.199773   11075 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p custom-flannel-667000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1204 15:45:20.215394   11075 out.go:201] 
	W1204 15:45:20.218553   11075 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W1204 15:45:20.218695   11075 out.go:270] * 
	W1204 15:45:20.221370   11075 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1204 15:45:20.232543   11075 out.go:201] 

** /stderr **
net_test.go:114: failed start: exit status 80
--- FAIL: TestNetworkPlugins/group/custom-flannel/Start (9.99s)
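
Every start in this group dies on the same stderr line: qemu is launched through /opt/socket_vmnet/bin/socket_vmnet_client, which must reach the unix socket at /var/run/socket_vmnet, and it gets "Connection refused" — no socket_vmnet daemon is listening on this CI host. The Go sketch below is a minimal probe of that precondition (not part of the test suite); the only value it assumes is the socket path, taken from SocketVMnetPath in the cluster config above.

// probe_socket_vmnet.go - dial the socket_vmnet unix socket the same way
// socket_vmnet_client must; "connection refused" reproduces the failure above.
package main

import (
	"fmt"
	"net"
	"time"
)

func main() {
	const sock = "/var/run/socket_vmnet" // SocketVMnetPath from the cluster config
	conn, err := net.DialTimeout("unix", sock, 2*time.Second)
	if err != nil {
		// On this host: "dial unix /var/run/socket_vmnet: connect: connection refused"
		fmt.Println("socket_vmnet not reachable:", err)
		return
	}
	defer conn.Close()
	fmt.Println("socket_vmnet is listening")
}

If the probe fails, restarting the socket_vmnet daemon on the host (it normally runs as root, since vmnet needs elevated privileges) should clear this whole group of failures at once.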

TestNetworkPlugins/group/false/Start (9.84s)

=== RUN   TestNetworkPlugins/group/false/Start
net_test.go:112: (dbg) Run:  out/minikube-darwin-arm64 start -p false-667000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=false --driver=qemu2 
net_test.go:112: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p false-667000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=false --driver=qemu2 : exit status 80 (9.834717209s)

-- stdout --
	* [false-667000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	  - MINIKUBE_LOCATION=20045
	  - KUBECONFIG=/Users/jenkins/minikube-integration/20045-6982/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/20045-6982/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "false-667000" primary control-plane node in "false-667000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "false-667000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I1204 15:45:22.837560   11196 out.go:345] Setting OutFile to fd 1 ...
	I1204 15:45:22.837703   11196 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1204 15:45:22.837706   11196 out.go:358] Setting ErrFile to fd 2...
	I1204 15:45:22.837709   11196 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1204 15:45:22.837840   11196 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/20045-6982/.minikube/bin
	I1204 15:45:22.838970   11196 out.go:352] Setting JSON to false
	I1204 15:45:22.857027   11196 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":6292,"bootTime":1733349630,"procs":550,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"15.0.1","kernelVersion":"24.0.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W1204 15:45:22.857107   11196 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I1204 15:45:22.863329   11196 out.go:177] * [false-667000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	I1204 15:45:22.871319   11196 out.go:177]   - MINIKUBE_LOCATION=20045
	I1204 15:45:22.871376   11196 notify.go:220] Checking for updates...
	I1204 15:45:22.879224   11196 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/20045-6982/kubeconfig
	I1204 15:45:22.882269   11196 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I1204 15:45:22.886272   11196 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1204 15:45:22.889228   11196 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/20045-6982/.minikube
	I1204 15:45:22.892274   11196 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1204 15:45:22.895691   11196 config.go:182] Loaded profile config "multinode-093000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.2
	I1204 15:45:22.895763   11196 config.go:182] Loaded profile config "stopped-upgrade-377000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I1204 15:45:22.895809   11196 driver.go:394] Setting default libvirt URI to qemu:///system
	I1204 15:45:22.899256   11196 out.go:177] * Using the qemu2 driver based on user configuration
	I1204 15:45:22.906429   11196 start.go:297] selected driver: qemu2
	I1204 15:45:22.906436   11196 start.go:901] validating driver "qemu2" against <nil>
	I1204 15:45:22.906443   11196 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1204 15:45:22.909082   11196 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I1204 15:45:22.912194   11196 out.go:177] * Automatically selected the socket_vmnet network
	I1204 15:45:22.915365   11196 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1204 15:45:22.915391   11196 cni.go:84] Creating CNI manager for "false"
	I1204 15:45:22.915419   11196 start.go:340] cluster config:
	{Name:false-667000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:false-667000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:false} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1204 15:45:22.920103   11196 iso.go:125] acquiring lock: {Name:mk798ab7885b095925d55c843b0600e7ea181045 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1204 15:45:22.928235   11196 out.go:177] * Starting "false-667000" primary control-plane node in "false-667000" cluster
	I1204 15:45:22.932276   11196 preload.go:131] Checking if preload exists for k8s version v1.31.2 and runtime docker
	I1204 15:45:22.932295   11196 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/20045-6982/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-docker-overlay2-arm64.tar.lz4
	I1204 15:45:22.932304   11196 cache.go:56] Caching tarball of preloaded images
	I1204 15:45:22.932404   11196 preload.go:172] Found /Users/jenkins/minikube-integration/20045-6982/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I1204 15:45:22.932410   11196 cache.go:59] Finished verifying existence of preloaded tar for v1.31.2 on docker
	I1204 15:45:22.932462   11196 profile.go:143] Saving config to /Users/jenkins/minikube-integration/20045-6982/.minikube/profiles/false-667000/config.json ...
	I1204 15:45:22.932474   11196 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/20045-6982/.minikube/profiles/false-667000/config.json: {Name:mk4da6e20f2f68f0160f9072cc7c7a58481cdc15 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1204 15:45:22.932966   11196 start.go:360] acquireMachinesLock for false-667000: {Name:mkab5eeb7049d26d12029fc21a1411c94c7c7493 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1204 15:45:22.933018   11196 start.go:364] duration metric: took 45.25µs to acquireMachinesLock for "false-667000"
	I1204 15:45:22.933032   11196 start.go:93] Provisioning new machine with config: &{Name:false-667000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19917/minikube-v1.34.0-1730913550-19917-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:false-667000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:false} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1204 15:45:22.933074   11196 start.go:125] createHost starting for "" (driver="qemu2")
	I1204 15:45:22.937267   11196 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I1204 15:45:22.955296   11196 start.go:159] libmachine.API.Create for "false-667000" (driver="qemu2")
	I1204 15:45:22.955328   11196 client.go:168] LocalClient.Create starting
	I1204 15:45:22.955424   11196 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/20045-6982/.minikube/certs/ca.pem
	I1204 15:45:22.955468   11196 main.go:141] libmachine: Decoding PEM data...
	I1204 15:45:22.955477   11196 main.go:141] libmachine: Parsing certificate...
	I1204 15:45:22.955516   11196 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/20045-6982/.minikube/certs/cert.pem
	I1204 15:45:22.955548   11196 main.go:141] libmachine: Decoding PEM data...
	I1204 15:45:22.955558   11196 main.go:141] libmachine: Parsing certificate...
	I1204 15:45:22.955937   11196 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/20045-6982/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/20045-6982/.minikube/cache/iso/arm64/minikube-v1.34.0-1730913550-19917-arm64.iso...
	I1204 15:45:23.115891   11196 main.go:141] libmachine: Creating SSH key...
	I1204 15:45:23.233427   11196 main.go:141] libmachine: Creating Disk image...
	I1204 15:45:23.233434   11196 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I1204 15:45:23.233640   11196 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/20045-6982/.minikube/machines/false-667000/disk.qcow2.raw /Users/jenkins/minikube-integration/20045-6982/.minikube/machines/false-667000/disk.qcow2
	I1204 15:45:23.243852   11196 main.go:141] libmachine: STDOUT: 
	I1204 15:45:23.243882   11196 main.go:141] libmachine: STDERR: 
	I1204 15:45:23.243947   11196 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/20045-6982/.minikube/machines/false-667000/disk.qcow2 +20000M
	I1204 15:45:23.252792   11196 main.go:141] libmachine: STDOUT: Image resized.
	
	I1204 15:45:23.252810   11196 main.go:141] libmachine: STDERR: 
	I1204 15:45:23.252836   11196 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/20045-6982/.minikube/machines/false-667000/disk.qcow2.raw and /Users/jenkins/minikube-integration/20045-6982/.minikube/machines/false-667000/disk.qcow2
	I1204 15:45:23.252842   11196 main.go:141] libmachine: Starting QEMU VM...
	I1204 15:45:23.252852   11196 qemu.go:418] Using hvf for hardware acceleration
	I1204 15:45:23.252878   11196 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/20045-6982/.minikube/machines/false-667000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/20045-6982/.minikube/machines/false-667000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/20045-6982/.minikube/machines/false-667000/qemu.pid -device virtio-net-pci,netdev=net0,mac=76:5a:78:b6:ac:0e -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/20045-6982/.minikube/machines/false-667000/disk.qcow2
	I1204 15:45:23.254707   11196 main.go:141] libmachine: STDOUT: 
	I1204 15:45:23.254720   11196 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1204 15:45:23.254739   11196 client.go:171] duration metric: took 299.402792ms to LocalClient.Create
	I1204 15:45:25.256987   11196 start.go:128] duration metric: took 2.323863041s to createHost
	I1204 15:45:25.257051   11196 start.go:83] releasing machines lock for "false-667000", held for 2.324000375s
	W1204 15:45:25.257126   11196 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1204 15:45:25.268426   11196 out.go:177] * Deleting "false-667000" in qemu2 ...
	W1204 15:45:25.298964   11196 out.go:270] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1204 15:45:25.298997   11196 start.go:729] Will try again in 5 seconds ...
	I1204 15:45:30.301218   11196 start.go:360] acquireMachinesLock for false-667000: {Name:mkab5eeb7049d26d12029fc21a1411c94c7c7493 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1204 15:45:30.301788   11196 start.go:364] duration metric: took 485.667µs to acquireMachinesLock for "false-667000"
	I1204 15:45:30.301854   11196 start.go:93] Provisioning new machine with config: &{Name:false-667000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19917/minikube-v1.34.0-1730913550-19917-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:false-667000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:false} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1204 15:45:30.302177   11196 start.go:125] createHost starting for "" (driver="qemu2")
	I1204 15:45:30.314914   11196 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I1204 15:45:30.364799   11196 start.go:159] libmachine.API.Create for "false-667000" (driver="qemu2")
	I1204 15:45:30.364861   11196 client.go:168] LocalClient.Create starting
	I1204 15:45:30.365012   11196 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/20045-6982/.minikube/certs/ca.pem
	I1204 15:45:30.365088   11196 main.go:141] libmachine: Decoding PEM data...
	I1204 15:45:30.365106   11196 main.go:141] libmachine: Parsing certificate...
	I1204 15:45:30.365169   11196 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/20045-6982/.minikube/certs/cert.pem
	I1204 15:45:30.365226   11196 main.go:141] libmachine: Decoding PEM data...
	I1204 15:45:30.365243   11196 main.go:141] libmachine: Parsing certificate...
	I1204 15:45:30.365974   11196 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/20045-6982/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/20045-6982/.minikube/cache/iso/arm64/minikube-v1.34.0-1730913550-19917-arm64.iso...
	I1204 15:45:30.538145   11196 main.go:141] libmachine: Creating SSH key...
	I1204 15:45:30.570478   11196 main.go:141] libmachine: Creating Disk image...
	I1204 15:45:30.570484   11196 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I1204 15:45:30.570682   11196 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/20045-6982/.minikube/machines/false-667000/disk.qcow2.raw /Users/jenkins/minikube-integration/20045-6982/.minikube/machines/false-667000/disk.qcow2
	I1204 15:45:30.580638   11196 main.go:141] libmachine: STDOUT: 
	I1204 15:45:30.580655   11196 main.go:141] libmachine: STDERR: 
	I1204 15:45:30.580710   11196 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/20045-6982/.minikube/machines/false-667000/disk.qcow2 +20000M
	I1204 15:45:30.589466   11196 main.go:141] libmachine: STDOUT: Image resized.
	
	I1204 15:45:30.589482   11196 main.go:141] libmachine: STDERR: 
	I1204 15:45:30.589493   11196 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/20045-6982/.minikube/machines/false-667000/disk.qcow2.raw and /Users/jenkins/minikube-integration/20045-6982/.minikube/machines/false-667000/disk.qcow2
	I1204 15:45:30.589516   11196 main.go:141] libmachine: Starting QEMU VM...
	I1204 15:45:30.589526   11196 qemu.go:418] Using hvf for hardware acceleration
	I1204 15:45:30.589556   11196 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/20045-6982/.minikube/machines/false-667000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/20045-6982/.minikube/machines/false-667000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/20045-6982/.minikube/machines/false-667000/qemu.pid -device virtio-net-pci,netdev=net0,mac=32:59:5f:8f:aa:c2 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/20045-6982/.minikube/machines/false-667000/disk.qcow2
	I1204 15:45:30.591464   11196 main.go:141] libmachine: STDOUT: 
	I1204 15:45:30.591479   11196 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1204 15:45:30.591490   11196 client.go:171] duration metric: took 226.620709ms to LocalClient.Create
	I1204 15:45:32.593729   11196 start.go:128] duration metric: took 2.291488791s to createHost
	I1204 15:45:32.593807   11196 start.go:83] releasing machines lock for "false-667000", held for 2.291973792s
	W1204 15:45:32.594188   11196 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p false-667000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1204 15:45:32.604872   11196 out.go:201] 
	W1204 15:45:32.614977   11196 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W1204 15:45:32.615026   11196 out.go:270] * 
	W1204 15:45:32.617757   11196 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1204 15:45:32.626841   11196 out.go:201] 

** /stderr **
net_test.go:114: failed start: exit status 80
--- FAIL: TestNetworkPlugins/group/false/Start (9.84s)
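
The false-CNI variant fails identically, which localizes the problem to the host environment rather than to any CNI setting. Each run follows the same arc in the log: create host, hit the socket_vmnet refusal, delete the profile, wait five seconds ("Will try again in 5 seconds ..."), retry once, then exit with GUEST_PROVISION (exit status 80). A sketch of that arc, with createHost/deleteHost as hypothetical stand-ins rather than minikube's actual API:

package main

import (
	"errors"
	"fmt"
	"time"
)

// Hypothetical stand-ins for the create/delete steps seen in the log;
// the real logic lives in minikube's start.go and the qemu2 driver.
func createHost() error {
	return errors.New(`Failed to connect to "/var/run/socket_vmnet": Connection refused`)
}

func deleteHost() {} // "* Deleting ... in qemu2 ..."

func main() {
	if err := createHost(); err != nil {
		fmt.Println("! StartHost failed, but will try again:", err)
		deleteHost()
		time.Sleep(5 * time.Second) // "Will try again in 5 seconds ..."
		if err := createHost(); err != nil {
			// An environmental failure repeats unchanged on the retry.
			fmt.Println("X Exiting due to GUEST_PROVISION:", err)
		}
	}
}

The retry cannot help here: nothing between the attempts restarts the missing daemon, so both attempts fail the same way.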

TestNetworkPlugins/group/enable-default-cni/Start (10.05s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/Start
net_test.go:112: (dbg) Run:  out/minikube-darwin-arm64 start -p enable-default-cni-667000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=qemu2 
net_test.go:112: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p enable-default-cni-667000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=qemu2 : exit status 80 (10.050909416s)

-- stdout --
	* [enable-default-cni-667000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	  - MINIKUBE_LOCATION=20045
	  - KUBECONFIG=/Users/jenkins/minikube-integration/20045-6982/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/20045-6982/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "enable-default-cni-667000" primary control-plane node in "enable-default-cni-667000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "enable-default-cni-667000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I1204 15:45:34.947824   11307 out.go:345] Setting OutFile to fd 1 ...
	I1204 15:45:34.947973   11307 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1204 15:45:34.947976   11307 out.go:358] Setting ErrFile to fd 2...
	I1204 15:45:34.947978   11307 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1204 15:45:34.948102   11307 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/20045-6982/.minikube/bin
	I1204 15:45:34.949292   11307 out.go:352] Setting JSON to false
	I1204 15:45:34.967708   11307 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":6304,"bootTime":1733349630,"procs":548,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"15.0.1","kernelVersion":"24.0.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W1204 15:45:34.967797   11307 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I1204 15:45:34.973298   11307 out.go:177] * [enable-default-cni-667000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	I1204 15:45:34.981179   11307 out.go:177]   - MINIKUBE_LOCATION=20045
	I1204 15:45:34.981251   11307 notify.go:220] Checking for updates...
	I1204 15:45:34.989152   11307 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/20045-6982/kubeconfig
	I1204 15:45:34.992111   11307 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I1204 15:45:34.996125   11307 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1204 15:45:34.999264   11307 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/20045-6982/.minikube
	I1204 15:45:35.002157   11307 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1204 15:45:35.005497   11307 config.go:182] Loaded profile config "multinode-093000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.2
	I1204 15:45:35.005569   11307 config.go:182] Loaded profile config "stopped-upgrade-377000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I1204 15:45:35.005619   11307 driver.go:394] Setting default libvirt URI to qemu:///system
	I1204 15:45:35.009136   11307 out.go:177] * Using the qemu2 driver based on user configuration
	I1204 15:45:35.016158   11307 start.go:297] selected driver: qemu2
	I1204 15:45:35.016165   11307 start.go:901] validating driver "qemu2" against <nil>
	I1204 15:45:35.016172   11307 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1204 15:45:35.018664   11307 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I1204 15:45:35.023073   11307 out.go:177] * Automatically selected the socket_vmnet network
	E1204 15:45:35.026145   11307 start_flags.go:464] Found deprecated --enable-default-cni flag, setting --cni=bridge
	I1204 15:45:35.026159   11307 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1204 15:45:35.026173   11307 cni.go:84] Creating CNI manager for "bridge"
	I1204 15:45:35.026177   11307 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I1204 15:45:35.026214   11307 start.go:340] cluster config:
	{Name:enable-default-cni-667000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:enable-default-cni-667000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1204 15:45:35.031034   11307 iso.go:125] acquiring lock: {Name:mk798ab7885b095925d55c843b0600e7ea181045 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1204 15:45:35.039122   11307 out.go:177] * Starting "enable-default-cni-667000" primary control-plane node in "enable-default-cni-667000" cluster
	I1204 15:45:35.043171   11307 preload.go:131] Checking if preload exists for k8s version v1.31.2 and runtime docker
	I1204 15:45:35.043188   11307 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/20045-6982/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-docker-overlay2-arm64.tar.lz4
	I1204 15:45:35.043197   11307 cache.go:56] Caching tarball of preloaded images
	I1204 15:45:35.043279   11307 preload.go:172] Found /Users/jenkins/minikube-integration/20045-6982/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I1204 15:45:35.043284   11307 cache.go:59] Finished verifying existence of preloaded tar for v1.31.2 on docker
	I1204 15:45:35.043348   11307 profile.go:143] Saving config to /Users/jenkins/minikube-integration/20045-6982/.minikube/profiles/enable-default-cni-667000/config.json ...
	I1204 15:45:35.043365   11307 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/20045-6982/.minikube/profiles/enable-default-cni-667000/config.json: {Name:mkec636159125acdb7e41691de864f8967ab63d0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1204 15:45:35.043811   11307 start.go:360] acquireMachinesLock for enable-default-cni-667000: {Name:mkab5eeb7049d26d12029fc21a1411c94c7c7493 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1204 15:45:35.043860   11307 start.go:364] duration metric: took 41.333µs to acquireMachinesLock for "enable-default-cni-667000"
	I1204 15:45:35.043873   11307 start.go:93] Provisioning new machine with config: &{Name:enable-default-cni-667000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19917/minikube-v1.34.0-1730913550-19917-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:enable-default-cni-667000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1204 15:45:35.043898   11307 start.go:125] createHost starting for "" (driver="qemu2")
	I1204 15:45:35.052167   11307 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I1204 15:45:35.069869   11307 start.go:159] libmachine.API.Create for "enable-default-cni-667000" (driver="qemu2")
	I1204 15:45:35.069903   11307 client.go:168] LocalClient.Create starting
	I1204 15:45:35.069992   11307 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/20045-6982/.minikube/certs/ca.pem
	I1204 15:45:35.070032   11307 main.go:141] libmachine: Decoding PEM data...
	I1204 15:45:35.070047   11307 main.go:141] libmachine: Parsing certificate...
	I1204 15:45:35.070087   11307 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/20045-6982/.minikube/certs/cert.pem
	I1204 15:45:35.070117   11307 main.go:141] libmachine: Decoding PEM data...
	I1204 15:45:35.070125   11307 main.go:141] libmachine: Parsing certificate...
	I1204 15:45:35.070523   11307 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/20045-6982/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/20045-6982/.minikube/cache/iso/arm64/minikube-v1.34.0-1730913550-19917-arm64.iso...
	I1204 15:45:35.229945   11307 main.go:141] libmachine: Creating SSH key...
	I1204 15:45:35.392867   11307 main.go:141] libmachine: Creating Disk image...
	I1204 15:45:35.392880   11307 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I1204 15:45:35.393092   11307 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/20045-6982/.minikube/machines/enable-default-cni-667000/disk.qcow2.raw /Users/jenkins/minikube-integration/20045-6982/.minikube/machines/enable-default-cni-667000/disk.qcow2
	I1204 15:45:35.403688   11307 main.go:141] libmachine: STDOUT: 
	I1204 15:45:35.403711   11307 main.go:141] libmachine: STDERR: 
	I1204 15:45:35.403782   11307 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/20045-6982/.minikube/machines/enable-default-cni-667000/disk.qcow2 +20000M
	I1204 15:45:35.412500   11307 main.go:141] libmachine: STDOUT: Image resized.
	
	I1204 15:45:35.412517   11307 main.go:141] libmachine: STDERR: 
	I1204 15:45:35.412532   11307 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/20045-6982/.minikube/machines/enable-default-cni-667000/disk.qcow2.raw and /Users/jenkins/minikube-integration/20045-6982/.minikube/machines/enable-default-cni-667000/disk.qcow2
	I1204 15:45:35.412536   11307 main.go:141] libmachine: Starting QEMU VM...
	I1204 15:45:35.412548   11307 qemu.go:418] Using hvf for hardware acceleration
	I1204 15:45:35.412586   11307 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/20045-6982/.minikube/machines/enable-default-cni-667000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/20045-6982/.minikube/machines/enable-default-cni-667000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/20045-6982/.minikube/machines/enable-default-cni-667000/qemu.pid -device virtio-net-pci,netdev=net0,mac=12:a6:6c:80:0d:d8 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/20045-6982/.minikube/machines/enable-default-cni-667000/disk.qcow2
	I1204 15:45:35.414459   11307 main.go:141] libmachine: STDOUT: 
	I1204 15:45:35.414472   11307 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1204 15:45:35.414492   11307 client.go:171] duration metric: took 344.577583ms to LocalClient.Create
	I1204 15:45:37.416632   11307 start.go:128] duration metric: took 2.37269975s to createHost
	I1204 15:45:37.416664   11307 start.go:83] releasing machines lock for "enable-default-cni-667000", held for 2.372777167s
	W1204 15:45:37.416702   11307 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1204 15:45:37.422217   11307 out.go:177] * Deleting "enable-default-cni-667000" in qemu2 ...
	W1204 15:45:37.450068   11307 out.go:270] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1204 15:45:37.450080   11307 start.go:729] Will try again in 5 seconds ...
	I1204 15:45:42.452475   11307 start.go:360] acquireMachinesLock for enable-default-cni-667000: {Name:mkab5eeb7049d26d12029fc21a1411c94c7c7493 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1204 15:45:42.453099   11307 start.go:364] duration metric: took 513.875µs to acquireMachinesLock for "enable-default-cni-667000"
	I1204 15:45:42.453242   11307 start.go:93] Provisioning new machine with config: &{Name:enable-default-cni-667000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19917/minikube-v1.34.0-1730913550-19917-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:enable-default-cni-667000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1204 15:45:42.453649   11307 start.go:125] createHost starting for "" (driver="qemu2")
	I1204 15:45:42.471156   11307 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I1204 15:45:42.520153   11307 start.go:159] libmachine.API.Create for "enable-default-cni-667000" (driver="qemu2")
	I1204 15:45:42.520218   11307 client.go:168] LocalClient.Create starting
	I1204 15:45:42.520374   11307 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/20045-6982/.minikube/certs/ca.pem
	I1204 15:45:42.520479   11307 main.go:141] libmachine: Decoding PEM data...
	I1204 15:45:42.520498   11307 main.go:141] libmachine: Parsing certificate...
	I1204 15:45:42.520563   11307 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/20045-6982/.minikube/certs/cert.pem
	I1204 15:45:42.520626   11307 main.go:141] libmachine: Decoding PEM data...
	I1204 15:45:42.520643   11307 main.go:141] libmachine: Parsing certificate...
	I1204 15:45:42.521493   11307 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/20045-6982/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/20045-6982/.minikube/cache/iso/arm64/minikube-v1.34.0-1730913550-19917-arm64.iso...
	I1204 15:45:42.690051   11307 main.go:141] libmachine: Creating SSH key...
	I1204 15:45:42.893767   11307 main.go:141] libmachine: Creating Disk image...
	I1204 15:45:42.893777   11307 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I1204 15:45:42.893976   11307 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/20045-6982/.minikube/machines/enable-default-cni-667000/disk.qcow2.raw /Users/jenkins/minikube-integration/20045-6982/.minikube/machines/enable-default-cni-667000/disk.qcow2
	I1204 15:45:42.904006   11307 main.go:141] libmachine: STDOUT: 
	I1204 15:45:42.904028   11307 main.go:141] libmachine: STDERR: 
	I1204 15:45:42.904105   11307 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/20045-6982/.minikube/machines/enable-default-cni-667000/disk.qcow2 +20000M
	I1204 15:45:42.912826   11307 main.go:141] libmachine: STDOUT: Image resized.
	
	I1204 15:45:42.912841   11307 main.go:141] libmachine: STDERR: 
	I1204 15:45:42.912898   11307 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/20045-6982/.minikube/machines/enable-default-cni-667000/disk.qcow2.raw and /Users/jenkins/minikube-integration/20045-6982/.minikube/machines/enable-default-cni-667000/disk.qcow2
	I1204 15:45:42.912903   11307 main.go:141] libmachine: Starting QEMU VM...
	I1204 15:45:42.912915   11307 qemu.go:418] Using hvf for hardware acceleration
	I1204 15:45:42.912949   11307 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/20045-6982/.minikube/machines/enable-default-cni-667000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/20045-6982/.minikube/machines/enable-default-cni-667000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/20045-6982/.minikube/machines/enable-default-cni-667000/qemu.pid -device virtio-net-pci,netdev=net0,mac=2e:a1:65:f9:28:36 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/20045-6982/.minikube/machines/enable-default-cni-667000/disk.qcow2
	I1204 15:45:42.914804   11307 main.go:141] libmachine: STDOUT: 
	I1204 15:45:42.914818   11307 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1204 15:45:42.914830   11307 client.go:171] duration metric: took 394.603292ms to LocalClient.Create
	I1204 15:45:44.917082   11307 start.go:128] duration metric: took 2.463363917s to createHost
	I1204 15:45:44.917155   11307 start.go:83] releasing machines lock for "enable-default-cni-667000", held for 2.464009375s
	W1204 15:45:44.917633   11307 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p enable-default-cni-667000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p enable-default-cni-667000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1204 15:45:44.933694   11307 out.go:201] 
	W1204 15:45:44.936795   11307 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W1204 15:45:44.936850   11307 out.go:270] * 
	* 
	W1204 15:45:44.939708   11307 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1204 15:45:44.953785   11307 out.go:201] 

** /stderr **
net_test.go:114: failed start: exit status 80
--- FAIL: TestNetworkPlugins/group/enable-default-cni/Start (10.05s)
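Every start attempt in this group fails at the same step: socket_vmnet_client cannot reach the unix socket at /var/run/socket_vmnet, so the qemu-system-aarch64 process is never launched and the test exits with status 80. Below is a minimal Go sketch of that reachability check, useful for triaging this class of failure on the CI host. It is a hypothetical helper, not minikube code; only the socket path is taken from the logs above.

	// probe.go: dial the socket_vmnet unix socket the same way a client
	// would, and report whether a daemon is accepting connections.
	package main

	import (
		"fmt"
		"net"
		"os"
		"time"
	)

	func main() {
		const sock = "/var/run/socket_vmnet" // path as seen in the logs
		conn, err := net.DialTimeout("unix", sock, 2*time.Second)
		if err != nil {
			// "connection refused" here reproduces the failure above: the
			// socket file may exist, but no socket_vmnet daemon serves it.
			fmt.Fprintf(os.Stderr, "socket_vmnet not reachable: %v\n", err)
			os.Exit(1)
		}
		conn.Close()
		fmt.Println("socket_vmnet daemon is accepting connections")
	}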

TestNetworkPlugins/group/flannel/Start (10.11s)

=== RUN   TestNetworkPlugins/group/flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-darwin-arm64 start -p flannel-667000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=qemu2 
net_test.go:112: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p flannel-667000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=qemu2 : exit status 80 (10.111559292s)

-- stdout --
	* [flannel-667000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	  - MINIKUBE_LOCATION=20045
	  - KUBECONFIG=/Users/jenkins/minikube-integration/20045-6982/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/20045-6982/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "flannel-667000" primary control-plane node in "flannel-667000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "flannel-667000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I1204 15:45:47.310005   11419 out.go:345] Setting OutFile to fd 1 ...
	I1204 15:45:47.310173   11419 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1204 15:45:47.310177   11419 out.go:358] Setting ErrFile to fd 2...
	I1204 15:45:47.310179   11419 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1204 15:45:47.310306   11419 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/20045-6982/.minikube/bin
	I1204 15:45:47.311463   11419 out.go:352] Setting JSON to false
	I1204 15:45:47.329912   11419 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":6317,"bootTime":1733349630,"procs":549,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"15.0.1","kernelVersion":"24.0.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W1204 15:45:47.329989   11419 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I1204 15:45:47.335404   11419 out.go:177] * [flannel-667000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	I1204 15:45:47.342372   11419 out.go:177]   - MINIKUBE_LOCATION=20045
	I1204 15:45:47.342440   11419 notify.go:220] Checking for updates...
	I1204 15:45:47.351451   11419 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/20045-6982/kubeconfig
	I1204 15:45:47.355352   11419 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I1204 15:45:47.359404   11419 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1204 15:45:47.362405   11419 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/20045-6982/.minikube
	I1204 15:45:47.365379   11419 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1204 15:45:47.368771   11419 config.go:182] Loaded profile config "multinode-093000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.2
	I1204 15:45:47.368844   11419 config.go:182] Loaded profile config "stopped-upgrade-377000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I1204 15:45:47.368891   11419 driver.go:394] Setting default libvirt URI to qemu:///system
	I1204 15:45:47.373413   11419 out.go:177] * Using the qemu2 driver based on user configuration
	I1204 15:45:47.380366   11419 start.go:297] selected driver: qemu2
	I1204 15:45:47.380371   11419 start.go:901] validating driver "qemu2" against <nil>
	I1204 15:45:47.380377   11419 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1204 15:45:47.382878   11419 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I1204 15:45:47.385462   11419 out.go:177] * Automatically selected the socket_vmnet network
	I1204 15:45:47.393457   11419 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1204 15:45:47.393476   11419 cni.go:84] Creating CNI manager for "flannel"
	I1204 15:45:47.393480   11419 start_flags.go:319] Found "Flannel" CNI - setting NetworkPlugin=cni
	I1204 15:45:47.393517   11419 start.go:340] cluster config:
	{Name:flannel-667000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:flannel-667000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:flannel} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1204 15:45:47.398540   11419 iso.go:125] acquiring lock: {Name:mk798ab7885b095925d55c843b0600e7ea181045 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1204 15:45:47.406373   11419 out.go:177] * Starting "flannel-667000" primary control-plane node in "flannel-667000" cluster
	I1204 15:45:47.410388   11419 preload.go:131] Checking if preload exists for k8s version v1.31.2 and runtime docker
	I1204 15:45:47.410406   11419 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/20045-6982/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-docker-overlay2-arm64.tar.lz4
	I1204 15:45:47.410412   11419 cache.go:56] Caching tarball of preloaded images
	I1204 15:45:47.410485   11419 preload.go:172] Found /Users/jenkins/minikube-integration/20045-6982/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I1204 15:45:47.410490   11419 cache.go:59] Finished verifying existence of preloaded tar for v1.31.2 on docker
	I1204 15:45:47.410546   11419 profile.go:143] Saving config to /Users/jenkins/minikube-integration/20045-6982/.minikube/profiles/flannel-667000/config.json ...
	I1204 15:45:47.410557   11419 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/20045-6982/.minikube/profiles/flannel-667000/config.json: {Name:mk0faf61b41025921d217c78029ee37a4a056461 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1204 15:45:47.410985   11419 start.go:360] acquireMachinesLock for flannel-667000: {Name:mkab5eeb7049d26d12029fc21a1411c94c7c7493 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1204 15:45:47.411027   11419 start.go:364] duration metric: took 37.584µs to acquireMachinesLock for "flannel-667000"
	I1204 15:45:47.411039   11419 start.go:93] Provisioning new machine with config: &{Name:flannel-667000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19917/minikube-v1.34.0-1730913550-19917-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:flannel-667000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:flannel} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1204 15:45:47.411077   11419 start.go:125] createHost starting for "" (driver="qemu2")
	I1204 15:45:47.415362   11419 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I1204 15:45:47.430237   11419 start.go:159] libmachine.API.Create for "flannel-667000" (driver="qemu2")
	I1204 15:45:47.430272   11419 client.go:168] LocalClient.Create starting
	I1204 15:45:47.430351   11419 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/20045-6982/.minikube/certs/ca.pem
	I1204 15:45:47.430389   11419 main.go:141] libmachine: Decoding PEM data...
	I1204 15:45:47.430405   11419 main.go:141] libmachine: Parsing certificate...
	I1204 15:45:47.430440   11419 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/20045-6982/.minikube/certs/cert.pem
	I1204 15:45:47.430470   11419 main.go:141] libmachine: Decoding PEM data...
	I1204 15:45:47.430479   11419 main.go:141] libmachine: Parsing certificate...
	I1204 15:45:47.430939   11419 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/20045-6982/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/20045-6982/.minikube/cache/iso/arm64/minikube-v1.34.0-1730913550-19917-arm64.iso...
	I1204 15:45:47.590137   11419 main.go:141] libmachine: Creating SSH key...
	I1204 15:45:47.943011   11419 main.go:141] libmachine: Creating Disk image...
	I1204 15:45:47.943023   11419 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I1204 15:45:47.943256   11419 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/20045-6982/.minikube/machines/flannel-667000/disk.qcow2.raw /Users/jenkins/minikube-integration/20045-6982/.minikube/machines/flannel-667000/disk.qcow2
	I1204 15:45:47.953921   11419 main.go:141] libmachine: STDOUT: 
	I1204 15:45:47.953942   11419 main.go:141] libmachine: STDERR: 
	I1204 15:45:47.954010   11419 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/20045-6982/.minikube/machines/flannel-667000/disk.qcow2 +20000M
	I1204 15:45:47.962804   11419 main.go:141] libmachine: STDOUT: Image resized.
	
	I1204 15:45:47.962828   11419 main.go:141] libmachine: STDERR: 
	I1204 15:45:47.962849   11419 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/20045-6982/.minikube/machines/flannel-667000/disk.qcow2.raw and /Users/jenkins/minikube-integration/20045-6982/.minikube/machines/flannel-667000/disk.qcow2
	I1204 15:45:47.962854   11419 main.go:141] libmachine: Starting QEMU VM...
	I1204 15:45:47.962866   11419 qemu.go:418] Using hvf for hardware acceleration
	I1204 15:45:47.962906   11419 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/20045-6982/.minikube/machines/flannel-667000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/20045-6982/.minikube/machines/flannel-667000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/20045-6982/.minikube/machines/flannel-667000/qemu.pid -device virtio-net-pci,netdev=net0,mac=36:84:ae:aa:59:3c -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/20045-6982/.minikube/machines/flannel-667000/disk.qcow2
	I1204 15:45:47.964768   11419 main.go:141] libmachine: STDOUT: 
	I1204 15:45:47.964784   11419 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1204 15:45:47.964808   11419 client.go:171] duration metric: took 534.526583ms to LocalClient.Create
	I1204 15:45:49.967048   11419 start.go:128] duration metric: took 2.555909833s to createHost
	I1204 15:45:49.967150   11419 start.go:83] releasing machines lock for "flannel-667000", held for 2.556089125s
	W1204 15:45:49.967227   11419 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1204 15:45:49.982794   11419 out.go:177] * Deleting "flannel-667000" in qemu2 ...
	W1204 15:45:50.010493   11419 out.go:270] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1204 15:45:50.010545   11419 start.go:729] Will try again in 5 seconds ...
	I1204 15:45:55.012878   11419 start.go:360] acquireMachinesLock for flannel-667000: {Name:mkab5eeb7049d26d12029fc21a1411c94c7c7493 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1204 15:45:55.013621   11419 start.go:364] duration metric: took 557.333µs to acquireMachinesLock for "flannel-667000"
	I1204 15:45:55.013692   11419 start.go:93] Provisioning new machine with config: &{Name:flannel-667000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19917/minikube-v1.34.0-1730913550-19917-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:flannel-667000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:flannel} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1204 15:45:55.013975   11419 start.go:125] createHost starting for "" (driver="qemu2")
	I1204 15:45:55.025776   11419 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I1204 15:45:55.073098   11419 start.go:159] libmachine.API.Create for "flannel-667000" (driver="qemu2")
	I1204 15:45:55.073139   11419 client.go:168] LocalClient.Create starting
	I1204 15:45:55.073338   11419 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/20045-6982/.minikube/certs/ca.pem
	I1204 15:45:55.073439   11419 main.go:141] libmachine: Decoding PEM data...
	I1204 15:45:55.073455   11419 main.go:141] libmachine: Parsing certificate...
	I1204 15:45:55.073525   11419 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/20045-6982/.minikube/certs/cert.pem
	I1204 15:45:55.073581   11419 main.go:141] libmachine: Decoding PEM data...
	I1204 15:45:55.073593   11419 main.go:141] libmachine: Parsing certificate...
	I1204 15:45:55.074402   11419 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/20045-6982/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/20045-6982/.minikube/cache/iso/arm64/minikube-v1.34.0-1730913550-19917-arm64.iso...
	I1204 15:45:55.244606   11419 main.go:141] libmachine: Creating SSH key...
	I1204 15:45:55.318145   11419 main.go:141] libmachine: Creating Disk image...
	I1204 15:45:55.318155   11419 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I1204 15:45:55.318376   11419 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/20045-6982/.minikube/machines/flannel-667000/disk.qcow2.raw /Users/jenkins/minikube-integration/20045-6982/.minikube/machines/flannel-667000/disk.qcow2
	I1204 15:45:55.328804   11419 main.go:141] libmachine: STDOUT: 
	I1204 15:45:55.328824   11419 main.go:141] libmachine: STDERR: 
	I1204 15:45:55.328894   11419 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/20045-6982/.minikube/machines/flannel-667000/disk.qcow2 +20000M
	I1204 15:45:55.338161   11419 main.go:141] libmachine: STDOUT: Image resized.
	
	I1204 15:45:55.338181   11419 main.go:141] libmachine: STDERR: 
	I1204 15:45:55.338191   11419 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/20045-6982/.minikube/machines/flannel-667000/disk.qcow2.raw and /Users/jenkins/minikube-integration/20045-6982/.minikube/machines/flannel-667000/disk.qcow2
	I1204 15:45:55.338196   11419 main.go:141] libmachine: Starting QEMU VM...
	I1204 15:45:55.338208   11419 qemu.go:418] Using hvf for hardware acceleration
	I1204 15:45:55.338238   11419 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/20045-6982/.minikube/machines/flannel-667000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/20045-6982/.minikube/machines/flannel-667000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/20045-6982/.minikube/machines/flannel-667000/qemu.pid -device virtio-net-pci,netdev=net0,mac=fa:00:15:96:4d:05 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/20045-6982/.minikube/machines/flannel-667000/disk.qcow2
	I1204 15:45:55.340131   11419 main.go:141] libmachine: STDOUT: 
	I1204 15:45:55.340146   11419 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1204 15:45:55.340160   11419 client.go:171] duration metric: took 267.014667ms to LocalClient.Create
	I1204 15:45:57.342400   11419 start.go:128] duration metric: took 2.328346416s to createHost
	I1204 15:45:57.342450   11419 start.go:83] releasing machines lock for "flannel-667000", held for 2.328781333s
	W1204 15:45:57.342669   11419 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p flannel-667000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p flannel-667000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1204 15:45:57.357672   11419 out.go:201] 
	W1204 15:45:57.361720   11419 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W1204 15:45:57.361750   11419 out.go:270] * 
	* 
	W1204 15:45:57.363366   11419 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1204 15:45:57.372759   11419 out.go:201] 

** /stderr **
net_test.go:114: failed start: exit status 80
--- FAIL: TestNetworkPlugins/group/flannel/Start (10.11s)
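The same retry shape repeats in every failed start above: one createHost attempt, a cleanup delete of the half-created machine, a fixed five-second wait ("Will try again in 5 seconds ..."), a second attempt, then a terminal GUEST_PROVISION error and exit status 80. The Go sketch below mirrors that control flow under stated assumptions: the helper names are hypothetical, the error string is copied from the logs, and minikube's actual implementation in start.go may differ.

	// retry.go: one fixed-delay retry around a host-creation step.
	package main

	import (
		"errors"
		"fmt"
		"time"
	)

	// createHost stands in for the driver's host-creation step; here it
	// always fails the way this CI host does with no socket_vmnet daemon.
	func createHost(name string) error {
		return errors.New(`Failed to connect to "/var/run/socket_vmnet": Connection refused`)
	}

	func startWithRetry(name string) error {
		err := createHost(name)
		if err == nil {
			return nil
		}
		fmt.Printf("! StartHost failed, but will try again: %v\n", err)
		time.Sleep(5 * time.Second) // matches "Will try again in 5 seconds ..."
		if err := createHost(name); err != nil {
			return fmt.Errorf("GUEST_PROVISION: error provisioning guest: %w", err)
		}
		return nil
	}

	func main() {
		if err := startWithRetry("flannel-667000"); err != nil {
			fmt.Println("X Exiting due to", err)
		}
	}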

TestNetworkPlugins/group/bridge/Start (9.93s)

=== RUN   TestNetworkPlugins/group/bridge/Start
net_test.go:112: (dbg) Run:  out/minikube-darwin-arm64 start -p bridge-667000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=qemu2 
net_test.go:112: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p bridge-667000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=qemu2 : exit status 80 (9.931666875s)

-- stdout --
	* [bridge-667000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	  - MINIKUBE_LOCATION=20045
	  - KUBECONFIG=/Users/jenkins/minikube-integration/20045-6982/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/20045-6982/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "bridge-667000" primary control-plane node in "bridge-667000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "bridge-667000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I1204 15:45:59.885339   11539 out.go:345] Setting OutFile to fd 1 ...
	I1204 15:45:59.885499   11539 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1204 15:45:59.885502   11539 out.go:358] Setting ErrFile to fd 2...
	I1204 15:45:59.885505   11539 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1204 15:45:59.885628   11539 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/20045-6982/.minikube/bin
	I1204 15:45:59.886743   11539 out.go:352] Setting JSON to false
	I1204 15:45:59.904573   11539 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":6329,"bootTime":1733349630,"procs":550,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"15.0.1","kernelVersion":"24.0.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W1204 15:45:59.904636   11539 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I1204 15:45:59.911526   11539 out.go:177] * [bridge-667000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	I1204 15:45:59.919392   11539 out.go:177]   - MINIKUBE_LOCATION=20045
	I1204 15:45:59.919445   11539 notify.go:220] Checking for updates...
	I1204 15:45:59.926436   11539 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/20045-6982/kubeconfig
	I1204 15:45:59.929416   11539 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I1204 15:45:59.932522   11539 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1204 15:45:59.935476   11539 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/20045-6982/.minikube
	I1204 15:45:59.936936   11539 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1204 15:45:59.940778   11539 config.go:182] Loaded profile config "multinode-093000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.2
	I1204 15:45:59.940847   11539 config.go:182] Loaded profile config "stopped-upgrade-377000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I1204 15:45:59.940894   11539 driver.go:394] Setting default libvirt URI to qemu:///system
	I1204 15:45:59.944527   11539 out.go:177] * Using the qemu2 driver based on user configuration
	I1204 15:45:59.950511   11539 start.go:297] selected driver: qemu2
	I1204 15:45:59.950520   11539 start.go:901] validating driver "qemu2" against <nil>
	I1204 15:45:59.950545   11539 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1204 15:45:59.953015   11539 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I1204 15:45:59.957549   11539 out.go:177] * Automatically selected the socket_vmnet network
	I1204 15:45:59.959198   11539 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1204 15:45:59.959219   11539 cni.go:84] Creating CNI manager for "bridge"
	I1204 15:45:59.959225   11539 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I1204 15:45:59.959261   11539 start.go:340] cluster config:
	{Name:bridge-667000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:bridge-667000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1204 15:45:59.963837   11539 iso.go:125] acquiring lock: {Name:mk798ab7885b095925d55c843b0600e7ea181045 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1204 15:45:59.972523   11539 out.go:177] * Starting "bridge-667000" primary control-plane node in "bridge-667000" cluster
	I1204 15:45:59.976440   11539 preload.go:131] Checking if preload exists for k8s version v1.31.2 and runtime docker
	I1204 15:45:59.976455   11539 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/20045-6982/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-docker-overlay2-arm64.tar.lz4
	I1204 15:45:59.976463   11539 cache.go:56] Caching tarball of preloaded images
	I1204 15:45:59.976545   11539 preload.go:172] Found /Users/jenkins/minikube-integration/20045-6982/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I1204 15:45:59.976551   11539 cache.go:59] Finished verifying existence of preloaded tar for v1.31.2 on docker
	I1204 15:45:59.976615   11539 profile.go:143] Saving config to /Users/jenkins/minikube-integration/20045-6982/.minikube/profiles/bridge-667000/config.json ...
	I1204 15:45:59.976635   11539 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/20045-6982/.minikube/profiles/bridge-667000/config.json: {Name:mkf0d5197d7878f75ae5c8f1a5094b4d55180c28 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1204 15:45:59.977096   11539 start.go:360] acquireMachinesLock for bridge-667000: {Name:mkab5eeb7049d26d12029fc21a1411c94c7c7493 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1204 15:45:59.977143   11539 start.go:364] duration metric: took 41.542µs to acquireMachinesLock for "bridge-667000"
	I1204 15:45:59.977156   11539 start.go:93] Provisioning new machine with config: &{Name:bridge-667000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19917/minikube-v1.34.0-1730913550-19917-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:bridge-667000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1204 15:45:59.977182   11539 start.go:125] createHost starting for "" (driver="qemu2")
	I1204 15:45:59.985468   11539 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I1204 15:46:00.002933   11539 start.go:159] libmachine.API.Create for "bridge-667000" (driver="qemu2")
	I1204 15:46:00.002958   11539 client.go:168] LocalClient.Create starting
	I1204 15:46:00.003031   11539 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/20045-6982/.minikube/certs/ca.pem
	I1204 15:46:00.003066   11539 main.go:141] libmachine: Decoding PEM data...
	I1204 15:46:00.003078   11539 main.go:141] libmachine: Parsing certificate...
	I1204 15:46:00.003114   11539 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/20045-6982/.minikube/certs/cert.pem
	I1204 15:46:00.003145   11539 main.go:141] libmachine: Decoding PEM data...
	I1204 15:46:00.003153   11539 main.go:141] libmachine: Parsing certificate...
	I1204 15:46:00.003514   11539 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/20045-6982/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/20045-6982/.minikube/cache/iso/arm64/minikube-v1.34.0-1730913550-19917-arm64.iso...
	I1204 15:46:00.164252   11539 main.go:141] libmachine: Creating SSH key...
	I1204 15:46:00.367103   11539 main.go:141] libmachine: Creating Disk image...
	I1204 15:46:00.367117   11539 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I1204 15:46:00.367351   11539 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/20045-6982/.minikube/machines/bridge-667000/disk.qcow2.raw /Users/jenkins/minikube-integration/20045-6982/.minikube/machines/bridge-667000/disk.qcow2
	I1204 15:46:00.377625   11539 main.go:141] libmachine: STDOUT: 
	I1204 15:46:00.377641   11539 main.go:141] libmachine: STDERR: 
	I1204 15:46:00.377727   11539 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/20045-6982/.minikube/machines/bridge-667000/disk.qcow2 +20000M
	I1204 15:46:00.386393   11539 main.go:141] libmachine: STDOUT: Image resized.
	
	I1204 15:46:00.386410   11539 main.go:141] libmachine: STDERR: 
	I1204 15:46:00.386431   11539 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/20045-6982/.minikube/machines/bridge-667000/disk.qcow2.raw and /Users/jenkins/minikube-integration/20045-6982/.minikube/machines/bridge-667000/disk.qcow2
	I1204 15:46:00.386437   11539 main.go:141] libmachine: Starting QEMU VM...
	I1204 15:46:00.386450   11539 qemu.go:418] Using hvf for hardware acceleration
	I1204 15:46:00.386481   11539 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/20045-6982/.minikube/machines/bridge-667000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/20045-6982/.minikube/machines/bridge-667000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/20045-6982/.minikube/machines/bridge-667000/qemu.pid -device virtio-net-pci,netdev=net0,mac=32:8f:fb:7f:45:31 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/20045-6982/.minikube/machines/bridge-667000/disk.qcow2
	I1204 15:46:00.388323   11539 main.go:141] libmachine: STDOUT: 
	I1204 15:46:00.388338   11539 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1204 15:46:00.388360   11539 client.go:171] duration metric: took 385.393167ms to LocalClient.Create
	I1204 15:46:02.390488   11539 start.go:128] duration metric: took 2.413268542s to createHost
	I1204 15:46:02.390567   11539 start.go:83] releasing machines lock for "bridge-667000", held for 2.413394833s
	W1204 15:46:02.390612   11539 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1204 15:46:02.399839   11539 out.go:177] * Deleting "bridge-667000" in qemu2 ...
	W1204 15:46:02.419557   11539 out.go:270] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1204 15:46:02.419576   11539 start.go:729] Will try again in 5 seconds ...
	I1204 15:46:07.421789   11539 start.go:360] acquireMachinesLock for bridge-667000: {Name:mkab5eeb7049d26d12029fc21a1411c94c7c7493 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1204 15:46:07.422117   11539 start.go:364] duration metric: took 263.083µs to acquireMachinesLock for "bridge-667000"
	I1204 15:46:07.422160   11539 start.go:93] Provisioning new machine with config: &{Name:bridge-667000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19917/minikube-v1.34.0-1730913550-19917-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:bridge-667000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1204 15:46:07.422314   11539 start.go:125] createHost starting for "" (driver="qemu2")
	I1204 15:46:07.431328   11539 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I1204 15:46:07.468174   11539 start.go:159] libmachine.API.Create for "bridge-667000" (driver="qemu2")
	I1204 15:46:07.468229   11539 client.go:168] LocalClient.Create starting
	I1204 15:46:07.468361   11539 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/20045-6982/.minikube/certs/ca.pem
	I1204 15:46:07.468428   11539 main.go:141] libmachine: Decoding PEM data...
	I1204 15:46:07.468445   11539 main.go:141] libmachine: Parsing certificate...
	I1204 15:46:07.468504   11539 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/20045-6982/.minikube/certs/cert.pem
	I1204 15:46:07.468556   11539 main.go:141] libmachine: Decoding PEM data...
	I1204 15:46:07.468575   11539 main.go:141] libmachine: Parsing certificate...
	I1204 15:46:07.469218   11539 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/20045-6982/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/20045-6982/.minikube/cache/iso/arm64/minikube-v1.34.0-1730913550-19917-arm64.iso...
	I1204 15:46:07.640309   11539 main.go:141] libmachine: Creating SSH key...
	I1204 15:46:07.719244   11539 main.go:141] libmachine: Creating Disk image...
	I1204 15:46:07.719250   11539 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I1204 15:46:07.719450   11539 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/20045-6982/.minikube/machines/bridge-667000/disk.qcow2.raw /Users/jenkins/minikube-integration/20045-6982/.minikube/machines/bridge-667000/disk.qcow2
	I1204 15:46:07.729690   11539 main.go:141] libmachine: STDOUT: 
	I1204 15:46:07.729711   11539 main.go:141] libmachine: STDERR: 
	I1204 15:46:07.729785   11539 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/20045-6982/.minikube/machines/bridge-667000/disk.qcow2 +20000M
	I1204 15:46:07.738414   11539 main.go:141] libmachine: STDOUT: Image resized.
	
	I1204 15:46:07.738430   11539 main.go:141] libmachine: STDERR: 
	I1204 15:46:07.738444   11539 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/20045-6982/.minikube/machines/bridge-667000/disk.qcow2.raw and /Users/jenkins/minikube-integration/20045-6982/.minikube/machines/bridge-667000/disk.qcow2
	I1204 15:46:07.738451   11539 main.go:141] libmachine: Starting QEMU VM...
	I1204 15:46:07.738460   11539 qemu.go:418] Using hvf for hardware acceleration
	I1204 15:46:07.738502   11539 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/20045-6982/.minikube/machines/bridge-667000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/20045-6982/.minikube/machines/bridge-667000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/20045-6982/.minikube/machines/bridge-667000/qemu.pid -device virtio-net-pci,netdev=net0,mac=9a:3f:5d:62:a5:03 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/20045-6982/.minikube/machines/bridge-667000/disk.qcow2
	I1204 15:46:07.740329   11539 main.go:141] libmachine: STDOUT: 
	I1204 15:46:07.740346   11539 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1204 15:46:07.740357   11539 client.go:171] duration metric: took 272.119084ms to LocalClient.Create
	I1204 15:46:09.742567   11539 start.go:128] duration metric: took 2.320201334s to createHost
	I1204 15:46:09.742665   11539 start.go:83] releasing machines lock for "bridge-667000", held for 2.320510666s
	W1204 15:46:09.743037   11539 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p bridge-667000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p bridge-667000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1204 15:46:09.755715   11539 out.go:201] 
	W1204 15:46:09.758772   11539 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W1204 15:46:09.758815   11539 out.go:270] * 
	* 
	W1204 15:46:09.760903   11539 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1204 15:46:09.773707   11539 out.go:201] 

                                                
                                                
** /stderr **
net_test.go:114: failed start: exit status 80
--- FAIL: TestNetworkPlugins/group/bridge/Start (9.93s)
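All of the start failures in this group share one root cause: nothing is listening on /var/run/socket_vmnet, so socket_vmnet_client exits with "Connection refused" before qemu-system-aarch64 is ever launched. A minimal Go probe reproduces the check (a sketch; only the socket path is taken from the log above):

	package main

	import (
		"fmt"
		"net"
		"time"
	)

	func main() {
		// Dial the unix socket that socket_vmnet_client wraps (path from the log).
		conn, err := net.DialTimeout("unix", "/var/run/socket_vmnet", 2*time.Second)
		if err != nil {
			// On this CI host the daemon was down, hence "connection refused".
			fmt.Println("socket_vmnet not reachable:", err)
			return
		}
		conn.Close()
		fmt.Println("socket_vmnet is accepting connections")
	}

Restarting the socket_vmnet daemon on the host (for example with `sudo brew services start socket_vmnet`, if it was installed through Homebrew) should clear this entire class of failures.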

                                                
                                    
TestNetworkPlugins/group/kubenet/Start (9.75s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kubenet/Start
net_test.go:112: (dbg) Run:  out/minikube-darwin-arm64 start -p kubenet-667000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --network-plugin=kubenet --driver=qemu2 
net_test.go:112: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p kubenet-667000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --network-plugin=kubenet --driver=qemu2 : exit status 80 (9.748215208s)

                                                
                                                
-- stdout --
	* [kubenet-667000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	  - MINIKUBE_LOCATION=20045
	  - KUBECONFIG=/Users/jenkins/minikube-integration/20045-6982/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/20045-6982/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "kubenet-667000" primary control-plane node in "kubenet-667000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "kubenet-667000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1204 15:46:12.156777   11648 out.go:345] Setting OutFile to fd 1 ...
	I1204 15:46:12.156953   11648 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1204 15:46:12.156956   11648 out.go:358] Setting ErrFile to fd 2...
	I1204 15:46:12.156959   11648 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1204 15:46:12.157073   11648 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/20045-6982/.minikube/bin
	I1204 15:46:12.158225   11648 out.go:352] Setting JSON to false
	I1204 15:46:12.176175   11648 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":6342,"bootTime":1733349630,"procs":547,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"15.0.1","kernelVersion":"24.0.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W1204 15:46:12.176264   11648 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I1204 15:46:12.183047   11648 out.go:177] * [kubenet-667000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	I1204 15:46:12.190909   11648 out.go:177]   - MINIKUBE_LOCATION=20045
	I1204 15:46:12.190982   11648 notify.go:220] Checking for updates...
	I1204 15:46:12.198039   11648 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/20045-6982/kubeconfig
	I1204 15:46:12.201011   11648 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I1204 15:46:12.204079   11648 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1204 15:46:12.207056   11648 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/20045-6982/.minikube
	I1204 15:46:12.208553   11648 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1204 15:46:12.212354   11648 config.go:182] Loaded profile config "multinode-093000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.2
	I1204 15:46:12.212423   11648 config.go:182] Loaded profile config "stopped-upgrade-377000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I1204 15:46:12.212471   11648 driver.go:394] Setting default libvirt URI to qemu:///system
	I1204 15:46:12.216064   11648 out.go:177] * Using the qemu2 driver based on user configuration
	I1204 15:46:12.222685   11648 start.go:297] selected driver: qemu2
	I1204 15:46:12.222691   11648 start.go:901] validating driver "qemu2" against <nil>
	I1204 15:46:12.222698   11648 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1204 15:46:12.225070   11648 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I1204 15:46:12.230043   11648 out.go:177] * Automatically selected the socket_vmnet network
	I1204 15:46:12.231547   11648 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1204 15:46:12.231564   11648 cni.go:80] network plugin configured as "kubenet", returning disabled
	I1204 15:46:12.231602   11648 start.go:340] cluster config:
	{Name:kubenet-667000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:kubenet-667000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:kubenet FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1204 15:46:12.236171   11648 iso.go:125] acquiring lock: {Name:mk798ab7885b095925d55c843b0600e7ea181045 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1204 15:46:12.244044   11648 out.go:177] * Starting "kubenet-667000" primary control-plane node in "kubenet-667000" cluster
	I1204 15:46:12.247999   11648 preload.go:131] Checking if preload exists for k8s version v1.31.2 and runtime docker
	I1204 15:46:12.248014   11648 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/20045-6982/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-docker-overlay2-arm64.tar.lz4
	I1204 15:46:12.248022   11648 cache.go:56] Caching tarball of preloaded images
	I1204 15:46:12.248093   11648 preload.go:172] Found /Users/jenkins/minikube-integration/20045-6982/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I1204 15:46:12.248102   11648 cache.go:59] Finished verifying existence of preloaded tar for v1.31.2 on docker
	I1204 15:46:12.248160   11648 profile.go:143] Saving config to /Users/jenkins/minikube-integration/20045-6982/.minikube/profiles/kubenet-667000/config.json ...
	I1204 15:46:12.248174   11648 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/20045-6982/.minikube/profiles/kubenet-667000/config.json: {Name:mk8931b982d935dd58854840398118f5745a37c7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1204 15:46:12.248489   11648 start.go:360] acquireMachinesLock for kubenet-667000: {Name:mkab5eeb7049d26d12029fc21a1411c94c7c7493 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1204 15:46:12.248537   11648 start.go:364] duration metric: took 42.709µs to acquireMachinesLock for "kubenet-667000"
	I1204 15:46:12.248549   11648 start.go:93] Provisioning new machine with config: &{Name:kubenet-667000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19917/minikube-v1.34.0-1730913550-19917-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:kubenet-667000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:kubenet FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1204 15:46:12.248574   11648 start.go:125] createHost starting for "" (driver="qemu2")
	I1204 15:46:12.255981   11648 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I1204 15:46:12.271558   11648 start.go:159] libmachine.API.Create for "kubenet-667000" (driver="qemu2")
	I1204 15:46:12.271587   11648 client.go:168] LocalClient.Create starting
	I1204 15:46:12.271655   11648 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/20045-6982/.minikube/certs/ca.pem
	I1204 15:46:12.271689   11648 main.go:141] libmachine: Decoding PEM data...
	I1204 15:46:12.271703   11648 main.go:141] libmachine: Parsing certificate...
	I1204 15:46:12.271744   11648 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/20045-6982/.minikube/certs/cert.pem
	I1204 15:46:12.271772   11648 main.go:141] libmachine: Decoding PEM data...
	I1204 15:46:12.271784   11648 main.go:141] libmachine: Parsing certificate...
	I1204 15:46:12.272177   11648 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/20045-6982/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/20045-6982/.minikube/cache/iso/arm64/minikube-v1.34.0-1730913550-19917-arm64.iso...
	I1204 15:46:12.436067   11648 main.go:141] libmachine: Creating SSH key...
	I1204 15:46:12.497821   11648 main.go:141] libmachine: Creating Disk image...
	I1204 15:46:12.497829   11648 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I1204 15:46:12.498024   11648 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/20045-6982/.minikube/machines/kubenet-667000/disk.qcow2.raw /Users/jenkins/minikube-integration/20045-6982/.minikube/machines/kubenet-667000/disk.qcow2
	I1204 15:46:12.507786   11648 main.go:141] libmachine: STDOUT: 
	I1204 15:46:12.507802   11648 main.go:141] libmachine: STDERR: 
	I1204 15:46:12.507851   11648 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/20045-6982/.minikube/machines/kubenet-667000/disk.qcow2 +20000M
	I1204 15:46:12.516552   11648 main.go:141] libmachine: STDOUT: Image resized.
	
	I1204 15:46:12.516571   11648 main.go:141] libmachine: STDERR: 
	I1204 15:46:12.516590   11648 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/20045-6982/.minikube/machines/kubenet-667000/disk.qcow2.raw and /Users/jenkins/minikube-integration/20045-6982/.minikube/machines/kubenet-667000/disk.qcow2
	I1204 15:46:12.516595   11648 main.go:141] libmachine: Starting QEMU VM...
	I1204 15:46:12.516608   11648 qemu.go:418] Using hvf for hardware acceleration
	I1204 15:46:12.516642   11648 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/20045-6982/.minikube/machines/kubenet-667000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/20045-6982/.minikube/machines/kubenet-667000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/20045-6982/.minikube/machines/kubenet-667000/qemu.pid -device virtio-net-pci,netdev=net0,mac=02:85:c3:04:c5:b1 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/20045-6982/.minikube/machines/kubenet-667000/disk.qcow2
	I1204 15:46:12.518502   11648 main.go:141] libmachine: STDOUT: 
	I1204 15:46:12.518516   11648 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1204 15:46:12.518538   11648 client.go:171] duration metric: took 246.942667ms to LocalClient.Create
	I1204 15:46:14.520672   11648 start.go:128] duration metric: took 2.272058792s to createHost
	I1204 15:46:14.520731   11648 start.go:83] releasing machines lock for "kubenet-667000", held for 2.272165583s
	W1204 15:46:14.520774   11648 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1204 15:46:14.538049   11648 out.go:177] * Deleting "kubenet-667000" in qemu2 ...
	W1204 15:46:14.557589   11648 out.go:270] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1204 15:46:14.557606   11648 start.go:729] Will try again in 5 seconds ...
	I1204 15:46:19.559813   11648 start.go:360] acquireMachinesLock for kubenet-667000: {Name:mkab5eeb7049d26d12029fc21a1411c94c7c7493 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1204 15:46:19.560163   11648 start.go:364] duration metric: took 290.375µs to acquireMachinesLock for "kubenet-667000"
	I1204 15:46:19.560230   11648 start.go:93] Provisioning new machine with config: &{Name:kubenet-667000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19917/minikube-v1.34.0-1730913550-19917-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:kubenet-667000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:kubenet FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1204 15:46:19.560375   11648 start.go:125] createHost starting for "" (driver="qemu2")
	I1204 15:46:19.572728   11648 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I1204 15:46:19.606250   11648 start.go:159] libmachine.API.Create for "kubenet-667000" (driver="qemu2")
	I1204 15:46:19.606297   11648 client.go:168] LocalClient.Create starting
	I1204 15:46:19.606413   11648 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/20045-6982/.minikube/certs/ca.pem
	I1204 15:46:19.606490   11648 main.go:141] libmachine: Decoding PEM data...
	I1204 15:46:19.606507   11648 main.go:141] libmachine: Parsing certificate...
	I1204 15:46:19.606575   11648 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/20045-6982/.minikube/certs/cert.pem
	I1204 15:46:19.606626   11648 main.go:141] libmachine: Decoding PEM data...
	I1204 15:46:19.606642   11648 main.go:141] libmachine: Parsing certificate...
	I1204 15:46:19.607163   11648 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/20045-6982/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/20045-6982/.minikube/cache/iso/arm64/minikube-v1.34.0-1730913550-19917-arm64.iso...
	I1204 15:46:19.775283   11648 main.go:141] libmachine: Creating SSH key...
	I1204 15:46:19.803881   11648 main.go:141] libmachine: Creating Disk image...
	I1204 15:46:19.803887   11648 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I1204 15:46:19.804080   11648 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/20045-6982/.minikube/machines/kubenet-667000/disk.qcow2.raw /Users/jenkins/minikube-integration/20045-6982/.minikube/machines/kubenet-667000/disk.qcow2
	I1204 15:46:19.813923   11648 main.go:141] libmachine: STDOUT: 
	I1204 15:46:19.813949   11648 main.go:141] libmachine: STDERR: 
	I1204 15:46:19.814009   11648 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/20045-6982/.minikube/machines/kubenet-667000/disk.qcow2 +20000M
	I1204 15:46:19.822790   11648 main.go:141] libmachine: STDOUT: Image resized.
	
	I1204 15:46:19.822803   11648 main.go:141] libmachine: STDERR: 
	I1204 15:46:19.822821   11648 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/20045-6982/.minikube/machines/kubenet-667000/disk.qcow2.raw and /Users/jenkins/minikube-integration/20045-6982/.minikube/machines/kubenet-667000/disk.qcow2
	I1204 15:46:19.822826   11648 main.go:141] libmachine: Starting QEMU VM...
	I1204 15:46:19.822836   11648 qemu.go:418] Using hvf for hardware acceleration
	I1204 15:46:19.822863   11648 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/20045-6982/.minikube/machines/kubenet-667000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/20045-6982/.minikube/machines/kubenet-667000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/20045-6982/.minikube/machines/kubenet-667000/qemu.pid -device virtio-net-pci,netdev=net0,mac=ca:d7:e2:10:59:ad -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/20045-6982/.minikube/machines/kubenet-667000/disk.qcow2
	I1204 15:46:19.824691   11648 main.go:141] libmachine: STDOUT: 
	I1204 15:46:19.824702   11648 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1204 15:46:19.824716   11648 client.go:171] duration metric: took 218.409708ms to LocalClient.Create
	I1204 15:46:21.826937   11648 start.go:128] duration metric: took 2.266509959s to createHost
	I1204 15:46:21.827017   11648 start.go:83] releasing machines lock for "kubenet-667000", held for 2.266817375s
	W1204 15:46:21.827329   11648 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p kubenet-667000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p kubenet-667000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1204 15:46:21.837255   11648 out.go:201] 
	W1204 15:46:21.845362   11648 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W1204 15:46:21.845404   11648 out.go:270] * 
	* 
	W1204 15:46:21.848025   11648 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1204 15:46:21.858255   11648 out.go:201] 

                                                
                                                
** /stderr **
net_test.go:114: failed start: exit status 80
--- FAIL: TestNetworkPlugins/group/kubenet/Start (9.75s)
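Note that the disk-image preparation succeeds on every attempt; only the network hookup fails. For reference, the two qemu-img invocations logged above amount to the following sketch (paths are placeholders, not minikube's actual code):

	package main

	import (
		"log"
		"os/exec"
	)

	func main() {
		// Placeholder paths; the log uses the profile's machine directory.
		raw := "/tmp/disk.qcow2.raw"
		img := "/tmp/disk.qcow2"

		// Step 1, as logged: convert the raw scratch image to qcow2.
		if out, err := exec.Command("qemu-img", "convert", "-f", "raw", "-O", "qcow2", raw, img).CombinedOutput(); err != nil {
			log.Fatalf("convert failed: %v\n%s", err, out)
		}
		// Step 2, as logged: grow the qcow2 image by 20000 MB.
		if out, err := exec.Command("qemu-img", "resize", img, "+20000M").CombinedOutput(); err != nil {
			log.Fatalf("resize failed: %v\n%s", err, out)
		}
		log.Println("disk image ready")
	}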

                                                
                                    
TestStartStop/group/old-k8s-version/serial/FirstStart (10.1s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-darwin-arm64 start -p old-k8s-version-105000 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=qemu2  --kubernetes-version=v1.20.0
start_stop_delete_test.go:186: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p old-k8s-version-105000 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=qemu2  --kubernetes-version=v1.20.0: exit status 80 (10.029313875s)

                                                
                                                
-- stdout --
	* [old-k8s-version-105000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	  - MINIKUBE_LOCATION=20045
	  - KUBECONFIG=/Users/jenkins/minikube-integration/20045-6982/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/20045-6982/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "old-k8s-version-105000" primary control-plane node in "old-k8s-version-105000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "old-k8s-version-105000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1204 15:46:24.254876   11765 out.go:345] Setting OutFile to fd 1 ...
	I1204 15:46:24.255034   11765 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1204 15:46:24.255037   11765 out.go:358] Setting ErrFile to fd 2...
	I1204 15:46:24.255040   11765 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1204 15:46:24.255177   11765 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/20045-6982/.minikube/bin
	I1204 15:46:24.256415   11765 out.go:352] Setting JSON to false
	I1204 15:46:24.274245   11765 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":6354,"bootTime":1733349630,"procs":550,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"15.0.1","kernelVersion":"24.0.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W1204 15:46:24.274318   11765 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I1204 15:46:24.281175   11765 out.go:177] * [old-k8s-version-105000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	I1204 15:46:24.288191   11765 out.go:177]   - MINIKUBE_LOCATION=20045
	I1204 15:46:24.288224   11765 notify.go:220] Checking for updates...
	I1204 15:46:24.297763   11765 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/20045-6982/kubeconfig
	I1204 15:46:24.301161   11765 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I1204 15:46:24.304156   11765 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1204 15:46:24.307130   11765 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/20045-6982/.minikube
	I1204 15:46:24.310119   11765 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1204 15:46:24.313491   11765 config.go:182] Loaded profile config "multinode-093000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.2
	I1204 15:46:24.313566   11765 config.go:182] Loaded profile config "stopped-upgrade-377000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I1204 15:46:24.313616   11765 driver.go:394] Setting default libvirt URI to qemu:///system
	I1204 15:46:24.317088   11765 out.go:177] * Using the qemu2 driver based on user configuration
	I1204 15:46:24.324103   11765 start.go:297] selected driver: qemu2
	I1204 15:46:24.324109   11765 start.go:901] validating driver "qemu2" against <nil>
	I1204 15:46:24.324116   11765 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1204 15:46:24.326672   11765 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I1204 15:46:24.330173   11765 out.go:177] * Automatically selected the socket_vmnet network
	I1204 15:46:24.333273   11765 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1204 15:46:24.333289   11765 cni.go:84] Creating CNI manager for ""
	I1204 15:46:24.333311   11765 cni.go:162] CNI unnecessary in this configuration, recommending no CNI
	I1204 15:46:24.333342   11765 start.go:340] cluster config:
	{Name:old-k8s-version-105000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-105000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1204 15:46:24.338113   11765 iso.go:125] acquiring lock: {Name:mk798ab7885b095925d55c843b0600e7ea181045 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1204 15:46:24.346152   11765 out.go:177] * Starting "old-k8s-version-105000" primary control-plane node in "old-k8s-version-105000" cluster
	I1204 15:46:24.350145   11765 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime docker
	I1204 15:46:24.350164   11765 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/20045-6982/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4
	I1204 15:46:24.350177   11765 cache.go:56] Caching tarball of preloaded images
	I1204 15:46:24.350270   11765 preload.go:172] Found /Users/jenkins/minikube-integration/20045-6982/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I1204 15:46:24.350276   11765 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on docker
	I1204 15:46:24.350331   11765 profile.go:143] Saving config to /Users/jenkins/minikube-integration/20045-6982/.minikube/profiles/old-k8s-version-105000/config.json ...
	I1204 15:46:24.350343   11765 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/20045-6982/.minikube/profiles/old-k8s-version-105000/config.json: {Name:mk1a9dfe1698ae45dc1be64b059b3bb1fc87fef4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1204 15:46:24.350786   11765 start.go:360] acquireMachinesLock for old-k8s-version-105000: {Name:mkab5eeb7049d26d12029fc21a1411c94c7c7493 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1204 15:46:24.350840   11765 start.go:364] duration metric: took 44.792µs to acquireMachinesLock for "old-k8s-version-105000"
	I1204 15:46:24.350854   11765 start.go:93] Provisioning new machine with config: &{Name:old-k8s-version-105000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19917/minikube-v1.34.0-1730913550-19917-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-105000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1204 15:46:24.350881   11765 start.go:125] createHost starting for "" (driver="qemu2")
	I1204 15:46:24.359100   11765 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I1204 15:46:24.376394   11765 start.go:159] libmachine.API.Create for "old-k8s-version-105000" (driver="qemu2")
	I1204 15:46:24.376427   11765 client.go:168] LocalClient.Create starting
	I1204 15:46:24.376508   11765 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/20045-6982/.minikube/certs/ca.pem
	I1204 15:46:24.376556   11765 main.go:141] libmachine: Decoding PEM data...
	I1204 15:46:24.376572   11765 main.go:141] libmachine: Parsing certificate...
	I1204 15:46:24.376611   11765 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/20045-6982/.minikube/certs/cert.pem
	I1204 15:46:24.376642   11765 main.go:141] libmachine: Decoding PEM data...
	I1204 15:46:24.376650   11765 main.go:141] libmachine: Parsing certificate...
	I1204 15:46:24.377043   11765 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/20045-6982/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/20045-6982/.minikube/cache/iso/arm64/minikube-v1.34.0-1730913550-19917-arm64.iso...
	I1204 15:46:24.537555   11765 main.go:141] libmachine: Creating SSH key...
	I1204 15:46:24.765503   11765 main.go:141] libmachine: Creating Disk image...
	I1204 15:46:24.765515   11765 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I1204 15:46:24.765786   11765 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/20045-6982/.minikube/machines/old-k8s-version-105000/disk.qcow2.raw /Users/jenkins/minikube-integration/20045-6982/.minikube/machines/old-k8s-version-105000/disk.qcow2
	I1204 15:46:24.776415   11765 main.go:141] libmachine: STDOUT: 
	I1204 15:46:24.776443   11765 main.go:141] libmachine: STDERR: 
	I1204 15:46:24.776508   11765 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/20045-6982/.minikube/machines/old-k8s-version-105000/disk.qcow2 +20000M
	I1204 15:46:24.785229   11765 main.go:141] libmachine: STDOUT: Image resized.
	
	I1204 15:46:24.785244   11765 main.go:141] libmachine: STDERR: 
	I1204 15:46:24.785265   11765 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/20045-6982/.minikube/machines/old-k8s-version-105000/disk.qcow2.raw and /Users/jenkins/minikube-integration/20045-6982/.minikube/machines/old-k8s-version-105000/disk.qcow2
	I1204 15:46:24.785272   11765 main.go:141] libmachine: Starting QEMU VM...
	I1204 15:46:24.785282   11765 qemu.go:418] Using hvf for hardware acceleration
	I1204 15:46:24.785315   11765 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/20045-6982/.minikube/machines/old-k8s-version-105000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/20045-6982/.minikube/machines/old-k8s-version-105000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/20045-6982/.minikube/machines/old-k8s-version-105000/qemu.pid -device virtio-net-pci,netdev=net0,mac=b2:60:91:07:ad:8a -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/20045-6982/.minikube/machines/old-k8s-version-105000/disk.qcow2
	I1204 15:46:24.787145   11765 main.go:141] libmachine: STDOUT: 
	I1204 15:46:24.787169   11765 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1204 15:46:24.787193   11765 client.go:171] duration metric: took 410.755542ms to LocalClient.Create
	I1204 15:46:26.788304   11765 start.go:128] duration metric: took 2.437385208s to createHost
	I1204 15:46:26.788366   11765 start.go:83] releasing machines lock for "old-k8s-version-105000", held for 2.437495834s
	W1204 15:46:26.788408   11765 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1204 15:46:26.798666   11765 out.go:177] * Deleting "old-k8s-version-105000" in qemu2 ...
	W1204 15:46:26.817760   11765 out.go:270] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1204 15:46:26.817775   11765 start.go:729] Will try again in 5 seconds ...
	I1204 15:46:31.818993   11765 start.go:360] acquireMachinesLock for old-k8s-version-105000: {Name:mkab5eeb7049d26d12029fc21a1411c94c7c7493 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1204 15:46:31.819761   11765 start.go:364] duration metric: took 670.083µs to acquireMachinesLock for "old-k8s-version-105000"
	I1204 15:46:31.819832   11765 start.go:93] Provisioning new machine with config: &{Name:old-k8s-version-105000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19917/minikube-v1.34.0-1730913550-19917-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-105000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1204 15:46:31.820122   11765 start.go:125] createHost starting for "" (driver="qemu2")
	I1204 15:46:31.829693   11765 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I1204 15:46:31.874928   11765 start.go:159] libmachine.API.Create for "old-k8s-version-105000" (driver="qemu2")
	I1204 15:46:31.874986   11765 client.go:168] LocalClient.Create starting
	I1204 15:46:31.875159   11765 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/20045-6982/.minikube/certs/ca.pem
	I1204 15:46:31.875242   11765 main.go:141] libmachine: Decoding PEM data...
	I1204 15:46:31.875256   11765 main.go:141] libmachine: Parsing certificate...
	I1204 15:46:31.875324   11765 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/20045-6982/.minikube/certs/cert.pem
	I1204 15:46:31.875380   11765 main.go:141] libmachine: Decoding PEM data...
	I1204 15:46:31.875390   11765 main.go:141] libmachine: Parsing certificate...
	I1204 15:46:31.875975   11765 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/20045-6982/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/20045-6982/.minikube/cache/iso/arm64/minikube-v1.34.0-1730913550-19917-arm64.iso...
	I1204 15:46:32.044414   11765 main.go:141] libmachine: Creating SSH key...
	I1204 15:46:32.181697   11765 main.go:141] libmachine: Creating Disk image...
	I1204 15:46:32.181704   11765 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I1204 15:46:32.181925   11765 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/20045-6982/.minikube/machines/old-k8s-version-105000/disk.qcow2.raw /Users/jenkins/minikube-integration/20045-6982/.minikube/machines/old-k8s-version-105000/disk.qcow2
	I1204 15:46:32.192216   11765 main.go:141] libmachine: STDOUT: 
	I1204 15:46:32.192242   11765 main.go:141] libmachine: STDERR: 
	I1204 15:46:32.192299   11765 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/20045-6982/.minikube/machines/old-k8s-version-105000/disk.qcow2 +20000M
	I1204 15:46:32.201109   11765 main.go:141] libmachine: STDOUT: Image resized.
	
	I1204 15:46:32.201127   11765 main.go:141] libmachine: STDERR: 
	I1204 15:46:32.201139   11765 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/20045-6982/.minikube/machines/old-k8s-version-105000/disk.qcow2.raw and /Users/jenkins/minikube-integration/20045-6982/.minikube/machines/old-k8s-version-105000/disk.qcow2
	I1204 15:46:32.201145   11765 main.go:141] libmachine: Starting QEMU VM...
	I1204 15:46:32.201159   11765 qemu.go:418] Using hvf for hardware acceleration
	I1204 15:46:32.201186   11765 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/20045-6982/.minikube/machines/old-k8s-version-105000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/20045-6982/.minikube/machines/old-k8s-version-105000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/20045-6982/.minikube/machines/old-k8s-version-105000/qemu.pid -device virtio-net-pci,netdev=net0,mac=0e:0d:4d:a8:88:9b -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/20045-6982/.minikube/machines/old-k8s-version-105000/disk.qcow2
	I1204 15:46:32.203155   11765 main.go:141] libmachine: STDOUT: 
	I1204 15:46:32.203169   11765 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1204 15:46:32.203182   11765 client.go:171] duration metric: took 328.1885ms to LocalClient.Create
	I1204 15:46:34.205550   11765 start.go:128] duration metric: took 2.385240958s to createHost
	I1204 15:46:34.205638   11765 start.go:83] releasing machines lock for "old-k8s-version-105000", held for 2.385830417s
	W1204 15:46:34.205996   11765 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p old-k8s-version-105000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1204 15:46:34.219799   11765 out.go:201] 
	W1204 15:46:34.224934   11765 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W1204 15:46:34.224984   11765 out.go:270] * 
	W1204 15:46:34.227902   11765 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1204 15:46:34.241759   11765 out.go:201] 

** /stderr **
start_stop_delete_test.go:188: failed starting minikube -first start-. args "out/minikube-darwin-arm64 start -p old-k8s-version-105000 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=qemu2  --kubernetes-version=v1.20.0": exit status 80
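
Every failure in this group reduces to the single root cause visible in the stderr above: the qemu2 driver launches the VM through /opt/socket_vmnet/bin/socket_vmnet_client, and the dial of /var/run/socket_vmnet is refused, meaning no socket_vmnet daemon is serving that path on the build host. A minimal Go sketch of the same check, outside the test suite (the socket path comes from the log; the probe itself is illustrative, not part of minikube):

// probe_socket_vmnet.go - check whether anything is listening on the unix
// socket that qemu2 networking depends on.
package main

import (
	"fmt"
	"net"
	"os"
	"time"
)

func main() {
	const sock = "/var/run/socket_vmnet" // path reported in the failures above
	conn, err := net.DialTimeout("unix", sock, 2*time.Second)
	if err != nil {
		// "Connection refused" here matches the driver error exactly: the
		// socket file may exist, but no socket_vmnet daemon is accepting on it.
		fmt.Fprintf(os.Stderr, "socket_vmnet not reachable: %v\n", err)
		os.Exit(1)
	}
	conn.Close()
	fmt.Println("socket_vmnet is accepting connections")
}
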
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-105000 -n old-k8s-version-105000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-105000 -n old-k8s-version-105000: exit status 7 (71.463084ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "old-k8s-version-105000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/FirstStart (10.10s)
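
Note that disk preparation itself succeeds before the network step fails: the log shows qemu-img convert followed by qemu-img resize +20000M, both with empty STDERR. A stripped-down sketch of those two invocations (paths here are placeholders; the real ones live under .minikube/machines/<profile>/):

// disk_image.go - sketch of the two qemu-img steps the log shows succeeding.
package main

import (
	"fmt"
	"os/exec"
)

// prepareDisk mirrors the logged sequence: convert the raw seed image to
// qcow2, then grow it by the requested amount.
func prepareDisk(raw, qcow2 string, extraMB int) error {
	if out, err := exec.Command("qemu-img", "convert", "-f", "raw", "-O", "qcow2", raw, qcow2).CombinedOutput(); err != nil {
		return fmt.Errorf("convert: %v: %s", err, out)
	}
	// Equivalent of the logged "qemu-img resize ... +20000M".
	if out, err := exec.Command("qemu-img", "resize", qcow2, fmt.Sprintf("+%dM", extraMB)).CombinedOutput(); err != nil {
		return fmt.Errorf("resize: %v: %s", err, out)
	}
	return nil
}

func main() {
	if err := prepareDisk("disk.qcow2.raw", "disk.qcow2", 20000); err != nil {
		fmt.Println(err)
	}
}
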

TestStartStop/group/old-k8s-version/serial/DeployApp (0.1s)

=== RUN   TestStartStop/group/old-k8s-version/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context old-k8s-version-105000 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) Non-zero exit: kubectl --context old-k8s-version-105000 create -f testdata/busybox.yaml: exit status 1 (28.792167ms)

** stderr ** 
	error: context "old-k8s-version-105000" does not exist

** /stderr **
start_stop_delete_test.go:196: kubectl --context old-k8s-version-105000 create -f testdata/busybox.yaml failed: exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-105000 -n old-k8s-version-105000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-105000 -n old-k8s-version-105000: exit status 7 (34.1985ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "old-k8s-version-105000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-105000 -n old-k8s-version-105000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-105000 -n old-k8s-version-105000: exit status 7 (33.036083ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "old-k8s-version-105000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/DeployApp (0.10s)
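
The kubectl failures here are a downstream symptom, not a separate bug: because the VM never started, minikube never wrote a context for the profile into the kubeconfig, so every kubectl --context old-k8s-version-105000 call fails the same way. A sketch of the underlying lookup using client-go's kubeconfig loader (illustrative, not the test's own code):

// context_check.go - why every kubectl call above fails identically.
package main

import (
	"fmt"
	"os"

	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Load kubeconfig the way kubectl does (KUBECONFIG env, then default path).
	cfg, err := clientcmd.NewDefaultClientConfigLoadingRules().Load()
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	name := "old-k8s-version-105000" // context name from the failures above
	if _, ok := cfg.Contexts[name]; !ok {
		// The condition behind: error: context "old-k8s-version-105000" does not exist
		fmt.Printf("context %q does not exist\n", name)
	}
}
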

TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (0.13s)

=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-darwin-arm64 addons enable metrics-server -p old-k8s-version-105000 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context old-k8s-version-105000 describe deploy/metrics-server -n kube-system
start_stop_delete_test.go:215: (dbg) Non-zero exit: kubectl --context old-k8s-version-105000 describe deploy/metrics-server -n kube-system: exit status 1 (27.639666ms)

** stderr ** 
	error: context "old-k8s-version-105000" does not exist

** /stderr **
start_stop_delete_test.go:217: failed to get info on auto-pause deployments. args "kubectl --context old-k8s-version-105000 describe deploy/metrics-server -n kube-system": exit status 1
start_stop_delete_test.go:221: addon did not load correct image. Expected to contain " fake.domain/registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-105000 -n old-k8s-version-105000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-105000 -n old-k8s-version-105000: exit status 7 (34.832917ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "old-k8s-version-105000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (0.13s)
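
The assertion above derives its expected image string by prefixing the --registries override onto the --images value, which is how " fake.domain/registry.k8s.io/echoserver:1.4" follows from the flags passed to addons enable. A tiny sketch of that composition (the helper name is invented; this is not minikube's internal code):

// addon_image.go - composition the test asserts on.
package main

import "fmt"

// customImage prefixes a --registries override onto a --images value.
func customImage(registry, image string) string {
	if registry == "" {
		return image
	}
	return registry + "/" + image
}

func main() {
	// Reproduces the expected "fake.domain/registry.k8s.io/echoserver:1.4".
	fmt.Println(customImage("fake.domain", "registry.k8s.io/echoserver:1.4"))
}
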

TestStartStop/group/old-k8s-version/serial/SecondStart (5.24s)

=== RUN   TestStartStop/group/old-k8s-version/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-darwin-arm64 start -p old-k8s-version-105000 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=qemu2  --kubernetes-version=v1.20.0
start_stop_delete_test.go:256: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p old-k8s-version-105000 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=qemu2  --kubernetes-version=v1.20.0: exit status 80 (5.206332667s)

-- stdout --
	* [old-k8s-version-105000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	  - MINIKUBE_LOCATION=20045
	  - KUBECONFIG=/Users/jenkins/minikube-integration/20045-6982/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/20045-6982/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Kubernetes 1.31.2 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.31.2
	* Using the qemu2 driver based on existing profile
	* Starting "old-k8s-version-105000" primary control-plane node in "old-k8s-version-105000" cluster
	* Restarting existing qemu2 VM for "old-k8s-version-105000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "old-k8s-version-105000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I1204 15:46:36.806943   11819 out.go:345] Setting OutFile to fd 1 ...
	I1204 15:46:36.807111   11819 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1204 15:46:36.807115   11819 out.go:358] Setting ErrFile to fd 2...
	I1204 15:46:36.807117   11819 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1204 15:46:36.807262   11819 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/20045-6982/.minikube/bin
	I1204 15:46:36.808440   11819 out.go:352] Setting JSON to false
	I1204 15:46:36.826803   11819 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":6366,"bootTime":1733349630,"procs":552,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"15.0.1","kernelVersion":"24.0.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W1204 15:46:36.826871   11819 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I1204 15:46:36.831747   11819 out.go:177] * [old-k8s-version-105000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	I1204 15:46:36.846483   11819 notify.go:220] Checking for updates...
	I1204 15:46:36.850790   11819 out.go:177]   - MINIKUBE_LOCATION=20045
	I1204 15:46:36.853774   11819 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/20045-6982/kubeconfig
	I1204 15:46:36.856812   11819 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I1204 15:46:36.859910   11819 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1204 15:46:36.861438   11819 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/20045-6982/.minikube
	I1204 15:46:36.864771   11819 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1204 15:46:36.868089   11819 config.go:182] Loaded profile config "old-k8s-version-105000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.20.0
	I1204 15:46:36.871804   11819 out.go:177] * Kubernetes 1.31.2 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.31.2
	I1204 15:46:36.874790   11819 driver.go:394] Setting default libvirt URI to qemu:///system
	I1204 15:46:36.878785   11819 out.go:177] * Using the qemu2 driver based on existing profile
	I1204 15:46:36.885801   11819 start.go:297] selected driver: qemu2
	I1204 15:46:36.885808   11819 start.go:901] validating driver "qemu2" against &{Name:old-k8s-version-105000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19917/minikube-v1.34.0-1730913550-19917-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-105000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1204 15:46:36.885865   11819 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1204 15:46:36.888394   11819 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1204 15:46:36.888414   11819 cni.go:84] Creating CNI manager for ""
	I1204 15:46:36.888437   11819 cni.go:162] CNI unnecessary in this configuration, recommending no CNI
	I1204 15:46:36.888460   11819 start.go:340] cluster config:
	{Name:old-k8s-version-105000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19917/minikube-v1.34.0-1730913550-19917-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-105000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1204 15:46:36.892660   11819 iso.go:125] acquiring lock: {Name:mk798ab7885b095925d55c843b0600e7ea181045 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1204 15:46:36.902777   11819 out.go:177] * Starting "old-k8s-version-105000" primary control-plane node in "old-k8s-version-105000" cluster
	I1204 15:46:36.907857   11819 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime docker
	I1204 15:46:36.907871   11819 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/20045-6982/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4
	I1204 15:46:36.907885   11819 cache.go:56] Caching tarball of preloaded images
	I1204 15:46:36.907955   11819 preload.go:172] Found /Users/jenkins/minikube-integration/20045-6982/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I1204 15:46:36.907960   11819 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on docker
	I1204 15:46:36.908006   11819 profile.go:143] Saving config to /Users/jenkins/minikube-integration/20045-6982/.minikube/profiles/old-k8s-version-105000/config.json ...
	I1204 15:46:36.908351   11819 start.go:360] acquireMachinesLock for old-k8s-version-105000: {Name:mkab5eeb7049d26d12029fc21a1411c94c7c7493 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1204 15:46:36.908382   11819 start.go:364] duration metric: took 22.458µs to acquireMachinesLock for "old-k8s-version-105000"
	I1204 15:46:36.908392   11819 start.go:96] Skipping create...Using existing machine configuration
	I1204 15:46:36.908395   11819 fix.go:54] fixHost starting: 
	I1204 15:46:36.908506   11819 fix.go:112] recreateIfNeeded on old-k8s-version-105000: state=Stopped err=<nil>
	W1204 15:46:36.908513   11819 fix.go:138] unexpected machine state, will restart: <nil>
	I1204 15:46:36.912747   11819 out.go:177] * Restarting existing qemu2 VM for "old-k8s-version-105000" ...
	I1204 15:46:36.919843   11819 qemu.go:418] Using hvf for hardware acceleration
	I1204 15:46:36.919883   11819 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/20045-6982/.minikube/machines/old-k8s-version-105000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/20045-6982/.minikube/machines/old-k8s-version-105000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/20045-6982/.minikube/machines/old-k8s-version-105000/qemu.pid -device virtio-net-pci,netdev=net0,mac=0e:0d:4d:a8:88:9b -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/20045-6982/.minikube/machines/old-k8s-version-105000/disk.qcow2
	I1204 15:46:36.922150   11819 main.go:141] libmachine: STDOUT: 
	I1204 15:46:36.922168   11819 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1204 15:46:36.922198   11819 fix.go:56] duration metric: took 13.802042ms for fixHost
	I1204 15:46:36.922204   11819 start.go:83] releasing machines lock for "old-k8s-version-105000", held for 13.816708ms
	W1204 15:46:36.922211   11819 start.go:714] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W1204 15:46:36.922242   11819 out.go:270] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1204 15:46:36.922246   11819 start.go:729] Will try again in 5 seconds ...
	I1204 15:46:41.924353   11819 start.go:360] acquireMachinesLock for old-k8s-version-105000: {Name:mkab5eeb7049d26d12029fc21a1411c94c7c7493 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1204 15:46:41.924449   11819 start.go:364] duration metric: took 77.5µs to acquireMachinesLock for "old-k8s-version-105000"
	I1204 15:46:41.924464   11819 start.go:96] Skipping create...Using existing machine configuration
	I1204 15:46:41.924468   11819 fix.go:54] fixHost starting: 
	I1204 15:46:41.924616   11819 fix.go:112] recreateIfNeeded on old-k8s-version-105000: state=Stopped err=<nil>
	W1204 15:46:41.924622   11819 fix.go:138] unexpected machine state, will restart: <nil>
	I1204 15:46:41.933074   11819 out.go:177] * Restarting existing qemu2 VM for "old-k8s-version-105000" ...
	I1204 15:46:41.940022   11819 qemu.go:418] Using hvf for hardware acceleration
	I1204 15:46:41.940079   11819 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/20045-6982/.minikube/machines/old-k8s-version-105000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/20045-6982/.minikube/machines/old-k8s-version-105000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/20045-6982/.minikube/machines/old-k8s-version-105000/qemu.pid -device virtio-net-pci,netdev=net0,mac=0e:0d:4d:a8:88:9b -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/20045-6982/.minikube/machines/old-k8s-version-105000/disk.qcow2
	I1204 15:46:41.942381   11819 main.go:141] libmachine: STDOUT: 
	I1204 15:46:41.942395   11819 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1204 15:46:41.942415   11819 fix.go:56] duration metric: took 17.946375ms for fixHost
	I1204 15:46:41.942420   11819 start.go:83] releasing machines lock for "old-k8s-version-105000", held for 17.965459ms
	W1204 15:46:41.942481   11819 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p old-k8s-version-105000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1204 15:46:41.949135   11819 out.go:201] 
	W1204 15:46:41.957087   11819 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W1204 15:46:41.957097   11819 out.go:270] * 
	W1204 15:46:41.957571   11819 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1204 15:46:41.969133   11819 out.go:201] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:259: failed to start minikube post-stop. args "out/minikube-darwin-arm64 start -p old-k8s-version-105000 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=qemu2  --kubernetes-version=v1.20.0": exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-105000 -n old-k8s-version-105000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-105000 -n old-k8s-version-105000: exit status 7 (38.033833ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "old-k8s-version-105000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/SecondStart (5.24s)
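
The second start shows minikube's retry shape: fixHost fails, the driver waits a fixed five seconds ("Will try again in 5 seconds ..."), tries once more, and only then exits 80 with GUEST_PROVISION. A sketch of that control flow (startHost is a stand-in for the driver call, not minikube's API):

// retry_start.go - the one-retry-then-give-up shape visible in the log.
package main

import (
	"errors"
	"fmt"
	"time"
)

// startHost stands in for the qemu2 driver start that keeps failing above.
func startHost() error {
	return errors.New(`Failed to connect to "/var/run/socket_vmnet": Connection refused`)
}

func main() {
	if err := startHost(); err != nil {
		fmt.Println("! StartHost failed, but will try again:", err)
		time.Sleep(5 * time.Second) // matches "Will try again in 5 seconds ..."
		if err := startHost(); err != nil {
			// After the second failure the command gives up and exits with
			// status 80 (GUEST_PROVISION) instead of retrying forever.
			fmt.Println("X Exiting due to GUEST_PROVISION:", err)
		}
	}
}
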

TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (0.04s)

=== RUN   TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop
start_stop_delete_test.go:275: failed waiting for 'addon dashboard' pod post-stop-start: client config: context "old-k8s-version-105000" does not exist
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-105000 -n old-k8s-version-105000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-105000 -n old-k8s-version-105000: exit status 7 (36.018792ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "old-k8s-version-105000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (0.04s)

TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (0.06s)

=== RUN   TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop
start_stop_delete_test.go:288: failed waiting for 'addon dashboard' pod post-stop-start: client config: context "old-k8s-version-105000" does not exist
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context old-k8s-version-105000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
start_stop_delete_test.go:291: (dbg) Non-zero exit: kubectl --context old-k8s-version-105000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard: exit status 1 (29.93325ms)

** stderr ** 
	error: context "old-k8s-version-105000" does not exist

** /stderr **
start_stop_delete_test.go:293: failed to get info on kubernetes-dashboard deployments. args "kubectl --context old-k8s-version-105000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard": exit status 1
start_stop_delete_test.go:297: addon did not load correct image. Expected to contain " registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-105000 -n old-k8s-version-105000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-105000 -n old-k8s-version-105000: exit status 7 (33.828333ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "old-k8s-version-105000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (0.06s)

TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.08s)

=== RUN   TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-darwin-arm64 -p old-k8s-version-105000 image list --format=json
start_stop_delete_test.go:304: v1.20.0 images missing (-want +got):
  []string{
- 	"k8s.gcr.io/coredns:1.7.0",
- 	"k8s.gcr.io/etcd:3.4.13-0",
- 	"k8s.gcr.io/k8s-minikube/storage-provisioner:v5",
- 	"k8s.gcr.io/kube-apiserver:v1.20.0",
- 	"k8s.gcr.io/kube-controller-manager:v1.20.0",
- 	"k8s.gcr.io/kube-proxy:v1.20.0",
- 	"k8s.gcr.io/kube-scheduler:v1.20.0",
- 	"k8s.gcr.io/pause:3.2",
  }
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-105000 -n old-k8s-version-105000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-105000 -n old-k8s-version-105000: exit status 7 (33.356625ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "old-k8s-version-105000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.08s)
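
The want/got diff above is a set comparison: each image expected for v1.20.0 must appear in the output of "minikube image list --format=json", and because the VM never ran, the actual list is empty and all eight k8s.gcr.io images are reported missing. A sketch of the comparison (expected set copied from the diff; everything else is illustrative):

// image_diff.go - the set comparison behind "v1.20.0 images missing".
package main

import "fmt"

// missing returns every wanted image absent from the actual list.
func missing(want, got []string) []string {
	have := make(map[string]bool, len(got))
	for _, g := range got {
		have[g] = true
	}
	var out []string
	for _, w := range want {
		if !have[w] {
			out = append(out, w)
		}
	}
	return out
}

func main() {
	want := []string{ // expected set from the diff above
		"k8s.gcr.io/coredns:1.7.0",
		"k8s.gcr.io/etcd:3.4.13-0",
		"k8s.gcr.io/k8s-minikube/storage-provisioner:v5",
		"k8s.gcr.io/kube-apiserver:v1.20.0",
		"k8s.gcr.io/kube-controller-manager:v1.20.0",
		"k8s.gcr.io/kube-proxy:v1.20.0",
		"k8s.gcr.io/kube-scheduler:v1.20.0",
		"k8s.gcr.io/pause:3.2",
	}
	got := []string{} // the VM never started, so image list returned nothing
	fmt.Println(missing(want, got))  // all eight show up as missing
}
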

TestStartStop/group/old-k8s-version/serial/Pause (0.11s)

=== RUN   TestStartStop/group/old-k8s-version/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-arm64 pause -p old-k8s-version-105000 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-darwin-arm64 pause -p old-k8s-version-105000 --alsologtostderr -v=1: exit status 83 (43.688542ms)

-- stdout --
	* The control-plane node old-k8s-version-105000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p old-k8s-version-105000"

-- /stdout --
** stderr ** 
	I1204 15:46:42.230380   11844 out.go:345] Setting OutFile to fd 1 ...
	I1204 15:46:42.231299   11844 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1204 15:46:42.231303   11844 out.go:358] Setting ErrFile to fd 2...
	I1204 15:46:42.231306   11844 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1204 15:46:42.231436   11844 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/20045-6982/.minikube/bin
	I1204 15:46:42.231625   11844 out.go:352] Setting JSON to false
	I1204 15:46:42.231635   11844 mustload.go:65] Loading cluster: old-k8s-version-105000
	I1204 15:46:42.231854   11844 config.go:182] Loaded profile config "old-k8s-version-105000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.20.0
	I1204 15:46:42.236718   11844 out.go:177] * The control-plane node old-k8s-version-105000 host is not running: state=Stopped
	I1204 15:46:42.240655   11844 out.go:177]   To start a cluster, run: "minikube start -p old-k8s-version-105000"

** /stderr **
start_stop_delete_test.go:311: out/minikube-darwin-arm64 pause -p old-k8s-version-105000 --alsologtostderr -v=1 failed: exit status 83
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-105000 -n old-k8s-version-105000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-105000 -n old-k8s-version-105000: exit status 7 (35.294542ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "old-k8s-version-105000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-105000 -n old-k8s-version-105000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-105000 -n old-k8s-version-105000: exit status 7 (33.5635ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "old-k8s-version-105000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/Pause (0.11s)
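
Throughout the post-mortems, "exit status 7 (may be ok)" is the helpers treating a stopped host as an expected state for "minikube status" rather than a test error (pause itself exits 83 when the control-plane host is not running). A sketch of distinguishing that exit code with os/exec (binary path and profile name taken from the log; the interpretation mirrors the helpers above):

// status_exit.go - treat `minikube status` exit status 7 as "stopped, maybe ok".
package main

import (
	"errors"
	"fmt"
	"os/exec"
)

func main() {
	cmd := exec.Command("out/minikube-darwin-arm64", "status",
		"--format={{.Host}}", "-p", "old-k8s-version-105000")
	out, err := cmd.Output() // stdout is still captured on non-zero exit
	var ee *exec.ExitError
	if errors.As(err, &ee) && ee.ExitCode() == 7 {
		// Matches "status error: exit status 7 (may be ok)" in the post-mortems.
		fmt.Printf("host state %q: stopped, not a hard test failure\n", string(out))
		return
	}
	if err != nil {
		fmt.Println("status check failed:", err)
		return
	}
	fmt.Println("host running:", string(out))
}
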

TestStartStop/group/no-preload/serial/FirstStart (9.88s)

=== RUN   TestStartStop/group/no-preload/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-darwin-arm64 start -p no-preload-756000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=qemu2  --kubernetes-version=v1.31.2
start_stop_delete_test.go:186: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p no-preload-756000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=qemu2  --kubernetes-version=v1.31.2: exit status 80 (9.845124041s)

-- stdout --
	* [no-preload-756000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	  - MINIKUBE_LOCATION=20045
	  - KUBECONFIG=/Users/jenkins/minikube-integration/20045-6982/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/20045-6982/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "no-preload-756000" primary control-plane node in "no-preload-756000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "no-preload-756000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I1204 15:46:42.584510   11861 out.go:345] Setting OutFile to fd 1 ...
	I1204 15:46:42.584676   11861 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1204 15:46:42.584681   11861 out.go:358] Setting ErrFile to fd 2...
	I1204 15:46:42.584683   11861 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1204 15:46:42.584811   11861 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/20045-6982/.minikube/bin
	I1204 15:46:42.585983   11861 out.go:352] Setting JSON to false
	I1204 15:46:42.603987   11861 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":6372,"bootTime":1733349630,"procs":547,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"15.0.1","kernelVersion":"24.0.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W1204 15:46:42.604101   11861 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I1204 15:46:42.607807   11861 out.go:177] * [no-preload-756000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	I1204 15:46:42.613754   11861 out.go:177]   - MINIKUBE_LOCATION=20045
	I1204 15:46:42.613795   11861 notify.go:220] Checking for updates...
	I1204 15:46:42.620611   11861 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/20045-6982/kubeconfig
	I1204 15:46:42.623639   11861 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I1204 15:46:42.626686   11861 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1204 15:46:42.629656   11861 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/20045-6982/.minikube
	I1204 15:46:42.632651   11861 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1204 15:46:42.634557   11861 config.go:182] Loaded profile config "multinode-093000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.2
	I1204 15:46:42.634621   11861 config.go:182] Loaded profile config "stopped-upgrade-377000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I1204 15:46:42.634668   11861 driver.go:394] Setting default libvirt URI to qemu:///system
	I1204 15:46:42.637632   11861 out.go:177] * Using the qemu2 driver based on user configuration
	I1204 15:46:42.644492   11861 start.go:297] selected driver: qemu2
	I1204 15:46:42.644500   11861 start.go:901] validating driver "qemu2" against <nil>
	I1204 15:46:42.644508   11861 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1204 15:46:42.647006   11861 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I1204 15:46:42.650646   11861 out.go:177] * Automatically selected the socket_vmnet network
	I1204 15:46:42.654724   11861 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1204 15:46:42.654753   11861 cni.go:84] Creating CNI manager for ""
	I1204 15:46:42.654782   11861 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I1204 15:46:42.654787   11861 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I1204 15:46:42.654824   11861 start.go:340] cluster config:
	{Name:no-preload-756000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:no-preload-756000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1204 15:46:42.659324   11861 iso.go:125] acquiring lock: {Name:mk798ab7885b095925d55c843b0600e7ea181045 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1204 15:46:42.667621   11861 out.go:177] * Starting "no-preload-756000" primary control-plane node in "no-preload-756000" cluster
	I1204 15:46:42.671646   11861 preload.go:131] Checking if preload exists for k8s version v1.31.2 and runtime docker
	I1204 15:46:42.671706   11861 profile.go:143] Saving config to /Users/jenkins/minikube-integration/20045-6982/.minikube/profiles/no-preload-756000/config.json ...
	I1204 15:46:42.671720   11861 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/20045-6982/.minikube/profiles/no-preload-756000/config.json: {Name:mk01a1ca9a48fc55d35c6f353c64a2ced9f85993 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1204 15:46:42.671725   11861 cache.go:107] acquiring lock: {Name:mk9712d720fae27f2f0a6ce3ab433c89f4aa709d Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1204 15:46:42.671728   11861 cache.go:107] acquiring lock: {Name:mke9bfe86d065dcb91fa7a419ea8c05899d7cdd6 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1204 15:46:42.671828   11861 cache.go:115] /Users/jenkins/minikube-integration/20045-6982/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 exists
	I1204 15:46:42.671837   11861 cache.go:96] cache image "gcr.io/k8s-minikube/storage-provisioner:v5" -> "/Users/jenkins/minikube-integration/20045-6982/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5" took 114.083µs
	I1204 15:46:42.671843   11861 cache.go:80] save to tar file gcr.io/k8s-minikube/storage-provisioner:v5 -> /Users/jenkins/minikube-integration/20045-6982/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 succeeded
	I1204 15:46:42.671850   11861 cache.go:107] acquiring lock: {Name:mk37c35654ada8bb19c81558e316842ee651aaea Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1204 15:46:42.671871   11861 cache.go:107] acquiring lock: {Name:mkd1e01eaa83b98db56595112255e65611b5bfd0 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1204 15:46:42.671893   11861 cache.go:107] acquiring lock: {Name:mkfa03113898c68e2ead1b60cd9f17e206d40735 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1204 15:46:42.671909   11861 cache.go:107] acquiring lock: {Name:mka30657e2616e9e8c368c2887c9fc294e068ec5 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1204 15:46:42.671931   11861 cache.go:107] acquiring lock: {Name:mk5d4b2dc783fb7537d57760f206c366231b7abf Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1204 15:46:42.671902   11861 image.go:135] retrieving image: registry.k8s.io/kube-controller-manager:v1.31.2
	I1204 15:46:42.671916   11861 cache.go:107] acquiring lock: {Name:mk43df0665f6ac26e61a01088c274335724d7957 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1204 15:46:42.672021   11861 image.go:135] retrieving image: registry.k8s.io/coredns/coredns:v1.11.3
	I1204 15:46:42.672177   11861 image.go:135] retrieving image: registry.k8s.io/pause:3.10
	I1204 15:46:42.672203   11861 image.go:135] retrieving image: registry.k8s.io/kube-apiserver:v1.31.2
	I1204 15:46:42.672233   11861 image.go:135] retrieving image: registry.k8s.io/kube-scheduler:v1.31.2
	I1204 15:46:42.672309   11861 image.go:135] retrieving image: registry.k8s.io/kube-proxy:v1.31.2
	I1204 15:46:42.672310   11861 start.go:360] acquireMachinesLock for no-preload-756000: {Name:mkab5eeb7049d26d12029fc21a1411c94c7c7493 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1204 15:46:42.672327   11861 image.go:135] retrieving image: registry.k8s.io/etcd:3.5.15-0
	I1204 15:46:42.672365   11861 start.go:364] duration metric: took 49.084µs to acquireMachinesLock for "no-preload-756000"
	I1204 15:46:42.672380   11861 start.go:93] Provisioning new machine with config: &{Name:no-preload-756000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19917/minikube-v1.34.0-1730913550-19917-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:no-preload-756000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1204 15:46:42.672419   11861 start.go:125] createHost starting for "" (driver="qemu2")
	I1204 15:46:42.680590   11861 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I1204 15:46:42.685547   11861 image.go:178] daemon lookup for registry.k8s.io/coredns/coredns:v1.11.3: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.11.3
	I1204 15:46:42.685628   11861 image.go:178] daemon lookup for registry.k8s.io/pause:3.10: Error response from daemon: No such image: registry.k8s.io/pause:3.10
	I1204 15:46:42.685658   11861 image.go:178] daemon lookup for registry.k8s.io/kube-apiserver:v1.31.2: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.31.2
	I1204 15:46:42.685702   11861 image.go:178] daemon lookup for registry.k8s.io/kube-controller-manager:v1.31.2: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.31.2
	I1204 15:46:42.686735   11861 image.go:178] daemon lookup for registry.k8s.io/kube-scheduler:v1.31.2: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.31.2
	I1204 15:46:42.686884   11861 image.go:178] daemon lookup for registry.k8s.io/kube-proxy:v1.31.2: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.31.2
	I1204 15:46:42.687191   11861 image.go:178] daemon lookup for registry.k8s.io/etcd:3.5.15-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.15-0
	I1204 15:46:42.697492   11861 start.go:159] libmachine.API.Create for "no-preload-756000" (driver="qemu2")
	I1204 15:46:42.697510   11861 client.go:168] LocalClient.Create starting
	I1204 15:46:42.697595   11861 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/20045-6982/.minikube/certs/ca.pem
	I1204 15:46:42.697631   11861 main.go:141] libmachine: Decoding PEM data...
	I1204 15:46:42.697654   11861 main.go:141] libmachine: Parsing certificate...
	I1204 15:46:42.697694   11861 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/20045-6982/.minikube/certs/cert.pem
	I1204 15:46:42.697722   11861 main.go:141] libmachine: Decoding PEM data...
	I1204 15:46:42.697737   11861 main.go:141] libmachine: Parsing certificate...
	I1204 15:46:42.698081   11861 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/20045-6982/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/20045-6982/.minikube/cache/iso/arm64/minikube-v1.34.0-1730913550-19917-arm64.iso...
	I1204 15:46:42.862979   11861 main.go:141] libmachine: Creating SSH key...
	I1204 15:46:42.920570   11861 main.go:141] libmachine: Creating Disk image...
	I1204 15:46:42.920590   11861 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I1204 15:46:42.920815   11861 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/20045-6982/.minikube/machines/no-preload-756000/disk.qcow2.raw /Users/jenkins/minikube-integration/20045-6982/.minikube/machines/no-preload-756000/disk.qcow2
	I1204 15:46:42.931877   11861 main.go:141] libmachine: STDOUT: 
	I1204 15:46:42.931901   11861 main.go:141] libmachine: STDERR: 
	I1204 15:46:42.931968   11861 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/20045-6982/.minikube/machines/no-preload-756000/disk.qcow2 +20000M
	I1204 15:46:42.941344   11861 main.go:141] libmachine: STDOUT: Image resized.
	
	I1204 15:46:42.941368   11861 main.go:141] libmachine: STDERR: 
	I1204 15:46:42.941388   11861 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/20045-6982/.minikube/machines/no-preload-756000/disk.qcow2.raw and /Users/jenkins/minikube-integration/20045-6982/.minikube/machines/no-preload-756000/disk.qcow2
	I1204 15:46:42.941394   11861 main.go:141] libmachine: Starting QEMU VM...
	I1204 15:46:42.941413   11861 qemu.go:418] Using hvf for hardware acceleration
	I1204 15:46:42.941448   11861 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/20045-6982/.minikube/machines/no-preload-756000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/20045-6982/.minikube/machines/no-preload-756000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/20045-6982/.minikube/machines/no-preload-756000/qemu.pid -device virtio-net-pci,netdev=net0,mac=16:88:f1:94:3d:9c -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/20045-6982/.minikube/machines/no-preload-756000/disk.qcow2
	I1204 15:46:42.943713   11861 main.go:141] libmachine: STDOUT: 
	I1204 15:46:42.943725   11861 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1204 15:46:42.943740   11861 client.go:171] duration metric: took 246.221875ms to LocalClient.Create
	I1204 15:46:43.128462   11861 cache.go:162] opening:  /Users/jenkins/minikube-integration/20045-6982/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.11.3
	I1204 15:46:43.141994   11861 cache.go:162] opening:  /Users/jenkins/minikube-integration/20045-6982/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10
	I1204 15:46:43.188395   11861 cache.go:162] opening:  /Users/jenkins/minikube-integration/20045-6982/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.31.2
	I1204 15:46:43.245965   11861 cache.go:162] opening:  /Users/jenkins/minikube-integration/20045-6982/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.31.2
	I1204 15:46:43.335223   11861 cache.go:157] /Users/jenkins/minikube-integration/20045-6982/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10 exists
	I1204 15:46:43.335237   11861 cache.go:96] cache image "registry.k8s.io/pause:3.10" -> "/Users/jenkins/minikube-integration/20045-6982/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10" took 663.389334ms
	I1204 15:46:43.335243   11861 cache.go:80] save to tar file registry.k8s.io/pause:3.10 -> /Users/jenkins/minikube-integration/20045-6982/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10 succeeded
	I1204 15:46:43.348781   11861 cache.go:162] opening:  /Users/jenkins/minikube-integration/20045-6982/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.31.2
	I1204 15:46:43.358741   11861 cache.go:162] opening:  /Users/jenkins/minikube-integration/20045-6982/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.31.2
	I1204 15:46:43.456967   11861 cache.go:162] opening:  /Users/jenkins/minikube-integration/20045-6982/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.15-0
	I1204 15:46:44.944064   11861 start.go:128] duration metric: took 2.271588125s to createHost
	I1204 15:46:44.944130   11861 start.go:83] releasing machines lock for "no-preload-756000", held for 2.271734458s
	W1204 15:46:44.944189   11861 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1204 15:46:44.960349   11861 out.go:177] * Deleting "no-preload-756000" in qemu2 ...
	W1204 15:46:44.991919   11861 out.go:270] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1204 15:46:44.991954   11861 start.go:729] Will try again in 5 seconds ...
	I1204 15:46:46.771439   11861 cache.go:157] /Users/jenkins/minikube-integration/20045-6982/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.11.3 exists
	I1204 15:46:46.771497   11861 cache.go:96] cache image "registry.k8s.io/coredns/coredns:v1.11.3" -> "/Users/jenkins/minikube-integration/20045-6982/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.11.3" took 4.099604791s
	I1204 15:46:46.771519   11861 cache.go:80] save to tar file registry.k8s.io/coredns/coredns:v1.11.3 -> /Users/jenkins/minikube-integration/20045-6982/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.11.3 succeeded
	I1204 15:46:47.131439   11861 cache.go:157] /Users/jenkins/minikube-integration/20045-6982/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.31.2 exists
	I1204 15:46:47.131483   11861 cache.go:96] cache image "registry.k8s.io/kube-scheduler:v1.31.2" -> "/Users/jenkins/minikube-integration/20045-6982/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.31.2" took 4.459571667s
	I1204 15:46:47.131507   11861 cache.go:80] save to tar file registry.k8s.io/kube-scheduler:v1.31.2 -> /Users/jenkins/minikube-integration/20045-6982/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.31.2 succeeded
	I1204 15:46:47.461543   11861 cache.go:157] /Users/jenkins/minikube-integration/20045-6982/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.31.2 exists
	I1204 15:46:47.461570   11861 cache.go:96] cache image "registry.k8s.io/kube-apiserver:v1.31.2" -> "/Users/jenkins/minikube-integration/20045-6982/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.31.2" took 4.789609375s
	I1204 15:46:47.461582   11861 cache.go:80] save to tar file registry.k8s.io/kube-apiserver:v1.31.2 -> /Users/jenkins/minikube-integration/20045-6982/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.31.2 succeeded
	I1204 15:46:47.843743   11861 cache.go:157] /Users/jenkins/minikube-integration/20045-6982/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.31.2 exists
	I1204 15:46:47.843779   11861 cache.go:96] cache image "registry.k8s.io/kube-controller-manager:v1.31.2" -> "/Users/jenkins/minikube-integration/20045-6982/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.31.2" took 5.172010333s
	I1204 15:46:47.843791   11861 cache.go:80] save to tar file registry.k8s.io/kube-controller-manager:v1.31.2 -> /Users/jenkins/minikube-integration/20045-6982/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.31.2 succeeded
	I1204 15:46:48.212798   11861 cache.go:157] /Users/jenkins/minikube-integration/20045-6982/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.31.2 exists
	I1204 15:46:48.212836   11861 cache.go:96] cache image "registry.k8s.io/kube-proxy:v1.31.2" -> "/Users/jenkins/minikube-integration/20045-6982/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.31.2" took 5.540851875s
	I1204 15:46:48.212854   11861 cache.go:80] save to tar file registry.k8s.io/kube-proxy:v1.31.2 -> /Users/jenkins/minikube-integration/20045-6982/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.31.2 succeeded
	I1204 15:46:49.992244   11861 start.go:360] acquireMachinesLock for no-preload-756000: {Name:mkab5eeb7049d26d12029fc21a1411c94c7c7493 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1204 15:46:49.992508   11861 start.go:364] duration metric: took 232.417µs to acquireMachinesLock for "no-preload-756000"
	I1204 15:46:49.992569   11861 start.go:93] Provisioning new machine with config: &{Name:no-preload-756000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19917/minikube-v1.34.0-1730913550-19917-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:no-preload-756000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1204 15:46:49.992655   11861 start.go:125] createHost starting for "" (driver="qemu2")
	I1204 15:46:50.003169   11861 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I1204 15:46:50.032879   11861 start.go:159] libmachine.API.Create for "no-preload-756000" (driver="qemu2")
	I1204 15:46:50.032946   11861 client.go:168] LocalClient.Create starting
	I1204 15:46:50.033079   11861 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/20045-6982/.minikube/certs/ca.pem
	I1204 15:46:50.033160   11861 main.go:141] libmachine: Decoding PEM data...
	I1204 15:46:50.033185   11861 main.go:141] libmachine: Parsing certificate...
	I1204 15:46:50.033254   11861 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/20045-6982/.minikube/certs/cert.pem
	I1204 15:46:50.033300   11861 main.go:141] libmachine: Decoding PEM data...
	I1204 15:46:50.033313   11861 main.go:141] libmachine: Parsing certificate...
	I1204 15:46:50.033806   11861 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/20045-6982/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/20045-6982/.minikube/cache/iso/arm64/minikube-v1.34.0-1730913550-19917-arm64.iso...
	I1204 15:46:50.197883   11861 main.go:141] libmachine: Creating SSH key...
	I1204 15:46:50.341039   11861 main.go:141] libmachine: Creating Disk image...
	I1204 15:46:50.341050   11861 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I1204 15:46:50.341270   11861 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/20045-6982/.minikube/machines/no-preload-756000/disk.qcow2.raw /Users/jenkins/minikube-integration/20045-6982/.minikube/machines/no-preload-756000/disk.qcow2
	I1204 15:46:50.351773   11861 main.go:141] libmachine: STDOUT: 
	I1204 15:46:50.351797   11861 main.go:141] libmachine: STDERR: 
	I1204 15:46:50.351873   11861 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/20045-6982/.minikube/machines/no-preload-756000/disk.qcow2 +20000M
	I1204 15:46:50.361048   11861 main.go:141] libmachine: STDOUT: Image resized.
	
	I1204 15:46:50.361065   11861 main.go:141] libmachine: STDERR: 
	I1204 15:46:50.361080   11861 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/20045-6982/.minikube/machines/no-preload-756000/disk.qcow2.raw and /Users/jenkins/minikube-integration/20045-6982/.minikube/machines/no-preload-756000/disk.qcow2
	I1204 15:46:50.361087   11861 main.go:141] libmachine: Starting QEMU VM...
	I1204 15:46:50.361096   11861 qemu.go:418] Using hvf for hardware acceleration
	I1204 15:46:50.361139   11861 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/20045-6982/.minikube/machines/no-preload-756000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/20045-6982/.minikube/machines/no-preload-756000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/20045-6982/.minikube/machines/no-preload-756000/qemu.pid -device virtio-net-pci,netdev=net0,mac=9a:b0:36:d1:c3:19 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/20045-6982/.minikube/machines/no-preload-756000/disk.qcow2
	I1204 15:46:50.363239   11861 main.go:141] libmachine: STDOUT: 
	I1204 15:46:50.363254   11861 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1204 15:46:50.363270   11861 client.go:171] duration metric: took 330.30825ms to LocalClient.Create
	I1204 15:46:52.363477   11861 start.go:128] duration metric: took 2.370787667s to createHost
	I1204 15:46:52.363498   11861 start.go:83] releasing machines lock for "no-preload-756000", held for 2.3709565s
	W1204 15:46:52.363581   11861 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p no-preload-756000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p no-preload-756000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1204 15:46:52.372757   11861 out.go:201] 
	W1204 15:46:52.378850   11861 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W1204 15:46:52.378858   11861 out.go:270] * 
	* 
	W1204 15:46:52.379359   11861 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1204 15:46:52.390734   11861 out.go:201] 

** /stderr **
start_stop_delete_test.go:188: failed starting minikube -first start-. args "out/minikube-darwin-arm64 start -p no-preload-756000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=qemu2  --kubernetes-version=v1.31.2": exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-756000 -n no-preload-756000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-756000 -n no-preload-756000: exit status 7 (34.704292ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "no-preload-756000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/no-preload/serial/FirstStart (9.88s)
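Every failure in this group traces to the same root cause: socket_vmnet_client cannot reach the socket_vmnet daemon at /var/run/socket_vmnet ("Connection refused"), so no QEMU VM is ever started. A minimal Go sketch, not part of the test suite, that reproduces just the failing connectivity check (the socket path is taken from the log above):

package main

import (
	"fmt"
	"net"
	"os"
	"time"
)

func main() {
	// Dial the same unix socket that socket_vmnet_client wraps QEMU with.
	conn, err := net.DialTimeout("unix", "/var/run/socket_vmnet", 2*time.Second)
	if err != nil {
		// A "connection refused" here matches the failures in this report.
		fmt.Fprintf(os.Stderr, "socket_vmnet not reachable: %v\n", err)
		os.Exit(1)
	}
	conn.Close()
	fmt.Println("socket_vmnet is accepting connections")
}

If the dial fails the same way, the daemon is simply not running on the build agent; for a Homebrew install, restarting it (typically `sudo brew services start socket_vmnet`) should clear this whole group of failures.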

TestStartStop/group/no-preload/serial/DeployApp (0.1s)

=== RUN   TestStartStop/group/no-preload/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context no-preload-756000 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) Non-zero exit: kubectl --context no-preload-756000 create -f testdata/busybox.yaml: exit status 1 (28.074291ms)

** stderr ** 
	error: context "no-preload-756000" does not exist

** /stderr **
start_stop_delete_test.go:196: kubectl --context no-preload-756000 create -f testdata/busybox.yaml failed: exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-756000 -n no-preload-756000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-756000 -n no-preload-756000: exit status 7 (34.294834ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "no-preload-756000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-756000 -n no-preload-756000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-756000 -n no-preload-756000: exit status 7 (33.513875ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "no-preload-756000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/no-preload/serial/DeployApp (0.10s)
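The DeployApp failure is a cascade from FirstStart: the cluster was never created, so the kubeconfig contains no "no-preload-756000" context and every kubectl --context invocation exits 1. An illustrative Go sketch, not part of the harness, that lists the contexts kubectl actually knows about:

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	// Prints one kubeconfig context name per line; the test's context
	// ("no-preload-756000") will be absent after the failed FirstStart.
	out, err := exec.Command("kubectl", "config", "get-contexts", "-o", "name").CombinedOutput()
	if err != nil {
		fmt.Println("kubectl failed:", err)
	}
	fmt.Printf("known contexts:\n%s", out)
}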

TestStartStop/group/no-preload/serial/EnableAddonWhileActive (0.12s)

=== RUN   TestStartStop/group/no-preload/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-darwin-arm64 addons enable metrics-server -p no-preload-756000 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context no-preload-756000 describe deploy/metrics-server -n kube-system
start_stop_delete_test.go:215: (dbg) Non-zero exit: kubectl --context no-preload-756000 describe deploy/metrics-server -n kube-system: exit status 1 (27.330959ms)

** stderr ** 
	error: context "no-preload-756000" does not exist

** /stderr **
start_stop_delete_test.go:217: failed to get info on auto-pause deployments. args "kubectl --context no-preload-756000 describe deploy/metrics-server -n kube-system": exit status 1
start_stop_delete_test.go:221: addon did not load correct image. Expected to contain " fake.domain/registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-756000 -n no-preload-756000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-756000 -n no-preload-756000: exit status 7 (33.517ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "no-preload-756000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/no-preload/serial/EnableAddonWhileActive (0.12s)
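Note the asymmetry in this subtest: `addons enable metrics-server` itself exits 0, since it only records the addon (with the overridden image and registry) in the profile's config.json, while the follow-up kubectl query fails because the cluster does not exist. A schema-agnostic sketch for inspecting what was persisted; the config.json path comes from the log, and the "Addons" key is an assumption about minikube's profile format:

package main

import (
	"encoding/json"
	"fmt"
	"os"
)

func main() {
	raw, err := os.ReadFile("/Users/jenkins/minikube-integration/20045-6982/.minikube/profiles/no-preload-756000/config.json")
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		return
	}
	var cfg map[string]any
	if err := json.Unmarshal(raw, &cfg); err != nil {
		fmt.Fprintln(os.Stderr, err)
		return
	}
	// Assumed key: enabled addons are kept in the profile even though
	// no VM was ever provisioned.
	fmt.Println("Addons:", cfg["Addons"])
}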

TestStartStop/group/no-preload/serial/SecondStart (5.82s)

=== RUN   TestStartStop/group/no-preload/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-darwin-arm64 start -p no-preload-756000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=qemu2  --kubernetes-version=v1.31.2
start_stop_delete_test.go:256: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p no-preload-756000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=qemu2  --kubernetes-version=v1.31.2: exit status 80 (5.750386416s)

-- stdout --
	* [no-preload-756000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	  - MINIKUBE_LOCATION=20045
	  - KUBECONFIG=/Users/jenkins/minikube-integration/20045-6982/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/20045-6982/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting "no-preload-756000" primary control-plane node in "no-preload-756000" cluster
	* Restarting existing qemu2 VM for "no-preload-756000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "no-preload-756000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I1204 15:46:54.647221   11932 out.go:345] Setting OutFile to fd 1 ...
	I1204 15:46:54.647389   11932 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1204 15:46:54.647392   11932 out.go:358] Setting ErrFile to fd 2...
	I1204 15:46:54.647395   11932 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1204 15:46:54.647533   11932 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/20045-6982/.minikube/bin
	I1204 15:46:54.648631   11932 out.go:352] Setting JSON to false
	I1204 15:46:54.666694   11932 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":6384,"bootTime":1733349630,"procs":546,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"15.0.1","kernelVersion":"24.0.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W1204 15:46:54.666772   11932 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I1204 15:46:54.672154   11932 out.go:177] * [no-preload-756000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	I1204 15:46:54.679052   11932 out.go:177]   - MINIKUBE_LOCATION=20045
	I1204 15:46:54.679105   11932 notify.go:220] Checking for updates...
	I1204 15:46:54.686061   11932 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/20045-6982/kubeconfig
	I1204 15:46:54.689437   11932 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I1204 15:46:54.692074   11932 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1204 15:46:54.693357   11932 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/20045-6982/.minikube
	I1204 15:46:54.696099   11932 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1204 15:46:54.699404   11932 config.go:182] Loaded profile config "no-preload-756000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.2
	I1204 15:46:54.699650   11932 driver.go:394] Setting default libvirt URI to qemu:///system
	I1204 15:46:54.701386   11932 out.go:177] * Using the qemu2 driver based on existing profile
	I1204 15:46:54.708091   11932 start.go:297] selected driver: qemu2
	I1204 15:46:54.708100   11932 start.go:901] validating driver "qemu2" against &{Name:no-preload-756000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19917/minikube-v1.34.0-1730913550-19917-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:no-preload-756000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1204 15:46:54.708147   11932 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1204 15:46:54.710641   11932 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1204 15:46:54.710662   11932 cni.go:84] Creating CNI manager for ""
	I1204 15:46:54.710682   11932 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I1204 15:46:54.710710   11932 start.go:340] cluster config:
	{Name:no-preload-756000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19917/minikube-v1.34.0-1730913550-19917-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:no-preload-756000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1204 15:46:54.714847   11932 iso.go:125] acquiring lock: {Name:mk798ab7885b095925d55c843b0600e7ea181045 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1204 15:46:54.723105   11932 out.go:177] * Starting "no-preload-756000" primary control-plane node in "no-preload-756000" cluster
	I1204 15:46:54.727055   11932 preload.go:131] Checking if preload exists for k8s version v1.31.2 and runtime docker
	I1204 15:46:54.727129   11932 profile.go:143] Saving config to /Users/jenkins/minikube-integration/20045-6982/.minikube/profiles/no-preload-756000/config.json ...
	I1204 15:46:54.727162   11932 cache.go:107] acquiring lock: {Name:mk5d4b2dc783fb7537d57760f206c366231b7abf Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1204 15:46:54.727184   11932 cache.go:107] acquiring lock: {Name:mkfa03113898c68e2ead1b60cd9f17e206d40735 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1204 15:46:54.727239   11932 cache.go:115] /Users/jenkins/minikube-integration/20045-6982/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.31.2 exists
	I1204 15:46:54.727245   11932 cache.go:96] cache image "registry.k8s.io/kube-proxy:v1.31.2" -> "/Users/jenkins/minikube-integration/20045-6982/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.31.2" took 88.084µs
	I1204 15:46:54.727251   11932 cache.go:80] save to tar file registry.k8s.io/kube-proxy:v1.31.2 -> /Users/jenkins/minikube-integration/20045-6982/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.31.2 succeeded
	I1204 15:46:54.727259   11932 cache.go:115] /Users/jenkins/minikube-integration/20045-6982/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10 exists
	I1204 15:46:54.727265   11932 cache.go:96] cache image "registry.k8s.io/pause:3.10" -> "/Users/jenkins/minikube-integration/20045-6982/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10" took 81.292µs
	I1204 15:46:54.727273   11932 cache.go:80] save to tar file registry.k8s.io/pause:3.10 -> /Users/jenkins/minikube-integration/20045-6982/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10 succeeded
	I1204 15:46:54.727256   11932 cache.go:107] acquiring lock: {Name:mka30657e2616e9e8c368c2887c9fc294e068ec5 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1204 15:46:54.727283   11932 cache.go:107] acquiring lock: {Name:mk9712d720fae27f2f0a6ce3ab433c89f4aa709d Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1204 15:46:54.727267   11932 cache.go:107] acquiring lock: {Name:mk43df0665f6ac26e61a01088c274335724d7957 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1204 15:46:54.727161   11932 cache.go:107] acquiring lock: {Name:mke9bfe86d065dcb91fa7a419ea8c05899d7cdd6 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1204 15:46:54.727349   11932 cache.go:107] acquiring lock: {Name:mkd1e01eaa83b98db56595112255e65611b5bfd0 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1204 15:46:54.727398   11932 cache.go:107] acquiring lock: {Name:mk37c35654ada8bb19c81558e316842ee651aaea Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1204 15:46:54.727406   11932 cache.go:115] /Users/jenkins/minikube-integration/20045-6982/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.31.2 exists
	I1204 15:46:54.727448   11932 cache.go:115] /Users/jenkins/minikube-integration/20045-6982/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.31.2 exists
	I1204 15:46:54.727413   11932 cache.go:96] cache image "registry.k8s.io/kube-scheduler:v1.31.2" -> "/Users/jenkins/minikube-integration/20045-6982/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.31.2" took 155.792µs
	I1204 15:46:54.727477   11932 cache.go:80] save to tar file registry.k8s.io/kube-scheduler:v1.31.2 -> /Users/jenkins/minikube-integration/20045-6982/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.31.2 succeeded
	I1204 15:46:54.727483   11932 cache.go:96] cache image "registry.k8s.io/kube-controller-manager:v1.31.2" -> "/Users/jenkins/minikube-integration/20045-6982/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.31.2" took 256.375µs
	I1204 15:46:54.727499   11932 cache.go:115] /Users/jenkins/minikube-integration/20045-6982/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.11.3 exists
	I1204 15:46:54.727506   11932 cache.go:96] cache image "registry.k8s.io/coredns/coredns:v1.11.3" -> "/Users/jenkins/minikube-integration/20045-6982/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.11.3" took 140.084µs
	I1204 15:46:54.727513   11932 cache.go:80] save to tar file registry.k8s.io/coredns/coredns:v1.11.3 -> /Users/jenkins/minikube-integration/20045-6982/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.11.3 succeeded
	I1204 15:46:54.727510   11932 cache.go:80] save to tar file registry.k8s.io/kube-controller-manager:v1.31.2 -> /Users/jenkins/minikube-integration/20045-6982/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.31.2 succeeded
	I1204 15:46:54.727482   11932 image.go:135] retrieving image: registry.k8s.io/etcd:3.5.15-0
	I1204 15:46:54.727571   11932 cache.go:115] /Users/jenkins/minikube-integration/20045-6982/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 exists
	I1204 15:46:54.727578   11932 cache.go:96] cache image "gcr.io/k8s-minikube/storage-provisioner:v5" -> "/Users/jenkins/minikube-integration/20045-6982/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5" took 422.125µs
	I1204 15:46:54.727583   11932 cache.go:80] save to tar file gcr.io/k8s-minikube/storage-provisioner:v5 -> /Users/jenkins/minikube-integration/20045-6982/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 succeeded
	I1204 15:46:54.727600   11932 start.go:360] acquireMachinesLock for no-preload-756000: {Name:mkab5eeb7049d26d12029fc21a1411c94c7c7493 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1204 15:46:54.727617   11932 cache.go:115] /Users/jenkins/minikube-integration/20045-6982/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.31.2 exists
	I1204 15:46:54.727625   11932 cache.go:96] cache image "registry.k8s.io/kube-apiserver:v1.31.2" -> "/Users/jenkins/minikube-integration/20045-6982/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.31.2" took 391.375µs
	I1204 15:46:54.727629   11932 cache.go:80] save to tar file registry.k8s.io/kube-apiserver:v1.31.2 -> /Users/jenkins/minikube-integration/20045-6982/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.31.2 succeeded
	I1204 15:46:54.727632   11932 start.go:364] duration metric: took 25.375µs to acquireMachinesLock for "no-preload-756000"
	I1204 15:46:54.727643   11932 start.go:96] Skipping create...Using existing machine configuration
	I1204 15:46:54.727647   11932 fix.go:54] fixHost starting: 
	I1204 15:46:54.727757   11932 fix.go:112] recreateIfNeeded on no-preload-756000: state=Stopped err=<nil>
	W1204 15:46:54.727764   11932 fix.go:138] unexpected machine state, will restart: <nil>
	I1204 15:46:54.736051   11932 out.go:177] * Restarting existing qemu2 VM for "no-preload-756000" ...
	I1204 15:46:54.740058   11932 qemu.go:418] Using hvf for hardware acceleration
	I1204 15:46:54.740089   11932 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/20045-6982/.minikube/machines/no-preload-756000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/20045-6982/.minikube/machines/no-preload-756000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/20045-6982/.minikube/machines/no-preload-756000/qemu.pid -device virtio-net-pci,netdev=net0,mac=9a:b0:36:d1:c3:19 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/20045-6982/.minikube/machines/no-preload-756000/disk.qcow2
	I1204 15:46:54.740578   11932 image.go:178] daemon lookup for registry.k8s.io/etcd:3.5.15-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.15-0
	I1204 15:46:54.742311   11932 main.go:141] libmachine: STDOUT: 
	I1204 15:46:54.742333   11932 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1204 15:46:54.742358   11932 fix.go:56] duration metric: took 14.709583ms for fixHost
	I1204 15:46:54.742361   11932 start.go:83] releasing machines lock for "no-preload-756000", held for 14.723166ms
	W1204 15:46:54.742367   11932 start.go:714] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W1204 15:46:54.742415   11932 out.go:270] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1204 15:46:54.742420   11932 start.go:729] Will try again in 5 seconds ...
	I1204 15:46:55.184780   11932 cache.go:162] opening:  /Users/jenkins/minikube-integration/20045-6982/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.15-0
	I1204 15:46:59.742969   11932 start.go:360] acquireMachinesLock for no-preload-756000: {Name:mkab5eeb7049d26d12029fc21a1411c94c7c7493 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1204 15:47:00.269992   11932 start.go:364] duration metric: took 526.858792ms to acquireMachinesLock for "no-preload-756000"
	I1204 15:47:00.270126   11932 start.go:96] Skipping create...Using existing machine configuration
	I1204 15:47:00.270150   11932 fix.go:54] fixHost starting: 
	I1204 15:47:00.270968   11932 fix.go:112] recreateIfNeeded on no-preload-756000: state=Stopped err=<nil>
	W1204 15:47:00.271003   11932 fix.go:138] unexpected machine state, will restart: <nil>
	I1204 15:47:00.275557   11932 out.go:177] * Restarting existing qemu2 VM for "no-preload-756000" ...
	I1204 15:47:00.298381   11932 qemu.go:418] Using hvf for hardware acceleration
	I1204 15:47:00.298611   11932 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/20045-6982/.minikube/machines/no-preload-756000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/20045-6982/.minikube/machines/no-preload-756000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/20045-6982/.minikube/machines/no-preload-756000/qemu.pid -device virtio-net-pci,netdev=net0,mac=9a:b0:36:d1:c3:19 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/20045-6982/.minikube/machines/no-preload-756000/disk.qcow2
	I1204 15:47:00.312716   11932 main.go:141] libmachine: STDOUT: 
	I1204 15:47:00.313047   11932 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1204 15:47:00.313148   11932 fix.go:56] duration metric: took 42.989875ms for fixHost
	I1204 15:47:00.313170   11932 start.go:83] releasing machines lock for "no-preload-756000", held for 43.13725ms
	W1204 15:47:00.313463   11932 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p no-preload-756000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p no-preload-756000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1204 15:47:00.322504   11932 out.go:201] 
	W1204 15:47:00.326618   11932 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W1204 15:47:00.326639   11932 out.go:270] * 
	* 
	W1204 15:47:00.328689   11932 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1204 15:47:00.344597   11932 out.go:201] 

** /stderr **
start_stop_delete_test.go:259: failed to start minikube post-stop. args "out/minikube-darwin-arm64 start -p no-preload-756000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=qemu2  --kubernetes-version=v1.31.2": exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-756000 -n no-preload-756000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-756000 -n no-preload-756000: exit status 7 (66.908833ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "no-preload-756000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/no-preload/serial/SecondStart (5.82s)
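SecondStart exercises the restart path rather than fresh provisioning: fixHost finds the machine in state=Stopped, attempts "Restarting existing qemu2 VM" twice, and hits the same socket_vmnet refusal. The retry behavior visible at start.go:714/729 amounts to a single fixed 5-second back-off; a simplified Go sketch of that shape (function names are illustrative, not minikube's actual API):

package main

import (
	"errors"
	"fmt"
	"time"
)

// startHost stands in for minikube's host start; here it always fails the
// way this run does.
func startHost() error {
	return errors.New(`Failed to connect to "/var/run/socket_vmnet": Connection refused`)
}

func main() {
	if err := startHost(); err != nil {
		fmt.Println("! StartHost failed, but will try again:", err)
		time.Sleep(5 * time.Second) // matches "Will try again in 5 seconds ..."
		if err = startHost(); err != nil {
			fmt.Println("X Exiting due to GUEST_PROVISION:", err)
		}
	}
}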

TestStartStop/group/embed-certs/serial/FirstStart (9.94s)

=== RUN   TestStartStop/group/embed-certs/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-darwin-arm64 start -p embed-certs-617000 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=qemu2  --kubernetes-version=v1.31.2
start_stop_delete_test.go:186: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p embed-certs-617000 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=qemu2  --kubernetes-version=v1.31.2: exit status 80 (9.870895709s)

-- stdout --
	* [embed-certs-617000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	  - MINIKUBE_LOCATION=20045
	  - KUBECONFIG=/Users/jenkins/minikube-integration/20045-6982/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/20045-6982/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "embed-certs-617000" primary control-plane node in "embed-certs-617000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "embed-certs-617000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I1204 15:46:57.877041   11950 out.go:345] Setting OutFile to fd 1 ...
	I1204 15:46:57.877207   11950 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1204 15:46:57.877212   11950 out.go:358] Setting ErrFile to fd 2...
	I1204 15:46:57.877214   11950 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1204 15:46:57.877372   11950 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/20045-6982/.minikube/bin
	I1204 15:46:57.878689   11950 out.go:352] Setting JSON to false
	I1204 15:46:57.896878   11950 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":6387,"bootTime":1733349630,"procs":549,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"15.0.1","kernelVersion":"24.0.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W1204 15:46:57.896957   11950 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I1204 15:46:57.899133   11950 out.go:177] * [embed-certs-617000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	I1204 15:46:57.904505   11950 notify.go:220] Checking for updates...
	I1204 15:46:57.909342   11950 out.go:177]   - MINIKUBE_LOCATION=20045
	I1204 15:46:57.916200   11950 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/20045-6982/kubeconfig
	I1204 15:46:57.919392   11950 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I1204 15:46:57.923365   11950 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1204 15:46:57.929335   11950 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/20045-6982/.minikube
	I1204 15:46:57.936358   11950 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1204 15:46:57.940691   11950 config.go:182] Loaded profile config "multinode-093000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.2
	I1204 15:46:57.940773   11950 config.go:182] Loaded profile config "no-preload-756000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.2
	I1204 15:46:57.940819   11950 driver.go:394] Setting default libvirt URI to qemu:///system
	I1204 15:46:57.944416   11950 out.go:177] * Using the qemu2 driver based on user configuration
	I1204 15:46:57.950320   11950 start.go:297] selected driver: qemu2
	I1204 15:46:57.950326   11950 start.go:901] validating driver "qemu2" against <nil>
	I1204 15:46:57.950332   11950 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1204 15:46:57.952910   11950 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I1204 15:46:57.957348   11950 out.go:177] * Automatically selected the socket_vmnet network
	I1204 15:46:57.961445   11950 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1204 15:46:57.961465   11950 cni.go:84] Creating CNI manager for ""
	I1204 15:46:57.961488   11950 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I1204 15:46:57.961493   11950 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I1204 15:46:57.961522   11950 start.go:340] cluster config:
	{Name:embed-certs-617000 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:embed-certs-617000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1204 15:46:57.966154   11950 iso.go:125] acquiring lock: {Name:mk798ab7885b095925d55c843b0600e7ea181045 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1204 15:46:57.973372   11950 out.go:177] * Starting "embed-certs-617000" primary control-plane node in "embed-certs-617000" cluster
	I1204 15:46:57.977363   11950 preload.go:131] Checking if preload exists for k8s version v1.31.2 and runtime docker
	I1204 15:46:57.977381   11950 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/20045-6982/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-docker-overlay2-arm64.tar.lz4
	I1204 15:46:57.977392   11950 cache.go:56] Caching tarball of preloaded images
	I1204 15:46:57.977474   11950 preload.go:172] Found /Users/jenkins/minikube-integration/20045-6982/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I1204 15:46:57.977482   11950 cache.go:59] Finished verifying existence of preloaded tar for v1.31.2 on docker
	I1204 15:46:57.977558   11950 profile.go:143] Saving config to /Users/jenkins/minikube-integration/20045-6982/.minikube/profiles/embed-certs-617000/config.json ...
	I1204 15:46:57.977572   11950 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/20045-6982/.minikube/profiles/embed-certs-617000/config.json: {Name:mk7c37f8b82e27e40d22f5784d430491e23dbdfc Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1204 15:46:57.977836   11950 start.go:360] acquireMachinesLock for embed-certs-617000: {Name:mkab5eeb7049d26d12029fc21a1411c94c7c7493 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1204 15:46:57.977883   11950 start.go:364] duration metric: took 41.375µs to acquireMachinesLock for "embed-certs-617000"
	I1204 15:46:57.977896   11950 start.go:93] Provisioning new machine with config: &{Name:embed-certs-617000 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19917/minikube-v1.34.0-1730913550-19917-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:embed-certs-617000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1204 15:46:57.977929   11950 start.go:125] createHost starting for "" (driver="qemu2")
	I1204 15:46:57.986414   11950 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I1204 15:46:58.003294   11950 start.go:159] libmachine.API.Create for "embed-certs-617000" (driver="qemu2")
	I1204 15:46:58.003320   11950 client.go:168] LocalClient.Create starting
	I1204 15:46:58.003389   11950 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/20045-6982/.minikube/certs/ca.pem
	I1204 15:46:58.003426   11950 main.go:141] libmachine: Decoding PEM data...
	I1204 15:46:58.003435   11950 main.go:141] libmachine: Parsing certificate...
	I1204 15:46:58.003473   11950 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/20045-6982/.minikube/certs/cert.pem
	I1204 15:46:58.003501   11950 main.go:141] libmachine: Decoding PEM data...
	I1204 15:46:58.003510   11950 main.go:141] libmachine: Parsing certificate...
	I1204 15:46:58.003867   11950 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/20045-6982/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/20045-6982/.minikube/cache/iso/arm64/minikube-v1.34.0-1730913550-19917-arm64.iso...
	I1204 15:46:58.161877   11950 main.go:141] libmachine: Creating SSH key...
	I1204 15:46:58.246608   11950 main.go:141] libmachine: Creating Disk image...
	I1204 15:46:58.246619   11950 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I1204 15:46:58.246816   11950 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/20045-6982/.minikube/machines/embed-certs-617000/disk.qcow2.raw /Users/jenkins/minikube-integration/20045-6982/.minikube/machines/embed-certs-617000/disk.qcow2
	I1204 15:46:58.256787   11950 main.go:141] libmachine: STDOUT: 
	I1204 15:46:58.256810   11950 main.go:141] libmachine: STDERR: 
	I1204 15:46:58.256869   11950 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/20045-6982/.minikube/machines/embed-certs-617000/disk.qcow2 +20000M
	I1204 15:46:58.265533   11950 main.go:141] libmachine: STDOUT: Image resized.
	
	I1204 15:46:58.265556   11950 main.go:141] libmachine: STDERR: 
	I1204 15:46:58.265570   11950 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/20045-6982/.minikube/machines/embed-certs-617000/disk.qcow2.raw and /Users/jenkins/minikube-integration/20045-6982/.minikube/machines/embed-certs-617000/disk.qcow2
	I1204 15:46:58.265575   11950 main.go:141] libmachine: Starting QEMU VM...
	I1204 15:46:58.265585   11950 qemu.go:418] Using hvf for hardware acceleration
	I1204 15:46:58.265619   11950 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/20045-6982/.minikube/machines/embed-certs-617000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/20045-6982/.minikube/machines/embed-certs-617000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/20045-6982/.minikube/machines/embed-certs-617000/qemu.pid -device virtio-net-pci,netdev=net0,mac=ee:d4:31:a2:db:e4 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/20045-6982/.minikube/machines/embed-certs-617000/disk.qcow2
	I1204 15:46:58.267543   11950 main.go:141] libmachine: STDOUT: 
	I1204 15:46:58.267567   11950 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1204 15:46:58.267587   11950 client.go:171] duration metric: took 264.256416ms to LocalClient.Create
	I1204 15:47:00.269788   11950 start.go:128] duration metric: took 2.291813542s to createHost
	I1204 15:47:00.269842   11950 start.go:83] releasing machines lock for "embed-certs-617000", held for 2.291927208s
	W1204 15:47:00.269922   11950 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1204 15:47:00.294539   11950 out.go:177] * Deleting "embed-certs-617000" in qemu2 ...
	W1204 15:47:00.356903   11950 out.go:270] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1204 15:47:00.356936   11950 start.go:729] Will try again in 5 seconds ...
	I1204 15:47:05.359282   11950 start.go:360] acquireMachinesLock for embed-certs-617000: {Name:mkab5eeb7049d26d12029fc21a1411c94c7c7493 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1204 15:47:05.359969   11950 start.go:364] duration metric: took 519.959µs to acquireMachinesLock for "embed-certs-617000"
	I1204 15:47:05.360170   11950 start.go:93] Provisioning new machine with config: &{Name:embed-certs-617000 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19917/minikube-v1.34.0-1730913550-19917-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:embed-certs-617000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1204 15:47:05.360489   11950 start.go:125] createHost starting for "" (driver="qemu2")
	I1204 15:47:05.380425   11950 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I1204 15:47:05.430260   11950 start.go:159] libmachine.API.Create for "embed-certs-617000" (driver="qemu2")
	I1204 15:47:05.430312   11950 client.go:168] LocalClient.Create starting
	I1204 15:47:05.430462   11950 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/20045-6982/.minikube/certs/ca.pem
	I1204 15:47:05.430557   11950 main.go:141] libmachine: Decoding PEM data...
	I1204 15:47:05.430579   11950 main.go:141] libmachine: Parsing certificate...
	I1204 15:47:05.430662   11950 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/20045-6982/.minikube/certs/cert.pem
	I1204 15:47:05.430717   11950 main.go:141] libmachine: Decoding PEM data...
	I1204 15:47:05.430729   11950 main.go:141] libmachine: Parsing certificate...
	I1204 15:47:05.431720   11950 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/20045-6982/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/20045-6982/.minikube/cache/iso/arm64/minikube-v1.34.0-1730913550-19917-arm64.iso...
	I1204 15:47:05.601544   11950 main.go:141] libmachine: Creating SSH key...
	I1204 15:47:05.646120   11950 main.go:141] libmachine: Creating Disk image...
	I1204 15:47:05.646125   11950 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I1204 15:47:05.646315   11950 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/20045-6982/.minikube/machines/embed-certs-617000/disk.qcow2.raw /Users/jenkins/minikube-integration/20045-6982/.minikube/machines/embed-certs-617000/disk.qcow2
	I1204 15:47:05.656207   11950 main.go:141] libmachine: STDOUT: 
	I1204 15:47:05.656227   11950 main.go:141] libmachine: STDERR: 
	I1204 15:47:05.656292   11950 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/20045-6982/.minikube/machines/embed-certs-617000/disk.qcow2 +20000M
	I1204 15:47:05.665166   11950 main.go:141] libmachine: STDOUT: Image resized.
	
	I1204 15:47:05.665180   11950 main.go:141] libmachine: STDERR: 
	I1204 15:47:05.665195   11950 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/20045-6982/.minikube/machines/embed-certs-617000/disk.qcow2.raw and /Users/jenkins/minikube-integration/20045-6982/.minikube/machines/embed-certs-617000/disk.qcow2
	I1204 15:47:05.665198   11950 main.go:141] libmachine: Starting QEMU VM...
	I1204 15:47:05.665206   11950 qemu.go:418] Using hvf for hardware acceleration
	I1204 15:47:05.665235   11950 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/20045-6982/.minikube/machines/embed-certs-617000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/20045-6982/.minikube/machines/embed-certs-617000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/20045-6982/.minikube/machines/embed-certs-617000/qemu.pid -device virtio-net-pci,netdev=net0,mac=9a:a7:a7:e7:45:ee -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/20045-6982/.minikube/machines/embed-certs-617000/disk.qcow2
	I1204 15:47:05.667078   11950 main.go:141] libmachine: STDOUT: 
	I1204 15:47:05.667092   11950 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1204 15:47:05.667108   11950 client.go:171] duration metric: took 236.788958ms to LocalClient.Create
	I1204 15:47:07.669352   11950 start.go:128] duration metric: took 2.308782542s to createHost
	I1204 15:47:07.669428   11950 start.go:83] releasing machines lock for "embed-certs-617000", held for 2.309393459s
	W1204 15:47:07.669868   11950 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p embed-certs-617000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1204 15:47:07.681677   11950 out.go:201] 
	W1204 15:47:07.687801   11950 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W1204 15:47:07.687871   11950 out.go:270] * 
	W1204 15:47:07.690657   11950 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1204 15:47:07.700615   11950 out.go:201] 

** /stderr **
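The disk-image phase in the log above succeeds: minikube builds the qcow2 in two qemu-img steps, and those steps can be replayed by hand when debugging the driver (commands and paths copied from the log):

	qemu-img convert -f raw -O qcow2 \
	  /Users/jenkins/minikube-integration/20045-6982/.minikube/machines/embed-certs-617000/disk.qcow2.raw \
	  /Users/jenkins/minikube-integration/20045-6982/.minikube/machines/embed-certs-617000/disk.qcow2
	qemu-img resize \
	  /Users/jenkins/minikube-integration/20045-6982/.minikube/machines/embed-certs-617000/disk.qcow2 +20000M

The failure only appears one step later, when the VM is launched through socket_vmnet_client.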
start_stop_delete_test.go:188: failed starting minikube -first start-. args "out/minikube-darwin-arm64 start -p embed-certs-617000 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=qemu2  --kubernetes-version=v1.31.2": exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-617000 -n embed-certs-617000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-617000 -n embed-certs-617000: exit status 7 (70.953708ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "embed-certs-617000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/embed-certs/serial/FirstStart (9.94s)
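Every start attempt in this group dies at the same point: qemu-system-aarch64 is launched through /opt/socket_vmnet/bin/socket_vmnet_client, which cannot reach the socket_vmnet daemon at /var/run/socket_vmnet. A minimal host-side check, assuming the standard socket_vmnet layout (the daemon path is inferred from the client path in the log, and the --vmnet-gateway value is only an example):

	# is anything serving the unix socket?
	ls -l /var/run/socket_vmnet
	# if not, start the daemon by hand; vmnet.framework requires root
	sudo /opt/socket_vmnet/bin/socket_vmnet --vmnet-gateway=192.168.105.1 /var/run/socket_vmnet

Until that socket accepts connections, every qemu2 + socket_vmnet test below fails the same way.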

TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (0.04s)

=== RUN   TestStartStop/group/no-preload/serial/UserAppExistsAfterStop
start_stop_delete_test.go:275: failed waiting for 'addon dashboard' pod post-stop-start: client config: context "no-preload-756000" does not exist
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-756000 -n no-preload-756000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-756000 -n no-preload-756000: exit status 7 (35.275375ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "no-preload-756000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (0.04s)

TestStartStop/group/no-preload/serial/AddonExistsAfterStop (0.06s)

=== RUN   TestStartStop/group/no-preload/serial/AddonExistsAfterStop
start_stop_delete_test.go:288: failed waiting for 'addon dashboard' pod post-stop-start: client config: context "no-preload-756000" does not exist
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context no-preload-756000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
start_stop_delete_test.go:291: (dbg) Non-zero exit: kubectl --context no-preload-756000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard: exit status 1 (27.391291ms)

** stderr ** 
	error: context "no-preload-756000" does not exist

** /stderr **
start_stop_delete_test.go:293: failed to get info on kubernetes-dashboard deployments. args "kubectl --context no-preload-756000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard": exit status 1
start_stop_delete_test.go:297: addon did not load correct image. Expected to contain " registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-756000 -n no-preload-756000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-756000 -n no-preload-756000: exit status 7 (33.139084ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "no-preload-756000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/no-preload/serial/AddonExistsAfterStop (0.06s)

TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.08s)

=== RUN   TestStartStop/group/no-preload/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-darwin-arm64 -p no-preload-756000 image list --format=json
start_stop_delete_test.go:304: v1.31.2 images missing (-want +got):
  []string{
- 	"gcr.io/k8s-minikube/storage-provisioner:v5",
- 	"registry.k8s.io/coredns/coredns:v1.11.3",
- 	"registry.k8s.io/etcd:3.5.15-0",
- 	"registry.k8s.io/kube-apiserver:v1.31.2",
- 	"registry.k8s.io/kube-controller-manager:v1.31.2",
- 	"registry.k8s.io/kube-proxy:v1.31.2",
- 	"registry.k8s.io/kube-scheduler:v1.31.2",
- 	"registry.k8s.io/pause:3.10",
  }
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-756000 -n no-preload-756000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-756000 -n no-preload-756000: exit status 7 (33.489666ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "no-preload-756000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.08s)
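The empty image list is a downstream symptom, not an image problem: the VM never booted, so "image list" has nothing to enumerate. On a working profile the wanted set can be checked with something like the following (the repoTags field is an assumption about the JSON schema; the plain table format avoids relying on it):

	out/minikube-darwin-arm64 -p no-preload-756000 image list --format=json \
	  | jq -r '.[].repoTags[]' | sort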

TestStartStop/group/no-preload/serial/Pause (0.12s)

=== RUN   TestStartStop/group/no-preload/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-arm64 pause -p no-preload-756000 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-darwin-arm64 pause -p no-preload-756000 --alsologtostderr -v=1: exit status 83 (51.068125ms)

-- stdout --
	* The control-plane node no-preload-756000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p no-preload-756000"

-- /stdout --
** stderr ** 
	I1204 15:47:00.641332   11972 out.go:345] Setting OutFile to fd 1 ...
	I1204 15:47:00.641517   11972 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1204 15:47:00.641520   11972 out.go:358] Setting ErrFile to fd 2...
	I1204 15:47:00.641522   11972 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1204 15:47:00.641674   11972 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/20045-6982/.minikube/bin
	I1204 15:47:00.641921   11972 out.go:352] Setting JSON to false
	I1204 15:47:00.641930   11972 mustload.go:65] Loading cluster: no-preload-756000
	I1204 15:47:00.642139   11972 config.go:182] Loaded profile config "no-preload-756000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.2
	I1204 15:47:00.646609   11972 out.go:177] * The control-plane node no-preload-756000 host is not running: state=Stopped
	I1204 15:47:00.654840   11972 out.go:177]   To start a cluster, run: "minikube start -p no-preload-756000"

** /stderr **
start_stop_delete_test.go:311: out/minikube-darwin-arm64 pause -p no-preload-756000 --alsologtostderr -v=1 failed: exit status 83
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-756000 -n no-preload-756000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-756000 -n no-preload-756000: exit status 7 (33.453875ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "no-preload-756000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-756000 -n no-preload-756000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-756000 -n no-preload-756000: exit status 7 (33.088417ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "no-preload-756000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/no-preload/serial/Pause (0.12s)
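Exit status 83 here is not a pause bug: the stdout above shows minikube declining to act on a host in state=Stopped and pointing at the recovery command itself. Once the driver is healthy, the intended sequence is simply:

	out/minikube-darwin-arm64 start -p no-preload-756000
	out/minikube-darwin-arm64 pause -p no-preload-756000 --alsologtostderr -v=1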

TestStartStop/group/default-k8s-diff-port/serial/FirstStart (10.09s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-darwin-arm64 start -p default-k8s-diff-port-845000 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=qemu2  --kubernetes-version=v1.31.2
start_stop_delete_test.go:186: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p default-k8s-diff-port-845000 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=qemu2  --kubernetes-version=v1.31.2: exit status 80 (10.0219015s)

-- stdout --
	* [default-k8s-diff-port-845000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	  - MINIKUBE_LOCATION=20045
	  - KUBECONFIG=/Users/jenkins/minikube-integration/20045-6982/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/20045-6982/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "default-k8s-diff-port-845000" primary control-plane node in "default-k8s-diff-port-845000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "default-k8s-diff-port-845000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I1204 15:47:01.108887   11998 out.go:345] Setting OutFile to fd 1 ...
	I1204 15:47:01.109043   11998 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1204 15:47:01.109047   11998 out.go:358] Setting ErrFile to fd 2...
	I1204 15:47:01.109050   11998 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1204 15:47:01.109168   11998 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/20045-6982/.minikube/bin
	I1204 15:47:01.110284   11998 out.go:352] Setting JSON to false
	I1204 15:47:01.128101   11998 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":6391,"bootTime":1733349630,"procs":547,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"15.0.1","kernelVersion":"24.0.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W1204 15:47:01.128204   11998 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I1204 15:47:01.132746   11998 out.go:177] * [default-k8s-diff-port-845000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	I1204 15:47:01.139793   11998 out.go:177]   - MINIKUBE_LOCATION=20045
	I1204 15:47:01.139819   11998 notify.go:220] Checking for updates...
	I1204 15:47:01.147757   11998 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/20045-6982/kubeconfig
	I1204 15:47:01.151639   11998 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I1204 15:47:01.154754   11998 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1204 15:47:01.157787   11998 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/20045-6982/.minikube
	I1204 15:47:01.160793   11998 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1204 15:47:01.169037   11998 config.go:182] Loaded profile config "embed-certs-617000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.2
	I1204 15:47:01.169111   11998 config.go:182] Loaded profile config "multinode-093000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.2
	I1204 15:47:01.169168   11998 driver.go:394] Setting default libvirt URI to qemu:///system
	I1204 15:47:01.171716   11998 out.go:177] * Using the qemu2 driver based on user configuration
	I1204 15:47:01.176754   11998 start.go:297] selected driver: qemu2
	I1204 15:47:01.176762   11998 start.go:901] validating driver "qemu2" against <nil>
	I1204 15:47:01.176772   11998 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1204 15:47:01.179359   11998 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I1204 15:47:01.182761   11998 out.go:177] * Automatically selected the socket_vmnet network
	I1204 15:47:01.186677   11998 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1204 15:47:01.186693   11998 cni.go:84] Creating CNI manager for ""
	I1204 15:47:01.186715   11998 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I1204 15:47:01.186719   11998 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I1204 15:47:01.186762   11998 start.go:340] cluster config:
	{Name:default-k8s-diff-port-845000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:default-k8s-diff-port-845000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8444 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1204 15:47:01.191723   11998 iso.go:125] acquiring lock: {Name:mk798ab7885b095925d55c843b0600e7ea181045 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1204 15:47:01.199792   11998 out.go:177] * Starting "default-k8s-diff-port-845000" primary control-plane node in "default-k8s-diff-port-845000" cluster
	I1204 15:47:01.203709   11998 preload.go:131] Checking if preload exists for k8s version v1.31.2 and runtime docker
	I1204 15:47:01.203730   11998 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/20045-6982/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-docker-overlay2-arm64.tar.lz4
	I1204 15:47:01.203738   11998 cache.go:56] Caching tarball of preloaded images
	I1204 15:47:01.203844   11998 preload.go:172] Found /Users/jenkins/minikube-integration/20045-6982/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I1204 15:47:01.203851   11998 cache.go:59] Finished verifying existence of preloaded tar for v1.31.2 on docker
	I1204 15:47:01.203921   11998 profile.go:143] Saving config to /Users/jenkins/minikube-integration/20045-6982/.minikube/profiles/default-k8s-diff-port-845000/config.json ...
	I1204 15:47:01.203937   11998 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/20045-6982/.minikube/profiles/default-k8s-diff-port-845000/config.json: {Name:mk2a40ea225a30c3a208a478dff224fa0195256d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1204 15:47:01.204415   11998 start.go:360] acquireMachinesLock for default-k8s-diff-port-845000: {Name:mkab5eeb7049d26d12029fc21a1411c94c7c7493 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1204 15:47:01.204485   11998 start.go:364] duration metric: took 48.834µs to acquireMachinesLock for "default-k8s-diff-port-845000"
	I1204 15:47:01.204500   11998 start.go:93] Provisioning new machine with config: &{Name:default-k8s-diff-port-845000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19917/minikube-v1.34.0-1730913550-19917-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:default-k8s-diff-port-845000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8444 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8444 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1204 15:47:01.204531   11998 start.go:125] createHost starting for "" (driver="qemu2")
	I1204 15:47:01.213799   11998 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I1204 15:47:01.231532   11998 start.go:159] libmachine.API.Create for "default-k8s-diff-port-845000" (driver="qemu2")
	I1204 15:47:01.231568   11998 client.go:168] LocalClient.Create starting
	I1204 15:47:01.231643   11998 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/20045-6982/.minikube/certs/ca.pem
	I1204 15:47:01.231684   11998 main.go:141] libmachine: Decoding PEM data...
	I1204 15:47:01.231699   11998 main.go:141] libmachine: Parsing certificate...
	I1204 15:47:01.231737   11998 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/20045-6982/.minikube/certs/cert.pem
	I1204 15:47:01.231771   11998 main.go:141] libmachine: Decoding PEM data...
	I1204 15:47:01.231779   11998 main.go:141] libmachine: Parsing certificate...
	I1204 15:47:01.232235   11998 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/20045-6982/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/20045-6982/.minikube/cache/iso/arm64/minikube-v1.34.0-1730913550-19917-arm64.iso...
	I1204 15:47:01.391326   11998 main.go:141] libmachine: Creating SSH key...
	I1204 15:47:01.462802   11998 main.go:141] libmachine: Creating Disk image...
	I1204 15:47:01.462808   11998 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I1204 15:47:01.462998   11998 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/20045-6982/.minikube/machines/default-k8s-diff-port-845000/disk.qcow2.raw /Users/jenkins/minikube-integration/20045-6982/.minikube/machines/default-k8s-diff-port-845000/disk.qcow2
	I1204 15:47:01.472847   11998 main.go:141] libmachine: STDOUT: 
	I1204 15:47:01.472870   11998 main.go:141] libmachine: STDERR: 
	I1204 15:47:01.472972   11998 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/20045-6982/.minikube/machines/default-k8s-diff-port-845000/disk.qcow2 +20000M
	I1204 15:47:01.481427   11998 main.go:141] libmachine: STDOUT: Image resized.
	
	I1204 15:47:01.481442   11998 main.go:141] libmachine: STDERR: 
	I1204 15:47:01.481457   11998 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/20045-6982/.minikube/machines/default-k8s-diff-port-845000/disk.qcow2.raw and /Users/jenkins/minikube-integration/20045-6982/.minikube/machines/default-k8s-diff-port-845000/disk.qcow2
	I1204 15:47:01.481469   11998 main.go:141] libmachine: Starting QEMU VM...
	I1204 15:47:01.481479   11998 qemu.go:418] Using hvf for hardware acceleration
	I1204 15:47:01.481511   11998 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/20045-6982/.minikube/machines/default-k8s-diff-port-845000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/20045-6982/.minikube/machines/default-k8s-diff-port-845000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/20045-6982/.minikube/machines/default-k8s-diff-port-845000/qemu.pid -device virtio-net-pci,netdev=net0,mac=d2:f6:87:bd:30:33 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/20045-6982/.minikube/machines/default-k8s-diff-port-845000/disk.qcow2
	I1204 15:47:01.483361   11998 main.go:141] libmachine: STDOUT: 
	I1204 15:47:01.483387   11998 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1204 15:47:01.483409   11998 client.go:171] duration metric: took 251.833041ms to LocalClient.Create
	I1204 15:47:03.485616   11998 start.go:128] duration metric: took 2.281033292s to createHost
	I1204 15:47:03.485679   11998 start.go:83] releasing machines lock for "default-k8s-diff-port-845000", held for 2.281161708s
	W1204 15:47:03.485750   11998 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1204 15:47:03.495798   11998 out.go:177] * Deleting "default-k8s-diff-port-845000" in qemu2 ...
	W1204 15:47:03.530503   11998 out.go:270] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1204 15:47:03.530546   11998 start.go:729] Will try again in 5 seconds ...
	I1204 15:47:08.532738   11998 start.go:360] acquireMachinesLock for default-k8s-diff-port-845000: {Name:mkab5eeb7049d26d12029fc21a1411c94c7c7493 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1204 15:47:08.533127   11998 start.go:364] duration metric: took 337.083µs to acquireMachinesLock for "default-k8s-diff-port-845000"
	I1204 15:47:08.533235   11998 start.go:93] Provisioning new machine with config: &{Name:default-k8s-diff-port-845000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19917/minikube-v1.34.0-1730913550-19917-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:default-k8s-diff-port-845000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8444 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8444 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1204 15:47:08.533419   11998 start.go:125] createHost starting for "" (driver="qemu2")
	I1204 15:47:08.538985   11998 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I1204 15:47:08.583155   11998 start.go:159] libmachine.API.Create for "default-k8s-diff-port-845000" (driver="qemu2")
	I1204 15:47:08.583237   11998 client.go:168] LocalClient.Create starting
	I1204 15:47:08.583386   11998 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/20045-6982/.minikube/certs/ca.pem
	I1204 15:47:08.583478   11998 main.go:141] libmachine: Decoding PEM data...
	I1204 15:47:08.583500   11998 main.go:141] libmachine: Parsing certificate...
	I1204 15:47:08.583568   11998 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/20045-6982/.minikube/certs/cert.pem
	I1204 15:47:08.583636   11998 main.go:141] libmachine: Decoding PEM data...
	I1204 15:47:08.583652   11998 main.go:141] libmachine: Parsing certificate...
	I1204 15:47:08.584361   11998 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/20045-6982/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/20045-6982/.minikube/cache/iso/arm64/minikube-v1.34.0-1730913550-19917-arm64.iso...
	I1204 15:47:08.793807   11998 main.go:141] libmachine: Creating SSH key...
	I1204 15:47:09.026777   11998 main.go:141] libmachine: Creating Disk image...
	I1204 15:47:09.026792   11998 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I1204 15:47:09.027009   11998 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/20045-6982/.minikube/machines/default-k8s-diff-port-845000/disk.qcow2.raw /Users/jenkins/minikube-integration/20045-6982/.minikube/machines/default-k8s-diff-port-845000/disk.qcow2
	I1204 15:47:09.036886   11998 main.go:141] libmachine: STDOUT: 
	I1204 15:47:09.036924   11998 main.go:141] libmachine: STDERR: 
	I1204 15:47:09.036997   11998 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/20045-6982/.minikube/machines/default-k8s-diff-port-845000/disk.qcow2 +20000M
	I1204 15:47:09.045801   11998 main.go:141] libmachine: STDOUT: Image resized.
	
	I1204 15:47:09.045818   11998 main.go:141] libmachine: STDERR: 
	I1204 15:47:09.045841   11998 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/20045-6982/.minikube/machines/default-k8s-diff-port-845000/disk.qcow2.raw and /Users/jenkins/minikube-integration/20045-6982/.minikube/machines/default-k8s-diff-port-845000/disk.qcow2
	I1204 15:47:09.045847   11998 main.go:141] libmachine: Starting QEMU VM...
	I1204 15:47:09.045854   11998 qemu.go:418] Using hvf for hardware acceleration
	I1204 15:47:09.045890   11998 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/20045-6982/.minikube/machines/default-k8s-diff-port-845000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/20045-6982/.minikube/machines/default-k8s-diff-port-845000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/20045-6982/.minikube/machines/default-k8s-diff-port-845000/qemu.pid -device virtio-net-pci,netdev=net0,mac=4a:e2:21:5d:20:e5 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/20045-6982/.minikube/machines/default-k8s-diff-port-845000/disk.qcow2
	I1204 15:47:09.047734   11998 main.go:141] libmachine: STDOUT: 
	I1204 15:47:09.047750   11998 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1204 15:47:09.047761   11998 client.go:171] duration metric: took 464.513583ms to LocalClient.Create
	I1204 15:47:11.049989   11998 start.go:128] duration metric: took 2.51651275s to createHost
	I1204 15:47:11.050058   11998 start.go:83] releasing machines lock for "default-k8s-diff-port-845000", held for 2.51688625s
	W1204 15:47:11.050450   11998 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p default-k8s-diff-port-845000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1204 15:47:11.066300   11998 out.go:201] 
	W1204 15:47:11.070344   11998 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W1204 15:47:11.070367   11998 out.go:270] * 
	W1204 15:47:11.072925   11998 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1204 15:47:11.085301   11998 out.go:201] 

** /stderr **
start_stop_delete_test.go:188: failed starting minikube -first start-. args "out/minikube-darwin-arm64 start -p default-k8s-diff-port-845000 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=qemu2  --kubernetes-version=v1.31.2": exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-845000 -n default-k8s-diff-port-845000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-845000 -n default-k8s-diff-port-845000: exit status 7 (68.29625ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-845000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/FirstStart (10.09s)

TestStartStop/group/embed-certs/serial/DeployApp (0.1s)

=== RUN   TestStartStop/group/embed-certs/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context embed-certs-617000 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) Non-zero exit: kubectl --context embed-certs-617000 create -f testdata/busybox.yaml: exit status 1 (29.060583ms)

** stderr ** 
	error: context "embed-certs-617000" does not exist

** /stderr **
start_stop_delete_test.go:196: kubectl --context embed-certs-617000 create -f testdata/busybox.yaml failed: exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-617000 -n embed-certs-617000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-617000 -n embed-certs-617000: exit status 7 (33.416458ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "embed-certs-617000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-617000 -n embed-certs-617000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-617000 -n embed-certs-617000: exit status 7 (33.536417ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "embed-certs-617000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/embed-certs/serial/DeployApp (0.10s)
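All of the kubectl failures in this group reduce to the same missing context: FirstStart never created the cluster, so no kubeconfig entry was written. Plain kubectl confirms this without touching minikube:

	kubectl config get-contexts
	# exits non-zero when the named context is absent
	kubectl config get-contexts embed-certs-617000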

TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (0.12s)

=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-darwin-arm64 addons enable metrics-server -p embed-certs-617000 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context embed-certs-617000 describe deploy/metrics-server -n kube-system
start_stop_delete_test.go:215: (dbg) Non-zero exit: kubectl --context embed-certs-617000 describe deploy/metrics-server -n kube-system: exit status 1 (27.363958ms)

** stderr ** 
	error: context "embed-certs-617000" does not exist

** /stderr **
start_stop_delete_test.go:217: failed to get info on auto-pause deployments. args "kubectl --context embed-certs-617000 describe deploy/metrics-server -n kube-system": exit status 1
start_stop_delete_test.go:221: addon did not load correct image. Expected to contain " fake.domain/registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-617000 -n embed-certs-617000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-617000 -n embed-certs-617000: exit status 7 (33.528875ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "embed-certs-617000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (0.12s)
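
Note: the expectation at start_stop_delete_test.go:221 shows how the two addon flags compose: the --registries value is prepended as a registry prefix to the --images value, so the metrics-server deployment should reference fake.domain/registry.k8s.io/echoserver:1.4. A hedged verification sketch for a healthy cluster, not part of the recorded run; the jsonpath query is an editor addition and <profile> is a placeholder:

    $ out/minikube-darwin-arm64 addons enable metrics-server -p <profile> \
        --images=MetricsServer=registry.k8s.io/echoserver:1.4 \
        --registries=MetricsServer=fake.domain
    $ kubectl --context <profile> -n kube-system get deploy metrics-server \
        -o jsonpath='{.spec.template.spec.containers[0].image}'
    # expected: fake.domain/registry.k8s.io/echoserver:1.4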

TestStartStop/group/default-k8s-diff-port/serial/DeployApp (0.1s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context default-k8s-diff-port-845000 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) Non-zero exit: kubectl --context default-k8s-diff-port-845000 create -f testdata/busybox.yaml: exit status 1 (29.951334ms)

** stderr ** 
	error: context "default-k8s-diff-port-845000" does not exist

** /stderr **
start_stop_delete_test.go:196: kubectl --context default-k8s-diff-port-845000 create -f testdata/busybox.yaml failed: exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-845000 -n default-k8s-diff-port-845000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-845000 -n default-k8s-diff-port-845000: exit status 7 (33.455333ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-845000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-845000 -n default-k8s-diff-port-845000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-845000 -n default-k8s-diff-port-845000: exit status 7 (33.129958ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-845000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/DeployApp (0.10s)

TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (0.12s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-darwin-arm64 addons enable metrics-server -p default-k8s-diff-port-845000 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context default-k8s-diff-port-845000 describe deploy/metrics-server -n kube-system
start_stop_delete_test.go:215: (dbg) Non-zero exit: kubectl --context default-k8s-diff-port-845000 describe deploy/metrics-server -n kube-system: exit status 1 (27.461917ms)

** stderr ** 
	error: context "default-k8s-diff-port-845000" does not exist

** /stderr **
start_stop_delete_test.go:217: failed to get info on auto-pause deployments. args "kubectl --context default-k8s-diff-port-845000 describe deploy/metrics-server -n kube-system": exit status 1
start_stop_delete_test.go:221: addon did not load correct image. Expected to contain " fake.domain/registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-845000 -n default-k8s-diff-port-845000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-845000 -n default-k8s-diff-port-845000: exit status 7 (33.447125ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-845000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (0.12s)

TestStartStop/group/embed-certs/serial/SecondStart (5.27s)

=== RUN   TestStartStop/group/embed-certs/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-darwin-arm64 start -p embed-certs-617000 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=qemu2  --kubernetes-version=v1.31.2
start_stop_delete_test.go:256: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p embed-certs-617000 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=qemu2  --kubernetes-version=v1.31.2: exit status 80 (5.190888917s)

-- stdout --
	* [embed-certs-617000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	  - MINIKUBE_LOCATION=20045
	  - KUBECONFIG=/Users/jenkins/minikube-integration/20045-6982/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/20045-6982/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting "embed-certs-617000" primary control-plane node in "embed-certs-617000" cluster
	* Restarting existing qemu2 VM for "embed-certs-617000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "embed-certs-617000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I1204 15:47:12.029132   12074 out.go:345] Setting OutFile to fd 1 ...
	I1204 15:47:12.029309   12074 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1204 15:47:12.029312   12074 out.go:358] Setting ErrFile to fd 2...
	I1204 15:47:12.029314   12074 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1204 15:47:12.029467   12074 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/20045-6982/.minikube/bin
	I1204 15:47:12.030566   12074 out.go:352] Setting JSON to false
	I1204 15:47:12.048423   12074 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":6402,"bootTime":1733349630,"procs":549,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"15.0.1","kernelVersion":"24.0.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W1204 15:47:12.048522   12074 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I1204 15:47:12.052785   12074 out.go:177] * [embed-certs-617000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	I1204 15:47:12.059678   12074 out.go:177]   - MINIKUBE_LOCATION=20045
	I1204 15:47:12.059709   12074 notify.go:220] Checking for updates...
	I1204 15:47:12.066629   12074 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/20045-6982/kubeconfig
	I1204 15:47:12.069666   12074 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I1204 15:47:12.072682   12074 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1204 15:47:12.075668   12074 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/20045-6982/.minikube
	I1204 15:47:12.078651   12074 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1204 15:47:12.081938   12074 config.go:182] Loaded profile config "embed-certs-617000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.2
	I1204 15:47:12.082200   12074 driver.go:394] Setting default libvirt URI to qemu:///system
	I1204 15:47:12.085644   12074 out.go:177] * Using the qemu2 driver based on existing profile
	I1204 15:47:12.092681   12074 start.go:297] selected driver: qemu2
	I1204 15:47:12.092688   12074 start.go:901] validating driver "qemu2" against &{Name:embed-certs-617000 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19917/minikube-v1.34.0-1730913550-19917-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kub
ernetesVersion:v1.31.2 ClusterName:embed-certs-617000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 Cer
tExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1204 15:47:12.092741   12074 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1204 15:47:12.095353   12074 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1204 15:47:12.095379   12074 cni.go:84] Creating CNI manager for ""
	I1204 15:47:12.095412   12074 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I1204 15:47:12.095442   12074 start.go:340] cluster config:
	{Name:embed-certs-617000 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19917/minikube-v1.34.0-1730913550-19917-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:embed-certs-617000 Namespace:default APIServ
erHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVer
sion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1204 15:47:12.099798   12074 iso.go:125] acquiring lock: {Name:mk798ab7885b095925d55c843b0600e7ea181045 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1204 15:47:12.107696   12074 out.go:177] * Starting "embed-certs-617000" primary control-plane node in "embed-certs-617000" cluster
	I1204 15:47:12.111600   12074 preload.go:131] Checking if preload exists for k8s version v1.31.2 and runtime docker
	I1204 15:47:12.111616   12074 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/20045-6982/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-docker-overlay2-arm64.tar.lz4
	I1204 15:47:12.111625   12074 cache.go:56] Caching tarball of preloaded images
	I1204 15:47:12.111701   12074 preload.go:172] Found /Users/jenkins/minikube-integration/20045-6982/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I1204 15:47:12.111715   12074 cache.go:59] Finished verifying existence of preloaded tar for v1.31.2 on docker
	I1204 15:47:12.111763   12074 profile.go:143] Saving config to /Users/jenkins/minikube-integration/20045-6982/.minikube/profiles/embed-certs-617000/config.json ...
	I1204 15:47:12.112427   12074 start.go:360] acquireMachinesLock for embed-certs-617000: {Name:mkab5eeb7049d26d12029fc21a1411c94c7c7493 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1204 15:47:12.112464   12074 start.go:364] duration metric: took 31.208µs to acquireMachinesLock for "embed-certs-617000"
	I1204 15:47:12.112484   12074 start.go:96] Skipping create...Using existing machine configuration
	I1204 15:47:12.112488   12074 fix.go:54] fixHost starting: 
	I1204 15:47:12.112611   12074 fix.go:112] recreateIfNeeded on embed-certs-617000: state=Stopped err=<nil>
	W1204 15:47:12.112621   12074 fix.go:138] unexpected machine state, will restart: <nil>
	I1204 15:47:12.117707   12074 out.go:177] * Restarting existing qemu2 VM for "embed-certs-617000" ...
	I1204 15:47:12.125671   12074 qemu.go:418] Using hvf for hardware acceleration
	I1204 15:47:12.125722   12074 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/20045-6982/.minikube/machines/embed-certs-617000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/20045-6982/.minikube/machines/embed-certs-617000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/20045-6982/.minikube/machines/embed-certs-617000/qemu.pid -device virtio-net-pci,netdev=net0,mac=9a:a7:a7:e7:45:ee -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/20045-6982/.minikube/machines/embed-certs-617000/disk.qcow2
	I1204 15:47:12.128005   12074 main.go:141] libmachine: STDOUT: 
	I1204 15:47:12.128028   12074 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1204 15:47:12.128060   12074 fix.go:56] duration metric: took 15.569083ms for fixHost
	I1204 15:47:12.128066   12074 start.go:83] releasing machines lock for "embed-certs-617000", held for 15.596916ms
	W1204 15:47:12.128073   12074 start.go:714] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W1204 15:47:12.128129   12074 out.go:270] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1204 15:47:12.128134   12074 start.go:729] Will try again in 5 seconds ...
	I1204 15:47:17.130460   12074 start.go:360] acquireMachinesLock for embed-certs-617000: {Name:mkab5eeb7049d26d12029fc21a1411c94c7c7493 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1204 15:47:17.130886   12074 start.go:364] duration metric: took 326.834µs to acquireMachinesLock for "embed-certs-617000"
	I1204 15:47:17.131024   12074 start.go:96] Skipping create...Using existing machine configuration
	I1204 15:47:17.131042   12074 fix.go:54] fixHost starting: 
	I1204 15:47:17.131759   12074 fix.go:112] recreateIfNeeded on embed-certs-617000: state=Stopped err=<nil>
	W1204 15:47:17.131784   12074 fix.go:138] unexpected machine state, will restart: <nil>
	I1204 15:47:17.140360   12074 out.go:177] * Restarting existing qemu2 VM for "embed-certs-617000" ...
	I1204 15:47:17.143469   12074 qemu.go:418] Using hvf for hardware acceleration
	I1204 15:47:17.143756   12074 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/20045-6982/.minikube/machines/embed-certs-617000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/20045-6982/.minikube/machines/embed-certs-617000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/20045-6982/.minikube/machines/embed-certs-617000/qemu.pid -device virtio-net-pci,netdev=net0,mac=9a:a7:a7:e7:45:ee -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/20045-6982/.minikube/machines/embed-certs-617000/disk.qcow2
	I1204 15:47:17.153453   12074 main.go:141] libmachine: STDOUT: 
	I1204 15:47:17.153506   12074 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1204 15:47:17.153583   12074 fix.go:56] duration metric: took 22.542917ms for fixHost
	I1204 15:47:17.153609   12074 start.go:83] releasing machines lock for "embed-certs-617000", held for 22.701708ms
	W1204 15:47:17.153792   12074 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p embed-certs-617000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p embed-certs-617000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1204 15:47:17.162453   12074 out.go:201] 
	W1204 15:47:17.166516   12074 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W1204 15:47:17.166544   12074 out.go:270] * 
	* 
	W1204 15:47:17.169226   12074 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1204 15:47:17.174956   12074 out.go:201] 

** /stderr **
start_stop_delete_test.go:259: failed to start minikube post-stop. args "out/minikube-darwin-arm64 start -p embed-certs-617000 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=qemu2  --kubernetes-version=v1.31.2": exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-617000 -n embed-certs-617000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-617000 -n embed-certs-617000: exit status 7 (75.306916ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "embed-certs-617000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/embed-certs/serial/SecondStart (5.27s)
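
Note: every start failure in this report reduces to one line: Failed to connect to "/var/run/socket_vmnet": Connection refused. The qemu2 driver launches qemu-system-aarch64 through /opt/socket_vmnet/bin/socket_vmnet_client (see the libmachine command line above), so when the socket_vmnet daemon is not listening on the host, the VM can never boot and every dependent test fails the same way. A host-side triage sketch, added here by the editor; the service name assumes a Homebrew-managed socket_vmnet install:

    # the unix socket should exist while the daemon is up
    $ ls -l /var/run/socket_vmnet
    $ pgrep -l socket_vmnet
    # restart the daemon (root is required for vmnet access)
    $ sudo brew services restart socket_vmnet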

TestStartStop/group/default-k8s-diff-port/serial/SecondStart (5.41s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-darwin-arm64 start -p default-k8s-diff-port-845000 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=qemu2  --kubernetes-version=v1.31.2
start_stop_delete_test.go:256: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p default-k8s-diff-port-845000 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=qemu2  --kubernetes-version=v1.31.2: exit status 80 (5.340725834s)

-- stdout --
	* [default-k8s-diff-port-845000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	  - MINIKUBE_LOCATION=20045
	  - KUBECONFIG=/Users/jenkins/minikube-integration/20045-6982/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/20045-6982/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting "default-k8s-diff-port-845000" primary control-plane node in "default-k8s-diff-port-845000" cluster
	* Restarting existing qemu2 VM for "default-k8s-diff-port-845000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "default-k8s-diff-port-845000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I1204 15:47:14.947032   12100 out.go:345] Setting OutFile to fd 1 ...
	I1204 15:47:14.947175   12100 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1204 15:47:14.947177   12100 out.go:358] Setting ErrFile to fd 2...
	I1204 15:47:14.947180   12100 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1204 15:47:14.947299   12100 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/20045-6982/.minikube/bin
	I1204 15:47:14.948300   12100 out.go:352] Setting JSON to false
	I1204 15:47:14.966709   12100 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":6404,"bootTime":1733349630,"procs":548,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"15.0.1","kernelVersion":"24.0.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W1204 15:47:14.966787   12100 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I1204 15:47:14.971743   12100 out.go:177] * [default-k8s-diff-port-845000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	I1204 15:47:14.978723   12100 out.go:177]   - MINIKUBE_LOCATION=20045
	I1204 15:47:14.978723   12100 notify.go:220] Checking for updates...
	I1204 15:47:14.985649   12100 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/20045-6982/kubeconfig
	I1204 15:47:14.988660   12100 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I1204 15:47:14.991686   12100 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1204 15:47:14.994759   12100 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/20045-6982/.minikube
	I1204 15:47:14.997664   12100 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1204 15:47:15.001017   12100 config.go:182] Loaded profile config "default-k8s-diff-port-845000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.2
	I1204 15:47:15.001295   12100 driver.go:394] Setting default libvirt URI to qemu:///system
	I1204 15:47:15.004770   12100 out.go:177] * Using the qemu2 driver based on existing profile
	I1204 15:47:15.011719   12100 start.go:297] selected driver: qemu2
	I1204 15:47:15.011725   12100 start.go:901] validating driver "qemu2" against &{Name:default-k8s-diff-port-845000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19917/minikube-v1.34.0-1730913550-19917-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 Kubernetes
Config:{KubernetesVersion:v1.31.2 ClusterName:default-k8s-diff-port-845000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8444 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:f
alse ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1204 15:47:15.011782   12100 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1204 15:47:15.014300   12100 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1204 15:47:15.014327   12100 cni.go:84] Creating CNI manager for ""
	I1204 15:47:15.014353   12100 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I1204 15:47:15.014377   12100 start.go:340] cluster config:
	{Name:default-k8s-diff-port-845000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19917/minikube-v1.34.0-1730913550-19917-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:default-k8s-diff-port-845000 Name
space:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8444 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/min
ikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1204 15:47:15.018795   12100 iso.go:125] acquiring lock: {Name:mk798ab7885b095925d55c843b0600e7ea181045 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1204 15:47:15.026648   12100 out.go:177] * Starting "default-k8s-diff-port-845000" primary control-plane node in "default-k8s-diff-port-845000" cluster
	I1204 15:47:15.029717   12100 preload.go:131] Checking if preload exists for k8s version v1.31.2 and runtime docker
	I1204 15:47:15.029731   12100 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/20045-6982/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-docker-overlay2-arm64.tar.lz4
	I1204 15:47:15.029743   12100 cache.go:56] Caching tarball of preloaded images
	I1204 15:47:15.029795   12100 preload.go:172] Found /Users/jenkins/minikube-integration/20045-6982/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I1204 15:47:15.029800   12100 cache.go:59] Finished verifying existence of preloaded tar for v1.31.2 on docker
	I1204 15:47:15.029850   12100 profile.go:143] Saving config to /Users/jenkins/minikube-integration/20045-6982/.minikube/profiles/default-k8s-diff-port-845000/config.json ...
	I1204 15:47:15.030355   12100 start.go:360] acquireMachinesLock for default-k8s-diff-port-845000: {Name:mkab5eeb7049d26d12029fc21a1411c94c7c7493 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1204 15:47:15.030388   12100 start.go:364] duration metric: took 26.292µs to acquireMachinesLock for "default-k8s-diff-port-845000"
	I1204 15:47:15.030398   12100 start.go:96] Skipping create...Using existing machine configuration
	I1204 15:47:15.030401   12100 fix.go:54] fixHost starting: 
	I1204 15:47:15.030528   12100 fix.go:112] recreateIfNeeded on default-k8s-diff-port-845000: state=Stopped err=<nil>
	W1204 15:47:15.030536   12100 fix.go:138] unexpected machine state, will restart: <nil>
	I1204 15:47:15.033725   12100 out.go:177] * Restarting existing qemu2 VM for "default-k8s-diff-port-845000" ...
	I1204 15:47:15.041665   12100 qemu.go:418] Using hvf for hardware acceleration
	I1204 15:47:15.041696   12100 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/20045-6982/.minikube/machines/default-k8s-diff-port-845000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/20045-6982/.minikube/machines/default-k8s-diff-port-845000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/20045-6982/.minikube/machines/default-k8s-diff-port-845000/qemu.pid -device virtio-net-pci,netdev=net0,mac=4a:e2:21:5d:20:e5 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/20045-6982/.minikube/machines/default-k8s-diff-port-845000/disk.qcow2
	I1204 15:47:15.043892   12100 main.go:141] libmachine: STDOUT: 
	I1204 15:47:15.043911   12100 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1204 15:47:15.043943   12100 fix.go:56] duration metric: took 13.538625ms for fixHost
	I1204 15:47:15.043947   12100 start.go:83] releasing machines lock for "default-k8s-diff-port-845000", held for 13.55475ms
	W1204 15:47:15.043954   12100 start.go:714] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W1204 15:47:15.043987   12100 out.go:270] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1204 15:47:15.043991   12100 start.go:729] Will try again in 5 seconds ...
	I1204 15:47:20.046339   12100 start.go:360] acquireMachinesLock for default-k8s-diff-port-845000: {Name:mkab5eeb7049d26d12029fc21a1411c94c7c7493 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1204 15:47:20.176480   12100 start.go:364] duration metric: took 130.001ms to acquireMachinesLock for "default-k8s-diff-port-845000"
	I1204 15:47:20.176576   12100 start.go:96] Skipping create...Using existing machine configuration
	I1204 15:47:20.176595   12100 fix.go:54] fixHost starting: 
	I1204 15:47:20.177374   12100 fix.go:112] recreateIfNeeded on default-k8s-diff-port-845000: state=Stopped err=<nil>
	W1204 15:47:20.177401   12100 fix.go:138] unexpected machine state, will restart: <nil>
	I1204 15:47:20.185901   12100 out.go:177] * Restarting existing qemu2 VM for "default-k8s-diff-port-845000" ...
	I1204 15:47:20.204789   12100 qemu.go:418] Using hvf for hardware acceleration
	I1204 15:47:20.205078   12100 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/20045-6982/.minikube/machines/default-k8s-diff-port-845000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/20045-6982/.minikube/machines/default-k8s-diff-port-845000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/20045-6982/.minikube/machines/default-k8s-diff-port-845000/qemu.pid -device virtio-net-pci,netdev=net0,mac=4a:e2:21:5d:20:e5 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/20045-6982/.minikube/machines/default-k8s-diff-port-845000/disk.qcow2
	I1204 15:47:20.216080   12100 main.go:141] libmachine: STDOUT: 
	I1204 15:47:20.216137   12100 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1204 15:47:20.216221   12100 fix.go:56] duration metric: took 39.6265ms for fixHost
	I1204 15:47:20.216237   12100 start.go:83] releasing machines lock for "default-k8s-diff-port-845000", held for 39.706834ms
	W1204 15:47:20.216417   12100 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p default-k8s-diff-port-845000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p default-k8s-diff-port-845000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1204 15:47:20.222867   12100 out.go:201] 
	W1204 15:47:20.227022   12100 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W1204 15:47:20.227050   12100 out.go:270] * 
	* 
	W1204 15:47:20.228951   12100 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1204 15:47:20.239885   12100 out.go:201] 

** /stderr **
start_stop_delete_test.go:259: failed to start minikube post-stop. args "out/minikube-darwin-arm64 start -p default-k8s-diff-port-845000 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=qemu2  --kubernetes-version=v1.31.2": exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-845000 -n default-k8s-diff-port-845000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-845000 -n default-k8s-diff-port-845000: exit status 7 (65.40575ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-845000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/SecondStart (5.41s)
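
Note: the qemu command line in the log also explains why the failure is immediate: socket_vmnet_client is expected to connect to the unix socket and hand the connection to qemu as an inherited descriptor (-netdev socket,id=net0,fd=3), so a refused connect aborts the wrapper before qemu starts at all. A hypothetical minimal reproduction of just the wrapper step, outside minikube:

    # run a trivial command through the wrapper; with the daemon down this
    # should fail with the same "Failed to connect ... Connection refused"
    $ /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet true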

TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (0.04s)

=== RUN   TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop
start_stop_delete_test.go:275: failed waiting for 'addon dashboard' pod post-stop-start: client config: context "embed-certs-617000" does not exist
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-617000 -n embed-certs-617000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-617000 -n embed-certs-617000: exit status 7 (35.493375ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "embed-certs-617000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (0.04s)

TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (0.06s)

=== RUN   TestStartStop/group/embed-certs/serial/AddonExistsAfterStop
start_stop_delete_test.go:288: failed waiting for 'addon dashboard' pod post-stop-start: client config: context "embed-certs-617000" does not exist
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context embed-certs-617000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
start_stop_delete_test.go:291: (dbg) Non-zero exit: kubectl --context embed-certs-617000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard: exit status 1 (27.371833ms)

** stderr ** 
	error: context "embed-certs-617000" does not exist

** /stderr **
start_stop_delete_test.go:293: failed to get info on kubernetes-dashboard deployments. args "kubectl --context embed-certs-617000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard": exit status 1
start_stop_delete_test.go:297: addon did not load correct image. Expected to contain " registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-617000 -n embed-certs-617000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-617000 -n embed-certs-617000: exit status 7 (33.843917ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "embed-certs-617000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (0.06s)

TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.08s)

=== RUN   TestStartStop/group/embed-certs/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-darwin-arm64 -p embed-certs-617000 image list --format=json
start_stop_delete_test.go:304: v1.31.2 images missing (-want +got):
  []string{
- 	"gcr.io/k8s-minikube/storage-provisioner:v5",
- 	"registry.k8s.io/coredns/coredns:v1.11.3",
- 	"registry.k8s.io/etcd:3.5.15-0",
- 	"registry.k8s.io/kube-apiserver:v1.31.2",
- 	"registry.k8s.io/kube-controller-manager:v1.31.2",
- 	"registry.k8s.io/kube-proxy:v1.31.2",
- 	"registry.k8s.io/kube-scheduler:v1.31.2",
- 	"registry.k8s.io/pause:3.10",
  }
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-617000 -n embed-certs-617000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-617000 -n embed-certs-617000: exit status 7 (33.581542ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "embed-certs-617000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.08s)
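
Note: the one-sided diff above (all want, no got) is another stopped-host symptom: image list has no running runtime to query, so it reports nothing rather than the cached image set. On a healthy v1.31.2 cluster the same command should return every image on the want side; a sketch, with <profile> as a placeholder:

    $ out/minikube-darwin-arm64 -p <profile> image list --format=json
    # expect entries such as registry.k8s.io/kube-apiserver:v1.31.2 and
    # gcr.io/k8s-minikube/storage-provisioner:v5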

TestStartStop/group/embed-certs/serial/Pause (0.11s)

=== RUN   TestStartStop/group/embed-certs/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-arm64 pause -p embed-certs-617000 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-darwin-arm64 pause -p embed-certs-617000 --alsologtostderr -v=1: exit status 83 (44.036ms)

-- stdout --
	* The control-plane node embed-certs-617000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p embed-certs-617000"

-- /stdout --
** stderr ** 
	I1204 15:47:17.471323   12119 out.go:345] Setting OutFile to fd 1 ...
	I1204 15:47:17.471528   12119 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1204 15:47:17.471531   12119 out.go:358] Setting ErrFile to fd 2...
	I1204 15:47:17.471533   12119 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1204 15:47:17.471655   12119 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/20045-6982/.minikube/bin
	I1204 15:47:17.471884   12119 out.go:352] Setting JSON to false
	I1204 15:47:17.471893   12119 mustload.go:65] Loading cluster: embed-certs-617000
	I1204 15:47:17.472110   12119 config.go:182] Loaded profile config "embed-certs-617000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.2
	I1204 15:47:17.475100   12119 out.go:177] * The control-plane node embed-certs-617000 host is not running: state=Stopped
	I1204 15:47:17.479104   12119 out.go:177]   To start a cluster, run: "minikube start -p embed-certs-617000"

** /stderr **
start_stop_delete_test.go:311: out/minikube-darwin-arm64 pause -p embed-certs-617000 --alsologtostderr -v=1 failed: exit status 83
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-617000 -n embed-certs-617000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-617000 -n embed-certs-617000: exit status 7 (33.549958ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "embed-certs-617000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-617000 -n embed-certs-617000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-617000 -n embed-certs-617000: exit status 7 (33.1345ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "embed-certs-617000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/embed-certs/serial/Pause (0.11s)

TestStartStop/group/newest-cni/serial/FirstStart (10.35s)

=== RUN   TestStartStop/group/newest-cni/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-darwin-arm64 start -p newest-cni-033000 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=qemu2  --kubernetes-version=v1.31.2
start_stop_delete_test.go:186: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p newest-cni-033000 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=qemu2  --kubernetes-version=v1.31.2: exit status 80 (10.27551525s)

-- stdout --
	* [newest-cni-033000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	  - MINIKUBE_LOCATION=20045
	  - KUBECONFIG=/Users/jenkins/minikube-integration/20045-6982/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/20045-6982/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "newest-cni-033000" primary control-plane node in "newest-cni-033000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "newest-cni-033000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I1204 15:47:17.808158   12136 out.go:345] Setting OutFile to fd 1 ...
	I1204 15:47:17.808321   12136 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1204 15:47:17.808323   12136 out.go:358] Setting ErrFile to fd 2...
	I1204 15:47:17.808326   12136 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1204 15:47:17.808461   12136 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/20045-6982/.minikube/bin
	I1204 15:47:17.809700   12136 out.go:352] Setting JSON to false
	I1204 15:47:17.828306   12136 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":6407,"bootTime":1733349630,"procs":548,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"15.0.1","kernelVersion":"24.0.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W1204 15:47:17.828387   12136 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I1204 15:47:17.832281   12136 out.go:177] * [newest-cni-033000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	I1204 15:47:17.840148   12136 out.go:177]   - MINIKUBE_LOCATION=20045
	I1204 15:47:17.840208   12136 notify.go:220] Checking for updates...
	I1204 15:47:17.846086   12136 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/20045-6982/kubeconfig
	I1204 15:47:17.849169   12136 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I1204 15:47:17.850620   12136 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1204 15:47:17.854177   12136 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/20045-6982/.minikube
	I1204 15:47:17.857118   12136 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1204 15:47:17.860527   12136 config.go:182] Loaded profile config "default-k8s-diff-port-845000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.2
	I1204 15:47:17.860587   12136 config.go:182] Loaded profile config "multinode-093000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.2
	I1204 15:47:17.860645   12136 driver.go:394] Setting default libvirt URI to qemu:///system
	I1204 15:47:17.864056   12136 out.go:177] * Using the qemu2 driver based on user configuration
	I1204 15:47:17.871175   12136 start.go:297] selected driver: qemu2
	I1204 15:47:17.871183   12136 start.go:901] validating driver "qemu2" against <nil>
	I1204 15:47:17.871190   12136 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1204 15:47:17.873669   12136 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	W1204 15:47:17.873714   12136 out.go:270] ! With --network-plugin=cni, you will need to provide your own CNI. See --cni flag as a user-friendly alternative
	! With --network-plugin=cni, you will need to provide your own CNI. See --cni flag as a user-friendly alternative
	I1204 15:47:17.882133   12136 out.go:177] * Automatically selected the socket_vmnet network
	I1204 15:47:17.885226   12136 start_flags.go:966] Waiting for components: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I1204 15:47:17.885240   12136 cni.go:84] Creating CNI manager for ""
	I1204 15:47:17.885262   12136 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I1204 15:47:17.885273   12136 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I1204 15:47:17.885306   12136 start.go:340] cluster config:
	{Name:newest-cni-033000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:newest-cni-033000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1204 15:47:17.889980   12136 iso.go:125] acquiring lock: {Name:mk798ab7885b095925d55c843b0600e7ea181045 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1204 15:47:17.898129   12136 out.go:177] * Starting "newest-cni-033000" primary control-plane node in "newest-cni-033000" cluster
	I1204 15:47:17.902059   12136 preload.go:131] Checking if preload exists for k8s version v1.31.2 and runtime docker
	I1204 15:47:17.902077   12136 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/20045-6982/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-docker-overlay2-arm64.tar.lz4
	I1204 15:47:17.902086   12136 cache.go:56] Caching tarball of preloaded images
	I1204 15:47:17.902167   12136 preload.go:172] Found /Users/jenkins/minikube-integration/20045-6982/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I1204 15:47:17.902173   12136 cache.go:59] Finished verifying existence of preloaded tar for v1.31.2 on docker
	I1204 15:47:17.902226   12136 profile.go:143] Saving config to /Users/jenkins/minikube-integration/20045-6982/.minikube/profiles/newest-cni-033000/config.json ...
	I1204 15:47:17.902237   12136 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/20045-6982/.minikube/profiles/newest-cni-033000/config.json: {Name:mkda3c59d7c25e14acdec5d9d061f5d2d9ecb0e6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1204 15:47:17.902698   12136 start.go:360] acquireMachinesLock for newest-cni-033000: {Name:mkab5eeb7049d26d12029fc21a1411c94c7c7493 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1204 15:47:17.902749   12136 start.go:364] duration metric: took 45.167µs to acquireMachinesLock for "newest-cni-033000"
	I1204 15:47:17.902762   12136 start.go:93] Provisioning new machine with config: &{Name:newest-cni-033000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19917/minikube-v1.34.0-1730913550-19917-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:newest-cni-033000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1204 15:47:17.902813   12136 start.go:125] createHost starting for "" (driver="qemu2")
	I1204 15:47:17.911076   12136 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I1204 15:47:17.929291   12136 start.go:159] libmachine.API.Create for "newest-cni-033000" (driver="qemu2")
	I1204 15:47:17.929313   12136 client.go:168] LocalClient.Create starting
	I1204 15:47:17.929378   12136 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/20045-6982/.minikube/certs/ca.pem
	I1204 15:47:17.929415   12136 main.go:141] libmachine: Decoding PEM data...
	I1204 15:47:17.929429   12136 main.go:141] libmachine: Parsing certificate...
	I1204 15:47:17.929467   12136 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/20045-6982/.minikube/certs/cert.pem
	I1204 15:47:17.929501   12136 main.go:141] libmachine: Decoding PEM data...
	I1204 15:47:17.929511   12136 main.go:141] libmachine: Parsing certificate...
	I1204 15:47:17.929909   12136 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/20045-6982/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/20045-6982/.minikube/cache/iso/arm64/minikube-v1.34.0-1730913550-19917-arm64.iso...
	I1204 15:47:18.089235   12136 main.go:141] libmachine: Creating SSH key...
	I1204 15:47:18.153429   12136 main.go:141] libmachine: Creating Disk image...
	I1204 15:47:18.153435   12136 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I1204 15:47:18.153634   12136 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/20045-6982/.minikube/machines/newest-cni-033000/disk.qcow2.raw /Users/jenkins/minikube-integration/20045-6982/.minikube/machines/newest-cni-033000/disk.qcow2
	I1204 15:47:18.163526   12136 main.go:141] libmachine: STDOUT: 
	I1204 15:47:18.163543   12136 main.go:141] libmachine: STDERR: 
	I1204 15:47:18.163604   12136 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/20045-6982/.minikube/machines/newest-cni-033000/disk.qcow2 +20000M
	I1204 15:47:18.172031   12136 main.go:141] libmachine: STDOUT: Image resized.
	
	I1204 15:47:18.172050   12136 main.go:141] libmachine: STDERR: 
	I1204 15:47:18.172094   12136 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/20045-6982/.minikube/machines/newest-cni-033000/disk.qcow2.raw and /Users/jenkins/minikube-integration/20045-6982/.minikube/machines/newest-cni-033000/disk.qcow2
	I1204 15:47:18.172099   12136 main.go:141] libmachine: Starting QEMU VM...
	I1204 15:47:18.172113   12136 qemu.go:418] Using hvf for hardware acceleration
	I1204 15:47:18.172150   12136 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/20045-6982/.minikube/machines/newest-cni-033000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/20045-6982/.minikube/machines/newest-cni-033000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/20045-6982/.minikube/machines/newest-cni-033000/qemu.pid -device virtio-net-pci,netdev=net0,mac=5e:c8:36:31:96:aa -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/20045-6982/.minikube/machines/newest-cni-033000/disk.qcow2
	I1204 15:47:18.174034   12136 main.go:141] libmachine: STDOUT: 
	I1204 15:47:18.174050   12136 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1204 15:47:18.174075   12136 client.go:171] duration metric: took 244.753458ms to LocalClient.Create
	I1204 15:47:20.176267   12136 start.go:128] duration metric: took 2.273412875s to createHost
	I1204 15:47:20.176316   12136 start.go:83] releasing machines lock for "newest-cni-033000", held for 2.2735365s
	W1204 15:47:20.176387   12136 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1204 15:47:20.201929   12136 out.go:177] * Deleting "newest-cni-033000" in qemu2 ...
	W1204 15:47:20.255879   12136 out.go:270] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1204 15:47:20.255913   12136 start.go:729] Will try again in 5 seconds ...
	I1204 15:47:25.258328   12136 start.go:360] acquireMachinesLock for newest-cni-033000: {Name:mkab5eeb7049d26d12029fc21a1411c94c7c7493 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1204 15:47:25.259060   12136 start.go:364] duration metric: took 542.25µs to acquireMachinesLock for "newest-cni-033000"
	I1204 15:47:25.259225   12136 start.go:93] Provisioning new machine with config: &{Name:newest-cni-033000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19917/minikube-v1.34.0-1730913550-19917-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:newest-cni-033000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1204 15:47:25.259632   12136 start.go:125] createHost starting for "" (driver="qemu2")
	I1204 15:47:25.265441   12136 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I1204 15:47:25.315666   12136 start.go:159] libmachine.API.Create for "newest-cni-033000" (driver="qemu2")
	I1204 15:47:25.315722   12136 client.go:168] LocalClient.Create starting
	I1204 15:47:25.315872   12136 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/20045-6982/.minikube/certs/ca.pem
	I1204 15:47:25.315952   12136 main.go:141] libmachine: Decoding PEM data...
	I1204 15:47:25.315970   12136 main.go:141] libmachine: Parsing certificate...
	I1204 15:47:25.316046   12136 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/20045-6982/.minikube/certs/cert.pem
	I1204 15:47:25.316107   12136 main.go:141] libmachine: Decoding PEM data...
	I1204 15:47:25.316118   12136 main.go:141] libmachine: Parsing certificate...
	I1204 15:47:25.316755   12136 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/20045-6982/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/20045-6982/.minikube/cache/iso/arm64/minikube-v1.34.0-1730913550-19917-arm64.iso...
	I1204 15:47:25.488030   12136 main.go:141] libmachine: Creating SSH key...
	I1204 15:47:25.982665   12136 main.go:141] libmachine: Creating Disk image...
	I1204 15:47:25.982680   12136 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I1204 15:47:25.982887   12136 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/20045-6982/.minikube/machines/newest-cni-033000/disk.qcow2.raw /Users/jenkins/minikube-integration/20045-6982/.minikube/machines/newest-cni-033000/disk.qcow2
	I1204 15:47:25.993300   12136 main.go:141] libmachine: STDOUT: 
	I1204 15:47:25.993326   12136 main.go:141] libmachine: STDERR: 
	I1204 15:47:25.993408   12136 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/20045-6982/.minikube/machines/newest-cni-033000/disk.qcow2 +20000M
	I1204 15:47:26.002105   12136 main.go:141] libmachine: STDOUT: Image resized.
	
	I1204 15:47:26.002120   12136 main.go:141] libmachine: STDERR: 
	I1204 15:47:26.002133   12136 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/20045-6982/.minikube/machines/newest-cni-033000/disk.qcow2.raw and /Users/jenkins/minikube-integration/20045-6982/.minikube/machines/newest-cni-033000/disk.qcow2
	I1204 15:47:26.002141   12136 main.go:141] libmachine: Starting QEMU VM...
	I1204 15:47:26.002150   12136 qemu.go:418] Using hvf for hardware acceleration
	I1204 15:47:26.002187   12136 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/20045-6982/.minikube/machines/newest-cni-033000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/20045-6982/.minikube/machines/newest-cni-033000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/20045-6982/.minikube/machines/newest-cni-033000/qemu.pid -device virtio-net-pci,netdev=net0,mac=72:9e:6b:33:e5:84 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/20045-6982/.minikube/machines/newest-cni-033000/disk.qcow2
	I1204 15:47:26.004012   12136 main.go:141] libmachine: STDOUT: 
	I1204 15:47:26.004026   12136 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1204 15:47:26.004044   12136 client.go:171] duration metric: took 688.30975ms to LocalClient.Create
	I1204 15:47:28.006422   12136 start.go:128] duration metric: took 2.74666775s to createHost
	I1204 15:47:28.006535   12136 start.go:83] releasing machines lock for "newest-cni-033000", held for 2.747426584s
	W1204 15:47:28.006967   12136 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p newest-cni-033000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p newest-cni-033000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1204 15:47:28.016578   12136 out.go:201] 
	W1204 15:47:28.025788   12136 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W1204 15:47:28.025867   12136 out.go:270] * 
	* 
	W1204 15:47:28.028812   12136 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1204 15:47:28.042625   12136 out.go:201] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:188: failed starting minikube -first start-. args "out/minikube-darwin-arm64 start -p newest-cni-033000 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=qemu2  --kubernetes-version=v1.31.2": exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-033000 -n newest-cni-033000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-033000 -n newest-cni-033000: exit status 7 (70.854791ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "newest-cni-033000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/newest-cni/serial/FirstStart (10.35s)
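Note: every start failure in this run bottoms out in the same driver error: the unix socket /var/run/socket_vmnet refuses connections, meaning the socket_vmnet daemon was not listening on the build agent when socket_vmnet_client tried to attach the VM's network. The standalone Go sketch below reproduces that probe; it is illustrative only, not minikube code, with the socket path taken verbatim from the logs above.

// probe_socket_vmnet.go - dials the same unix socket that socket_vmnet_client
// dials, so "connection refused" here matches the failures in this report.
package main

import (
	"fmt"
	"net"
	"os"
	"time"
)

func main() {
	const sock = "/var/run/socket_vmnet" // path copied from the failing logs
	conn, err := net.DialTimeout("unix", sock, 2*time.Second)
	if err != nil {
		fmt.Fprintf(os.Stderr, "socket_vmnet not reachable at %s: %v\n", sock, err)
		os.Exit(1)
	}
	conn.Close()
	fmt.Println("socket_vmnet is accepting connections")
}

If the probe fails, the daemon usually needs to be (re)started; on a Homebrew install that is typically "sudo brew services start socket_vmnet" per the minikube qemu2 driver documentation. That command assumes the documented Homebrew setup on this agent; the report itself does not verify it.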

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (0.03s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop
start_stop_delete_test.go:275: failed waiting for 'addon dashboard' pod post-stop-start: client config: context "default-k8s-diff-port-845000" does not exist
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-845000 -n default-k8s-diff-port-845000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-845000 -n default-k8s-diff-port-845000: exit status 7 (34.572583ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-845000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (0.03s)
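Note: this is a cascade failure rather than a new fault. Because default-k8s-diff-port-845000 never started, minikube never wrote a context for it into the kubeconfig, so every kubectl call against that context fails with "does not exist". A minimal client-go sketch of the same check, assuming KUBECONFIG points at the CI kubeconfig shown in the logs (illustrative, not the test's code):

package main

import (
	"fmt"
	"os"

	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.LoadFromFile(os.Getenv("KUBECONFIG"))
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	// A context is only written once a cluster start succeeds, which is why
	// the lookup below fails for the never-started profile.
	const name = "default-k8s-diff-port-845000"
	if _, ok := cfg.Contexts[name]; !ok {
		fmt.Printf("context %q does not exist\n", name)
	}
}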

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (0.06s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop
start_stop_delete_test.go:288: failed waiting for 'addon dashboard' pod post-stop-start: client config: context "default-k8s-diff-port-845000" does not exist
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context default-k8s-diff-port-845000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
start_stop_delete_test.go:291: (dbg) Non-zero exit: kubectl --context default-k8s-diff-port-845000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard: exit status 1 (26.835541ms)

                                                
                                                
** stderr ** 
	error: context "default-k8s-diff-port-845000" does not exist

                                                
                                                
** /stderr **
start_stop_delete_test.go:293: failed to get info on kubernetes-dashboard deployments. args "kubectl --context default-k8s-diff-port-845000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard": exit status 1
start_stop_delete_test.go:297: addon did not load correct image. Expected to contain " registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-845000 -n default-k8s-diff-port-845000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-845000 -n default-k8s-diff-port-845000: exit status 7 (33.454916ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-845000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (0.06s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.08s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-darwin-arm64 -p default-k8s-diff-port-845000 image list --format=json
start_stop_delete_test.go:304: v1.31.2 images missing (-want +got):
  []string{
- 	"gcr.io/k8s-minikube/storage-provisioner:v5",
- 	"registry.k8s.io/coredns/coredns:v1.11.3",
- 	"registry.k8s.io/etcd:3.5.15-0",
- 	"registry.k8s.io/kube-apiserver:v1.31.2",
- 	"registry.k8s.io/kube-controller-manager:v1.31.2",
- 	"registry.k8s.io/kube-proxy:v1.31.2",
- 	"registry.k8s.io/kube-scheduler:v1.31.2",
- 	"registry.k8s.io/pause:3.10",
  }
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-845000 -n default-k8s-diff-port-845000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-845000 -n default-k8s-diff-port-845000: exit status 7 (33.011834ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-845000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.08s)
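Note: the diff above reports every expected v1.31.2 image as missing because "image list" has nothing to enumerate while the VM is stopped, so the want list is compared against an empty got. The "(-want +got)" rendering matches the github.com/google/go-cmp diff format; a minimal sketch that produces the same shape of output, using stand-in slices rather than the test's data:

package main

import (
	"fmt"

	"github.com/google/go-cmp/cmp"
)

func main() {
	want := []string{"registry.k8s.io/pause:3.10"} // one expected image, as above
	got := []string{}                              // empty: the VM never booted
	// "-" lines mark entries present in want but absent from got.
	fmt.Println(cmp.Diff(want, got))
}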

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/Pause (0.11s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-arm64 pause -p default-k8s-diff-port-845000 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-darwin-arm64 pause -p default-k8s-diff-port-845000 --alsologtostderr -v=1: exit status 83 (47.405333ms)

                                                
                                                
-- stdout --
	* The control-plane node default-k8s-diff-port-845000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p default-k8s-diff-port-845000"

                                                
                                                
-- /stdout --
** stderr ** 
	I1204 15:47:20.525338   12158 out.go:345] Setting OutFile to fd 1 ...
	I1204 15:47:20.525545   12158 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1204 15:47:20.525548   12158 out.go:358] Setting ErrFile to fd 2...
	I1204 15:47:20.525551   12158 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1204 15:47:20.525682   12158 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/20045-6982/.minikube/bin
	I1204 15:47:20.525918   12158 out.go:352] Setting JSON to false
	I1204 15:47:20.525925   12158 mustload.go:65] Loading cluster: default-k8s-diff-port-845000
	I1204 15:47:20.526149   12158 config.go:182] Loaded profile config "default-k8s-diff-port-845000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.2
	I1204 15:47:20.529516   12158 out.go:177] * The control-plane node default-k8s-diff-port-845000 host is not running: state=Stopped
	I1204 15:47:20.535742   12158 out.go:177]   To start a cluster, run: "minikube start -p default-k8s-diff-port-845000"

                                                
                                                
** /stderr **
start_stop_delete_test.go:311: out/minikube-darwin-arm64 pause -p default-k8s-diff-port-845000 --alsologtostderr -v=1 failed: exit status 83
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-845000 -n default-k8s-diff-port-845000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-845000 -n default-k8s-diff-port-845000: exit status 7 (32.823125ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-845000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-845000 -n default-k8s-diff-port-845000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-845000 -n default-k8s-diff-port-845000: exit status 7 (32.997959ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-845000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/Pause (0.11s)
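Note: the harness distinguishes failure modes by process exit status: 80 (GUEST_PROVISION) for the failed starts, 83 for pause against a stopped host, and 7 from the status probes. A generic Go sketch of how such a harness can capture an exit code, using "false" as a stand-in command (this is not the helpers_test.go source):

package main

import (
	"errors"
	"fmt"
	"os/exec"
)

func main() {
	cmd := exec.Command("false") // stand-in for a minikube invocation
	if err := cmd.Run(); err != nil {
		var ee *exec.ExitError
		if errors.As(err, &ee) {
			fmt.Println("exit status:", ee.ExitCode()) // e.g. 83 for the pause above
		}
	}
}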

                                                
                                    
TestStartStop/group/newest-cni/serial/SecondStart (5.27s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-darwin-arm64 start -p newest-cni-033000 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=qemu2  --kubernetes-version=v1.31.2
start_stop_delete_test.go:256: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p newest-cni-033000 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=qemu2  --kubernetes-version=v1.31.2: exit status 80 (5.195911375s)

                                                
                                                
-- stdout --
	* [newest-cni-033000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	  - MINIKUBE_LOCATION=20045
	  - KUBECONFIG=/Users/jenkins/minikube-integration/20045-6982/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/20045-6982/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting "newest-cni-033000" primary control-plane node in "newest-cni-033000" cluster
	* Restarting existing qemu2 VM for "newest-cni-033000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "newest-cni-033000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1204 15:47:31.794050   12214 out.go:345] Setting OutFile to fd 1 ...
	I1204 15:47:31.794206   12214 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1204 15:47:31.794209   12214 out.go:358] Setting ErrFile to fd 2...
	I1204 15:47:31.794211   12214 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1204 15:47:31.794346   12214 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/20045-6982/.minikube/bin
	I1204 15:47:31.795423   12214 out.go:352] Setting JSON to false
	I1204 15:47:31.813241   12214 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":6421,"bootTime":1733349630,"procs":547,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"15.0.1","kernelVersion":"24.0.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W1204 15:47:31.813327   12214 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I1204 15:47:31.819163   12214 out.go:177] * [newest-cni-033000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	I1204 15:47:31.827146   12214 out.go:177]   - MINIKUBE_LOCATION=20045
	I1204 15:47:31.827206   12214 notify.go:220] Checking for updates...
	I1204 15:47:31.835107   12214 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/20045-6982/kubeconfig
	I1204 15:47:31.839092   12214 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I1204 15:47:31.842125   12214 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1204 15:47:31.845159   12214 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/20045-6982/.minikube
	I1204 15:47:31.848144   12214 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1204 15:47:31.851309   12214 config.go:182] Loaded profile config "newest-cni-033000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.2
	I1204 15:47:31.851575   12214 driver.go:394] Setting default libvirt URI to qemu:///system
	I1204 15:47:31.856149   12214 out.go:177] * Using the qemu2 driver based on existing profile
	I1204 15:47:31.863074   12214 start.go:297] selected driver: qemu2
	I1204 15:47:31.863080   12214 start.go:901] validating driver "qemu2" against &{Name:newest-cni-033000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19917/minikube-v1.34.0-1730913550-19917-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:newest-cni-033000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1204 15:47:31.863124   12214 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1204 15:47:31.865759   12214 start_flags.go:966] Waiting for components: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I1204 15:47:31.865783   12214 cni.go:84] Creating CNI manager for ""
	I1204 15:47:31.865804   12214 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I1204 15:47:31.865828   12214 start.go:340] cluster config:
	{Name:newest-cni-033000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19917/minikube-v1.34.0-1730913550-19917-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:newest-cni-033000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1204 15:47:31.870403   12214 iso.go:125] acquiring lock: {Name:mk798ab7885b095925d55c843b0600e7ea181045 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1204 15:47:31.879071   12214 out.go:177] * Starting "newest-cni-033000" primary control-plane node in "newest-cni-033000" cluster
	I1204 15:47:31.882143   12214 preload.go:131] Checking if preload exists for k8s version v1.31.2 and runtime docker
	I1204 15:47:31.882159   12214 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/20045-6982/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-docker-overlay2-arm64.tar.lz4
	I1204 15:47:31.882173   12214 cache.go:56] Caching tarball of preloaded images
	I1204 15:47:31.882255   12214 preload.go:172] Found /Users/jenkins/minikube-integration/20045-6982/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I1204 15:47:31.882261   12214 cache.go:59] Finished verifying existence of preloaded tar for v1.31.2 on docker
	I1204 15:47:31.882329   12214 profile.go:143] Saving config to /Users/jenkins/minikube-integration/20045-6982/.minikube/profiles/newest-cni-033000/config.json ...
	I1204 15:47:31.882846   12214 start.go:360] acquireMachinesLock for newest-cni-033000: {Name:mkab5eeb7049d26d12029fc21a1411c94c7c7493 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1204 15:47:31.882877   12214 start.go:364] duration metric: took 25.084µs to acquireMachinesLock for "newest-cni-033000"
	I1204 15:47:31.882888   12214 start.go:96] Skipping create...Using existing machine configuration
	I1204 15:47:31.882892   12214 fix.go:54] fixHost starting: 
	I1204 15:47:31.883019   12214 fix.go:112] recreateIfNeeded on newest-cni-033000: state=Stopped err=<nil>
	W1204 15:47:31.883027   12214 fix.go:138] unexpected machine state, will restart: <nil>
	I1204 15:47:31.887099   12214 out.go:177] * Restarting existing qemu2 VM for "newest-cni-033000" ...
	I1204 15:47:31.894016   12214 qemu.go:418] Using hvf for hardware acceleration
	I1204 15:47:31.894046   12214 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/20045-6982/.minikube/machines/newest-cni-033000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/20045-6982/.minikube/machines/newest-cni-033000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/20045-6982/.minikube/machines/newest-cni-033000/qemu.pid -device virtio-net-pci,netdev=net0,mac=72:9e:6b:33:e5:84 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/20045-6982/.minikube/machines/newest-cni-033000/disk.qcow2
	I1204 15:47:31.896208   12214 main.go:141] libmachine: STDOUT: 
	I1204 15:47:31.896225   12214 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1204 15:47:31.896254   12214 fix.go:56] duration metric: took 13.360625ms for fixHost
	I1204 15:47:31.896259   12214 start.go:83] releasing machines lock for "newest-cni-033000", held for 13.376708ms
	W1204 15:47:31.896265   12214 start.go:714] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W1204 15:47:31.896312   12214 out.go:270] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1204 15:47:31.896316   12214 start.go:729] Will try again in 5 seconds ...
	I1204 15:47:36.898672   12214 start.go:360] acquireMachinesLock for newest-cni-033000: {Name:mkab5eeb7049d26d12029fc21a1411c94c7c7493 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1204 15:47:36.899235   12214 start.go:364] duration metric: took 453.333µs to acquireMachinesLock for "newest-cni-033000"
	I1204 15:47:36.899367   12214 start.go:96] Skipping create...Using existing machine configuration
	I1204 15:47:36.899388   12214 fix.go:54] fixHost starting: 
	I1204 15:47:36.900097   12214 fix.go:112] recreateIfNeeded on newest-cni-033000: state=Stopped err=<nil>
	W1204 15:47:36.900123   12214 fix.go:138] unexpected machine state, will restart: <nil>
	I1204 15:47:36.909547   12214 out.go:177] * Restarting existing qemu2 VM for "newest-cni-033000" ...
	I1204 15:47:36.913598   12214 qemu.go:418] Using hvf for hardware acceleration
	I1204 15:47:36.913819   12214 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/20045-6982/.minikube/machines/newest-cni-033000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/20045-6982/.minikube/machines/newest-cni-033000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/20045-6982/.minikube/machines/newest-cni-033000/qemu.pid -device virtio-net-pci,netdev=net0,mac=72:9e:6b:33:e5:84 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/20045-6982/.minikube/machines/newest-cni-033000/disk.qcow2
	I1204 15:47:36.923747   12214 main.go:141] libmachine: STDOUT: 
	I1204 15:47:36.923805   12214 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1204 15:47:36.923888   12214 fix.go:56] duration metric: took 24.503042ms for fixHost
	I1204 15:47:36.923908   12214 start.go:83] releasing machines lock for "newest-cni-033000", held for 24.652ms
	W1204 15:47:36.924044   12214 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p newest-cni-033000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p newest-cni-033000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1204 15:47:36.931550   12214 out.go:201] 
	W1204 15:47:36.934553   12214 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W1204 15:47:36.934667   12214 out.go:270] * 
	* 
	W1204 15:47:36.937419   12214 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1204 15:47:36.944521   12214 out.go:201] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:259: failed to start minikube post-stop. args "out/minikube-darwin-arm64 start -p newest-cni-033000 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=qemu2  --kubernetes-version=v1.31.2": exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-033000 -n newest-cni-033000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-033000 -n newest-cni-033000: exit status 7 (75.847083ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "newest-cni-033000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/newest-cni/serial/SecondStart (5.27s)

                                                
                                    
TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.08s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-darwin-arm64 -p newest-cni-033000 image list --format=json
start_stop_delete_test.go:304: v1.31.2 images missing (-want +got):
  []string{
- 	"gcr.io/k8s-minikube/storage-provisioner:v5",
- 	"registry.k8s.io/coredns/coredns:v1.11.3",
- 	"registry.k8s.io/etcd:3.5.15-0",
- 	"registry.k8s.io/kube-apiserver:v1.31.2",
- 	"registry.k8s.io/kube-controller-manager:v1.31.2",
- 	"registry.k8s.io/kube-proxy:v1.31.2",
- 	"registry.k8s.io/kube-scheduler:v1.31.2",
- 	"registry.k8s.io/pause:3.10",
  }
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-033000 -n newest-cni-033000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-033000 -n newest-cni-033000: exit status 7 (34.534625ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "newest-cni-033000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.08s)

                                                
                                    
TestStartStop/group/newest-cni/serial/Pause (0.12s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-arm64 pause -p newest-cni-033000 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-darwin-arm64 pause -p newest-cni-033000 --alsologtostderr -v=1: exit status 83 (46.806ms)

                                                
                                                
-- stdout --
	* The control-plane node newest-cni-033000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p newest-cni-033000"

                                                
                                                
-- /stdout --
** stderr ** 
	I1204 15:47:37.149300   12228 out.go:345] Setting OutFile to fd 1 ...
	I1204 15:47:37.149488   12228 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1204 15:47:37.149491   12228 out.go:358] Setting ErrFile to fd 2...
	I1204 15:47:37.149494   12228 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1204 15:47:37.149624   12228 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/20045-6982/.minikube/bin
	I1204 15:47:37.149855   12228 out.go:352] Setting JSON to false
	I1204 15:47:37.149862   12228 mustload.go:65] Loading cluster: newest-cni-033000
	I1204 15:47:37.150088   12228 config.go:182] Loaded profile config "newest-cni-033000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.2
	I1204 15:47:37.154641   12228 out.go:177] * The control-plane node newest-cni-033000 host is not running: state=Stopped
	I1204 15:47:37.158610   12228 out.go:177]   To start a cluster, run: "minikube start -p newest-cni-033000"

** /stderr **
start_stop_delete_test.go:311: out/minikube-darwin-arm64 pause -p newest-cni-033000 --alsologtostderr -v=1 failed: exit status 83
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-033000 -n newest-cni-033000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-033000 -n newest-cni-033000: exit status 7 (35.570417ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "newest-cni-033000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-033000 -n newest-cni-033000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-033000 -n newest-cni-033000: exit status 7 (34.656875ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "newest-cni-033000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/newest-cni/serial/Pause (0.12s)
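Across this group, exit status 7 from "minikube status" is tolerated by the helpers ("may be ok" above), while exit status 83 from "minikube pause" against a stopped host fails the test. A minimal sketch, assuming only the commands shown in the log, of how an exit code is recovered via os/exec (illustrative, not the harness's real helper):

package main

import (
	"errors"
	"fmt"
	"os/exec"
)

// exitCode runs cmd and returns its exit status; -1 means the binary
// could not be started at all.
func exitCode(name string, args ...string) int {
	err := exec.Command(name, args...).Run()
	var ee *exec.ExitError
	if errors.As(err, &ee) {
		return ee.ExitCode() // command ran but exited non-zero
	}
	if err != nil {
		return -1
	}
	return 0
}

func main() {
	code := exitCode("out/minikube-darwin-arm64",
		"status", "--format={{.Host}}", "-p", "newest-cni-033000")
	fmt.Println("status exit:", code) // 7 in the log: host stopped, tolerated

	code = exitCode("out/minikube-darwin-arm64",
		"pause", "-p", "newest-cni-033000")
	fmt.Println("pause exit:", code) // 83 in the log: treated as a failure
}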

Test pass (80/258)

Order passed test Duration
4 TestDownloadOnly/v1.20.0/preload-exists 0
8 TestDownloadOnly/v1.20.0/LogsDuration 0.1
9 TestDownloadOnly/v1.20.0/DeleteAll 0.12
10 TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds 0.11
12 TestDownloadOnly/v1.31.2/json-events 9.87
13 TestDownloadOnly/v1.31.2/preload-exists 0
16 TestDownloadOnly/v1.31.2/kubectl 0
17 TestDownloadOnly/v1.31.2/LogsDuration 0.09
18 TestDownloadOnly/v1.31.2/DeleteAll 0.12
19 TestDownloadOnly/v1.31.2/DeleteAlwaysSucceeds 0.11
21 TestBinaryMirror 0.3
25 TestAddons/PreSetup/EnablingAddonOnNonExistingCluster 0.06
26 TestAddons/PreSetup/DisablingAddonOnNonExistingCluster 0.07
35 TestHyperKitDriverInstallOrUpdate 11.33
39 TestErrorSpam/start 0.4
40 TestErrorSpam/status 0.11
41 TestErrorSpam/pause 0.14
42 TestErrorSpam/unpause 0.13
43 TestErrorSpam/stop 11.02
46 TestFunctional/serial/CopySyncFile 0
48 TestFunctional/serial/AuditLog 0
54 TestFunctional/serial/CacheCmd/cache/add_remote 1.96
55 TestFunctional/serial/CacheCmd/cache/add_local 1.06
56 TestFunctional/serial/CacheCmd/cache/CacheDelete 0.04
57 TestFunctional/serial/CacheCmd/cache/list 0.04
60 TestFunctional/serial/CacheCmd/cache/delete 0.08
69 TestFunctional/parallel/ConfigCmd 0.24
71 TestFunctional/parallel/DryRun 0.28
72 TestFunctional/parallel/InternationalLanguage 0.12
78 TestFunctional/parallel/AddonsCmd 0.11
93 TestFunctional/parallel/License 0.28
94 TestFunctional/parallel/Version/short 0.04
101 TestFunctional/parallel/ImageCommands/Setup 1.89
114 TestFunctional/parallel/TunnelCmd/serial/StartTunnel 0.02
122 TestFunctional/parallel/ImageCommands/ImageRemove 0.08
124 TestFunctional/parallel/ImageCommands/ImageSaveDaemon 0.08
125 TestFunctional/parallel/ProfileCmd/profile_not_create 0.1
126 TestFunctional/parallel/ProfileCmd/profile_list 0.09
127 TestFunctional/parallel/ProfileCmd/profile_json_output 0.09
132 TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil 10.04
134 TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel 0.17
135 TestFunctional/delete_echo-server_images 0.07
136 TestFunctional/delete_my-image_image 0.02
137 TestFunctional/delete_minikube_cached_images 0.02
166 TestJSONOutput/start/Audit 0
168 TestJSONOutput/start/parallel/DistinctCurrentSteps 0
169 TestJSONOutput/start/parallel/IncreasingCurrentSteps 0
172 TestJSONOutput/pause/Audit 0
174 TestJSONOutput/pause/parallel/DistinctCurrentSteps 0
175 TestJSONOutput/pause/parallel/IncreasingCurrentSteps 0
178 TestJSONOutput/unpause/Audit 0
180 TestJSONOutput/unpause/parallel/DistinctCurrentSteps 0
181 TestJSONOutput/unpause/parallel/IncreasingCurrentSteps 0
183 TestJSONOutput/stop/Command 3.64
184 TestJSONOutput/stop/Audit 0
186 TestJSONOutput/stop/parallel/DistinctCurrentSteps 0
187 TestJSONOutput/stop/parallel/IncreasingCurrentSteps 0
188 TestErrorJSONOutput 0.21
193 TestMainNoArgs 0.04
240 TestStoppedBinaryUpgrade/Setup 1.08
252 TestNoKubernetes/serial/StartNoK8sWithVersion 0.11
256 TestNoKubernetes/serial/VerifyK8sNotRunning 0.05
257 TestNoKubernetes/serial/ProfileList 31.21
258 TestNoKubernetes/serial/Stop 3.45
260 TestNoKubernetes/serial/VerifyK8sNotRunningSecond 0.05
272 TestStoppedBinaryUpgrade/MinikubeLogs 0.65
275 TestStartStop/group/old-k8s-version/serial/Stop 2.1
276 TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop 0.13
286 TestStartStop/group/no-preload/serial/Stop 1.83
287 TestStartStop/group/no-preload/serial/EnableAddonAfterStop 0.13
299 TestStartStop/group/embed-certs/serial/Stop 3.86
302 TestStartStop/group/default-k8s-diff-port/serial/Stop 3.4
303 TestStartStop/group/embed-certs/serial/EnableAddonAfterStop 0.13
305 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop 0.13
317 TestStartStop/group/newest-cni/serial/DeployApp 0
318 TestStartStop/group/newest-cni/serial/EnableAddonWhileActive 0.06
319 TestStartStop/group/newest-cni/serial/Stop 3.44
320 TestStartStop/group/newest-cni/serial/EnableAddonAfterStop 0.13
322 TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop 0
323 TestStartStop/group/newest-cni/serial/AddonExistsAfterStop 0
TestDownloadOnly/v1.20.0/preload-exists (0s)

=== RUN   TestDownloadOnly/v1.20.0/preload-exists
I1204 15:21:19.595201    7495 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime docker
I1204 15:21:19.595588    7495 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/20045-6982/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4
--- PASS: TestDownloadOnly/v1.20.0/preload-exists (0.00s)

TestDownloadOnly/v1.20.0/LogsDuration (0.1s)

=== RUN   TestDownloadOnly/v1.20.0/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-darwin-arm64 logs -p download-only-447000
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-darwin-arm64 logs -p download-only-447000: exit status 85 (104.018709ms)

-- stdout --
	
	==> Audit <==
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      | End Time |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| start   | -o=json --download-only        | download-only-447000 | jenkins | v1.34.0 | 04 Dec 24 15:20 PST |          |
	|         | -p download-only-447000        |                      |         |         |                     |          |
	|         | --force --alsologtostderr      |                      |         |         |                     |          |
	|         | --kubernetes-version=v1.20.0   |                      |         |         |                     |          |
	|         | --container-runtime=docker     |                      |         |         |                     |          |
	|         | --driver=qemu2                 |                      |         |         |                     |          |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	
	
	==> Last Start <==
	Log file created at: 2024/12/04 15:20:55
	Running on machine: MacOS-M1-Agent-1
	Binary: Built with gc go1.23.2 for darwin/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1204 15:20:55.041173    7496 out.go:345] Setting OutFile to fd 1 ...
	I1204 15:20:55.041346    7496 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1204 15:20:55.041349    7496 out.go:358] Setting ErrFile to fd 2...
	I1204 15:20:55.041352    7496 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1204 15:20:55.041486    7496 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/20045-6982/.minikube/bin
	W1204 15:20:55.041581    7496 root.go:314] Error reading config file at /Users/jenkins/minikube-integration/20045-6982/.minikube/config/config.json: open /Users/jenkins/minikube-integration/20045-6982/.minikube/config/config.json: no such file or directory
	I1204 15:20:55.042951    7496 out.go:352] Setting JSON to true
	I1204 15:20:55.061638    7496 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":4825,"bootTime":1733349630,"procs":550,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"15.0.1","kernelVersion":"24.0.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W1204 15:20:55.061711    7496 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I1204 15:20:55.066885    7496 out.go:97] [download-only-447000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	I1204 15:20:55.067048    7496 notify.go:220] Checking for updates...
	W1204 15:20:55.067058    7496 preload.go:293] Failed to list preload files: open /Users/jenkins/minikube-integration/20045-6982/.minikube/cache/preloaded-tarball: no such file or directory
	I1204 15:20:55.070874    7496 out.go:169] MINIKUBE_LOCATION=20045
	I1204 15:20:55.073889    7496 out.go:169] KUBECONFIG=/Users/jenkins/minikube-integration/20045-6982/kubeconfig
	I1204 15:20:55.078880    7496 out.go:169] MINIKUBE_BIN=out/minikube-darwin-arm64
	I1204 15:20:55.081916    7496 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1204 15:20:55.085896    7496 out.go:169] MINIKUBE_HOME=/Users/jenkins/minikube-integration/20045-6982/.minikube
	W1204 15:20:55.091838    7496 out.go:321] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I1204 15:20:55.092100    7496 driver.go:394] Setting default libvirt URI to qemu:///system
	I1204 15:20:55.095838    7496 out.go:97] Using the qemu2 driver based on user configuration
	I1204 15:20:55.095856    7496 start.go:297] selected driver: qemu2
	I1204 15:20:55.095871    7496 start.go:901] validating driver "qemu2" against <nil>
	I1204 15:20:55.095956    7496 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I1204 15:20:55.098788    7496 out.go:169] Automatically selected the socket_vmnet network
	I1204 15:20:55.105459    7496 start_flags.go:393] Using suggested 4000MB memory alloc based on sys=16384MB, container=0MB
	I1204 15:20:55.105562    7496 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I1204 15:20:55.105598    7496 cni.go:84] Creating CNI manager for ""
	I1204 15:20:55.105641    7496 cni.go:162] CNI unnecessary in this configuration, recommending no CNI
	I1204 15:20:55.105702    7496 start.go:340] cluster config:
	{Name:download-only-447000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:4000 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:download-only-447000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1204 15:20:55.110468    7496 iso.go:125] acquiring lock: {Name:mk798ab7885b095925d55c843b0600e7ea181045 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1204 15:20:55.113811    7496 out.go:97] Downloading VM boot image ...
	I1204 15:20:55.113827    7496 download.go:107] Downloading: https://storage.googleapis.com/minikube-builds/iso/19917/minikube-v1.34.0-1730913550-19917-arm64.iso?checksum=file:https://storage.googleapis.com/minikube-builds/iso/19917/minikube-v1.34.0-1730913550-19917-arm64.iso.sha256 -> /Users/jenkins/minikube-integration/20045-6982/.minikube/cache/iso/arm64/minikube-v1.34.0-1730913550-19917-arm64.iso
	I1204 15:21:04.342769    7496 out.go:97] Starting "download-only-447000" primary control-plane node in "download-only-447000" cluster
	I1204 15:21:04.342789    7496 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime docker
	I1204 15:21:04.404970    7496 preload.go:118] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.0/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4
	I1204 15:21:04.404981    7496 cache.go:56] Caching tarball of preloaded images
	I1204 15:21:04.405207    7496 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime docker
	I1204 15:21:04.411345    7496 out.go:97] Downloading Kubernetes v1.20.0 preload ...
	I1204 15:21:04.411351    7496 preload.go:236] getting checksum for preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4 ...
	I1204 15:21:04.490629    7496 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.0/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4?checksum=md5:1a3e8f9b29e6affec63d76d0d3000942 -> /Users/jenkins/minikube-integration/20045-6982/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4
	I1204 15:21:18.339509    7496 preload.go:247] saving checksum for preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4 ...
	I1204 15:21:18.339702    7496 preload.go:254] verifying checksum of /Users/jenkins/minikube-integration/20045-6982/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4 ...
	I1204 15:21:19.034444    7496 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on docker
	I1204 15:21:19.034655    7496 profile.go:143] Saving config to /Users/jenkins/minikube-integration/20045-6982/.minikube/profiles/download-only-447000/config.json ...
	I1204 15:21:19.034674    7496 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/20045-6982/.minikube/profiles/download-only-447000/config.json: {Name:mk88581c4ef10dbfffc45249df0539ce117cf9df Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1204 15:21:19.034948    7496 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime docker
	I1204 15:21:19.035205    7496 download.go:107] Downloading: https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl.sha256 -> /Users/jenkins/minikube-integration/20045-6982/.minikube/cache/darwin/arm64/v1.20.0/kubectl
	I1204 15:21:19.545370    7496 out.go:193] 
	W1204 15:21:19.550333    7496 out_reason.go:110] Failed to cache kubectl: download failed: https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl.sha256: getter: &{Ctx:context.Background Src:https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl.sha256 Dst:/Users/jenkins/minikube-integration/20045-6982/.minikube/cache/darwin/arm64/v1.20.0/kubectl.download Pwd: Mode:2 Umask:---------- Detectors:[0x109434320 0x109434320 0x109434320 0x109434320 0x109434320 0x109434320 0x109434320] Decompressors:map[bz2:0x14000797ec0 gz:0x14000797ec8 tar:0x14000797e30 tar.bz2:0x14000797e50 tar.gz:0x14000797e60 tar.xz:0x14000797e70 tar.zst:0x14000797eb0 tbz2:0x14000797e50 tgz:0x14000797e60 txz:0x14000797e70 tzst:0x14000797eb0 xz:0x14000797f10 zip:0x14000797f20 zst:0x14000797f18] Getters:map[file:0x14000bca800 http:0x140009080a0 https:0x140009080f0] Dir:false ProgressListener:<nil> Insecure:false DisableSymlinks:false Options:[]}: invalid checksum: Error downloading checksum file: bad response code: 404
	W1204 15:21:19.550360    7496 out_reason.go:110] 
	W1204 15:21:19.558354    7496 out.go:283] ╭───────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                           │
	│    If the above advice does not help, please let us know:                                 │
	│    https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                           │
	│    Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                           │
	╰───────────────────────────────────────────────────────────────────────────────────────────╯
	I1204 15:21:19.562171    7496 out.go:193] 
	
	
	* The control-plane node download-only-447000 host does not exist
	  To start a cluster, run: "minikube start -p download-only-447000"

-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.20.0/LogsDuration (0.10s)
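The "Failed to cache kubectl" error captured above comes from hashicorp/go-getter: minikube appends "?checksum=file:<url>.sha256" to the source, so go-getter fetches and verifies the checksum file, and a 404 on that .sha256 URL aborts the whole download (upstream appears never to have published a darwin/arm64 kubectl for v1.20.0). A minimal sketch of the failing call, with the URL copied from the log and an illustrative destination path:

package main

import (
	"log"

	getter "github.com/hashicorp/go-getter"
)

func main() {
	// The checksum query string tells go-getter to fetch and verify
	// the .sha256 file before accepting the download.
	src := "https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl" +
		"?checksum=file:https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl.sha256"
	dst := "/tmp/kubectl.download" // illustrative destination

	// Fails with "bad response code: 404", as in the log, because the
	// checksum file does not exist for this version/arch combination.
	if err := getter.GetFile(dst, src); err != nil {
		log.Fatal(err)
	}
}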

TestDownloadOnly/v1.20.0/DeleteAll (0.12s)

=== RUN   TestDownloadOnly/v1.20.0/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-darwin-arm64 delete --all
--- PASS: TestDownloadOnly/v1.20.0/DeleteAll (0.12s)

TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds (0.11s)

=== RUN   TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-darwin-arm64 delete -p download-only-447000
--- PASS: TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds (0.11s)

TestDownloadOnly/v1.31.2/json-events (9.87s)

=== RUN   TestDownloadOnly/v1.31.2/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-darwin-arm64 start -o=json --download-only -p download-only-914000 --force --alsologtostderr --kubernetes-version=v1.31.2 --container-runtime=docker --driver=qemu2 
aaa_download_only_test.go:81: (dbg) Done: out/minikube-darwin-arm64 start -o=json --download-only -p download-only-914000 --force --alsologtostderr --kubernetes-version=v1.31.2 --container-runtime=docker --driver=qemu2 : (9.871692583s)
--- PASS: TestDownloadOnly/v1.31.2/json-events (9.87s)

TestDownloadOnly/v1.31.2/preload-exists (0s)

=== RUN   TestDownloadOnly/v1.31.2/preload-exists
I1204 15:21:29.856543    7495 preload.go:131] Checking if preload exists for k8s version v1.31.2 and runtime docker
I1204 15:21:29.856601    7495 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/20045-6982/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-docker-overlay2-arm64.tar.lz4
--- PASS: TestDownloadOnly/v1.31.2/preload-exists (0.00s)

TestDownloadOnly/v1.31.2/kubectl (0s)

=== RUN   TestDownloadOnly/v1.31.2/kubectl
--- PASS: TestDownloadOnly/v1.31.2/kubectl (0.00s)

TestDownloadOnly/v1.31.2/LogsDuration (0.09s)

=== RUN   TestDownloadOnly/v1.31.2/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-darwin-arm64 logs -p download-only-914000
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-darwin-arm64 logs -p download-only-914000: exit status 85 (89.886125ms)

-- stdout --
	
	==> Audit <==
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| start   | -o=json --download-only        | download-only-447000 | jenkins | v1.34.0 | 04 Dec 24 15:20 PST |                     |
	|         | -p download-only-447000        |                      |         |         |                     |                     |
	|         | --force --alsologtostderr      |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0   |                      |         |         |                     |                     |
	|         | --container-runtime=docker     |                      |         |         |                     |                     |
	|         | --driver=qemu2                 |                      |         |         |                     |                     |
	| delete  | --all                          | minikube             | jenkins | v1.34.0 | 04 Dec 24 15:21 PST | 04 Dec 24 15:21 PST |
	| delete  | -p download-only-447000        | download-only-447000 | jenkins | v1.34.0 | 04 Dec 24 15:21 PST | 04 Dec 24 15:21 PST |
	| start   | -o=json --download-only        | download-only-914000 | jenkins | v1.34.0 | 04 Dec 24 15:21 PST |                     |
	|         | -p download-only-914000        |                      |         |         |                     |                     |
	|         | --force --alsologtostderr      |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.2   |                      |         |         |                     |                     |
	|         | --container-runtime=docker     |                      |         |         |                     |                     |
	|         | --driver=qemu2                 |                      |         |         |                     |                     |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/12/04 15:21:20
	Running on machine: MacOS-M1-Agent-1
	Binary: Built with gc go1.23.2 for darwin/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1204 15:21:20.016269    7527 out.go:345] Setting OutFile to fd 1 ...
	I1204 15:21:20.016425    7527 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1204 15:21:20.016428    7527 out.go:358] Setting ErrFile to fd 2...
	I1204 15:21:20.016430    7527 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1204 15:21:20.016563    7527 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/20045-6982/.minikube/bin
	I1204 15:21:20.017647    7527 out.go:352] Setting JSON to true
	I1204 15:21:20.035521    7527 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":4850,"bootTime":1733349630,"procs":548,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"15.0.1","kernelVersion":"24.0.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W1204 15:21:20.035603    7527 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I1204 15:21:20.039982    7527 out.go:97] [download-only-914000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	I1204 15:21:20.040063    7527 notify.go:220] Checking for updates...
	I1204 15:21:20.043934    7527 out.go:169] MINIKUBE_LOCATION=20045
	I1204 15:21:20.046959    7527 out.go:169] KUBECONFIG=/Users/jenkins/minikube-integration/20045-6982/kubeconfig
	I1204 15:21:20.049853    7527 out.go:169] MINIKUBE_BIN=out/minikube-darwin-arm64
	I1204 15:21:20.052949    7527 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1204 15:21:20.056024    7527 out.go:169] MINIKUBE_HOME=/Users/jenkins/minikube-integration/20045-6982/.minikube
	W1204 15:21:20.060930    7527 out.go:321] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I1204 15:21:20.061070    7527 driver.go:394] Setting default libvirt URI to qemu:///system
	I1204 15:21:20.063986    7527 out.go:97] Using the qemu2 driver based on user configuration
	I1204 15:21:20.063995    7527 start.go:297] selected driver: qemu2
	I1204 15:21:20.063999    7527 start.go:901] validating driver "qemu2" against <nil>
	I1204 15:21:20.064048    7527 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I1204 15:21:20.066918    7527 out.go:169] Automatically selected the socket_vmnet network
	I1204 15:21:20.072214    7527 start_flags.go:393] Using suggested 4000MB memory alloc based on sys=16384MB, container=0MB
	I1204 15:21:20.072318    7527 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I1204 15:21:20.072339    7527 cni.go:84] Creating CNI manager for ""
	I1204 15:21:20.072367    7527 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I1204 15:21:20.072377    7527 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I1204 15:21:20.072419    7527 start.go:340] cluster config:
	{Name:download-only-914000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:4000 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:download-only-914000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1204 15:21:20.076817    7527 iso.go:125] acquiring lock: {Name:mk798ab7885b095925d55c843b0600e7ea181045 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1204 15:21:20.079967    7527 out.go:97] Starting "download-only-914000" primary control-plane node in "download-only-914000" cluster
	I1204 15:21:20.079976    7527 preload.go:131] Checking if preload exists for k8s version v1.31.2 and runtime docker
	I1204 15:21:20.160389    7527 preload.go:118] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.31.2/preloaded-images-k8s-v18-v1.31.2-docker-overlay2-arm64.tar.lz4
	I1204 15:21:20.160407    7527 cache.go:56] Caching tarball of preloaded images
	I1204 15:21:20.160670    7527 preload.go:131] Checking if preload exists for k8s version v1.31.2 and runtime docker
	I1204 15:21:20.164736    7527 out.go:97] Downloading Kubernetes v1.31.2 preload ...
	I1204 15:21:20.164745    7527 preload.go:236] getting checksum for preloaded-images-k8s-v18-v1.31.2-docker-overlay2-arm64.tar.lz4 ...
	I1204 15:21:20.247395    7527 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.31.2/preloaded-images-k8s-v18-v1.31.2-docker-overlay2-arm64.tar.lz4?checksum=md5:5f3d7369b12f47138aa2863bb7bda916 -> /Users/jenkins/minikube-integration/20045-6982/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-docker-overlay2-arm64.tar.lz4
	
	
	* The control-plane node download-only-914000 host does not exist
	  To start a cluster, run: "minikube start -p download-only-914000"

-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.31.2/LogsDuration (0.09s)

TestDownloadOnly/v1.31.2/DeleteAll (0.12s)

=== RUN   TestDownloadOnly/v1.31.2/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-darwin-arm64 delete --all
--- PASS: TestDownloadOnly/v1.31.2/DeleteAll (0.12s)

TestDownloadOnly/v1.31.2/DeleteAlwaysSucceeds (0.11s)

=== RUN   TestDownloadOnly/v1.31.2/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-darwin-arm64 delete -p download-only-914000
--- PASS: TestDownloadOnly/v1.31.2/DeleteAlwaysSucceeds (0.11s)

TestBinaryMirror (0.3s)

=== RUN   TestBinaryMirror
I1204 15:21:30.404126    7495 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.31.2/bin/darwin/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.31.2/bin/darwin/arm64/kubectl.sha256
aaa_download_only_test.go:314: (dbg) Run:  out/minikube-darwin-arm64 start --download-only -p binary-mirror-489000 --alsologtostderr --binary-mirror http://127.0.0.1:61364 --driver=qemu2 
helpers_test.go:175: Cleaning up "binary-mirror-489000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p binary-mirror-489000
--- PASS: TestBinaryMirror (0.30s)

TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.06s)

=== RUN   TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/EnablingAddonOnNonExistingCluster

=== CONT  TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
addons_test.go:939: (dbg) Run:  out/minikube-darwin-arm64 addons enable dashboard -p addons-057000
addons_test.go:939: (dbg) Non-zero exit: out/minikube-darwin-arm64 addons enable dashboard -p addons-057000: exit status 85 (61.172375ms)

-- stdout --
	* Profile "addons-057000" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-057000"

-- /stdout --
--- PASS: TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.06s)

TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.07s)

=== RUN   TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/DisablingAddonOnNonExistingCluster

=== CONT  TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
addons_test.go:950: (dbg) Run:  out/minikube-darwin-arm64 addons disable dashboard -p addons-057000
addons_test.go:950: (dbg) Non-zero exit: out/minikube-darwin-arm64 addons disable dashboard -p addons-057000: exit status 85 (65.047584ms)

-- stdout --
	* Profile "addons-057000" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-057000"

-- /stdout --
--- PASS: TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.07s)

TestHyperKitDriverInstallOrUpdate (11.33s)

=== RUN   TestHyperKitDriverInstallOrUpdate
=== PAUSE TestHyperKitDriverInstallOrUpdate

=== CONT  TestHyperKitDriverInstallOrUpdate
I1204 15:32:55.131978    7495 install.go:52] acquiring lock: {Name:mk4023283b30b374c3f04c8805d539e68824c0b8 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
I1204 15:32:55.132138    7495 install.go:117] Validating docker-machine-driver-hyperkit, PATH=/Users/jenkins/workspace/testdata/hyperkit-driver-without-version:/Users/jenkins/workspace/out/:/usr/bin:/bin:/usr/sbin:/sbin:/Users/jenkins/google-cloud-sdk/bin:/usr/local/bin/:/usr/local/go/bin/:/Users/jenkins/go/bin:/opt/homebrew/bin
W1204 15:32:57.120388    7495 install.go:62] docker-machine-driver-hyperkit: exit status 1
W1204 15:32:57.120589    7495 out.go:174] [unset outFile]: * Downloading driver docker-machine-driver-hyperkit:
I1204 15:32:57.120637    7495 download.go:107] Downloading: https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-hyperkit-arm64?checksum=file:https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-hyperkit-arm64.sha256 -> /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestHyperKitDriverInstallOrUpdate2448553324/001/docker-machine-driver-hyperkit
I1204 15:32:57.628772    7495 driver.go:46] failed to download arch specific driver: getter: &{Ctx:context.Background Src:https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-hyperkit-arm64?checksum=file:https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-hyperkit-arm64.sha256 Dst:/var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestHyperKitDriverInstallOrUpdate2448553324/001/docker-machine-driver-hyperkit.download Pwd: Mode:2 Umask:---------- Detectors:[0x1074e56e0 0x1074e56e0 0x1074e56e0 0x1074e56e0 0x1074e56e0 0x1074e56e0 0x1074e56e0] Decompressors:map[bz2:0x14000907140 gz:0x14000907148 tar:0x140009070e0 tar.bz2:0x140009070f0 tar.gz:0x14000907100 tar.xz:0x14000907110 tar.zst:0x14000907130 tbz2:0x140009070f0 tgz:0x14000907100 txz:0x14000907110 tzst:0x14000907130 xz:0x14000907160 zip:0x14000907170 zst:0x14000907168] Getters:map[file:0x1400078f730 http:0x14000c95360 https:0x14000c953b0] Dir:false ProgressListener:<nil> Insecure:false DisableSymlinks:false Options:[]}: invalid checksum: Error downloading checksum file: bad response code: 404. trying to get the common version
I1204 15:32:57.628901    7495 download.go:107] Downloading: https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-hyperkit?checksum=file:https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-hyperkit.sha256 -> /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestHyperKitDriverInstallOrUpdate2448553324/001/docker-machine-driver-hyperkit
--- PASS: TestHyperKitDriverInstallOrUpdate (11.33s)
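The driver.go lines above show a two-step fallback: the arch-suffixed docker-machine-driver-hyperkit-arm64 download fails (404 on its .sha256 checksum file), so the code retries the common, unsuffixed name. A hypothetical sketch of that loop, reusing the same go-getter checksum convention as in the earlier example; the destination path is illustrative:

package main

import (
	"fmt"

	getter "github.com/hashicorp/go-getter"
)

func main() {
	base := "https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-hyperkit"
	// Try the arch-specific asset first, then fall back to the common one.
	for _, src := range []string{base + "-arm64", base} {
		url := src + "?checksum=file:" + src + ".sha256"
		if err := getter.GetFile("/tmp/docker-machine-driver-hyperkit", url); err != nil {
			fmt.Println("download failed:", err) // arm64 attempt 404s, as in the log
			continue
		}
		fmt.Println("downloaded:", src)
		break
	}
}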

TestErrorSpam/start (0.4s)

=== RUN   TestErrorSpam/start
error_spam_test.go:216: Cleaning up 1 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-875000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-875000 start --dry-run
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-875000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-875000 start --dry-run
error_spam_test.go:182: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-875000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-875000 start --dry-run
--- PASS: TestErrorSpam/start (0.40s)

TestErrorSpam/status (0.11s)

=== RUN   TestErrorSpam/status
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-875000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-875000 status
error_spam_test.go:159: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p nospam-875000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-875000 status: exit status 7 (36.416083ms)

-- stdout --
	nospam-875000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
error_spam_test.go:161: "out/minikube-darwin-arm64 -p nospam-875000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-875000 status" failed: exit status 7
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-875000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-875000 status
error_spam_test.go:159: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p nospam-875000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-875000 status: exit status 7 (34.701916ms)

-- stdout --
	nospam-875000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
error_spam_test.go:161: "out/minikube-darwin-arm64 -p nospam-875000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-875000 status" failed: exit status 7
error_spam_test.go:182: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-875000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-875000 status
error_spam_test.go:182: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p nospam-875000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-875000 status: exit status 7 (35.037292ms)

-- stdout --
	nospam-875000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
error_spam_test.go:184: "out/minikube-darwin-arm64 -p nospam-875000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-875000 status" failed: exit status 7
--- PASS: TestErrorSpam/status (0.11s)

TestErrorSpam/pause (0.14s)

=== RUN   TestErrorSpam/pause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-875000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-875000 pause
error_spam_test.go:159: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p nospam-875000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-875000 pause: exit status 83 (43.30625ms)

-- stdout --
	* The control-plane node nospam-875000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p nospam-875000"

-- /stdout --
error_spam_test.go:161: "out/minikube-darwin-arm64 -p nospam-875000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-875000 pause" failed: exit status 83
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-875000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-875000 pause
error_spam_test.go:159: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p nospam-875000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-875000 pause: exit status 83 (51.49875ms)

-- stdout --
	* The control-plane node nospam-875000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p nospam-875000"

-- /stdout --
error_spam_test.go:161: "out/minikube-darwin-arm64 -p nospam-875000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-875000 pause" failed: exit status 83
error_spam_test.go:182: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-875000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-875000 pause
error_spam_test.go:182: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p nospam-875000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-875000 pause: exit status 83 (43.752959ms)

-- stdout --
	* The control-plane node nospam-875000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p nospam-875000"

-- /stdout --
error_spam_test.go:184: "out/minikube-darwin-arm64 -p nospam-875000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-875000 pause" failed: exit status 83
--- PASS: TestErrorSpam/pause (0.14s)

TestErrorSpam/unpause (0.13s)

=== RUN   TestErrorSpam/unpause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-875000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-875000 unpause
error_spam_test.go:159: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p nospam-875000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-875000 unpause: exit status 83 (44.765583ms)

-- stdout --
	* The control-plane node nospam-875000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p nospam-875000"

-- /stdout --
error_spam_test.go:161: "out/minikube-darwin-arm64 -p nospam-875000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-875000 unpause" failed: exit status 83
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-875000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-875000 unpause
error_spam_test.go:159: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p nospam-875000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-875000 unpause: exit status 83 (43.893916ms)

-- stdout --
	* The control-plane node nospam-875000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p nospam-875000"

-- /stdout --
error_spam_test.go:161: "out/minikube-darwin-arm64 -p nospam-875000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-875000 unpause" failed: exit status 83
error_spam_test.go:182: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-875000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-875000 unpause
error_spam_test.go:182: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p nospam-875000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-875000 unpause: exit status 83 (44.926041ms)

-- stdout --
	* The control-plane node nospam-875000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p nospam-875000"

-- /stdout --
error_spam_test.go:184: "out/minikube-darwin-arm64 -p nospam-875000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-875000 unpause" failed: exit status 83
--- PASS: TestErrorSpam/unpause (0.13s)

TestErrorSpam/stop (11.02s)

=== RUN   TestErrorSpam/stop
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-875000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-875000 stop
error_spam_test.go:159: (dbg) Done: out/minikube-darwin-arm64 -p nospam-875000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-875000 stop: (3.644005333s)
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-875000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-875000 stop
error_spam_test.go:159: (dbg) Done: out/minikube-darwin-arm64 -p nospam-875000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-875000 stop: (3.775948917s)
error_spam_test.go:182: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-875000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-875000 stop
error_spam_test.go:182: (dbg) Done: out/minikube-darwin-arm64 -p nospam-875000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-875000 stop: (3.601848292s)
--- PASS: TestErrorSpam/stop (11.02s)

TestFunctional/serial/CopySyncFile (0s)

=== RUN   TestFunctional/serial/CopySyncFile
functional_test.go:1855: local sync path: /Users/jenkins/minikube-integration/20045-6982/.minikube/files/etc/test/nested/copy/7495/hosts
--- PASS: TestFunctional/serial/CopySyncFile (0.00s)

TestFunctional/serial/AuditLog (0s)

=== RUN   TestFunctional/serial/AuditLog
--- PASS: TestFunctional/serial/AuditLog (0.00s)

TestFunctional/serial/CacheCmd/cache/add_remote (1.96s)

=== RUN   TestFunctional/serial/CacheCmd/cache/add_remote
functional_test.go:1049: (dbg) Run:  out/minikube-darwin-arm64 -p functional-014000 cache add registry.k8s.io/pause:3.1
functional_test.go:1049: (dbg) Run:  out/minikube-darwin-arm64 -p functional-014000 cache add registry.k8s.io/pause:3.3
functional_test.go:1049: (dbg) Run:  out/minikube-darwin-arm64 -p functional-014000 cache add registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/add_remote (1.96s)

TestFunctional/serial/CacheCmd/cache/add_local (1.06s)

=== RUN   TestFunctional/serial/CacheCmd/cache/add_local
functional_test.go:1077: (dbg) Run:  docker build -t minikube-local-cache-test:functional-014000 /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalserialCacheCmdcacheadd_local1018637204/001
functional_test.go:1089: (dbg) Run:  out/minikube-darwin-arm64 -p functional-014000 cache add minikube-local-cache-test:functional-014000
functional_test.go:1094: (dbg) Run:  out/minikube-darwin-arm64 -p functional-014000 cache delete minikube-local-cache-test:functional-014000
functional_test.go:1083: (dbg) Run:  docker rmi minikube-local-cache-test:functional-014000
--- PASS: TestFunctional/serial/CacheCmd/cache/add_local (1.06s)

TestFunctional/serial/CacheCmd/cache/CacheDelete (0.04s)

=== RUN   TestFunctional/serial/CacheCmd/cache/CacheDelete
functional_test.go:1102: (dbg) Run:  out/minikube-darwin-arm64 cache delete registry.k8s.io/pause:3.3
--- PASS: TestFunctional/serial/CacheCmd/cache/CacheDelete (0.04s)

TestFunctional/serial/CacheCmd/cache/list (0.04s)

=== RUN   TestFunctional/serial/CacheCmd/cache/list
functional_test.go:1110: (dbg) Run:  out/minikube-darwin-arm64 cache list
--- PASS: TestFunctional/serial/CacheCmd/cache/list (0.04s)

TestFunctional/serial/CacheCmd/cache/delete (0.08s)

=== RUN   TestFunctional/serial/CacheCmd/cache/delete
functional_test.go:1172: (dbg) Run:  out/minikube-darwin-arm64 cache delete registry.k8s.io/pause:3.1
functional_test.go:1172: (dbg) Run:  out/minikube-darwin-arm64 cache delete registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/delete (0.08s)

TestFunctional/parallel/ConfigCmd (0.24s)

=== RUN   TestFunctional/parallel/ConfigCmd
=== PAUSE TestFunctional/parallel/ConfigCmd

=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:1199: (dbg) Run:  out/minikube-darwin-arm64 -p functional-014000 config unset cpus
functional_test.go:1199: (dbg) Run:  out/minikube-darwin-arm64 -p functional-014000 config get cpus
functional_test.go:1199: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-014000 config get cpus: exit status 14 (35.978667ms)

** stderr ** 
	Error: specified key could not be found in config

** /stderr **
functional_test.go:1199: (dbg) Run:  out/minikube-darwin-arm64 -p functional-014000 config set cpus 2
functional_test.go:1199: (dbg) Run:  out/minikube-darwin-arm64 -p functional-014000 config get cpus
functional_test.go:1199: (dbg) Run:  out/minikube-darwin-arm64 -p functional-014000 config unset cpus
functional_test.go:1199: (dbg) Run:  out/minikube-darwin-arm64 -p functional-014000 config get cpus
functional_test.go:1199: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-014000 config get cpus: exit status 14 (35.142458ms)

** stderr ** 
	Error: specified key could not be found in config

** /stderr **
--- PASS: TestFunctional/parallel/ConfigCmd (0.24s)
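[Editor's note] The unset-key behavior asserted above can be reproduced outside the harness. A minimal sketch in Go, assuming the same binary path and profile name as this run (this helper is not part of functional_test.go):

package main

import (
	"errors"
	"fmt"
	"os/exec"
)

func main() {
	// "config get" on a key that was never set should fail with exit
	// status 14, as the ConfigCmd test above asserts.
	cmd := exec.Command("out/minikube-darwin-arm64", "-p", "functional-014000", "config", "get", "cpus")
	out, err := cmd.CombinedOutput()
	var ee *exec.ExitError
	if errors.As(err, &ee) && ee.ExitCode() == 14 {
		fmt.Printf("unset key, as expected: %s", out)
		return
	}
	fmt.Printf("unexpected result (err=%v): %s", err, out)
}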
=== RUN   TestFunctional/parallel/DryRun
=== PAUSE TestFunctional/parallel/DryRun
=== CONT  TestFunctional/parallel/DryRun
functional_test.go:974: (dbg) Run:  out/minikube-darwin-arm64 start -p functional-014000 --dry-run --memory 250MB --alsologtostderr --driver=qemu2 
functional_test.go:974: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p functional-014000 --dry-run --memory 250MB --alsologtostderr --driver=qemu2 : exit status 23 (166.813375ms)

-- stdout --
	* [functional-014000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	  - MINIKUBE_LOCATION=20045
	  - KUBECONFIG=/Users/jenkins/minikube-integration/20045-6982/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/20045-6982/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	
	

-- /stdout --
** stderr ** 
	I1204 15:23:09.637002    8115 out.go:345] Setting OutFile to fd 1 ...
	I1204 15:23:09.637193    8115 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1204 15:23:09.637197    8115 out.go:358] Setting ErrFile to fd 2...
	I1204 15:23:09.637200    8115 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1204 15:23:09.637356    8115 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/20045-6982/.minikube/bin
	I1204 15:23:09.638714    8115 out.go:352] Setting JSON to false
	I1204 15:23:09.658944    8115 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":4959,"bootTime":1733349630,"procs":557,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"15.0.1","kernelVersion":"24.0.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W1204 15:23:09.659016    8115 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I1204 15:23:09.663710    8115 out.go:177] * [functional-014000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	I1204 15:23:09.670721    8115 out.go:177]   - MINIKUBE_LOCATION=20045
	I1204 15:23:09.670747    8115 notify.go:220] Checking for updates...
	I1204 15:23:09.678616    8115 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/20045-6982/kubeconfig
	I1204 15:23:09.681686    8115 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I1204 15:23:09.684595    8115 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1204 15:23:09.687647    8115 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/20045-6982/.minikube
	I1204 15:23:09.690710    8115 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1204 15:23:09.694071    8115 config.go:182] Loaded profile config "functional-014000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.2
	I1204 15:23:09.694382    8115 driver.go:394] Setting default libvirt URI to qemu:///system
	I1204 15:23:09.698628    8115 out.go:177] * Using the qemu2 driver based on existing profile
	I1204 15:23:09.705615    8115 start.go:297] selected driver: qemu2
	I1204 15:23:09.705621    8115 start.go:901] validating driver "qemu2" against &{Name:functional-014000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19917/minikube-v1.34.0-1730913550-19917-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:4000 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kub
ernetesVersion:v1.31.2 ClusterName:functional-014000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8441 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mo
unt9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1204 15:23:09.705672    8115 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1204 15:23:09.712684    8115 out.go:201] 
	W1204 15:23:09.716495    8115 out.go:270] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I1204 15:23:09.720669    8115 out.go:201] 

** /stderr **
functional_test.go:991: (dbg) Run:  out/minikube-darwin-arm64 start -p functional-014000 --dry-run --alsologtostderr -v=1 --driver=qemu2 
--- PASS: TestFunctional/parallel/DryRun (0.28s)
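[Editor's note] Exit status 23 above is the RSRC_INSUFFICIENT_REQ_MEMORY path: even under --dry-run, minikube validates the requested memory before provisioning anything. A sketch of the same probe (binary path assumed from this run):

package main

import (
	"errors"
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	// 250MB is below minikube's usable minimum, so a --dry-run start
	// should still fail validation with exit status 23.
	cmd := exec.Command("out/minikube-darwin-arm64", "start", "-p", "functional-014000",
		"--dry-run", "--memory", "250MB", "--driver=qemu2")
	out, err := cmd.CombinedOutput()
	var ee *exec.ExitError
	if errors.As(err, &ee) && ee.ExitCode() == 23 &&
		strings.Contains(string(out), "RSRC_INSUFFICIENT_REQ_MEMORY") {
		fmt.Println("memory validation rejected the request, as expected")
		return
	}
	fmt.Printf("unexpected result (err=%v):\n%s", err, out)
}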
=== RUN   TestFunctional/parallel/InternationalLanguage
=== PAUSE TestFunctional/parallel/InternationalLanguage
=== CONT  TestFunctional/parallel/InternationalLanguage
functional_test.go:1020: (dbg) Run:  out/minikube-darwin-arm64 start -p functional-014000 --dry-run --memory 250MB --alsologtostderr --driver=qemu2 
functional_test.go:1020: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p functional-014000 --dry-run --memory 250MB --alsologtostderr --driver=qemu2 : exit status 23 (114.851209ms)

-- stdout --
	* [functional-014000] minikube v1.34.0 sur Darwin 15.0.1 (arm64)
	  - MINIKUBE_LOCATION=20045
	  - KUBECONFIG=/Users/jenkins/minikube-integration/20045-6982/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/20045-6982/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Utilisation du pilote qemu2 basé sur le profil existant
	
	

-- /stdout --
** stderr ** 
	I1204 15:23:09.872517    8126 out.go:345] Setting OutFile to fd 1 ...
	I1204 15:23:09.872673    8126 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1204 15:23:09.872676    8126 out.go:358] Setting ErrFile to fd 2...
	I1204 15:23:09.872679    8126 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1204 15:23:09.872802    8126 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/20045-6982/.minikube/bin
	I1204 15:23:09.874306    8126 out.go:352] Setting JSON to false
	I1204 15:23:09.892777    8126 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":4959,"bootTime":1733349630,"procs":557,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"15.0.1","kernelVersion":"24.0.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W1204 15:23:09.892857    8126 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I1204 15:23:09.896723    8126 out.go:177] * [functional-014000] minikube v1.34.0 sur Darwin 15.0.1 (arm64)
	I1204 15:23:09.903641    8126 out.go:177]   - MINIKUBE_LOCATION=20045
	I1204 15:23:09.903740    8126 notify.go:220] Checking for updates...
	I1204 15:23:09.910617    8126 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/20045-6982/kubeconfig
	I1204 15:23:09.913583    8126 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I1204 15:23:09.916632    8126 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1204 15:23:09.919727    8126 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/20045-6982/.minikube
	I1204 15:23:09.922653    8126 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1204 15:23:09.925972    8126 config.go:182] Loaded profile config "functional-014000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.2
	I1204 15:23:09.926233    8126 driver.go:394] Setting default libvirt URI to qemu:///system
	I1204 15:23:09.930635    8126 out.go:177] * Utilisation du pilote qemu2 basé sur le profil existant
	I1204 15:23:09.937658    8126 start.go:297] selected driver: qemu2
	I1204 15:23:09.937667    8126 start.go:901] validating driver "qemu2" against &{Name:functional-014000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19917/minikube-v1.34.0-1730913550-19917-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:4000 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kub
ernetesVersion:v1.31.2 ClusterName:functional-014000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8441 KubernetesVersion:v1.31.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mo
unt9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1204 15:23:09.937732    8126 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1204 15:23:09.944675    8126 out.go:201] 
	W1204 15:23:09.948613    8126 out.go:270] X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	I1204 15:23:09.951627    8126 out.go:201] 

** /stderr **
--- PASS: TestFunctional/parallel/InternationalLanguage (0.12s)
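[Editor's note] The French output above ("Utilisation du pilote qemu2 basé sur le profil existant", i.e. "Using the qemu2 driver based on existing profile") is the same dry-run failure rendered through a translated message catalog. The log does not show how the locale is selected; a sketch assuming it is driven by a locale environment variable such as LC_ALL=fr:

package main

import (
	"fmt"
	"os"
	"os/exec"
	"strings"
)

func main() {
	cmd := exec.Command("out/minikube-darwin-arm64", "start", "-p", "functional-014000",
		"--dry-run", "--memory", "250MB", "--driver=qemu2")
	// Assumption: a French locale in the environment selects the
	// translated messages seen in the stdout above.
	cmd.Env = append(os.Environ(), "LC_ALL=fr")
	out, _ := cmd.CombinedOutput() // a non-zero exit is expected here
	if strings.Contains(string(out), "Utilisation du pilote") {
		fmt.Println("localized output detected")
	}
}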
=== RUN   TestFunctional/parallel/AddonsCmd
=== PAUSE TestFunctional/parallel/AddonsCmd
=== CONT  TestFunctional/parallel/AddonsCmd
functional_test.go:1690: (dbg) Run:  out/minikube-darwin-arm64 -p functional-014000 addons list
functional_test.go:1702: (dbg) Run:  out/minikube-darwin-arm64 -p functional-014000 addons list -o json
--- PASS: TestFunctional/parallel/AddonsCmd (0.11s)
=== RUN   TestFunctional/parallel/License
=== PAUSE TestFunctional/parallel/License
=== CONT  TestFunctional/parallel/License
functional_test.go:2288: (dbg) Run:  out/minikube-darwin-arm64 license
--- PASS: TestFunctional/parallel/License (0.28s)
=== RUN   TestFunctional/parallel/Version/short
=== PAUSE TestFunctional/parallel/Version/short
=== CONT  TestFunctional/parallel/Version/short
functional_test.go:2256: (dbg) Run:  out/minikube-darwin-arm64 -p functional-014000 version --short
--- PASS: TestFunctional/parallel/Version/short (0.04s)
=== RUN   TestFunctional/parallel/ImageCommands/Setup
functional_test.go:342: (dbg) Run:  docker pull kicbase/echo-server:1.0
functional_test.go:342: (dbg) Done: docker pull kicbase/echo-server:1.0: (1.862397625s)
functional_test.go:347: (dbg) Run:  docker tag kicbase/echo-server:1.0 kicbase/echo-server:functional-014000
--- PASS: TestFunctional/parallel/ImageCommands/Setup (1.89s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/StartTunnel
functional_test_tunnel_test.go:129: (dbg) daemon: [out/minikube-darwin-arm64 -p functional-014000 tunnel --alsologtostderr]
--- PASS: TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.02s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageRemove
functional_test.go:392: (dbg) Run:  out/minikube-darwin-arm64 -p functional-014000 image rm kicbase/echo-server:functional-014000 --alsologtostderr
functional_test.go:451: (dbg) Run:  out/minikube-darwin-arm64 -p functional-014000 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageRemove (0.08s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveDaemon
functional_test.go:419: (dbg) Run:  docker rmi kicbase/echo-server:functional-014000
functional_test.go:424: (dbg) Run:  out/minikube-darwin-arm64 -p functional-014000 image save --daemon kicbase/echo-server:functional-014000 --alsologtostderr
I1204 15:22:33.367517    7495 retry.go:31] will retry after 6.268580036s: Temporary Error: Get "http:": http: no Host in request URL
functional_test.go:432: (dbg) Run:  docker image inspect kicbase/echo-server:functional-014000
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.08s)
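[Editor's note] The "retry.go:31] will retry after 6.268580036s" line above comes from minikube's polling helper, which retries transient failures with growing delays. A simplified stand-in for that pattern (the real implementation lives in the minikube source tree; attempt counts and delays here are illustrative):

package main

import (
	"errors"
	"fmt"
	"time"
)

// retry calls fn until it succeeds or attempts run out, sleeping between
// tries and doubling the delay, mirroring the shape of the log line above.
func retry(attempts int, delay time.Duration, fn func() error) error {
	var err error
	for i := 0; i < attempts; i++ {
		if err = fn(); err == nil {
			return nil
		}
		fmt.Printf("will retry after %s: %v\n", delay, err)
		time.Sleep(delay)
		delay *= 2 // back off
	}
	return err
}

func main() {
	err := retry(3, time.Second, func() error { return errors.New("Temporary Error") })
	fmt.Println("final:", err)
}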
=== RUN   TestFunctional/parallel/ProfileCmd/profile_not_create
functional_test.go:1270: (dbg) Run:  out/minikube-darwin-arm64 profile lis
functional_test.go:1275: (dbg) Run:  out/minikube-darwin-arm64 profile list --output json
--- PASS: TestFunctional/parallel/ProfileCmd/profile_not_create (0.10s)
=== RUN   TestFunctional/parallel/ProfileCmd/profile_list
functional_test.go:1310: (dbg) Run:  out/minikube-darwin-arm64 profile list
functional_test.go:1315: Took "54.672708ms" to run "out/minikube-darwin-arm64 profile list"
functional_test.go:1324: (dbg) Run:  out/minikube-darwin-arm64 profile list -l
functional_test.go:1329: Took "39.07275ms" to run "out/minikube-darwin-arm64 profile list -l"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_list (0.09s)
=== RUN   TestFunctional/parallel/ProfileCmd/profile_json_output
functional_test.go:1361: (dbg) Run:  out/minikube-darwin-arm64 profile list -o json
functional_test.go:1366: Took "51.967167ms" to run "out/minikube-darwin-arm64 profile list -o json"
functional_test.go:1374: (dbg) Run:  out/minikube-darwin-arm64 profile list -o json --light
functional_test.go:1379: Took "39.809833ms" to run "out/minikube-darwin-arm64 profile list -o json --light"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_json_output (0.09s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil
functional_test_tunnel_test.go:351: (dbg) Run:  dscacheutil -q host -a name nginx-svc.default.svc.cluster.local.
functional_test_tunnel_test.go:351: (dbg) Done: dscacheutil -q host -a name nginx-svc.default.svc.cluster.local.: (10.0143565s)
functional_test_tunnel_test.go:359: DNS resolution by dscacheutil for nginx-svc.default.svc.cluster.local. is working!
--- PASS: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (10.04s)
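[Editor's note] dscacheutil queries macOS's Directory Service cache, so this test exercises the tunnel's DNS injection on the host rather than Go's own resolver. A sketch of the same lookup (the "key: value" output format is an assumption about dscacheutil, not shown in full above):

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	// Same query the tunnel test issues; macOS only.
	name := "nginx-svc.default.svc.cluster.local."
	out, err := exec.Command("dscacheutil", "-q", "host", "-a", "name", name).Output()
	if err != nil {
		fmt.Println("lookup failed:", err)
		return
	}
	// An ip_address line in the reply means resolution through the
	// tunnel is working.
	if strings.Contains(string(out), "ip_address:") {
		fmt.Println("DNS resolution for", name, "is working")
	}
}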
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel
functional_test_tunnel_test.go:434: (dbg) stopping [out/minikube-darwin-arm64 -p functional-014000 tunnel --alsologtostderr] ...
--- PASS: TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.17s)
=== RUN   TestFunctional/delete_echo-server_images
functional_test.go:190: (dbg) Run:  docker rmi -f kicbase/echo-server:1.0
functional_test.go:190: (dbg) Run:  docker rmi -f kicbase/echo-server:functional-014000
--- PASS: TestFunctional/delete_echo-server_images (0.07s)
=== RUN   TestFunctional/delete_my-image_image
functional_test.go:198: (dbg) Run:  docker rmi -f localhost/my-image:functional-014000
--- PASS: TestFunctional/delete_my-image_image (0.02s)
=== RUN   TestFunctional/delete_minikube_cached_images
functional_test.go:206: (dbg) Run:  docker rmi -f minikube-local-cache-test:functional-014000
--- PASS: TestFunctional/delete_minikube_cached_images (0.02s)
=== RUN   TestJSONOutput/start/Audit
--- PASS: TestJSONOutput/start/Audit (0.00s)
=== RUN   TestJSONOutput/start/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/start/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/start/parallel/DistinctCurrentSteps (0.00s)
=== RUN   TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/start/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/start/parallel/IncreasingCurrentSteps (0.00s)
=== RUN   TestJSONOutput/pause/Audit
--- PASS: TestJSONOutput/pause/Audit (0.00s)
=== RUN   TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/pause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/DistinctCurrentSteps (0.00s)
=== RUN   TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/pause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0.00s)
=== RUN   TestJSONOutput/unpause/Audit
--- PASS: TestJSONOutput/unpause/Audit (0.00s)
=== RUN   TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/unpause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0.00s)
=== RUN   TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0.00s)
=== RUN   TestJSONOutput/stop/Command
json_output_test.go:63: (dbg) Run:  out/minikube-darwin-arm64 stop -p json-output-118000 --output=json --user=testUser
json_output_test.go:63: (dbg) Done: out/minikube-darwin-arm64 stop -p json-output-118000 --output=json --user=testUser: (3.636951542s)
--- PASS: TestJSONOutput/stop/Command (3.64s)
=== RUN   TestJSONOutput/stop/Audit
--- PASS: TestJSONOutput/stop/Audit (0.00s)
=== RUN   TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/stop/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/DistinctCurrentSteps (0.00s)
=== RUN   TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/stop/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0.00s)
=== RUN   TestErrorJSONOutput
json_output_test.go:160: (dbg) Run:  out/minikube-darwin-arm64 start -p json-output-error-554000 --memory=2200 --output=json --wait=true --driver=fail
json_output_test.go:160: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p json-output-error-554000 --memory=2200 --output=json --wait=true --driver=fail: exit status 56 (99.9865ms)

-- stdout --
	{"specversion":"1.0","id":"a845eef5-4455-452b-bfc5-f0a1b11a5dfe","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[json-output-error-554000] minikube v1.34.0 on Darwin 15.0.1 (arm64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"9d932ab1-273d-4eb1-b184-06f0ce1bb5bb","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=20045"}}
	{"specversion":"1.0","id":"e0fff823-07fe-4cd4-b218-632462baf710","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/Users/jenkins/minikube-integration/20045-6982/kubeconfig"}}
	{"specversion":"1.0","id":"4d771222-31d6-4ab6-bfb4-3cba212ae7fd","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-darwin-arm64"}}
	{"specversion":"1.0","id":"0e684abe-86f8-4b98-a345-3223f74c85be","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"e7c2b99b-0705-4cb7-a7da-756bcece6c8d","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/Users/jenkins/minikube-integration/20045-6982/.minikube"}}
	{"specversion":"1.0","id":"db1c627a-d2c9-4750-beae-cfb8ba29b0e9","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"83ac67c7-0328-401b-a4cf-4bea28f52563","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"56","issues":"","message":"The driver 'fail' is not supported on darwin/arm64","name":"DRV_UNSUPPORTED_OS","url":""}}

-- /stdout --
helpers_test.go:175: Cleaning up "json-output-error-554000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p json-output-error-554000
--- PASS: TestErrorJSONOutput (0.21s)
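[Editor's note] Each line of the --output=json stdout above is a CloudEvents-style envelope. A sketch that decodes such lines, modeling only the fields visible in this report (the sample line is abbreviated from the stdout above):

package main

import (
	"bufio"
	"encoding/json"
	"fmt"
	"strings"
)

// event mirrors the envelopes in the stdout above; only fields visible
// in this report are modeled.
type event struct {
	SpecVersion string            `json:"specversion"`
	ID          string            `json:"id"`
	Source      string            `json:"source"`
	Type        string            `json:"type"`
	Data        map[string]string `json:"data"`
}

func main() {
	logLine := `{"specversion":"1.0","id":"...","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","data":{"exitcode":"56","message":"The driver 'fail' is not supported on darwin/arm64","name":"DRV_UNSUPPORTED_OS"}}`
	sc := bufio.NewScanner(strings.NewReader(logLine))
	for sc.Scan() {
		var e event
		if err := json.Unmarshal(sc.Bytes(), &e); err != nil {
			continue // skip non-JSON lines
		}
		if strings.HasSuffix(e.Type, ".error") {
			fmt.Printf("error event %s: %s\n", e.Data["name"], e.Data["message"])
		}
	}
}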
=== RUN   TestMainNoArgs
main_test.go:68: (dbg) Run:  out/minikube-darwin-arm64
--- PASS: TestMainNoArgs (0.04s)
=== RUN   TestStoppedBinaryUpgrade/Setup
--- PASS: TestStoppedBinaryUpgrade/Setup (1.08s)
=== RUN   TestNoKubernetes/serial/StartNoK8sWithVersion
no_kubernetes_test.go:83: (dbg) Run:  out/minikube-darwin-arm64 start -p NoKubernetes-750000 --no-kubernetes --kubernetes-version=1.20 --driver=qemu2 
no_kubernetes_test.go:83: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p NoKubernetes-750000 --no-kubernetes --kubernetes-version=1.20 --driver=qemu2 : exit status 14 (107.779459ms)

-- stdout --
	* [NoKubernetes-750000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	  - MINIKUBE_LOCATION=20045
	  - KUBECONFIG=/Users/jenkins/minikube-integration/20045-6982/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/20045-6982/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

-- /stdout --
** stderr ** 
	X Exiting due to MK_USAGE: cannot specify --kubernetes-version with --no-kubernetes,
	to unset a global config run:
	
	$ minikube config unset kubernetes-version

** /stderr **
--- PASS: TestNoKubernetes/serial/StartNoK8sWithVersion (0.11s)
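[Editor's note] Exit status 14 is minikube's usage-error code: --no-kubernetes and --kubernetes-version are mutually exclusive, and the MK_USAGE check fires before any driver work starts. A sketch of the same probe (binary path and profile name taken from this run):

package main

import (
	"errors"
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	cmd := exec.Command("out/minikube-darwin-arm64", "start", "-p", "NoKubernetes-750000",
		"--no-kubernetes", "--kubernetes-version=1.20", "--driver=qemu2")
	out, err := cmd.CombinedOutput()
	var ee *exec.ExitError
	// MK_USAGE failures exit 14 and never reach provisioning.
	if errors.As(err, &ee) && ee.ExitCode() == 14 &&
		strings.Contains(string(out), "MK_USAGE") {
		fmt.Println("flag conflict rejected, as expected")
	}
}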
=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunning
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-darwin-arm64 ssh -p NoKubernetes-750000 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-darwin-arm64 ssh -p NoKubernetes-750000 "sudo systemctl is-active --quiet service kubelet": exit status 83 (47.640292ms)

-- stdout --
	* The control-plane node NoKubernetes-750000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p NoKubernetes-750000"

-- /stdout --
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunning (0.05s)
=== RUN   TestNoKubernetes/serial/ProfileList
no_kubernetes_test.go:169: (dbg) Run:  out/minikube-darwin-arm64 profile list
no_kubernetes_test.go:169: (dbg) Done: out/minikube-darwin-arm64 profile list: (15.615399125s)
no_kubernetes_test.go:179: (dbg) Run:  out/minikube-darwin-arm64 profile list --output=json
no_kubernetes_test.go:179: (dbg) Done: out/minikube-darwin-arm64 profile list --output=json: (15.598238583s)
--- PASS: TestNoKubernetes/serial/ProfileList (31.21s)
=== RUN   TestNoKubernetes/serial/Stop
no_kubernetes_test.go:158: (dbg) Run:  out/minikube-darwin-arm64 stop -p NoKubernetes-750000
no_kubernetes_test.go:158: (dbg) Done: out/minikube-darwin-arm64 stop -p NoKubernetes-750000: (3.448213834s)
--- PASS: TestNoKubernetes/serial/Stop (3.45s)
=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunningSecond
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-darwin-arm64 ssh -p NoKubernetes-750000 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-darwin-arm64 ssh -p NoKubernetes-750000 "sudo systemctl is-active --quiet service kubelet": exit status 83 (47.722542ms)

-- stdout --
	* The control-plane node NoKubernetes-750000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p NoKubernetes-750000"

-- /stdout --
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.05s)
=== RUN   TestStoppedBinaryUpgrade/MinikubeLogs
version_upgrade_test.go:206: (dbg) Run:  out/minikube-darwin-arm64 logs -p stopped-upgrade-377000
--- PASS: TestStoppedBinaryUpgrade/MinikubeLogs (0.65s)
=== RUN   TestStartStop/group/old-k8s-version/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-darwin-arm64 stop -p old-k8s-version-105000 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-darwin-arm64 stop -p old-k8s-version-105000 --alsologtostderr -v=3: (2.096282167s)
--- PASS: TestStartStop/group/old-k8s-version/serial/Stop (2.10s)
=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-105000 -n old-k8s-version-105000
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-105000 -n old-k8s-version-105000: exit status 7 (56.324292ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-darwin-arm64 addons enable dashboard -p old-k8s-version-105000 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.13s)
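[Editor's note] status --format={{.Host}} renders the status through a Go template, and a stopped host is reported via exit status 7 rather than 0, which is why the test logs "status error: exit status 7 (may be ok)". A sketch of that handling (binary path and profile name taken from this run):

package main

import (
	"errors"
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	cmd := exec.Command("out/minikube-darwin-arm64", "status",
		"--format={{.Host}}", "-p", "old-k8s-version-105000")
	out, err := cmd.CombinedOutput()
	host := strings.TrimSpace(string(out))
	var ee *exec.ExitError
	if errors.As(err, &ee) && ee.ExitCode() == 7 {
		// Exit 7 encodes a stopped host; the test treats it as non-fatal.
		fmt.Println("host state:", host) // "Stopped"
		return
	}
	fmt.Println("host state:", host, "err:", err)
}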
=== RUN   TestStartStop/group/no-preload/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-darwin-arm64 stop -p no-preload-756000 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-darwin-arm64 stop -p no-preload-756000 --alsologtostderr -v=3: (1.833076583s)
--- PASS: TestStartStop/group/no-preload/serial/Stop (1.83s)
=== RUN   TestStartStop/group/no-preload/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-756000 -n no-preload-756000
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-756000 -n no-preload-756000: exit status 7 (60.692084ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-darwin-arm64 addons enable dashboard -p no-preload-756000 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.13s)
=== RUN   TestStartStop/group/embed-certs/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-darwin-arm64 stop -p embed-certs-617000 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-darwin-arm64 stop -p embed-certs-617000 --alsologtostderr -v=3: (3.864238708s)
--- PASS: TestStartStop/group/embed-certs/serial/Stop (3.86s)
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-darwin-arm64 stop -p default-k8s-diff-port-845000 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-darwin-arm64 stop -p default-k8s-diff-port-845000 --alsologtostderr -v=3: (3.3964905s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Stop (3.40s)
=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-617000 -n embed-certs-617000
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-617000 -n embed-certs-617000: exit status 7 (58.729667ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-darwin-arm64 addons enable dashboard -p embed-certs-617000 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.13s)
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-845000 -n default-k8s-diff-port-845000
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-845000 -n default-k8s-diff-port-845000: exit status 7 (63.304125ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-darwin-arm64 addons enable dashboard -p default-k8s-diff-port-845000 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.13s)
=== RUN   TestStartStop/group/newest-cni/serial/DeployApp
--- PASS: TestStartStop/group/newest-cni/serial/DeployApp (0.00s)
=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-darwin-arm64 addons enable metrics-server -p newest-cni-033000 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:211: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (0.06s)
=== RUN   TestStartStop/group/newest-cni/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-darwin-arm64 stop -p newest-cni-033000 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-darwin-arm64 stop -p newest-cni-033000 --alsologtostderr -v=3: (3.435428375s)
--- PASS: TestStartStop/group/newest-cni/serial/Stop (3.44s)
=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-033000 -n newest-cni-033000
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-033000 -n newest-cni-033000: exit status 7 (62.327584ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-darwin-arm64 addons enable dashboard -p newest-cni-033000 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.13s)
=== RUN   TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop
start_stop_delete_test.go:273: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0.00s)
=== RUN   TestStartStop/group/newest-cni/serial/AddonExistsAfterStop
start_stop_delete_test.go:284: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0.00s)

Test skip (22/258)

=== RUN   TestDownloadOnly/v1.20.0/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.20.0/cached-images (0.00s)
=== RUN   TestDownloadOnly/v1.20.0/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.20.0/binaries (0.00s)
=== RUN   TestDownloadOnly/v1.31.2/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.31.2/cached-images (0.00s)
=== RUN   TestDownloadOnly/v1.31.2/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.31.2/binaries (0.00s)
=== RUN   TestDownloadOnlyKic
aaa_download_only_test.go:220: skipping, only for docker or podman driver
--- SKIP: TestDownloadOnlyKic (0.00s)
=== RUN   TestDockerEnvContainerd
docker_test.go:170: running with docker false darwin arm64
docker_test.go:172: skipping: TestDockerEnvContainerd can only be run with the containerd runtime on Docker driver
--- SKIP: TestDockerEnvContainerd (0.00s)
=== RUN   TestKVMDriverInstallOrUpdate
driver_install_or_update_test.go:41: Skip if not linux.
--- SKIP: TestKVMDriverInstallOrUpdate (0.00s)
=== RUN   TestFunctional/parallel/MySQL
=== PAUSE TestFunctional/parallel/MySQL
=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1787: arm64 is not supported by mysql. Skip the test. See https://github.com/kubernetes/minikube/issues/10144
--- SKIP: TestFunctional/parallel/MySQL (0.00s)
=== RUN   TestFunctional/parallel/PodmanEnv
=== PAUSE TestFunctional/parallel/PodmanEnv
=== CONT  TestFunctional/parallel/PodmanEnv
functional_test.go:550: only validate podman env with docker container runtime, currently testing docker
--- SKIP: TestFunctional/parallel/PodmanEnv (0.00s)
=== RUN   TestFunctional/parallel/MountCmd/any-port
functional_test_mount_test.go:73: (dbg) daemon: [out/minikube-darwin-arm64 mount -p functional-014000 /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalparallelMountCmdany-port2619495272/001:/mount-9p --alsologtostderr -v=1]
functional_test_mount_test.go:107: wrote "test-1733354554110187000" to /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalparallelMountCmdany-port2619495272/001/created-by-test
functional_test_mount_test.go:107: wrote "test-1733354554110187000" to /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalparallelMountCmdany-port2619495272/001/created-by-test-removed-by-pod
functional_test_mount_test.go:107: wrote "test-1733354554110187000" to /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalparallelMountCmdany-port2619495272/001/test-1733354554110187000
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-darwin-arm64 -p functional-014000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-014000 ssh "findmnt -T /mount-9p | grep 9p": exit status 83 (60.812083ms)

-- stdout --
	* The control-plane node functional-014000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-014000"

-- /stdout --
I1204 15:22:34.171524    7495 retry.go:31] will retry after 325.123273ms: exit status 83
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-darwin-arm64 -p functional-014000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-014000 ssh "findmnt -T /mount-9p | grep 9p": exit status 83 (91.170125ms)

-- stdout --
	* The control-plane node functional-014000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-014000"

-- /stdout --
I1204 15:22:34.590179    7495 retry.go:31] will retry after 727.163678ms: exit status 83
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-darwin-arm64 -p functional-014000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-014000 ssh "findmnt -T /mount-9p | grep 9p": exit status 83 (91.851625ms)

-- stdout --
	* The control-plane node functional-014000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-014000"

-- /stdout --
I1204 15:22:35.411627    7495 retry.go:31] will retry after 1.098803768s: exit status 83
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-darwin-arm64 -p functional-014000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-014000 ssh "findmnt -T /mount-9p | grep 9p": exit status 83 (92.880084ms)

-- stdout --
	* The control-plane node functional-014000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-014000"

-- /stdout --
I1204 15:22:36.605809    7495 retry.go:31] will retry after 1.785085339s: exit status 83
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-darwin-arm64 -p functional-014000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-014000 ssh "findmnt -T /mount-9p | grep 9p": exit status 83 (91.905459ms)

-- stdout --
	* The control-plane node functional-014000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-014000"

-- /stdout --
I1204 15:22:38.485245    7495 retry.go:31] will retry after 3.752191195s: exit status 83
I1204 15:22:39.638490    7495 retry.go:31] will retry after 6.30853687s: Temporary Error: Get "http:": http: no Host in request URL
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-darwin-arm64 -p functional-014000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-014000 ssh "findmnt -T /mount-9p | grep 9p": exit status 83 (91.460833ms)

-- stdout --
	* The control-plane node functional-014000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-014000"

-- /stdout --
I1204 15:22:42.331418    7495 retry.go:31] will retry after 2.972375571s: exit status 83
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-darwin-arm64 -p functional-014000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-014000 ssh "findmnt -T /mount-9p | grep 9p": exit status 83 (90.820125ms)

-- stdout --
	* The control-plane node functional-014000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-014000"

-- /stdout --
functional_test_mount_test.go:123: skipping: mount did not appear, likely because macOS requires prompt to allow non-code signed binaries to listen on non-localhost port
functional_test_mount_test.go:90: (dbg) Run:  out/minikube-darwin-arm64 -p functional-014000 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:90: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-014000 ssh "sudo umount -f /mount-9p": exit status 83 (49.730167ms)

-- stdout --
	* The control-plane node functional-014000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-014000"

-- /stdout --
functional_test_mount_test.go:92: "out/minikube-darwin-arm64 -p functional-014000 ssh \"sudo umount -f /mount-9p\"": exit status 83
functional_test_mount_test.go:94: (dbg) stopping [out/minikube-darwin-arm64 mount -p functional-014000 /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalparallelMountCmdany-port2619495272/001:/mount-9p --alsologtostderr -v=1] ...
--- SKIP: TestFunctional/parallel/MountCmd/any-port (11.45s)

=== RUN   TestFunctional/parallel/MountCmd/specific-port
functional_test_mount_test.go:213: (dbg) daemon: [out/minikube-darwin-arm64 mount -p functional-014000 /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalparallelMountCmdspecific-port1612911562/001:/mount-9p --alsologtostderr -v=1 --port 46464]
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-darwin-arm64 -p functional-014000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-014000 ssh "findmnt -T /mount-9p | grep 9p": exit status 83 (69.838083ms)

-- stdout --
	* The control-plane node functional-014000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-014000"

-- /stdout --
I1204 15:22:45.638233    7495 retry.go:31] will retry after 274.34517ms: exit status 83
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-darwin-arm64 -p functional-014000 ssh "findmnt -T /mount-9p | grep 9p"
I1204 15:22:45.948241    7495 retry.go:31] will retry after 12.711634247s: Temporary Error: Get "http:": http: no Host in request URL
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-014000 ssh "findmnt -T /mount-9p | grep 9p": exit status 83 (91.26475ms)

-- stdout --
	* The control-plane node functional-014000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-014000"

-- /stdout --
I1204 15:22:46.006207    7495 retry.go:31] will retry after 481.692763ms: exit status 83
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-darwin-arm64 -p functional-014000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-014000 ssh "findmnt -T /mount-9p | grep 9p": exit status 83 (92.1305ms)

-- stdout --
	* The control-plane node functional-014000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-014000"

-- /stdout --
I1204 15:22:46.582375    7495 retry.go:31] will retry after 1.25504407s: exit status 83
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-darwin-arm64 -p functional-014000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-014000 ssh "findmnt -T /mount-9p | grep 9p": exit status 83 (94.685625ms)

-- stdout --
	* The control-plane node functional-014000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-014000"

-- /stdout --
I1204 15:22:47.934588    7495 retry.go:31] will retry after 1.795763029s: exit status 83
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-darwin-arm64 -p functional-014000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-014000 ssh "findmnt -T /mount-9p | grep 9p": exit status 83 (89.072458ms)

-- stdout --
	* The control-plane node functional-014000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-014000"

-- /stdout --
I1204 15:22:49.821801    7495 retry.go:31] will retry after 3.793446951s: exit status 83
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-darwin-arm64 -p functional-014000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-014000 ssh "findmnt -T /mount-9p | grep 9p": exit status 83 (87.144042ms)

-- stdout --
	* The control-plane node functional-014000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-014000"

-- /stdout --
I1204 15:22:53.704817    7495 retry.go:31] will retry after 3.248845361s: exit status 83
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-darwin-arm64 -p functional-014000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-014000 ssh "findmnt -T /mount-9p | grep 9p": exit status 83 (87.574959ms)

-- stdout --
	* The control-plane node functional-014000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-014000"

-- /stdout --
functional_test_mount_test.go:251: skipping: mount did not appear, likely because macOS requires prompt to allow non-code signed binaries to listen on non-localhost port
functional_test_mount_test.go:230: (dbg) Run:  out/minikube-darwin-arm64 -p functional-014000 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:230: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-014000 ssh "sudo umount -f /mount-9p": exit status 83 (49.709416ms)

-- stdout --
	* The control-plane node functional-014000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-014000"

-- /stdout --
functional_test_mount_test.go:232: "out/minikube-darwin-arm64 -p functional-014000 ssh \"sudo umount -f /mount-9p\"": exit status 83
functional_test_mount_test.go:234: (dbg) stopping [out/minikube-darwin-arm64 mount -p functional-014000 /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalparallelMountCmdspecific-port1612911562/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
--- SKIP: TestFunctional/parallel/MountCmd/specific-port (11.65s)

=== RUN   TestFunctional/parallel/MountCmd/VerifyCleanup
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-darwin-arm64 mount -p functional-014000 /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalparallelMountCmdVerifyCleanup595641423/001:/mount1 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-darwin-arm64 mount -p functional-014000 /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalparallelMountCmdVerifyCleanup595641423/001:/mount2 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-darwin-arm64 mount -p functional-014000 /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalparallelMountCmdVerifyCleanup595641423/001:/mount3 --alsologtostderr -v=1]
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-darwin-arm64 -p functional-014000 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-014000 ssh "findmnt -T" /mount1: exit status 83 (81.597ms)

-- stdout --
	* The control-plane node functional-014000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-014000"

-- /stdout --
I1204 15:22:57.300639    7495 retry.go:31] will retry after 589.715156ms: exit status 83
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-darwin-arm64 -p functional-014000 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-014000 ssh "findmnt -T" /mount1: exit status 83 (87.887125ms)

-- stdout --
	* The control-plane node functional-014000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-014000"

-- /stdout --
I1204 15:22:57.980559    7495 retry.go:31] will retry after 1.112233183s: exit status 83
I1204 15:22:58.662357    7495 retry.go:31] will retry after 14.370110946s: Temporary Error: Get "http:": http: no Host in request URL
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-darwin-arm64 -p functional-014000 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-014000 ssh "findmnt -T" /mount1: exit status 83 (91.418209ms)

-- stdout --
	* The control-plane node functional-014000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-014000"

-- /stdout --
I1204 15:22:59.186583    7495 retry.go:31] will retry after 1.316374963s: exit status 83
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-darwin-arm64 -p functional-014000 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-014000 ssh "findmnt -T" /mount1: exit status 83 (91.016125ms)

-- stdout --
	* The control-plane node functional-014000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-014000"

-- /stdout --
I1204 15:23:00.596305    7495 retry.go:31] will retry after 2.116866417s: exit status 83
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-darwin-arm64 -p functional-014000 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-014000 ssh "findmnt -T" /mount1: exit status 83 (95.917166ms)

-- stdout --
	* The control-plane node functional-014000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-014000"

-- /stdout --
I1204 15:23:02.811484    7495 retry.go:31] will retry after 3.323665292s: exit status 83
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-darwin-arm64 -p functional-014000 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-014000 ssh "findmnt -T" /mount1: exit status 83 (90.13975ms)

-- stdout --
	* The control-plane node functional-014000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-014000"

-- /stdout --
I1204 15:23:06.227828    7495 retry.go:31] will retry after 2.854515109s: exit status 83
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-darwin-arm64 -p functional-014000 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-014000 ssh "findmnt -T" /mount1: exit status 83 (90.423375ms)

-- stdout --
	* The control-plane node functional-014000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-014000"

-- /stdout --
functional_test_mount_test.go:340: skipping: mount did not appear, likely because macOS requires prompt to allow non-code signed binaries to listen on non-localhost port
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-darwin-arm64 mount -p functional-014000 /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalparallelMountCmdVerifyCleanup595641423/001:/mount1 --alsologtostderr -v=1] ...
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-darwin-arm64 mount -p functional-014000 /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalparallelMountCmdVerifyCleanup595641423/001:/mount2 --alsologtostderr -v=1] ...
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-darwin-arm64 mount -p functional-014000 /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalparallelMountCmdVerifyCleanup595641423/001:/mount3 --alsologtostderr -v=1] ...
--- SKIP: TestFunctional/parallel/MountCmd/VerifyCleanup (12.35s)

=== RUN   TestGvisorAddon
gvisor_addon_test.go:34: skipping test because --gvisor=false
--- SKIP: TestGvisorAddon (0.00s)

=== RUN   TestKicCustomNetwork
kic_custom_network_test.go:34: only runs with docker driver
--- SKIP: TestKicCustomNetwork (0.00s)

=== RUN   TestKicExistingNetwork
kic_custom_network_test.go:73: only runs with docker driver
--- SKIP: TestKicExistingNetwork (0.00s)

=== RUN   TestKicCustomSubnet
kic_custom_network_test.go:102: only runs with docker/podman driver
--- SKIP: TestKicCustomSubnet (0.00s)

=== RUN   TestKicStaticIP
kic_custom_network_test.go:123: only run with docker/podman driver
--- SKIP: TestKicStaticIP (0.00s)

=== RUN   TestScheduledStopWindows
scheduled_stop_test.go:42: test only runs on windows
--- SKIP: TestScheduledStopWindows (0.00s)

=== RUN   TestInsufficientStorage
status_test.go:38: only runs with docker driver
--- SKIP: TestInsufficientStorage (0.00s)

=== RUN   TestMissingContainerUpgrade
version_upgrade_test.go:284: This test is only for Docker
--- SKIP: TestMissingContainerUpgrade (0.00s)

=== RUN   TestNetworkPlugins/group/cilium
net_test.go:102: Skipping the test as it's interfering with other tests and is outdated
panic.go:629: 
----------------------- debugLogs start: cilium-667000 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: cilium-667000

>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: cilium-667000

>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: cilium-667000

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: cilium-667000

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: cilium-667000

>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: cilium-667000

>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: cilium-667000

>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: cilium-667000

>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: cilium-667000

>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: cilium-667000

>>> host: /etc/nsswitch.conf:
* Profile "cilium-667000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-667000"

>>> host: /etc/hosts:
* Profile "cilium-667000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-667000"

>>> host: /etc/resolv.conf:
* Profile "cilium-667000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-667000"

>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: cilium-667000

>>> host: crictl pods:
* Profile "cilium-667000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-667000"

>>> host: crictl containers:
* Profile "cilium-667000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-667000"

>>> k8s: describe netcat deployment:
error: context "cilium-667000" does not exist

>>> k8s: describe netcat pod(s):
error: context "cilium-667000" does not exist

>>> k8s: netcat logs:
error: context "cilium-667000" does not exist

>>> k8s: describe coredns deployment:
error: context "cilium-667000" does not exist

>>> k8s: describe coredns pods:
error: context "cilium-667000" does not exist

>>> k8s: coredns logs:
error: context "cilium-667000" does not exist

>>> k8s: describe api server pod(s):
error: context "cilium-667000" does not exist

>>> k8s: api server logs:
error: context "cilium-667000" does not exist

>>> host: /etc/cni:
* Profile "cilium-667000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-667000"

>>> host: ip a s:
* Profile "cilium-667000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-667000"

>>> host: ip r s:
* Profile "cilium-667000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-667000"

>>> host: iptables-save:
* Profile "cilium-667000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-667000"

>>> host: iptables table nat:
* Profile "cilium-667000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-667000"

>>> k8s: describe cilium daemon set:
Error in configuration: context was not found for specified context: cilium-667000

>>> k8s: describe cilium daemon set pod(s):
Error in configuration: context was not found for specified context: cilium-667000

>>> k8s: cilium daemon set container(s) logs (current):
error: context "cilium-667000" does not exist

>>> k8s: cilium daemon set container(s) logs (previous):
error: context "cilium-667000" does not exist

>>> k8s: describe cilium deployment:
Error in configuration: context was not found for specified context: cilium-667000

>>> k8s: describe cilium deployment pod(s):
Error in configuration: context was not found for specified context: cilium-667000

>>> k8s: cilium deployment container(s) logs (current):
error: context "cilium-667000" does not exist

>>> k8s: cilium deployment container(s) logs (previous):
error: context "cilium-667000" does not exist

>>> k8s: describe kube-proxy daemon set:
error: context "cilium-667000" does not exist

>>> k8s: describe kube-proxy pod(s):
error: context "cilium-667000" does not exist

>>> k8s: kube-proxy logs:
error: context "cilium-667000" does not exist

>>> host: kubelet daemon status:
* Profile "cilium-667000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-667000"

>>> host: kubelet daemon config:
* Profile "cilium-667000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-667000"

>>> k8s: kubelet logs:
* Profile "cilium-667000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-667000"

>>> host: /etc/kubernetes/kubelet.conf:
* Profile "cilium-667000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-667000"

>>> host: /var/lib/kubelet/config.yaml:
* Profile "cilium-667000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-667000"

>>> k8s: kubectl config:
apiVersion: v1
clusters: null
contexts: null
current-context: ""
kind: Config
preferences: {}
users: null

>>> k8s: cms:
Error in configuration: context was not found for specified context: cilium-667000

>>> host: docker daemon status:
* Profile "cilium-667000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-667000"

>>> host: docker daemon config:
* Profile "cilium-667000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-667000"

>>> host: /etc/docker/daemon.json:
* Profile "cilium-667000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-667000"

>>> host: docker system info:
* Profile "cilium-667000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-667000"

>>> host: cri-docker daemon status:
* Profile "cilium-667000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-667000"

>>> host: cri-docker daemon config:
* Profile "cilium-667000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-667000"

>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "cilium-667000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-667000"

>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "cilium-667000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-667000"

>>> host: cri-dockerd version:
* Profile "cilium-667000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-667000"

>>> host: containerd daemon status:
* Profile "cilium-667000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-667000"

>>> host: containerd daemon config:
* Profile "cilium-667000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-667000"

>>> host: /lib/systemd/system/containerd.service:
* Profile "cilium-667000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-667000"

>>> host: /etc/containerd/config.toml:
* Profile "cilium-667000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-667000"

>>> host: containerd config dump:
* Profile "cilium-667000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-667000"

>>> host: crio daemon status:
* Profile "cilium-667000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-667000"

>>> host: crio daemon config:
* Profile "cilium-667000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-667000"

>>> host: /etc/crio:
* Profile "cilium-667000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-667000"

>>> host: crio config:
* Profile "cilium-667000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-667000"

----------------------- debugLogs end: cilium-667000 [took: 2.374809125s] --------------------------------
helpers_test.go:175: Cleaning up "cilium-667000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p cilium-667000
--- SKIP: TestNetworkPlugins/group/cilium (2.50s)

=== RUN   TestStartStop/group/disable-driver-mounts
=== PAUSE TestStartStop/group/disable-driver-mounts

=== CONT  TestStartStop/group/disable-driver-mounts
start_stop_delete_test.go:103: skipping TestStartStop/group/disable-driver-mounts - only runs on virtualbox
helpers_test.go:175: Cleaning up "disable-driver-mounts-702000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p disable-driver-mounts-702000
--- SKIP: TestStartStop/group/disable-driver-mounts (0.12s)
