Test Report: QEMU_macOS 18485

bdd124d1e5a6e86e5bd4f9e512befe1eefe531bd:2024-03-27:33775

Failed tests (156/266)

Order  Failed test  Duration (s)
3 TestDownloadOnly/v1.20.0/json-events 39.14
7 TestDownloadOnly/v1.20.0/kubectl 0
31 TestOffline 10.02
36 TestAddons/Setup 10.31
37 TestCertOptions 10.15
38 TestCertExpiration 195.45
39 TestDockerFlags 10.23
40 TestForceSystemdFlag 10.37
41 TestForceSystemdEnv 10.15
47 TestErrorSpam/setup 9.79
56 TestFunctional/serial/StartWithProxy 9.88
58 TestFunctional/serial/SoftStart 5.26
59 TestFunctional/serial/KubeContext 0.06
60 TestFunctional/serial/KubectlGetPods 0.06
67 TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node 0.04
68 TestFunctional/serial/CacheCmd/cache/cache_reload 0.16
70 TestFunctional/serial/MinikubeKubectlCmd 0.69
71 TestFunctional/serial/MinikubeKubectlCmdDirectly 1.77
72 TestFunctional/serial/ExtraConfig 5.26
73 TestFunctional/serial/ComponentHealth 0.06
74 TestFunctional/serial/LogsCmd 0.08
75 TestFunctional/serial/LogsFileCmd 0.07
76 TestFunctional/serial/InvalidService 0.03
79 TestFunctional/parallel/DashboardCmd 0.2
82 TestFunctional/parallel/StatusCmd 0.13
86 TestFunctional/parallel/ServiceCmdConnect 0.14
88 TestFunctional/parallel/PersistentVolumeClaim 0.03
90 TestFunctional/parallel/SSHCmd 0.12
91 TestFunctional/parallel/CpCmd 0.28
93 TestFunctional/parallel/FileSync 0.07
94 TestFunctional/parallel/CertSync 0.3
98 TestFunctional/parallel/NodeLabels 0.06
100 TestFunctional/parallel/NonActiveRuntimeDisabled 0.04
104 TestFunctional/parallel/Version/components 0.04
105 TestFunctional/parallel/ImageCommands/ImageListShort 0.04
106 TestFunctional/parallel/ImageCommands/ImageListTable 0.04
107 TestFunctional/parallel/ImageCommands/ImageListJson 0.04
108 TestFunctional/parallel/ImageCommands/ImageListYaml 0.04
109 TestFunctional/parallel/ImageCommands/ImageBuild 0.12
111 TestFunctional/parallel/DockerEnv/bash 0.05
112 TestFunctional/parallel/UpdateContextCmd/no_changes 0.04
113 TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster 0.05
114 TestFunctional/parallel/UpdateContextCmd/no_clusters 0.04
115 TestFunctional/parallel/ServiceCmd/DeployApp 0.03
116 TestFunctional/parallel/ServiceCmd/List 0.05
117 TestFunctional/parallel/ServiceCmd/JSONOutput 0.04
118 TestFunctional/parallel/ServiceCmd/HTTPS 0.04
119 TestFunctional/parallel/ServiceCmd/Format 0.04
120 TestFunctional/parallel/ServiceCmd/URL 0.04
122 TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel 0.08
125 TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup 0
126 TestFunctional/parallel/TunnelCmd/serial/AccessDirect 108.97
127 TestFunctional/parallel/ImageCommands/ImageLoadDaemon 1.31
128 TestFunctional/parallel/ImageCommands/ImageReloadDaemon 1.4
129 TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon 6.72
130 TestFunctional/parallel/ImageCommands/ImageSaveToFile 0.04
132 TestFunctional/parallel/ImageCommands/ImageLoadFromFile 0.08
140 TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig 15.07
142 TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS 22.22
150 TestMultiControlPlane/serial/StartCluster 9.94
151 TestMultiControlPlane/serial/DeployApp 110.01
152 TestMultiControlPlane/serial/PingHostFromPods 0.09
153 TestMultiControlPlane/serial/AddWorkerNode 0.08
154 TestMultiControlPlane/serial/NodeLabels 0.06
155 TestMultiControlPlane/serial/HAppyAfterClusterStart 0.1
156 TestMultiControlPlane/serial/CopyFile 0.06
157 TestMultiControlPlane/serial/StopSecondaryNode 0.12
158 TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop 0.11
159 TestMultiControlPlane/serial/RestartSecondaryNode 54.61
160 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart 0.11
161 TestMultiControlPlane/serial/RestartClusterKeepsNodes 7.33
162 TestMultiControlPlane/serial/DeleteSecondaryNode 0.11
163 TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete 0.11
164 TestMultiControlPlane/serial/StopCluster 3.54
165 TestMultiControlPlane/serial/RestartCluster 5.26
166 TestMultiControlPlane/serial/DegradedAfterClusterRestart 0.11
167 TestMultiControlPlane/serial/AddSecondaryNode 0.08
168 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd 0.1
171 TestImageBuild/serial/Setup 9.96
174 TestJSONOutput/start/Command 9.71
180 TestJSONOutput/pause/Command 0.09
186 TestJSONOutput/unpause/Command 0.05
203 TestMinikubeProfile 10.25
206 TestMountStart/serial/StartWithMountFirst 11.13
209 TestMultiNode/serial/FreshStart2Nodes 9.83
210 TestMultiNode/serial/DeployApp2Nodes 107.42
211 TestMultiNode/serial/PingHostFrom2Pods 0.09
212 TestMultiNode/serial/AddNode 0.08
213 TestMultiNode/serial/MultiNodeLabels 0.06
214 TestMultiNode/serial/ProfileList 0.1
215 TestMultiNode/serial/CopyFile 0.06
216 TestMultiNode/serial/StopNode 0.14
217 TestMultiNode/serial/StartAfterStop 44.47
218 TestMultiNode/serial/RestartKeepsNodes 7.33
219 TestMultiNode/serial/DeleteNode 0.11
220 TestMultiNode/serial/StopMultiNode 3.37
221 TestMultiNode/serial/RestartMultiNode 5.25
222 TestMultiNode/serial/ValidateNameConflict 20.11
226 TestPreload 10.13
228 TestScheduledStopUnix 9.93
229 TestSkaffold 16.51
232 TestRunningBinaryUpgrade 620.2
234 TestKubernetesUpgrade 17.5
247 TestHyperkitDriverSkipUpgrade/upgrade-v1.11.0-to-current 1.52
248 TestHyperkitDriverSkipUpgrade/upgrade-v1.2.0-to-current 1.18
250 TestStoppedBinaryUpgrade/Upgrade 586.77
252 TestPause/serial/Start 9.83
262 TestNoKubernetes/serial/StartWithK8s 9.86
263 TestNoKubernetes/serial/StartWithStopK8s 5.88
264 TestNoKubernetes/serial/Start 5.9
268 TestNoKubernetes/serial/StartNoArgs 5.93
270 TestNetworkPlugins/group/auto/Start 10.07
271 TestNetworkPlugins/group/kindnet/Start 9.93
272 TestNetworkPlugins/group/flannel/Start 9.92
273 TestNetworkPlugins/group/enable-default-cni/Start 9.77
274 TestNetworkPlugins/group/bridge/Start 9.79
275 TestNetworkPlugins/group/kubenet/Start 9.77
276 TestNetworkPlugins/group/custom-flannel/Start 9.73
277 TestNetworkPlugins/group/calico/Start 9.82
278 TestNetworkPlugins/group/false/Start 10.05
281 TestStartStop/group/old-k8s-version/serial/FirstStart 9.85
282 TestStartStop/group/old-k8s-version/serial/DeployApp 0.09
283 TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive 0.12
286 TestStartStop/group/old-k8s-version/serial/SecondStart 5.23
287 TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop 0.03
288 TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop 0.06
289 TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages 0.07
290 TestStartStop/group/old-k8s-version/serial/Pause 0.11
292 TestStartStop/group/no-preload/serial/FirstStart 9.92
294 TestStartStop/group/default-k8s-diff-port/serial/FirstStart 11.55
295 TestStartStop/group/no-preload/serial/DeployApp 0.09
296 TestStartStop/group/no-preload/serial/EnableAddonWhileActive 0.12
298 TestStartStop/group/default-k8s-diff-port/serial/DeployApp 0.09
299 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive 0.11
302 TestStartStop/group/no-preload/serial/SecondStart 5.27
304 TestStartStop/group/default-k8s-diff-port/serial/SecondStart 7.27
305 TestStartStop/group/no-preload/serial/UserAppExistsAfterStop 0.03
306 TestStartStop/group/no-preload/serial/AddonExistsAfterStop 0.06
307 TestStartStop/group/no-preload/serial/VerifyKubernetesImages 0.07
308 TestStartStop/group/no-preload/serial/Pause 0.1
310 TestStartStop/group/newest-cni/serial/FirstStart 9.92
311 TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop 0.03
312 TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop 0.06
313 TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages 0.07
314 TestStartStop/group/default-k8s-diff-port/serial/Pause 0.1
316 TestStartStop/group/embed-certs/serial/FirstStart 10.25
321 TestStartStop/group/newest-cni/serial/SecondStart 5.38
322 TestStartStop/group/embed-certs/serial/DeployApp 0.09
323 TestStartStop/group/embed-certs/serial/EnableAddonWhileActive 0.11
326 TestStartStop/group/embed-certs/serial/SecondStart 5.26
329 TestStartStop/group/newest-cni/serial/VerifyKubernetesImages 0.07
330 TestStartStop/group/newest-cni/serial/Pause 0.1
331 TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop 0.03
332 TestStartStop/group/embed-certs/serial/AddonExistsAfterStop 0.06
333 TestStartStop/group/embed-certs/serial/VerifyKubernetesImages 0.07
334 TestStartStop/group/embed-certs/serial/Pause 0.11
TestDownloadOnly/v1.20.0/json-events (39.14s)

=== RUN   TestDownloadOnly/v1.20.0/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-darwin-arm64 start -o=json --download-only -p download-only-614000 --force --alsologtostderr --kubernetes-version=v1.20.0 --container-runtime=docker --driver=qemu2 
aaa_download_only_test.go:81: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -o=json --download-only -p download-only-614000 --force --alsologtostderr --kubernetes-version=v1.20.0 --container-runtime=docker --driver=qemu2 : exit status 40 (39.142910541s)

-- stdout --
	{"specversion":"1.0","id":"5f28a898-8231-472f-a8e3-3bc775ccde4b","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[download-only-614000] minikube v1.33.0-beta.0 on Darwin 14.3.1 (arm64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"854ec6b6-1cc3-4211-9370-e469b534ec75","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=18485"}}
	{"specversion":"1.0","id":"5b2e86a4-17d0-4556-8563-8efed354202d","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/Users/jenkins/minikube-integration/18485-6511/kubeconfig"}}
	{"specversion":"1.0","id":"c2c00685-881e-448b-b487-5fac227e2bb8","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-darwin-arm64"}}
	{"specversion":"1.0","id":"1e15bdd1-9938-4b4f-912c-c046f3ab3d09","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"33621ed1-2436-460e-8c6d-8ee3c6b6a5e8","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/Users/jenkins/minikube-integration/18485-6511/.minikube"}}
	{"specversion":"1.0","id":"36018207-8a7a-43ab-a15f-cbabce556ca8","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.warning","datacontenttype":"application/json","data":{"message":"minikube skips various validations when --force is supplied; this may lead to unexpected behavior"}}
	{"specversion":"1.0","id":"0adad81d-c90f-4ab4-b519-dcae452ab415","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"1","message":"Using the qemu2 driver based on user configuration","name":"Selecting Driver","totalsteps":"19"}}
	{"specversion":"1.0","id":"036a8286-2e70-4bab-a99e-493139a2b460","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"Automatically selected the socket_vmnet network"}}
	{"specversion":"1.0","id":"9c4f5b5b-3c7d-43e8-88aa-7cd22b82569c","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"1","message":"Downloading VM boot image ...","name":"Selecting Driver","totalsteps":"19"}}
	{"specversion":"1.0","id":"b5e48d48-5207-46e5-a5d2-cb339488b015","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"3","message":"Starting \"download-only-614000\" primary control-plane node in \"download-only-614000\" cluster","name":"Starting Node","totalsteps":"19"}}
	{"specversion":"1.0","id":"d628315f-ffbc-4e56-8abb-525a8bd47bb0","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"3","message":"Downloading Kubernetes v1.20.0 preload ...","name":"Starting Node","totalsteps":"19"}}
	{"specversion":"1.0","id":"f651cbd5-c076-456d-981a-af010b39a3a1","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"40","issues":"","message":"Failed to cache kubectl: download failed: https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl.sha256: getter: \u0026{Ctx:context.Background Src:https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl.sha256 Dst:/Users/jenkins/minikube-integration/18485-6511/.minikube/cache/darwin/arm64/v1.20.0/kubectl.download Pwd: Mode:2 Umask:---------- Detectors:[0x108bf3220 0x108bf3220 0x108bf3220 0x108bf3220 0x108bf3220 0x108bf3220 0x108bf3220] Decompressors:map[bz2:0x140007cfb10 gz:0x140007cfb18 tar:0x140007cfac0 tar.bz2:0x140007cfad0 tar.gz:0x140007cfae0 tar.xz:0x140007cfaf0 tar.zst:0x140007cfb00 tbz2:0x140007cfad0 tgz:0x14
0007cfae0 txz:0x140007cfaf0 tzst:0x140007cfb00 xz:0x140007cfb20 zip:0x140007cfb30 zst:0x140007cfb28] Getters:map[file:0x14002188560 http:0x140008c2960 https:0x140008c29b0] Dir:false ProgressListener:\u003cnil\u003e Insecure:false DisableSymlinks:false Options:[]}: invalid checksum: Error downloading checksum file: bad response code: 404","name":"INET_CACHE_KUBECTL","url":""}}
	{"specversion":"1.0","id":"bd18ef7c-3eee-4435-b331-79c5e07c40fe","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"message":"╭───────────────────────────────────────────────────────────────────────────────────────────╮\n│                                                                                           │\n│    If the above advice does not help, please let us know:                                 │\n│    https://github.com/kubernetes/minikube/issues/new/choose                               │\n│                                                                                           │\n│    Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │\n│
│\n╰───────────────────────────────────────────────────────────────────────────────────────────╯"}}

-- /stdout --
** stderr ** 
	I0327 16:26:18.627892    6928 out.go:291] Setting OutFile to fd 1 ...
	I0327 16:26:18.628039    6928 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0327 16:26:18.628042    6928 out.go:304] Setting ErrFile to fd 2...
	I0327 16:26:18.628045    6928 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0327 16:26:18.628166    6928 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18485-6511/.minikube/bin
	W0327 16:26:18.628250    6928 root.go:314] Error reading config file at /Users/jenkins/minikube-integration/18485-6511/.minikube/config/config.json: open /Users/jenkins/minikube-integration/18485-6511/.minikube/config/config.json: no such file or directory
	I0327 16:26:18.629493    6928 out.go:298] Setting JSON to true
	I0327 16:26:18.647943    6928 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":5149,"bootTime":1711576829,"procs":486,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.3.1","kernelVersion":"23.3.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0327 16:26:18.648010    6928 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0327 16:26:18.653262    6928 out.go:97] [download-only-614000] minikube v1.33.0-beta.0 on Darwin 14.3.1 (arm64)
	I0327 16:26:18.656422    6928 out.go:169] MINIKUBE_LOCATION=18485
	I0327 16:26:18.653417    6928 notify.go:220] Checking for updates...
	W0327 16:26:18.653469    6928 preload.go:294] Failed to list preload files: open /Users/jenkins/minikube-integration/18485-6511/.minikube/cache/preloaded-tarball: no such file or directory
	I0327 16:26:18.664289    6928 out.go:169] KUBECONFIG=/Users/jenkins/minikube-integration/18485-6511/kubeconfig
	I0327 16:26:18.667435    6928 out.go:169] MINIKUBE_BIN=out/minikube-darwin-arm64
	I0327 16:26:18.670452    6928 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0327 16:26:18.673467    6928 out.go:169] MINIKUBE_HOME=/Users/jenkins/minikube-integration/18485-6511/.minikube
	W0327 16:26:18.679407    6928 out.go:267] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0327 16:26:18.679589    6928 driver.go:392] Setting default libvirt URI to qemu:///system
	I0327 16:26:18.684424    6928 out.go:97] Using the qemu2 driver based on user configuration
	I0327 16:26:18.684449    6928 start.go:297] selected driver: qemu2
	I0327 16:26:18.684452    6928 start.go:901] validating driver "qemu2" against <nil>
	I0327 16:26:18.684512    6928 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0327 16:26:18.688397    6928 out.go:169] Automatically selected the socket_vmnet network
	I0327 16:26:18.693958    6928 start_flags.go:393] Using suggested 4000MB memory alloc based on sys=16384MB, container=0MB
	I0327 16:26:18.694055    6928 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0327 16:26:18.694118    6928 cni.go:84] Creating CNI manager for ""
	I0327 16:26:18.694133    6928 cni.go:162] CNI unnecessary in this configuration, recommending no CNI
	I0327 16:26:18.694177    6928 start.go:340] cluster config:
	{Name:download-only-614000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1711559786-18485@sha256:2dcab64da240d825290a528fa79ad3c32db45fe5f8be5150468234a7114eff82 Memory:4000 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:download-only-614000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0327 16:26:18.699722    6928 iso.go:125] acquiring lock: {Name:mk26a7f41e004242919e6d5076ec0f9645f820ad Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0327 16:26:18.702490    6928 out.go:97] Downloading VM boot image ...
	I0327 16:26:18.702526    6928 download.go:107] Downloading: https://storage.googleapis.com/minikube-builds/iso/18485/minikube-v1.33.0-1711559712-18485-arm64.iso?checksum=file:https://storage.googleapis.com/minikube-builds/iso/18485/minikube-v1.33.0-1711559712-18485-arm64.iso.sha256 -> /Users/jenkins/minikube-integration/18485-6511/.minikube/cache/iso/arm64/minikube-v1.33.0-1711559712-18485-arm64.iso
	I0327 16:26:36.232212    6928 out.go:97] Starting "download-only-614000" primary control-plane node in "download-only-614000" cluster
	I0327 16:26:36.232249    6928 preload.go:132] Checking if preload exists for k8s version v1.20.0 and runtime docker
	I0327 16:26:36.532427    6928 preload.go:119] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.0/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4
	I0327 16:26:36.532512    6928 cache.go:56] Caching tarball of preloaded images
	I0327 16:26:36.533315    6928 preload.go:132] Checking if preload exists for k8s version v1.20.0 and runtime docker
	I0327 16:26:36.538877    6928 out.go:97] Downloading Kubernetes v1.20.0 preload ...
	I0327 16:26:36.538903    6928 preload.go:237] getting checksum for preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4 ...
	I0327 16:26:37.138723    6928 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.0/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4?checksum=md5:1a3e8f9b29e6affec63d76d0d3000942 -> /Users/jenkins/minikube-integration/18485-6511/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4
	I0327 16:26:56.642228    6928 preload.go:248] saving checksum for preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4 ...
	I0327 16:26:56.642386    6928 preload.go:255] verifying checksum of /Users/jenkins/minikube-integration/18485-6511/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4 ...
	I0327 16:26:57.340177    6928 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on docker
	I0327 16:26:57.340380    6928 profile.go:142] Saving config to /Users/jenkins/minikube-integration/18485-6511/.minikube/profiles/download-only-614000/config.json ...
	I0327 16:26:57.340398    6928 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/18485-6511/.minikube/profiles/download-only-614000/config.json: {Name:mke2e2a697368fdeba8c536035210c569c1c16cb Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0327 16:26:57.340635    6928 preload.go:132] Checking if preload exists for k8s version v1.20.0 and runtime docker
	I0327 16:26:57.340822    6928 download.go:107] Downloading: https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl.sha256 -> /Users/jenkins/minikube-integration/18485-6511/.minikube/cache/darwin/arm64/v1.20.0/kubectl
	I0327 16:26:57.691866    6928 out.go:169] 
	W0327 16:26:57.696017    6928 out_reason.go:110] Failed to cache kubectl: download failed: https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl.sha256: getter: &{Ctx:context.Background Src:https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl.sha256 Dst:/Users/jenkins/minikube-integration/18485-6511/.minikube/cache/darwin/arm64/v1.20.0/kubectl.download Pwd: Mode:2 Umask:---------- Detectors:[0x108bf3220 0x108bf3220 0x108bf3220 0x108bf3220 0x108bf3220 0x108bf3220 0x108bf3220] Decompressors:map[bz2:0x140007cfb10 gz:0x140007cfb18 tar:0x140007cfac0 tar.bz2:0x140007cfad0 tar.gz:0x140007cfae0 tar.xz:0x140007cfaf0 tar.zst:0x140007cfb00 tbz2:0x140007cfad0 tgz:0x140007cfae0 txz:0x140007cfaf0 tzst:0x140007cfb00 xz:0x140007cfb20 zip:0x140007cfb30 zst:0x140007cfb28] Getters:map[file:0x14002188560 http:0x140008c2960 https:0x140008c29b0] Dir:false ProgressListener:<nil> Insecure:false DisableSymlinks:false Options:[]}: invalid checksum: Error downloading checksum file: bad response code: 404
	W0327 16:26:57.696044    6928 out_reason.go:110] 
	W0327 16:26:57.702936    6928 out.go:229] ╭───────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                           │
	│    If the above advice does not help, please let us know:                                 │
	│    https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                           │
	│    Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                           │
	╰───────────────────────────────────────────────────────────────────────────────────────────╯
	I0327 16:26:57.706900    6928 out.go:169] 

** /stderr **
aaa_download_only_test.go:83: failed to download only. args: ["start" "-o=json" "--download-only" "-p" "download-only-614000" "--force" "--alsologtostderr" "--kubernetes-version=v1.20.0" "--container-runtime=docker" "--driver=qemu2" ""] exit status 40
--- FAIL: TestDownloadOnly/v1.20.0/json-events (39.14s)
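Editor's note: the root cause above is a 404 on the kubectl checksum URL for darwin/arm64 at v1.20.0. A minimal diagnostic sketch (not part of the test suite; it only assumes outbound network access, with the URL copied verbatim from the failure message) to confirm that the checksum file is simply not published at that location:

package main

import (
	"fmt"
	"net/http"
)

func main() {
	// Checksum URL taken from the failure message above.
	const url = "https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl.sha256"
	resp, err := http.Head(url)
	if err != nil {
		fmt.Println("request failed:", err)
		return
	}
	resp.Body.Close()
	// A "404 Not Found" here matches the log: minikube's getter cannot
	// fetch the checksum, so caching kubectl aborts with exit status 40.
	fmt.Println(url, "->", resp.Status)
}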

TestDownloadOnly/v1.20.0/kubectl (0s)

=== RUN   TestDownloadOnly/v1.20.0/kubectl
aaa_download_only_test.go:175: expected the file for binary exist at "/Users/jenkins/minikube-integration/18485-6511/.minikube/cache/darwin/arm64/v1.20.0/kubectl" but got error stat /Users/jenkins/minikube-integration/18485-6511/.minikube/cache/darwin/arm64/v1.20.0/kubectl: no such file or directory
--- FAIL: TestDownloadOnly/v1.20.0/kubectl (0.00s)

TestOffline (10.02s)

=== RUN   TestOffline
=== PAUSE TestOffline

=== CONT  TestOffline
aab_offline_test.go:55: (dbg) Run:  out/minikube-darwin-arm64 start -p offline-docker-189000 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=qemu2 
aab_offline_test.go:55: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p offline-docker-189000 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=qemu2 : exit status 80 (9.842517541s)

-- stdout --
	* [offline-docker-189000] minikube v1.33.0-beta.0 on Darwin 14.3.1 (arm64)
	  - MINIKUBE_LOCATION=18485
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18485-6511/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18485-6511/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "offline-docker-189000" primary control-plane node in "offline-docker-189000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "offline-docker-189000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0327 16:39:18.741587    8497 out.go:291] Setting OutFile to fd 1 ...
	I0327 16:39:18.741743    8497 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0327 16:39:18.741746    8497 out.go:304] Setting ErrFile to fd 2...
	I0327 16:39:18.741748    8497 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0327 16:39:18.741870    8497 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18485-6511/.minikube/bin
	I0327 16:39:18.743034    8497 out.go:298] Setting JSON to false
	I0327 16:39:18.760723    8497 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":5929,"bootTime":1711576829,"procs":476,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.3.1","kernelVersion":"23.3.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0327 16:39:18.760795    8497 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0327 16:39:18.765953    8497 out.go:177] * [offline-docker-189000] minikube v1.33.0-beta.0 on Darwin 14.3.1 (arm64)
	I0327 16:39:18.772948    8497 out.go:177]   - MINIKUBE_LOCATION=18485
	I0327 16:39:18.772948    8497 notify.go:220] Checking for updates...
	I0327 16:39:18.780828    8497 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/18485-6511/kubeconfig
	I0327 16:39:18.783893    8497 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0327 16:39:18.786898    8497 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0327 16:39:18.789814    8497 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18485-6511/.minikube
	I0327 16:39:18.792881    8497 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0327 16:39:18.796203    8497 config.go:182] Loaded profile config "multinode-266000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.29.3
	I0327 16:39:18.796254    8497 driver.go:392] Setting default libvirt URI to qemu:///system
	I0327 16:39:18.799841    8497 out.go:177] * Using the qemu2 driver based on user configuration
	I0327 16:39:18.806978    8497 start.go:297] selected driver: qemu2
	I0327 16:39:18.806987    8497 start.go:901] validating driver "qemu2" against <nil>
	I0327 16:39:18.806994    8497 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0327 16:39:18.809113    8497 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0327 16:39:18.811814    8497 out.go:177] * Automatically selected the socket_vmnet network
	I0327 16:39:18.814918    8497 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0327 16:39:18.814984    8497 cni.go:84] Creating CNI manager for ""
	I0327 16:39:18.814990    8497 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0327 16:39:18.814993    8497 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0327 16:39:18.815025    8497 start.go:340] cluster config:
	{Name:offline-docker-189000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1711559786-18485@sha256:2dcab64da240d825290a528fa79ad3c32db45fe5f8be5150468234a7114eff82 Memory:2048 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.3 ClusterName:offline-docker-189000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0327 16:39:18.819618    8497 iso.go:125] acquiring lock: {Name:mk26a7f41e004242919e6d5076ec0f9645f820ad Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0327 16:39:18.824861    8497 out.go:177] * Starting "offline-docker-189000" primary control-plane node in "offline-docker-189000" cluster
	I0327 16:39:18.828862    8497 preload.go:132] Checking if preload exists for k8s version v1.29.3 and runtime docker
	I0327 16:39:18.828894    8497 preload.go:147] Found local preload: /Users/jenkins/minikube-integration/18485-6511/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.29.3-docker-overlay2-arm64.tar.lz4
	I0327 16:39:18.828910    8497 cache.go:56] Caching tarball of preloaded images
	I0327 16:39:18.828982    8497 preload.go:173] Found /Users/jenkins/minikube-integration/18485-6511/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.29.3-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0327 16:39:18.828987    8497 cache.go:59] Finished verifying existence of preloaded tar for v1.29.3 on docker
	I0327 16:39:18.829046    8497 profile.go:142] Saving config to /Users/jenkins/minikube-integration/18485-6511/.minikube/profiles/offline-docker-189000/config.json ...
	I0327 16:39:18.829056    8497 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/18485-6511/.minikube/profiles/offline-docker-189000/config.json: {Name:mke02a436c835a85ae4daca7da066bae6ad71b6a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0327 16:39:18.829292    8497 start.go:360] acquireMachinesLock for offline-docker-189000: {Name:mk6b67ca98ff36f470ae98389325f11bd4950dc8 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0327 16:39:18.829325    8497 start.go:364] duration metric: took 22.291µs to acquireMachinesLock for "offline-docker-189000"
	I0327 16:39:18.829335    8497 start.go:93] Provisioning new machine with config: &{Name:offline-docker-189000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18485/minikube-v1.33.0-1711559712-18485-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1711559786-18485@sha256:2dcab64da240d825290a528fa79ad3c32db45fe5f8be5150468234a7114eff82 Memory:2048 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.3 ClusterName:offline-docker-189000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0327 16:39:18.829377    8497 start.go:125] createHost starting for "" (driver="qemu2")
	I0327 16:39:18.837881    8497 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	I0327 16:39:18.853208    8497 start.go:159] libmachine.API.Create for "offline-docker-189000" (driver="qemu2")
	I0327 16:39:18.853239    8497 client.go:168] LocalClient.Create starting
	I0327 16:39:18.853327    8497 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18485-6511/.minikube/certs/ca.pem
	I0327 16:39:18.853358    8497 main.go:141] libmachine: Decoding PEM data...
	I0327 16:39:18.853366    8497 main.go:141] libmachine: Parsing certificate...
	I0327 16:39:18.853411    8497 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18485-6511/.minikube/certs/cert.pem
	I0327 16:39:18.853432    8497 main.go:141] libmachine: Decoding PEM data...
	I0327 16:39:18.853444    8497 main.go:141] libmachine: Parsing certificate...
	I0327 16:39:18.853820    8497 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/18485-6511/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/18485-6511/.minikube/cache/iso/arm64/minikube-v1.33.0-1711559712-18485-arm64.iso...
	I0327 16:39:18.990622    8497 main.go:141] libmachine: Creating SSH key...
	I0327 16:39:19.152353    8497 main.go:141] libmachine: Creating Disk image...
	I0327 16:39:19.152372    8497 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0327 16:39:19.152579    8497 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/18485-6511/.minikube/machines/offline-docker-189000/disk.qcow2.raw /Users/jenkins/minikube-integration/18485-6511/.minikube/machines/offline-docker-189000/disk.qcow2
	I0327 16:39:19.165441    8497 main.go:141] libmachine: STDOUT: 
	I0327 16:39:19.165464    8497 main.go:141] libmachine: STDERR: 
	I0327 16:39:19.165526    8497 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/18485-6511/.minikube/machines/offline-docker-189000/disk.qcow2 +20000M
	I0327 16:39:19.177422    8497 main.go:141] libmachine: STDOUT: Image resized.
	
	I0327 16:39:19.177447    8497 main.go:141] libmachine: STDERR: 
	I0327 16:39:19.177468    8497 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/18485-6511/.minikube/machines/offline-docker-189000/disk.qcow2.raw and /Users/jenkins/minikube-integration/18485-6511/.minikube/machines/offline-docker-189000/disk.qcow2
	I0327 16:39:19.177475    8497 main.go:141] libmachine: Starting QEMU VM...
	I0327 16:39:19.177508    8497 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2048 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18485-6511/.minikube/machines/offline-docker-189000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18485-6511/.minikube/machines/offline-docker-189000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18485-6511/.minikube/machines/offline-docker-189000/qemu.pid -device virtio-net-pci,netdev=net0,mac=d2:ba:6a:06:90:0d -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18485-6511/.minikube/machines/offline-docker-189000/disk.qcow2
	I0327 16:39:19.179583    8497 main.go:141] libmachine: STDOUT: 
	I0327 16:39:19.179607    8497 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0327 16:39:19.179627    8497 client.go:171] duration metric: took 326.392875ms to LocalClient.Create
	I0327 16:39:21.181678    8497 start.go:128] duration metric: took 2.352365667s to createHost
	I0327 16:39:21.181690    8497 start.go:83] releasing machines lock for "offline-docker-189000", held for 2.352431416s
	W0327 16:39:21.181705    8497 start.go:713] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0327 16:39:21.191146    8497 out.go:177] * Deleting "offline-docker-189000" in qemu2 ...
	W0327 16:39:21.199997    8497 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0327 16:39:21.200151    8497 start.go:728] Will try again in 5 seconds ...
	I0327 16:39:26.202210    8497 start.go:360] acquireMachinesLock for offline-docker-189000: {Name:mk6b67ca98ff36f470ae98389325f11bd4950dc8 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0327 16:39:26.202656    8497 start.go:364] duration metric: took 339.917µs to acquireMachinesLock for "offline-docker-189000"
	I0327 16:39:26.202801    8497 start.go:93] Provisioning new machine with config: &{Name:offline-docker-189000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18485/minikube-v1.33.0-1711559712-18485-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1711559786-18485@sha256:2dcab64da240d825290a528fa79ad3c32db45fe5f8be5150468234a7114eff82 Memory:2048 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.3 ClusterName:offline-docker-189000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0327 16:39:26.203058    8497 start.go:125] createHost starting for "" (driver="qemu2")
	I0327 16:39:26.212004    8497 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	I0327 16:39:26.262952    8497 start.go:159] libmachine.API.Create for "offline-docker-189000" (driver="qemu2")
	I0327 16:39:26.263004    8497 client.go:168] LocalClient.Create starting
	I0327 16:39:26.263123    8497 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18485-6511/.minikube/certs/ca.pem
	I0327 16:39:26.263186    8497 main.go:141] libmachine: Decoding PEM data...
	I0327 16:39:26.263212    8497 main.go:141] libmachine: Parsing certificate...
	I0327 16:39:26.263293    8497 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18485-6511/.minikube/certs/cert.pem
	I0327 16:39:26.263336    8497 main.go:141] libmachine: Decoding PEM data...
	I0327 16:39:26.263350    8497 main.go:141] libmachine: Parsing certificate...
	I0327 16:39:26.263923    8497 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/18485-6511/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/18485-6511/.minikube/cache/iso/arm64/minikube-v1.33.0-1711559712-18485-arm64.iso...
	I0327 16:39:26.411465    8497 main.go:141] libmachine: Creating SSH key...
	I0327 16:39:26.479853    8497 main.go:141] libmachine: Creating Disk image...
	I0327 16:39:26.479860    8497 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0327 16:39:26.480037    8497 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/18485-6511/.minikube/machines/offline-docker-189000/disk.qcow2.raw /Users/jenkins/minikube-integration/18485-6511/.minikube/machines/offline-docker-189000/disk.qcow2
	I0327 16:39:26.492103    8497 main.go:141] libmachine: STDOUT: 
	I0327 16:39:26.492129    8497 main.go:141] libmachine: STDERR: 
	I0327 16:39:26.492187    8497 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/18485-6511/.minikube/machines/offline-docker-189000/disk.qcow2 +20000M
	I0327 16:39:26.503013    8497 main.go:141] libmachine: STDOUT: Image resized.
	
	I0327 16:39:26.503028    8497 main.go:141] libmachine: STDERR: 
	I0327 16:39:26.503049    8497 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/18485-6511/.minikube/machines/offline-docker-189000/disk.qcow2.raw and /Users/jenkins/minikube-integration/18485-6511/.minikube/machines/offline-docker-189000/disk.qcow2
	I0327 16:39:26.503052    8497 main.go:141] libmachine: Starting QEMU VM...
	I0327 16:39:26.503085    8497 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2048 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18485-6511/.minikube/machines/offline-docker-189000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18485-6511/.minikube/machines/offline-docker-189000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18485-6511/.minikube/machines/offline-docker-189000/qemu.pid -device virtio-net-pci,netdev=net0,mac=e2:66:3f:38:e6:d7 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18485-6511/.minikube/machines/offline-docker-189000/disk.qcow2
	I0327 16:39:26.504689    8497 main.go:141] libmachine: STDOUT: 
	I0327 16:39:26.504706    8497 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0327 16:39:26.504726    8497 client.go:171] duration metric: took 241.72375ms to LocalClient.Create
	I0327 16:39:28.506842    8497 start.go:128] duration metric: took 2.303821416s to createHost
	I0327 16:39:28.506884    8497 start.go:83] releasing machines lock for "offline-docker-189000", held for 2.304268041s
	W0327 16:39:28.507329    8497 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p offline-docker-189000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p offline-docker-189000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0327 16:39:28.519685    8497 out.go:177] 
	W0327 16:39:28.523844    8497 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0327 16:39:28.523963    8497 out.go:239] * 
	* 
	W0327 16:39:28.526629    8497 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0327 16:39:28.536629    8497 out.go:177] 

** /stderr **
aab_offline_test.go:58: out/minikube-darwin-arm64 start -p offline-docker-189000 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=qemu2  failed: exit status 80
panic.go:626: *** TestOffline FAILED at 2024-03-27 16:39:28.553407 -0700 PDT m=+790.036567751
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p offline-docker-189000 -n offline-docker-189000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p offline-docker-189000 -n offline-docker-189000: exit status 7 (67.822625ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "offline-docker-189000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:175: Cleaning up "offline-docker-189000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p offline-docker-189000
--- FAIL: TestOffline (10.02s)
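Editor's note: every qemu2 start in this run fails the same way: the socket_vmnet client cannot reach "/var/run/socket_vmnet" (connection refused), so libmachine never gets a VM running. A minimal probe of that socket, as a hedged diagnostic sketch (not repo code; it assumes only that the path in the log is correct):

package main

import (
	"fmt"
	"net"
	"time"
)

func main() {
	// Socket path copied from the failure message above.
	conn, err := net.DialTimeout("unix", "/var/run/socket_vmnet", 2*time.Second)
	if err != nil {
		// "connection refused" reproduces the failure: the socket file may
		// exist, but no socket_vmnet daemon is accepting connections on it.
		fmt.Println("socket_vmnet unreachable:", err)
		return
	}
	conn.Close()
	fmt.Println("socket_vmnet is accepting connections")
}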

TestAddons/Setup (10.31s)

=== RUN   TestAddons/Setup
addons_test.go:109: (dbg) Run:  out/minikube-darwin-arm64 start -p addons-295000 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=storage-provisioner-rancher --addons=nvidia-device-plugin --addons=yakd --driver=qemu2  --addons=ingress --addons=ingress-dns
addons_test.go:109: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p addons-295000 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=storage-provisioner-rancher --addons=nvidia-device-plugin --addons=yakd --driver=qemu2  --addons=ingress --addons=ingress-dns: exit status 80 (10.310604209s)

-- stdout --
	* [addons-295000] minikube v1.33.0-beta.0 on Darwin 14.3.1 (arm64)
	  - MINIKUBE_LOCATION=18485
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18485-6511/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18485-6511/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "addons-295000" primary control-plane node in "addons-295000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "addons-295000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0327 16:27:46.779570    7090 out.go:291] Setting OutFile to fd 1 ...
	I0327 16:27:46.779706    7090 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0327 16:27:46.779709    7090 out.go:304] Setting ErrFile to fd 2...
	I0327 16:27:46.779716    7090 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0327 16:27:46.779845    7090 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18485-6511/.minikube/bin
	I0327 16:27:46.780889    7090 out.go:298] Setting JSON to false
	I0327 16:27:46.797126    7090 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":5237,"bootTime":1711576829,"procs":470,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.3.1","kernelVersion":"23.3.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0327 16:27:46.797196    7090 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0327 16:27:46.801883    7090 out.go:177] * [addons-295000] minikube v1.33.0-beta.0 on Darwin 14.3.1 (arm64)
	I0327 16:27:46.808830    7090 out.go:177]   - MINIKUBE_LOCATION=18485
	I0327 16:27:46.808884    7090 notify.go:220] Checking for updates...
	I0327 16:27:46.812831    7090 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/18485-6511/kubeconfig
	I0327 16:27:46.815833    7090 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0327 16:27:46.818815    7090 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0327 16:27:46.821809    7090 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18485-6511/.minikube
	I0327 16:27:46.824845    7090 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0327 16:27:46.827953    7090 driver.go:392] Setting default libvirt URI to qemu:///system
	I0327 16:27:46.831822    7090 out.go:177] * Using the qemu2 driver based on user configuration
	I0327 16:27:46.838730    7090 start.go:297] selected driver: qemu2
	I0327 16:27:46.838735    7090 start.go:901] validating driver "qemu2" against <nil>
	I0327 16:27:46.838740    7090 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0327 16:27:46.840922    7090 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0327 16:27:46.843803    7090 out.go:177] * Automatically selected the socket_vmnet network
	I0327 16:27:46.846916    7090 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0327 16:27:46.846962    7090 cni.go:84] Creating CNI manager for ""
	I0327 16:27:46.846970    7090 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0327 16:27:46.846974    7090 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0327 16:27:46.847014    7090 start.go:340] cluster config:
	{Name:addons-295000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1711559786-18485@sha256:2dcab64da240d825290a528fa79ad3c32db45fe5f8be5150468234a7114eff82 Memory:4000 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.3 ClusterName:addons-295000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0327 16:27:46.851638    7090 iso.go:125] acquiring lock: {Name:mk26a7f41e004242919e6d5076ec0f9645f820ad Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0327 16:27:46.859804    7090 out.go:177] * Starting "addons-295000" primary control-plane node in "addons-295000" cluster
	I0327 16:27:46.863854    7090 preload.go:132] Checking if preload exists for k8s version v1.29.3 and runtime docker
	I0327 16:27:46.863872    7090 preload.go:147] Found local preload: /Users/jenkins/minikube-integration/18485-6511/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.29.3-docker-overlay2-arm64.tar.lz4
	I0327 16:27:46.863886    7090 cache.go:56] Caching tarball of preloaded images
	I0327 16:27:46.863948    7090 preload.go:173] Found /Users/jenkins/minikube-integration/18485-6511/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.29.3-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0327 16:27:46.863954    7090 cache.go:59] Finished verifying existence of preloaded tar for v1.29.3 on docker
	I0327 16:27:46.864234    7090 profile.go:142] Saving config to /Users/jenkins/minikube-integration/18485-6511/.minikube/profiles/addons-295000/config.json ...
	I0327 16:27:46.864249    7090 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/18485-6511/.minikube/profiles/addons-295000/config.json: {Name:mkace165496086975eebd5a665ba0547978ae203 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0327 16:27:46.864486    7090 start.go:360] acquireMachinesLock for addons-295000: {Name:mk6b67ca98ff36f470ae98389325f11bd4950dc8 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0327 16:27:46.864661    7090 start.go:364] duration metric: took 169.333µs to acquireMachinesLock for "addons-295000"
	I0327 16:27:46.864674    7090 start.go:93] Provisioning new machine with config: &{Name:addons-295000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18485/minikube-v1.33.0-1711559712-18485-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1711559786-18485@sha256:2dcab64da240d825290a528fa79ad3c32db45fe5f8be5150468234a7114eff82 Memory:4000 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.3 ClusterName:addons-295000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0327 16:27:46.864708    7090 start.go:125] createHost starting for "" (driver="qemu2")
	I0327 16:27:46.873794    7090 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	I0327 16:27:47.109647    7090 start.go:159] libmachine.API.Create for "addons-295000" (driver="qemu2")
	I0327 16:27:47.109686    7090 client.go:168] LocalClient.Create starting
	I0327 16:27:47.109866    7090 main.go:141] libmachine: Creating CA: /Users/jenkins/minikube-integration/18485-6511/.minikube/certs/ca.pem
	I0327 16:27:47.291006    7090 main.go:141] libmachine: Creating client certificate: /Users/jenkins/minikube-integration/18485-6511/.minikube/certs/cert.pem
	I0327 16:27:47.337703    7090 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/18485-6511/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/18485-6511/.minikube/cache/iso/arm64/minikube-v1.33.0-1711559712-18485-arm64.iso...
	I0327 16:27:47.567114    7090 main.go:141] libmachine: Creating SSH key...
	I0327 16:27:47.637151    7090 main.go:141] libmachine: Creating Disk image...
	I0327 16:27:47.637161    7090 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0327 16:27:47.638176    7090 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/18485-6511/.minikube/machines/addons-295000/disk.qcow2.raw /Users/jenkins/minikube-integration/18485-6511/.minikube/machines/addons-295000/disk.qcow2
	I0327 16:27:47.659543    7090 main.go:141] libmachine: STDOUT: 
	I0327 16:27:47.659568    7090 main.go:141] libmachine: STDERR: 
	I0327 16:27:47.659616    7090 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/18485-6511/.minikube/machines/addons-295000/disk.qcow2 +20000M
	I0327 16:27:47.670440    7090 main.go:141] libmachine: STDOUT: Image resized.
	
	I0327 16:27:47.670467    7090 main.go:141] libmachine: STDERR: 
	I0327 16:27:47.670482    7090 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/18485-6511/.minikube/machines/addons-295000/disk.qcow2.raw and /Users/jenkins/minikube-integration/18485-6511/.minikube/machines/addons-295000/disk.qcow2
	I0327 16:27:47.670494    7090 main.go:141] libmachine: Starting QEMU VM...
	I0327 16:27:47.670537    7090 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 4000 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18485-6511/.minikube/machines/addons-295000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18485-6511/.minikube/machines/addons-295000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18485-6511/.minikube/machines/addons-295000/qemu.pid -device virtio-net-pci,netdev=net0,mac=46:2f:d1:92:2c:a3 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18485-6511/.minikube/machines/addons-295000/disk.qcow2
	I0327 16:27:47.676555    7090 main.go:141] libmachine: STDOUT: 
	I0327 16:27:47.676573    7090 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0327 16:27:47.676594    7090 client.go:171] duration metric: took 566.919917ms to LocalClient.Create
	I0327 16:27:49.678716    7090 start.go:128] duration metric: took 2.814069083s to createHost
	I0327 16:27:49.678804    7090 start.go:83] releasing machines lock for "addons-295000", held for 2.814212916s
	W0327 16:27:49.678911    7090 start.go:713] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0327 16:27:49.689226    7090 out.go:177] * Deleting "addons-295000" in qemu2 ...
	W0327 16:27:49.715807    7090 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0327 16:27:49.715837    7090 start.go:728] Will try again in 5 seconds ...
	I0327 16:27:54.717226    7090 start.go:360] acquireMachinesLock for addons-295000: {Name:mk6b67ca98ff36f470ae98389325f11bd4950dc8 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0327 16:27:54.717721    7090 start.go:364] duration metric: took 415.25µs to acquireMachinesLock for "addons-295000"
	I0327 16:27:54.717897    7090 start.go:93] Provisioning new machine with config: &{Name:addons-295000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18485/minikube-v1.33.0-1711559712-18485-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1711559786-18485@sha256:2dcab64da240d825290a528fa79ad3c32db45fe5f8be5150468234a7114eff82 Memory:4000 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.3 ClusterName:addons-295000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0327 16:27:54.718196    7090 start.go:125] createHost starting for "" (driver="qemu2")
	I0327 16:27:54.727930    7090 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	I0327 16:27:54.776921    7090 start.go:159] libmachine.API.Create for "addons-295000" (driver="qemu2")
	I0327 16:27:54.776967    7090 client.go:168] LocalClient.Create starting
	I0327 16:27:54.777079    7090 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18485-6511/.minikube/certs/ca.pem
	I0327 16:27:54.777157    7090 main.go:141] libmachine: Decoding PEM data...
	I0327 16:27:54.777186    7090 main.go:141] libmachine: Parsing certificate...
	I0327 16:27:54.777278    7090 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18485-6511/.minikube/certs/cert.pem
	I0327 16:27:54.777324    7090 main.go:141] libmachine: Decoding PEM data...
	I0327 16:27:54.777346    7090 main.go:141] libmachine: Parsing certificate...
	I0327 16:27:54.777839    7090 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/18485-6511/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/18485-6511/.minikube/cache/iso/arm64/minikube-v1.33.0-1711559712-18485-arm64.iso...
	I0327 16:27:54.932962    7090 main.go:141] libmachine: Creating SSH key...
	I0327 16:27:54.991083    7090 main.go:141] libmachine: Creating Disk image...
	I0327 16:27:54.991088    7090 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0327 16:27:54.991267    7090 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/18485-6511/.minikube/machines/addons-295000/disk.qcow2.raw /Users/jenkins/minikube-integration/18485-6511/.minikube/machines/addons-295000/disk.qcow2
	I0327 16:27:55.003752    7090 main.go:141] libmachine: STDOUT: 
	I0327 16:27:55.003774    7090 main.go:141] libmachine: STDERR: 
	I0327 16:27:55.003832    7090 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/18485-6511/.minikube/machines/addons-295000/disk.qcow2 +20000M
	I0327 16:27:55.014508    7090 main.go:141] libmachine: STDOUT: Image resized.
	
	I0327 16:27:55.014530    7090 main.go:141] libmachine: STDERR: 
	I0327 16:27:55.014551    7090 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/18485-6511/.minikube/machines/addons-295000/disk.qcow2.raw and /Users/jenkins/minikube-integration/18485-6511/.minikube/machines/addons-295000/disk.qcow2
	I0327 16:27:55.014558    7090 main.go:141] libmachine: Starting QEMU VM...
	I0327 16:27:55.014593    7090 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 4000 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18485-6511/.minikube/machines/addons-295000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18485-6511/.minikube/machines/addons-295000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18485-6511/.minikube/machines/addons-295000/qemu.pid -device virtio-net-pci,netdev=net0,mac=4a:46:90:06:dc:83 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18485-6511/.minikube/machines/addons-295000/disk.qcow2
	I0327 16:27:55.016312    7090 main.go:141] libmachine: STDOUT: 
	I0327 16:27:55.016328    7090 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0327 16:27:55.016343    7090 client.go:171] duration metric: took 239.378625ms to LocalClient.Create
	I0327 16:27:57.018631    7090 start.go:128] duration metric: took 2.300419625s to createHost
	I0327 16:27:57.018717    7090 start.go:83] releasing machines lock for "addons-295000", held for 2.301040291s
	W0327 16:27:57.019117    7090 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p addons-295000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p addons-295000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0327 16:27:57.027557    7090 out.go:177] 
	W0327 16:27:57.033684    7090 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0327 16:27:57.033738    7090 out.go:239] * 
	* 
	W0327 16:27:57.036524    7090 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0327 16:27:57.044634    7090 out.go:177] 

** /stderr **
addons_test.go:111: out/minikube-darwin-arm64 start -p addons-295000 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=storage-provisioner-rancher --addons=nvidia-device-plugin --addons=yakd --driver=qemu2  --addons=ingress --addons=ingress-dns failed: exit status 80
--- FAIL: TestAddons/Setup (10.31s)
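The stderr above shows the launch mechanism: libmachine starts the VM through /opt/socket_vmnet/bin/socket_vmnet_client, which connects to /var/run/socket_vmnet and hands the connected descriptor to qemu-system-aarch64 as "-netdev socket,id=net0,fd=3". When that connect is refused, qemu never launches; minikube deletes the half-created profile, retries once after 5 seconds, and exits with GUEST_PROVISION. The client can be exercised without minikube to isolate the fault; a sketch, assuming socket_vmnet_client's usual "socket path, then command to exec" invocation:

	# connect to the socket, then exec a no-op command; failure means the daemon is down
	/opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet /usr/bin/true \
		&& echo "socket reachable" || echo "connect refused"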

TestCertOptions (10.15s)

=== RUN   TestCertOptions
=== PAUSE TestCertOptions

=== CONT  TestCertOptions
cert_options_test.go:49: (dbg) Run:  out/minikube-darwin-arm64 start -p cert-options-772000 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=qemu2 
cert_options_test.go:49: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p cert-options-772000 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=qemu2 : exit status 80 (9.864358833s)

-- stdout --
	* [cert-options-772000] minikube v1.33.0-beta.0 on Darwin 14.3.1 (arm64)
	  - MINIKUBE_LOCATION=18485
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18485-6511/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18485-6511/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "cert-options-772000" primary control-plane node in "cert-options-772000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "cert-options-772000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p cert-options-772000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
cert_options_test.go:51: failed to start minikube with args: "out/minikube-darwin-arm64 start -p cert-options-772000 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=qemu2 " : exit status 80
cert_options_test.go:60: (dbg) Run:  out/minikube-darwin-arm64 -p cert-options-772000 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt"
cert_options_test.go:60: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p cert-options-772000 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt": exit status 83 (80.698459ms)

-- stdout --
	* The control-plane node cert-options-772000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p cert-options-772000"

-- /stdout --
cert_options_test.go:62: failed to read apiserver cert inside minikube. args "out/minikube-darwin-arm64 -p cert-options-772000 ssh \"openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt\"": exit status 83
cert_options_test.go:69: apiserver cert does not include 127.0.0.1 in SAN.
cert_options_test.go:69: apiserver cert does not include 192.168.15.15 in SAN.
cert_options_test.go:69: apiserver cert does not include localhost in SAN.
cert_options_test.go:69: apiserver cert does not include www.google.com in SAN.
cert_options_test.go:88: (dbg) Run:  kubectl --context cert-options-772000 config view
cert_options_test.go:93: Kubeconfig apiserver server port incorrect. Output of 'kubectl config view' = "\n-- stdout --\n\tapiVersion: v1\n\tclusters: null\n\tcontexts: null\n\tcurrent-context: \"\"\n\tkind: Config\n\tpreferences: {}\n\tusers: null\n\n-- /stdout --"
cert_options_test.go:100: (dbg) Run:  out/minikube-darwin-arm64 ssh -p cert-options-772000 -- "sudo cat /etc/kubernetes/admin.conf"
cert_options_test.go:100: (dbg) Non-zero exit: out/minikube-darwin-arm64 ssh -p cert-options-772000 -- "sudo cat /etc/kubernetes/admin.conf": exit status 83 (43.231084ms)

-- stdout --
	* The control-plane node cert-options-772000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p cert-options-772000"

-- /stdout --
cert_options_test.go:102: failed to SSH to minikube with args: "out/minikube-darwin-arm64 ssh -p cert-options-772000 -- \"sudo cat /etc/kubernetes/admin.conf\"" : exit status 83
cert_options_test.go:106: Internal minikube kubeconfig (admin.conf) does not contain the right API port. 
-- stdout --
	* The control-plane node cert-options-772000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p cert-options-772000"

-- /stdout --
cert_options_test.go:109: *** TestCertOptions FAILED at 2024-03-27 16:39:59.125345 -0700 PDT m=+820.609418085
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p cert-options-772000 -n cert-options-772000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p cert-options-772000 -n cert-options-772000: exit status 7 (31.900083ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "cert-options-772000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:175: Cleaning up "cert-options-772000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p cert-options-772000
--- FAIL: TestCertOptions (10.15s)
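Only the failed start is a primary failure here; the SAN assertions, the kubeconfig port check, and the admin.conf read all fail secondarily with exit status 83 because the host never left state=Stopped. Against a profile that does come up, the same SAN check can be rerun by hand; this sketch reuses the test's own command, with a grep added for readability:

	out/minikube-darwin-arm64 -p cert-options-772000 ssh \
		"openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt" \
		| grep -A1 "Subject Alternative Name"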

TestCertExpiration (195.45s)

=== RUN   TestCertExpiration
=== PAUSE TestCertExpiration

=== CONT  TestCertExpiration
cert_options_test.go:123: (dbg) Run:  out/minikube-darwin-arm64 start -p cert-expiration-052000 --memory=2048 --cert-expiration=3m --driver=qemu2 
cert_options_test.go:123: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p cert-expiration-052000 --memory=2048 --cert-expiration=3m --driver=qemu2 : exit status 80 (10.064156416s)

-- stdout --
	* [cert-expiration-052000] minikube v1.33.0-beta.0 on Darwin 14.3.1 (arm64)
	  - MINIKUBE_LOCATION=18485
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18485-6511/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18485-6511/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "cert-expiration-052000" primary control-plane node in "cert-expiration-052000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "cert-expiration-052000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p cert-expiration-052000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
cert_options_test.go:125: failed to start minikube with args: "out/minikube-darwin-arm64 start -p cert-expiration-052000 --memory=2048 --cert-expiration=3m --driver=qemu2 " : exit status 80
cert_options_test.go:131: (dbg) Run:  out/minikube-darwin-arm64 start -p cert-expiration-052000 --memory=2048 --cert-expiration=8760h --driver=qemu2 
cert_options_test.go:131: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p cert-expiration-052000 --memory=2048 --cert-expiration=8760h --driver=qemu2 : exit status 80 (5.231131709s)

-- stdout --
	* [cert-expiration-052000] minikube v1.33.0-beta.0 on Darwin 14.3.1 (arm64)
	  - MINIKUBE_LOCATION=18485
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18485-6511/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18485-6511/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting "cert-expiration-052000" primary control-plane node in "cert-expiration-052000" cluster
	* Restarting existing qemu2 VM for "cert-expiration-052000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "cert-expiration-052000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p cert-expiration-052000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
cert_options_test.go:133: failed to start minikube after cert expiration: "out/minikube-darwin-arm64 start -p cert-expiration-052000 --memory=2048 --cert-expiration=8760h --driver=qemu2 " : exit status 80
cert_options_test.go:136: minikube start output did not warn about expired certs: 
-- stdout --
	* [cert-expiration-052000] minikube v1.33.0-beta.0 on Darwin 14.3.1 (arm64)
	  - MINIKUBE_LOCATION=18485
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18485-6511/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18485-6511/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting "cert-expiration-052000" primary control-plane node in "cert-expiration-052000" cluster
	* Restarting existing qemu2 VM for "cert-expiration-052000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "cert-expiration-052000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p cert-expiration-052000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
cert_options_test.go:138: *** TestCertExpiration FAILED at 2024-03-27 16:42:59.151012 -0700 PDT m=+1000.640456460
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p cert-expiration-052000 -n cert-expiration-052000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p cert-expiration-052000 -n cert-expiration-052000: exit status 7 (46.250084ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "cert-expiration-052000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:175: Cleaning up "cert-expiration-052000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p cert-expiration-052000
--- FAIL: TestCertExpiration (195.45s)
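The 195s wall time is expected even though both starts fail within seconds: the test brings a cluster up with a 3-minute certificate lifetime, waits out that window, then restarts with --cert-expiration=8760h and asserts that the output warns about expired certs. A repro sketch using the same commands; the 180-second sleep stands in for the test's expiration wait and is inferred, not taken from the log:

	out/minikube-darwin-arm64 start -p cert-expiration-052000 --memory=2048 --cert-expiration=3m --driver=qemu2
	sleep 180   # let the 3m certificates lapse, as the test does
	out/minikube-darwin-arm64 start -p cert-expiration-052000 --memory=2048 --cert-expiration=8760h --driver=qemu2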

TestDockerFlags (10.23s)

=== RUN   TestDockerFlags
=== PAUSE TestDockerFlags

=== CONT  TestDockerFlags
docker_test.go:51: (dbg) Run:  out/minikube-darwin-arm64 start -p docker-flags-564000 --cache-images=false --memory=2048 --install-addons=false --wait=false --docker-env=FOO=BAR --docker-env=BAZ=BAT --docker-opt=debug --docker-opt=icc=true --alsologtostderr -v=5 --driver=qemu2 
docker_test.go:51: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p docker-flags-564000 --cache-images=false --memory=2048 --install-addons=false --wait=false --docker-env=FOO=BAR --docker-env=BAZ=BAT --docker-opt=debug --docker-opt=icc=true --alsologtostderr -v=5 --driver=qemu2 : exit status 80 (9.971460792s)

-- stdout --
	* [docker-flags-564000] minikube v1.33.0-beta.0 on Darwin 14.3.1 (arm64)
	  - MINIKUBE_LOCATION=18485
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18485-6511/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18485-6511/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "docker-flags-564000" primary control-plane node in "docker-flags-564000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "docker-flags-564000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0327 16:39:38.909270    8699 out.go:291] Setting OutFile to fd 1 ...
	I0327 16:39:38.909405    8699 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0327 16:39:38.909409    8699 out.go:304] Setting ErrFile to fd 2...
	I0327 16:39:38.909411    8699 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0327 16:39:38.909563    8699 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18485-6511/.minikube/bin
	I0327 16:39:38.910664    8699 out.go:298] Setting JSON to false
	I0327 16:39:38.926731    8699 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":5949,"bootTime":1711576829,"procs":477,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.3.1","kernelVersion":"23.3.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0327 16:39:38.926791    8699 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0327 16:39:38.931562    8699 out.go:177] * [docker-flags-564000] minikube v1.33.0-beta.0 on Darwin 14.3.1 (arm64)
	I0327 16:39:38.937535    8699 out.go:177]   - MINIKUBE_LOCATION=18485
	I0327 16:39:38.937606    8699 notify.go:220] Checking for updates...
	I0327 16:39:38.941517    8699 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/18485-6511/kubeconfig
	I0327 16:39:38.944478    8699 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0327 16:39:38.947532    8699 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0327 16:39:38.950456    8699 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18485-6511/.minikube
	I0327 16:39:38.953481    8699 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0327 16:39:38.956917    8699 config.go:182] Loaded profile config "force-systemd-flag-460000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.29.3
	I0327 16:39:38.956981    8699 config.go:182] Loaded profile config "multinode-266000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.29.3
	I0327 16:39:38.957027    8699 driver.go:392] Setting default libvirt URI to qemu:///system
	I0327 16:39:38.960448    8699 out.go:177] * Using the qemu2 driver based on user configuration
	I0327 16:39:38.967484    8699 start.go:297] selected driver: qemu2
	I0327 16:39:38.967488    8699 start.go:901] validating driver "qemu2" against <nil>
	I0327 16:39:38.967493    8699 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0327 16:39:38.969700    8699 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0327 16:39:38.971241    8699 out.go:177] * Automatically selected the socket_vmnet network
	I0327 16:39:38.974562    8699 start_flags.go:942] Waiting for no components: map[apiserver:false apps_running:false default_sa:false extra:false kubelet:false node_ready:false system_pods:false]
	I0327 16:39:38.974596    8699 cni.go:84] Creating CNI manager for ""
	I0327 16:39:38.974602    8699 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0327 16:39:38.974606    8699 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0327 16:39:38.974633    8699 start.go:340] cluster config:
	{Name:docker-flags-564000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1711559786-18485@sha256:2dcab64da240d825290a528fa79ad3c32db45fe5f8be5150468234a7114eff82 Memory:2048 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[FOO=BAR BAZ=BAT] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[debug icc=true] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.3 ClusterName:docker-flags-564000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:false EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:false apps_running:false default_sa:false extra:false kubelet:false node_ready:false system_pods:false] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0327 16:39:38.979168    8699 iso.go:125] acquiring lock: {Name:mk26a7f41e004242919e6d5076ec0f9645f820ad Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0327 16:39:38.987440    8699 out.go:177] * Starting "docker-flags-564000" primary control-plane node in "docker-flags-564000" cluster
	I0327 16:39:38.991467    8699 preload.go:132] Checking if preload exists for k8s version v1.29.3 and runtime docker
	I0327 16:39:38.991483    8699 preload.go:147] Found local preload: /Users/jenkins/minikube-integration/18485-6511/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.29.3-docker-overlay2-arm64.tar.lz4
	I0327 16:39:38.991493    8699 cache.go:56] Caching tarball of preloaded images
	I0327 16:39:38.991545    8699 preload.go:173] Found /Users/jenkins/minikube-integration/18485-6511/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.29.3-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0327 16:39:38.991551    8699 cache.go:59] Finished verifying existence of preloaded tar for v1.29.3 on docker
	I0327 16:39:38.991603    8699 profile.go:142] Saving config to /Users/jenkins/minikube-integration/18485-6511/.minikube/profiles/docker-flags-564000/config.json ...
	I0327 16:39:38.991613    8699 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/18485-6511/.minikube/profiles/docker-flags-564000/config.json: {Name:mk8907a2f601a4142f095570d89f9a7cc0f32544 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0327 16:39:38.991844    8699 start.go:360] acquireMachinesLock for docker-flags-564000: {Name:mk6b67ca98ff36f470ae98389325f11bd4950dc8 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0327 16:39:38.991883    8699 start.go:364] duration metric: took 26.958µs to acquireMachinesLock for "docker-flags-564000"
	I0327 16:39:38.991896    8699 start.go:93] Provisioning new machine with config: &{Name:docker-flags-564000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18485/minikube-v1.33.0-1711559712-18485-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1711559786-18485@sha256:2dcab64da240d825290a528fa79ad3c32db45fe5f8be5150468234a7114eff82 Memory:2048 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[FOO=BAR BAZ=BAT] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[debug icc=true] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.3 ClusterName:docker-flags-564000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:false EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:false apps_running:false default_sa:false extra:false kubelet:false node_ready:false system_pods:false] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0327 16:39:38.991934    8699 start.go:125] createHost starting for "" (driver="qemu2")
	I0327 16:39:39.000510    8699 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	I0327 16:39:39.017521    8699 start.go:159] libmachine.API.Create for "docker-flags-564000" (driver="qemu2")
	I0327 16:39:39.017555    8699 client.go:168] LocalClient.Create starting
	I0327 16:39:39.017611    8699 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18485-6511/.minikube/certs/ca.pem
	I0327 16:39:39.017640    8699 main.go:141] libmachine: Decoding PEM data...
	I0327 16:39:39.017652    8699 main.go:141] libmachine: Parsing certificate...
	I0327 16:39:39.017700    8699 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18485-6511/.minikube/certs/cert.pem
	I0327 16:39:39.017721    8699 main.go:141] libmachine: Decoding PEM data...
	I0327 16:39:39.017728    8699 main.go:141] libmachine: Parsing certificate...
	I0327 16:39:39.018111    8699 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/18485-6511/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/18485-6511/.minikube/cache/iso/arm64/minikube-v1.33.0-1711559712-18485-arm64.iso...
	I0327 16:39:39.156614    8699 main.go:141] libmachine: Creating SSH key...
	I0327 16:39:39.289209    8699 main.go:141] libmachine: Creating Disk image...
	I0327 16:39:39.289215    8699 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0327 16:39:39.289377    8699 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/18485-6511/.minikube/machines/docker-flags-564000/disk.qcow2.raw /Users/jenkins/minikube-integration/18485-6511/.minikube/machines/docker-flags-564000/disk.qcow2
	I0327 16:39:39.301752    8699 main.go:141] libmachine: STDOUT: 
	I0327 16:39:39.301773    8699 main.go:141] libmachine: STDERR: 
	I0327 16:39:39.301834    8699 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/18485-6511/.minikube/machines/docker-flags-564000/disk.qcow2 +20000M
	I0327 16:39:39.312500    8699 main.go:141] libmachine: STDOUT: Image resized.
	
	I0327 16:39:39.312517    8699 main.go:141] libmachine: STDERR: 
	I0327 16:39:39.312536    8699 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/18485-6511/.minikube/machines/docker-flags-564000/disk.qcow2.raw and /Users/jenkins/minikube-integration/18485-6511/.minikube/machines/docker-flags-564000/disk.qcow2
	I0327 16:39:39.312541    8699 main.go:141] libmachine: Starting QEMU VM...
	I0327 16:39:39.312570    8699 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2048 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18485-6511/.minikube/machines/docker-flags-564000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18485-6511/.minikube/machines/docker-flags-564000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18485-6511/.minikube/machines/docker-flags-564000/qemu.pid -device virtio-net-pci,netdev=net0,mac=a6:ec:d6:5d:26:66 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18485-6511/.minikube/machines/docker-flags-564000/disk.qcow2
	I0327 16:39:39.314260    8699 main.go:141] libmachine: STDOUT: 
	I0327 16:39:39.314278    8699 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0327 16:39:39.314299    8699 client.go:171] duration metric: took 296.747ms to LocalClient.Create
	I0327 16:39:41.316523    8699 start.go:128] duration metric: took 2.324562291s to createHost
	I0327 16:39:41.316584    8699 start.go:83] releasing machines lock for "docker-flags-564000", held for 2.324761s
	W0327 16:39:41.316654    8699 start.go:713] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0327 16:39:41.339814    8699 out.go:177] * Deleting "docker-flags-564000" in qemu2 ...
	W0327 16:39:41.359778    8699 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0327 16:39:41.359828    8699 start.go:728] Will try again in 5 seconds ...
	I0327 16:39:46.361847    8699 start.go:360] acquireMachinesLock for docker-flags-564000: {Name:mk6b67ca98ff36f470ae98389325f11bd4950dc8 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0327 16:39:46.362169    8699 start.go:364] duration metric: took 232.917µs to acquireMachinesLock for "docker-flags-564000"
	I0327 16:39:46.362271    8699 start.go:93] Provisioning new machine with config: &{Name:docker-flags-564000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18485/minikube-v1.33.0-1711559712-18485-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1711559786-18485@sha256:2dcab64da240d825290a528fa79ad3c32db45fe5f8be5150468234a7114eff82 Memory:2048 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[FOO=BAR BAZ=BAT] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[debug icc=true] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.3 ClusterName:docker-flags-564000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:false EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:false apps_running:false default_sa:false extra:false kubelet:false node_ready:false system_pods:false] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0327 16:39:46.362474    8699 start.go:125] createHost starting for "" (driver="qemu2")
	I0327 16:39:46.371022    8699 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	I0327 16:39:46.415184    8699 start.go:159] libmachine.API.Create for "docker-flags-564000" (driver="qemu2")
	I0327 16:39:46.415237    8699 client.go:168] LocalClient.Create starting
	I0327 16:39:46.415336    8699 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18485-6511/.minikube/certs/ca.pem
	I0327 16:39:46.415397    8699 main.go:141] libmachine: Decoding PEM data...
	I0327 16:39:46.415421    8699 main.go:141] libmachine: Parsing certificate...
	I0327 16:39:46.415489    8699 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18485-6511/.minikube/certs/cert.pem
	I0327 16:39:46.415530    8699 main.go:141] libmachine: Decoding PEM data...
	I0327 16:39:46.415552    8699 main.go:141] libmachine: Parsing certificate...
	I0327 16:39:46.416057    8699 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/18485-6511/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/18485-6511/.minikube/cache/iso/arm64/minikube-v1.33.0-1711559712-18485-arm64.iso...
	I0327 16:39:46.561469    8699 main.go:141] libmachine: Creating SSH key...
	I0327 16:39:46.780792    8699 main.go:141] libmachine: Creating Disk image...
	I0327 16:39:46.780804    8699 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0327 16:39:46.781011    8699 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/18485-6511/.minikube/machines/docker-flags-564000/disk.qcow2.raw /Users/jenkins/minikube-integration/18485-6511/.minikube/machines/docker-flags-564000/disk.qcow2
	I0327 16:39:46.793899    8699 main.go:141] libmachine: STDOUT: 
	I0327 16:39:46.793915    8699 main.go:141] libmachine: STDERR: 
	I0327 16:39:46.793963    8699 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/18485-6511/.minikube/machines/docker-flags-564000/disk.qcow2 +20000M
	I0327 16:39:46.804614    8699 main.go:141] libmachine: STDOUT: Image resized.
	
	I0327 16:39:46.804640    8699 main.go:141] libmachine: STDERR: 
	I0327 16:39:46.804653    8699 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/18485-6511/.minikube/machines/docker-flags-564000/disk.qcow2.raw and /Users/jenkins/minikube-integration/18485-6511/.minikube/machines/docker-flags-564000/disk.qcow2
	I0327 16:39:46.804657    8699 main.go:141] libmachine: Starting QEMU VM...
	I0327 16:39:46.804704    8699 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2048 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18485-6511/.minikube/machines/docker-flags-564000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18485-6511/.minikube/machines/docker-flags-564000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18485-6511/.minikube/machines/docker-flags-564000/qemu.pid -device virtio-net-pci,netdev=net0,mac=52:6b:58:cc:8f:a3 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18485-6511/.minikube/machines/docker-flags-564000/disk.qcow2
	I0327 16:39:46.806492    8699 main.go:141] libmachine: STDOUT: 
	I0327 16:39:46.806507    8699 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0327 16:39:46.806521    8699 client.go:171] duration metric: took 391.290375ms to LocalClient.Create
	I0327 16:39:48.808636    8699 start.go:128] duration metric: took 2.44620425s to createHost
	I0327 16:39:48.808695    8699 start.go:83] releasing machines lock for "docker-flags-564000", held for 2.446579667s
	W0327 16:39:48.809124    8699 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p docker-flags-564000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p docker-flags-564000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0327 16:39:48.818757    8699 out.go:177] 
	W0327 16:39:48.823795    8699 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0327 16:39:48.823821    8699 out.go:239] * 
	* 
	W0327 16:39:48.826370    8699 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0327 16:39:48.834728    8699 out.go:177] 

** /stderr **
docker_test.go:53: failed to start minikube with args: "out/minikube-darwin-arm64 start -p docker-flags-564000 --cache-images=false --memory=2048 --install-addons=false --wait=false --docker-env=FOO=BAR --docker-env=BAZ=BAT --docker-opt=debug --docker-opt=icc=true --alsologtostderr -v=5 --driver=qemu2 " : exit status 80
docker_test.go:56: (dbg) Run:  out/minikube-darwin-arm64 -p docker-flags-564000 ssh "sudo systemctl show docker --property=Environment --no-pager"
docker_test.go:56: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p docker-flags-564000 ssh "sudo systemctl show docker --property=Environment --no-pager": exit status 83 (76.943375ms)

-- stdout --
	* The control-plane node docker-flags-564000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p docker-flags-564000"

-- /stdout --
docker_test.go:58: failed to 'systemctl show docker' inside minikube. args "out/minikube-darwin-arm64 -p docker-flags-564000 ssh \"sudo systemctl show docker --property=Environment --no-pager\"": exit status 83
docker_test.go:63: expected env key/value "FOO=BAR" to be passed to minikube's docker and be included in: *"* The control-plane node docker-flags-564000 host is not running: state=Stopped\n  To start a cluster, run: \"minikube start -p docker-flags-564000\"\n"*.
docker_test.go:63: expected env key/value "BAZ=BAT" to be passed to minikube's docker and be included in: *"* The control-plane node docker-flags-564000 host is not running: state=Stopped\n  To start a cluster, run: \"minikube start -p docker-flags-564000\"\n"*.
docker_test.go:67: (dbg) Run:  out/minikube-darwin-arm64 -p docker-flags-564000 ssh "sudo systemctl show docker --property=ExecStart --no-pager"
docker_test.go:67: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p docker-flags-564000 ssh "sudo systemctl show docker --property=ExecStart --no-pager": exit status 83 (45.585167ms)

-- stdout --
	* The control-plane node docker-flags-564000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p docker-flags-564000"

-- /stdout --
docker_test.go:69: failed on the second 'systemctl show docker' inside minikube. args "out/minikube-darwin-arm64 -p docker-flags-564000 ssh \"sudo systemctl show docker --property=ExecStart --no-pager\"": exit status 83
docker_test.go:73: expected "out/minikube-darwin-arm64 -p docker-flags-564000 ssh \"sudo systemctl show docker --property=ExecStart --no-pager\"" output to have include *--debug* . output: "* The control-plane node docker-flags-564000 host is not running: state=Stopped\n  To start a cluster, run: \"minikube start -p docker-flags-564000\"\n"
panic.go:626: *** TestDockerFlags FAILED at 2024-03-27 16:39:48.975717 -0700 PDT m=+810.459486918
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p docker-flags-564000 -n docker-flags-564000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p docker-flags-564000 -n docker-flags-564000: exit status 7 (31.15125ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "docker-flags-564000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:175: Cleaning up "docker-flags-564000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p docker-flags-564000
--- FAIL: TestDockerFlags (10.23s)
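
Both create attempts in this test fail at the same step: socket_vmnet_client cannot reach the unix socket at /var/run/socket_vmnet ("Connection refused"), so QEMU is never launched and the profile is left in state=Stopped. A minimal triage sketch for the CI host follows, assuming socket_vmnet is installed under /opt/socket_vmnet as the driver flags in the log indicate; the launchctl check and the gateway address mirror socket_vmnet's documented setup and are illustrative, not taken from this log:

	# Does the socket exist, and is any daemon registered for it?
	ls -l /var/run/socket_vmnet
	sudo launchctl list | grep -i socket_vmnet
	# If nothing is listening, run the daemon by hand to recover the host
	# (192.168.105.1 is the gateway used in socket_vmnet's README example):
	sudo /opt/socket_vmnet/bin/socket_vmnet --vmnet-gateway=192.168.105.1 /var/run/socket_vmnet

Once a daemon is listening on the socket, the qemu-system-aarch64 invocation recorded above should receive its -netdev socket fd instead of exiting with "Connection refused".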

TestForceSystemdFlag (10.37s)

=== RUN   TestForceSystemdFlag
=== PAUSE TestForceSystemdFlag

=== CONT  TestForceSystemdFlag
docker_test.go:91: (dbg) Run:  out/minikube-darwin-arm64 start -p force-systemd-flag-460000 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=qemu2 
docker_test.go:91: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p force-systemd-flag-460000 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=qemu2 : exit status 80 (10.153354917s)

-- stdout --
	* [force-systemd-flag-460000] minikube v1.33.0-beta.0 on Darwin 14.3.1 (arm64)
	  - MINIKUBE_LOCATION=18485
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18485-6511/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18485-6511/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "force-systemd-flag-460000" primary control-plane node in "force-systemd-flag-460000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "force-systemd-flag-460000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0327 16:39:33.517111    8671 out.go:291] Setting OutFile to fd 1 ...
	I0327 16:39:33.517238    8671 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0327 16:39:33.517241    8671 out.go:304] Setting ErrFile to fd 2...
	I0327 16:39:33.517245    8671 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0327 16:39:33.517367    8671 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18485-6511/.minikube/bin
	I0327 16:39:33.518467    8671 out.go:298] Setting JSON to false
	I0327 16:39:33.534428    8671 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":5944,"bootTime":1711576829,"procs":474,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.3.1","kernelVersion":"23.3.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0327 16:39:33.534488    8671 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0327 16:39:33.539419    8671 out.go:177] * [force-systemd-flag-460000] minikube v1.33.0-beta.0 on Darwin 14.3.1 (arm64)
	I0327 16:39:33.547388    8671 out.go:177]   - MINIKUBE_LOCATION=18485
	I0327 16:39:33.551353    8671 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/18485-6511/kubeconfig
	I0327 16:39:33.547476    8671 notify.go:220] Checking for updates...
	I0327 16:39:33.557374    8671 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0327 16:39:33.558914    8671 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0327 16:39:33.562348    8671 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18485-6511/.minikube
	I0327 16:39:33.565353    8671 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0327 16:39:33.568705    8671 config.go:182] Loaded profile config "force-systemd-env-565000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.29.3
	I0327 16:39:33.568770    8671 config.go:182] Loaded profile config "multinode-266000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.29.3
	I0327 16:39:33.568816    8671 driver.go:392] Setting default libvirt URI to qemu:///system
	I0327 16:39:33.573330    8671 out.go:177] * Using the qemu2 driver based on user configuration
	I0327 16:39:33.580387    8671 start.go:297] selected driver: qemu2
	I0327 16:39:33.580392    8671 start.go:901] validating driver "qemu2" against <nil>
	I0327 16:39:33.580397    8671 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0327 16:39:33.582647    8671 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0327 16:39:33.585297    8671 out.go:177] * Automatically selected the socket_vmnet network
	I0327 16:39:33.588458    8671 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0327 16:39:33.588495    8671 cni.go:84] Creating CNI manager for ""
	I0327 16:39:33.588507    8671 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0327 16:39:33.588513    8671 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0327 16:39:33.588538    8671 start.go:340] cluster config:
	{Name:force-systemd-flag-460000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1711559786-18485@sha256:2dcab64da240d825290a528fa79ad3c32db45fe5f8be5150468234a7114eff82 Memory:2048 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.3 ClusterName:force-systemd-flag-460000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0327 16:39:33.592877    8671 iso.go:125] acquiring lock: {Name:mk26a7f41e004242919e6d5076ec0f9645f820ad Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0327 16:39:33.600345    8671 out.go:177] * Starting "force-systemd-flag-460000" primary control-plane node in "force-systemd-flag-460000" cluster
	I0327 16:39:33.604377    8671 preload.go:132] Checking if preload exists for k8s version v1.29.3 and runtime docker
	I0327 16:39:33.604391    8671 preload.go:147] Found local preload: /Users/jenkins/minikube-integration/18485-6511/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.29.3-docker-overlay2-arm64.tar.lz4
	I0327 16:39:33.604395    8671 cache.go:56] Caching tarball of preloaded images
	I0327 16:39:33.604456    8671 preload.go:173] Found /Users/jenkins/minikube-integration/18485-6511/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.29.3-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0327 16:39:33.604462    8671 cache.go:59] Finished verifying existence of preloaded tar for v1.29.3 on docker
	I0327 16:39:33.604520    8671 profile.go:142] Saving config to /Users/jenkins/minikube-integration/18485-6511/.minikube/profiles/force-systemd-flag-460000/config.json ...
	I0327 16:39:33.604531    8671 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/18485-6511/.minikube/profiles/force-systemd-flag-460000/config.json: {Name:mk26ae72290848b466f1bf13d2ce67c7e8ef4fcb Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0327 16:39:33.604754    8671 start.go:360] acquireMachinesLock for force-systemd-flag-460000: {Name:mk6b67ca98ff36f470ae98389325f11bd4950dc8 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0327 16:39:33.604790    8671 start.go:364] duration metric: took 27.416µs to acquireMachinesLock for "force-systemd-flag-460000"
	I0327 16:39:33.604804    8671 start.go:93] Provisioning new machine with config: &{Name:force-systemd-flag-460000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18485/minikube-v1.33.0-1711559712-18485-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1711559786-18485@sha256:2dcab64da240d825290a528fa79ad3c32db45fe5f8be5150468234a7114eff82 Memory:2048 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.3 ClusterName:force-systemd-flag-460000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0327 16:39:33.604831    8671 start.go:125] createHost starting for "" (driver="qemu2")
	I0327 16:39:33.613376    8671 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	I0327 16:39:33.630952    8671 start.go:159] libmachine.API.Create for "force-systemd-flag-460000" (driver="qemu2")
	I0327 16:39:33.630983    8671 client.go:168] LocalClient.Create starting
	I0327 16:39:33.631060    8671 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18485-6511/.minikube/certs/ca.pem
	I0327 16:39:33.631089    8671 main.go:141] libmachine: Decoding PEM data...
	I0327 16:39:33.631099    8671 main.go:141] libmachine: Parsing certificate...
	I0327 16:39:33.631143    8671 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18485-6511/.minikube/certs/cert.pem
	I0327 16:39:33.631168    8671 main.go:141] libmachine: Decoding PEM data...
	I0327 16:39:33.631174    8671 main.go:141] libmachine: Parsing certificate...
	I0327 16:39:33.631534    8671 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/18485-6511/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/18485-6511/.minikube/cache/iso/arm64/minikube-v1.33.0-1711559712-18485-arm64.iso...
	I0327 16:39:33.790966    8671 main.go:141] libmachine: Creating SSH key...
	I0327 16:39:33.942263    8671 main.go:141] libmachine: Creating Disk image...
	I0327 16:39:33.942270    8671 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0327 16:39:33.942473    8671 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/18485-6511/.minikube/machines/force-systemd-flag-460000/disk.qcow2.raw /Users/jenkins/minikube-integration/18485-6511/.minikube/machines/force-systemd-flag-460000/disk.qcow2
	I0327 16:39:33.954916    8671 main.go:141] libmachine: STDOUT: 
	I0327 16:39:33.954936    8671 main.go:141] libmachine: STDERR: 
	I0327 16:39:33.954991    8671 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/18485-6511/.minikube/machines/force-systemd-flag-460000/disk.qcow2 +20000M
	I0327 16:39:33.965874    8671 main.go:141] libmachine: STDOUT: Image resized.
	
	I0327 16:39:33.965889    8671 main.go:141] libmachine: STDERR: 
	I0327 16:39:33.965902    8671 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/18485-6511/.minikube/machines/force-systemd-flag-460000/disk.qcow2.raw and /Users/jenkins/minikube-integration/18485-6511/.minikube/machines/force-systemd-flag-460000/disk.qcow2
	I0327 16:39:33.965906    8671 main.go:141] libmachine: Starting QEMU VM...
	I0327 16:39:33.965958    8671 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2048 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18485-6511/.minikube/machines/force-systemd-flag-460000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18485-6511/.minikube/machines/force-systemd-flag-460000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18485-6511/.minikube/machines/force-systemd-flag-460000/qemu.pid -device virtio-net-pci,netdev=net0,mac=1a:a2:79:49:03:7c -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18485-6511/.minikube/machines/force-systemd-flag-460000/disk.qcow2
	I0327 16:39:33.967687    8671 main.go:141] libmachine: STDOUT: 
	I0327 16:39:33.967702    8671 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0327 16:39:33.967732    8671 client.go:171] duration metric: took 336.744791ms to LocalClient.Create
	I0327 16:39:35.969783    8671 start.go:128] duration metric: took 2.364997958s to createHost
	I0327 16:39:35.969851    8671 start.go:83] releasing machines lock for "force-systemd-flag-460000", held for 2.365121791s
	W0327 16:39:35.969904    8671 start.go:713] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0327 16:39:35.981824    8671 out.go:177] * Deleting "force-systemd-flag-460000" in qemu2 ...
	W0327 16:39:36.010099    8671 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0327 16:39:36.010124    8671 start.go:728] Will try again in 5 seconds ...
	I0327 16:39:41.012190    8671 start.go:360] acquireMachinesLock for force-systemd-flag-460000: {Name:mk6b67ca98ff36f470ae98389325f11bd4950dc8 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0327 16:39:41.316699    8671 start.go:364] duration metric: took 304.378917ms to acquireMachinesLock for "force-systemd-flag-460000"
	I0327 16:39:41.316879    8671 start.go:93] Provisioning new machine with config: &{Name:force-systemd-flag-460000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18485/minikube-v1.33.0-1711559712-18485-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1711559786-18485@sha256:2dcab64da240d825290a528fa79ad3c32db45fe5f8be5150468234a7114eff82 Memory:2048 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.3 ClusterName:force-systemd-flag-460000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0327 16:39:41.317110    8671 start.go:125] createHost starting for "" (driver="qemu2")
	I0327 16:39:41.330567    8671 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	I0327 16:39:41.379752    8671 start.go:159] libmachine.API.Create for "force-systemd-flag-460000" (driver="qemu2")
	I0327 16:39:41.379806    8671 client.go:168] LocalClient.Create starting
	I0327 16:39:41.379994    8671 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18485-6511/.minikube/certs/ca.pem
	I0327 16:39:41.380060    8671 main.go:141] libmachine: Decoding PEM data...
	I0327 16:39:41.380079    8671 main.go:141] libmachine: Parsing certificate...
	I0327 16:39:41.380146    8671 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18485-6511/.minikube/certs/cert.pem
	I0327 16:39:41.380199    8671 main.go:141] libmachine: Decoding PEM data...
	I0327 16:39:41.380210    8671 main.go:141] libmachine: Parsing certificate...
	I0327 16:39:41.380813    8671 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/18485-6511/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/18485-6511/.minikube/cache/iso/arm64/minikube-v1.33.0-1711559712-18485-arm64.iso...
	I0327 16:39:41.537164    8671 main.go:141] libmachine: Creating SSH key...
	I0327 16:39:41.568403    8671 main.go:141] libmachine: Creating Disk image...
	I0327 16:39:41.568407    8671 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0327 16:39:41.568568    8671 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/18485-6511/.minikube/machines/force-systemd-flag-460000/disk.qcow2.raw /Users/jenkins/minikube-integration/18485-6511/.minikube/machines/force-systemd-flag-460000/disk.qcow2
	I0327 16:39:41.580708    8671 main.go:141] libmachine: STDOUT: 
	I0327 16:39:41.580728    8671 main.go:141] libmachine: STDERR: 
	I0327 16:39:41.580794    8671 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/18485-6511/.minikube/machines/force-systemd-flag-460000/disk.qcow2 +20000M
	I0327 16:39:41.591564    8671 main.go:141] libmachine: STDOUT: Image resized.
	
	I0327 16:39:41.591585    8671 main.go:141] libmachine: STDERR: 
	I0327 16:39:41.591600    8671 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/18485-6511/.minikube/machines/force-systemd-flag-460000/disk.qcow2.raw and /Users/jenkins/minikube-integration/18485-6511/.minikube/machines/force-systemd-flag-460000/disk.qcow2
	I0327 16:39:41.591614    8671 main.go:141] libmachine: Starting QEMU VM...
	I0327 16:39:41.591645    8671 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2048 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18485-6511/.minikube/machines/force-systemd-flag-460000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18485-6511/.minikube/machines/force-systemd-flag-460000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18485-6511/.minikube/machines/force-systemd-flag-460000/qemu.pid -device virtio-net-pci,netdev=net0,mac=46:33:d1:37:bb:6f -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18485-6511/.minikube/machines/force-systemd-flag-460000/disk.qcow2
	I0327 16:39:41.593454    8671 main.go:141] libmachine: STDOUT: 
	I0327 16:39:41.593469    8671 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0327 16:39:41.593485    8671 client.go:171] duration metric: took 213.676208ms to LocalClient.Create
	I0327 16:39:43.595728    8671 start.go:128] duration metric: took 2.278637625s to createHost
	I0327 16:39:43.595829    8671 start.go:83] releasing machines lock for "force-systemd-flag-460000", held for 2.279171709s
	W0327 16:39:43.596152    8671 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p force-systemd-flag-460000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p force-systemd-flag-460000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0327 16:39:43.611177    8671 out.go:177] 
	W0327 16:39:43.615838    8671 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0327 16:39:43.615871    8671 out.go:239] * 
	* 
	W0327 16:39:43.617933    8671 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0327 16:39:43.626722    8671 out.go:177] 

** /stderr **
docker_test.go:93: failed to start minikube with args: "out/minikube-darwin-arm64 start -p force-systemd-flag-460000 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=qemu2 " : exit status 80
docker_test.go:110: (dbg) Run:  out/minikube-darwin-arm64 -p force-systemd-flag-460000 ssh "docker info --format {{.CgroupDriver}}"
docker_test.go:110: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p force-systemd-flag-460000 ssh "docker info --format {{.CgroupDriver}}": exit status 83 (78.378375ms)

-- stdout --
	* The control-plane node force-systemd-flag-460000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p force-systemd-flag-460000"

-- /stdout --
docker_test.go:112: failed to get docker cgroup driver. args "out/minikube-darwin-arm64 -p force-systemd-flag-460000 ssh \"docker info --format {{.CgroupDriver}}\"": exit status 83
docker_test.go:106: *** TestForceSystemdFlag FAILED at 2024-03-27 16:39:43.722099 -0700 PDT m=+805.205712876
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p force-systemd-flag-460000 -n force-systemd-flag-460000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p force-systemd-flag-460000 -n force-systemd-flag-460000: exit status 7 (33.945709ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "force-systemd-flag-460000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:175: Cleaning up "force-systemd-flag-460000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p force-systemd-flag-460000
--- FAIL: TestForceSystemdFlag (10.37s)
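
Same root cause as TestDockerFlags, and it reproduces without minikube: socket_vmnet_client exits as soon as the unix-socket connect fails, before the wrapped command ever runs. A sketch, with /usr/bin/true standing in for the qemu command line purely to exercise the socket:

	/opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet /usr/bin/true
	# Expected on this host while the daemon is down:
	#   Failed to connect to "/var/run/socket_vmnet": Connection refused

Because host creation aborts at this point, the --force-systemd behavior under test is never exercised; the ssh and cgroup-driver assertions that follow fail only as a consequence.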

TestForceSystemdEnv (10.15s)

=== RUN   TestForceSystemdEnv
=== PAUSE TestForceSystemdEnv

=== CONT  TestForceSystemdEnv
docker_test.go:155: (dbg) Run:  out/minikube-darwin-arm64 start -p force-systemd-env-565000 --memory=2048 --alsologtostderr -v=5 --driver=qemu2 
docker_test.go:155: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p force-systemd-env-565000 --memory=2048 --alsologtostderr -v=5 --driver=qemu2 : exit status 80 (9.938125042s)

-- stdout --
	* [force-systemd-env-565000] minikube v1.33.0-beta.0 on Darwin 14.3.1 (arm64)
	  - MINIKUBE_LOCATION=18485
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18485-6511/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18485-6511/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=true
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "force-systemd-env-565000" primary control-plane node in "force-systemd-env-565000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "force-systemd-env-565000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0327 16:39:28.758826    8639 out.go:291] Setting OutFile to fd 1 ...
	I0327 16:39:28.759059    8639 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0327 16:39:28.759063    8639 out.go:304] Setting ErrFile to fd 2...
	I0327 16:39:28.759065    8639 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0327 16:39:28.759337    8639 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18485-6511/.minikube/bin
	I0327 16:39:28.760672    8639 out.go:298] Setting JSON to false
	I0327 16:39:28.777545    8639 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":5939,"bootTime":1711576829,"procs":475,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.3.1","kernelVersion":"23.3.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0327 16:39:28.777609    8639 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0327 16:39:28.783591    8639 out.go:177] * [force-systemd-env-565000] minikube v1.33.0-beta.0 on Darwin 14.3.1 (arm64)
	I0327 16:39:28.793587    8639 out.go:177]   - MINIKUBE_LOCATION=18485
	I0327 16:39:28.789628    8639 notify.go:220] Checking for updates...
	I0327 16:39:28.800557    8639 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/18485-6511/kubeconfig
	I0327 16:39:28.810613    8639 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0327 16:39:28.822461    8639 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0327 16:39:28.830598    8639 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18485-6511/.minikube
	I0327 16:39:28.837577    8639 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=true
	I0327 16:39:28.841971    8639 config.go:182] Loaded profile config "multinode-266000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.29.3
	I0327 16:39:28.842020    8639 driver.go:392] Setting default libvirt URI to qemu:///system
	I0327 16:39:28.845563    8639 out.go:177] * Using the qemu2 driver based on user configuration
	I0327 16:39:28.852606    8639 start.go:297] selected driver: qemu2
	I0327 16:39:28.852611    8639 start.go:901] validating driver "qemu2" against <nil>
	I0327 16:39:28.852617    8639 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0327 16:39:28.855141    8639 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0327 16:39:28.859590    8639 out.go:177] * Automatically selected the socket_vmnet network
	I0327 16:39:28.862683    8639 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0327 16:39:28.862729    8639 cni.go:84] Creating CNI manager for ""
	I0327 16:39:28.862737    8639 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0327 16:39:28.862745    8639 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0327 16:39:28.862771    8639 start.go:340] cluster config:
	{Name:force-systemd-env-565000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1711559786-18485@sha256:2dcab64da240d825290a528fa79ad3c32db45fe5f8be5150468234a7114eff82 Memory:2048 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.3 ClusterName:force-systemd-env-565000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0327 16:39:28.867429    8639 iso.go:125] acquiring lock: {Name:mk26a7f41e004242919e6d5076ec0f9645f820ad Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0327 16:39:28.874624    8639 out.go:177] * Starting "force-systemd-env-565000" primary control-plane node in "force-systemd-env-565000" cluster
	I0327 16:39:28.878631    8639 preload.go:132] Checking if preload exists for k8s version v1.29.3 and runtime docker
	I0327 16:39:28.878664    8639 preload.go:147] Found local preload: /Users/jenkins/minikube-integration/18485-6511/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.29.3-docker-overlay2-arm64.tar.lz4
	I0327 16:39:28.878674    8639 cache.go:56] Caching tarball of preloaded images
	I0327 16:39:28.878757    8639 preload.go:173] Found /Users/jenkins/minikube-integration/18485-6511/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.29.3-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0327 16:39:28.878763    8639 cache.go:59] Finished verifying existence of preloaded tar for v1.29.3 on docker
	I0327 16:39:28.878828    8639 profile.go:142] Saving config to /Users/jenkins/minikube-integration/18485-6511/.minikube/profiles/force-systemd-env-565000/config.json ...
	I0327 16:39:28.878841    8639 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/18485-6511/.minikube/profiles/force-systemd-env-565000/config.json: {Name:mkbf32739d2402421801b97ea41ccce9575b3fdd Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0327 16:39:28.879062    8639 start.go:360] acquireMachinesLock for force-systemd-env-565000: {Name:mk6b67ca98ff36f470ae98389325f11bd4950dc8 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0327 16:39:28.879093    8639 start.go:364] duration metric: took 23.25µs to acquireMachinesLock for "force-systemd-env-565000"
	I0327 16:39:28.879105    8639 start.go:93] Provisioning new machine with config: &{Name:force-systemd-env-565000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18485/minikube-v1.33.0-1711559712-18485-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1711559786-18485@sha256:2dcab64da240d825290a528fa79ad3c32db45fe5f8be5150468234a7114eff82 Memory:2048 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.3 ClusterName:force-systemd-env-565000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0327 16:39:28.879135    8639 start.go:125] createHost starting for "" (driver="qemu2")
	I0327 16:39:28.886628    8639 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	I0327 16:39:28.901026    8639 start.go:159] libmachine.API.Create for "force-systemd-env-565000" (driver="qemu2")
	I0327 16:39:28.901053    8639 client.go:168] LocalClient.Create starting
	I0327 16:39:28.901116    8639 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18485-6511/.minikube/certs/ca.pem
	I0327 16:39:28.901144    8639 main.go:141] libmachine: Decoding PEM data...
	I0327 16:39:28.901155    8639 main.go:141] libmachine: Parsing certificate...
	I0327 16:39:28.901211    8639 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18485-6511/.minikube/certs/cert.pem
	I0327 16:39:28.901232    8639 main.go:141] libmachine: Decoding PEM data...
	I0327 16:39:28.901238    8639 main.go:141] libmachine: Parsing certificate...
	I0327 16:39:28.901612    8639 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/18485-6511/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/18485-6511/.minikube/cache/iso/arm64/minikube-v1.33.0-1711559712-18485-arm64.iso...
	I0327 16:39:29.041075    8639 main.go:141] libmachine: Creating SSH key...
	I0327 16:39:29.181394    8639 main.go:141] libmachine: Creating Disk image...
	I0327 16:39:29.181403    8639 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0327 16:39:29.181608    8639 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/18485-6511/.minikube/machines/force-systemd-env-565000/disk.qcow2.raw /Users/jenkins/minikube-integration/18485-6511/.minikube/machines/force-systemd-env-565000/disk.qcow2
	I0327 16:39:29.194460    8639 main.go:141] libmachine: STDOUT: 
	I0327 16:39:29.194476    8639 main.go:141] libmachine: STDERR: 
	I0327 16:39:29.194536    8639 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/18485-6511/.minikube/machines/force-systemd-env-565000/disk.qcow2 +20000M
	I0327 16:39:29.205649    8639 main.go:141] libmachine: STDOUT: Image resized.
	
	I0327 16:39:29.205665    8639 main.go:141] libmachine: STDERR: 
	I0327 16:39:29.205678    8639 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/18485-6511/.minikube/machines/force-systemd-env-565000/disk.qcow2.raw and /Users/jenkins/minikube-integration/18485-6511/.minikube/machines/force-systemd-env-565000/disk.qcow2
	I0327 16:39:29.205700    8639 main.go:141] libmachine: Starting QEMU VM...
	I0327 16:39:29.205735    8639 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2048 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18485-6511/.minikube/machines/force-systemd-env-565000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18485-6511/.minikube/machines/force-systemd-env-565000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18485-6511/.minikube/machines/force-systemd-env-565000/qemu.pid -device virtio-net-pci,netdev=net0,mac=ca:57:8b:ed:01:15 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18485-6511/.minikube/machines/force-systemd-env-565000/disk.qcow2
	I0327 16:39:29.207576    8639 main.go:141] libmachine: STDOUT: 
	I0327 16:39:29.207596    8639 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0327 16:39:29.207617    8639 client.go:171] duration metric: took 306.568833ms to LocalClient.Create
	I0327 16:39:31.209854    8639 start.go:128] duration metric: took 2.330759167s to createHost
	I0327 16:39:31.209937    8639 start.go:83] releasing machines lock for "force-systemd-env-565000", held for 2.330904084s
	W0327 16:39:31.210001    8639 start.go:713] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0327 16:39:31.221102    8639 out.go:177] * Deleting "force-systemd-env-565000" in qemu2 ...
	W0327 16:39:31.243531    8639 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0327 16:39:31.243562    8639 start.go:728] Will try again in 5 seconds ...
	I0327 16:39:36.245613    8639 start.go:360] acquireMachinesLock for force-systemd-env-565000: {Name:mk6b67ca98ff36f470ae98389325f11bd4950dc8 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0327 16:39:36.245863    8639 start.go:364] duration metric: took 175.625µs to acquireMachinesLock for "force-systemd-env-565000"
	I0327 16:39:36.245961    8639 start.go:93] Provisioning new machine with config: &{Name:force-systemd-env-565000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18485/minikube-v1.33.0-1711559712-18485-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1711559786-18485@sha256:2dcab64da240d825290a528fa79ad3c32db45fe5f8be5150468234a7114eff82 Memory:2048 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.3 ClusterName:force-systemd-env-565000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0327 16:39:36.246124    8639 start.go:125] createHost starting for "" (driver="qemu2")
	I0327 16:39:36.253625    8639 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	I0327 16:39:36.298776    8639 start.go:159] libmachine.API.Create for "force-systemd-env-565000" (driver="qemu2")
	I0327 16:39:36.298836    8639 client.go:168] LocalClient.Create starting
	I0327 16:39:36.298957    8639 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18485-6511/.minikube/certs/ca.pem
	I0327 16:39:36.299026    8639 main.go:141] libmachine: Decoding PEM data...
	I0327 16:39:36.299045    8639 main.go:141] libmachine: Parsing certificate...
	I0327 16:39:36.299145    8639 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18485-6511/.minikube/certs/cert.pem
	I0327 16:39:36.299195    8639 main.go:141] libmachine: Decoding PEM data...
	I0327 16:39:36.299217    8639 main.go:141] libmachine: Parsing certificate...
	I0327 16:39:36.300402    8639 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/18485-6511/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/18485-6511/.minikube/cache/iso/arm64/minikube-v1.33.0-1711559712-18485-arm64.iso...
	I0327 16:39:36.455953    8639 main.go:141] libmachine: Creating SSH key...
	I0327 16:39:36.596161    8639 main.go:141] libmachine: Creating Disk image...
	I0327 16:39:36.596167    8639 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0327 16:39:36.596353    8639 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/18485-6511/.minikube/machines/force-systemd-env-565000/disk.qcow2.raw /Users/jenkins/minikube-integration/18485-6511/.minikube/machines/force-systemd-env-565000/disk.qcow2
	I0327 16:39:36.609036    8639 main.go:141] libmachine: STDOUT: 
	I0327 16:39:36.609056    8639 main.go:141] libmachine: STDERR: 
	I0327 16:39:36.609124    8639 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/18485-6511/.minikube/machines/force-systemd-env-565000/disk.qcow2 +20000M
	I0327 16:39:36.619814    8639 main.go:141] libmachine: STDOUT: Image resized.
	
	I0327 16:39:36.619828    8639 main.go:141] libmachine: STDERR: 
	I0327 16:39:36.619841    8639 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/18485-6511/.minikube/machines/force-systemd-env-565000/disk.qcow2.raw and /Users/jenkins/minikube-integration/18485-6511/.minikube/machines/force-systemd-env-565000/disk.qcow2
	I0327 16:39:36.619846    8639 main.go:141] libmachine: Starting QEMU VM...
	I0327 16:39:36.619894    8639 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2048 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18485-6511/.minikube/machines/force-systemd-env-565000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18485-6511/.minikube/machines/force-systemd-env-565000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18485-6511/.minikube/machines/force-systemd-env-565000/qemu.pid -device virtio-net-pci,netdev=net0,mac=f2:4f:8c:87:f7:2b -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18485-6511/.minikube/machines/force-systemd-env-565000/disk.qcow2
	I0327 16:39:36.621719    8639 main.go:141] libmachine: STDOUT: 
	I0327 16:39:36.621735    8639 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0327 16:39:36.621747    8639 client.go:171] duration metric: took 322.912958ms to LocalClient.Create
	I0327 16:39:38.623858    8639 start.go:128] duration metric: took 2.377776s to createHost
	I0327 16:39:38.623908    8639 start.go:83] releasing machines lock for "force-systemd-env-565000", held for 2.378099625s
	W0327 16:39:38.624271    8639 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p force-systemd-env-565000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p force-systemd-env-565000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0327 16:39:38.636187    8639 out.go:177] 
	W0327 16:39:38.640224    8639 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0327 16:39:38.640268    8639 out.go:239] * 
	* 
	W0327 16:39:38.643051    8639 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0327 16:39:38.650172    8639 out.go:177] 

** /stderr **
docker_test.go:157: failed to start minikube with args: "out/minikube-darwin-arm64 start -p force-systemd-env-565000 --memory=2048 --alsologtostderr -v=5 --driver=qemu2 " : exit status 80
docker_test.go:110: (dbg) Run:  out/minikube-darwin-arm64 -p force-systemd-env-565000 ssh "docker info --format {{.CgroupDriver}}"
docker_test.go:110: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p force-systemd-env-565000 ssh "docker info --format {{.CgroupDriver}}": exit status 83 (78.197583ms)

-- stdout --
	* The control-plane node force-systemd-env-565000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p force-systemd-env-565000"

-- /stdout --
docker_test.go:112: failed to get docker cgroup driver. args "out/minikube-darwin-arm64 -p force-systemd-env-565000 ssh \"docker info --format {{.CgroupDriver}}\"": exit status 83
docker_test.go:166: *** TestForceSystemdEnv FAILED at 2024-03-27 16:39:38.746767 -0700 PDT m=+800.230232001
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p force-systemd-env-565000 -n force-systemd-env-565000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p force-systemd-env-565000 -n force-systemd-env-565000: exit status 7 (34.486459ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "force-systemd-env-565000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:175: Cleaning up "force-systemd-env-565000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p force-systemd-env-565000
--- FAIL: TestForceSystemdEnv (10.15s)
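
The recurring failure above is socket_vmnet_client being unable to reach the socket_vmnet daemon's unix socket at /var/run/socket_vmnet ("Connection refused"), which indicates the daemon was not running on the CI host. A minimal Go probe that reproduces just that connectivity check, as a sketch (this program is ours for illustration, not part of minikube):

package main

import (
	"fmt"
	"net"
	"os"
	"time"
)

func main() {
	// Socket path taken from the failing logs above.
	const sock = "/var/run/socket_vmnet"
	conn, err := net.DialTimeout("unix", sock, 2*time.Second)
	if err != nil {
		// With no daemon listening, this fails with
		// "connect: connection refused", matching the logs.
		fmt.Fprintf(os.Stderr, "socket_vmnet unreachable: %v\n", err)
		os.Exit(1)
	}
	conn.Close()
	fmt.Println("socket_vmnet is accepting connections")
}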

TestErrorSpam/setup (9.79s)

=== RUN   TestErrorSpam/setup
error_spam_test.go:81: (dbg) Run:  out/minikube-darwin-arm64 start -p nospam-432000 -n=1 --memory=2250 --wait=false --log_dir=/var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-432000 --driver=qemu2 
error_spam_test.go:81: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p nospam-432000 -n=1 --memory=2250 --wait=false --log_dir=/var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-432000 --driver=qemu2 : exit status 80 (9.785869958s)

-- stdout --
	* [nospam-432000] minikube v1.33.0-beta.0 on Darwin 14.3.1 (arm64)
	  - MINIKUBE_LOCATION=18485
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18485-6511/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18485-6511/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "nospam-432000" primary control-plane node in "nospam-432000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2250MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "nospam-432000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2250MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p nospam-432000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
error_spam_test.go:83: "out/minikube-darwin-arm64 start -p nospam-432000 -n=1 --memory=2250 --wait=false --log_dir=/var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-432000 --driver=qemu2 " failed: exit status 80
error_spam_test.go:96: unexpected stderr: "! StartHost failed, but will try again: creating host: create: creating: Failed to connect to \"/var/run/socket_vmnet\": Connection refused: exit status 1"
error_spam_test.go:96: unexpected stderr: "* Failed to start qemu2 VM. Running \"minikube delete -p nospam-432000\" may fix it: creating host: create: creating: Failed to connect to \"/var/run/socket_vmnet\": Connection refused: exit status 1"
error_spam_test.go:96: unexpected stderr: "X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to \"/var/run/socket_vmnet\": Connection refused: exit status 1"
error_spam_test.go:96: unexpected stderr: "* "
error_spam_test.go:96: unexpected stderr: "╭─────────────────────────────────────────────────────────────────────────────────────────────╮"
error_spam_test.go:96: unexpected stderr: "│                                                                                             │"
error_spam_test.go:96: unexpected stderr: "│    * If the above advice does not help, please let us know:                                 │"
error_spam_test.go:96: unexpected stderr: "│      https://github.com/kubernetes/minikube/issues/new/choose                               │"
error_spam_test.go:96: unexpected stderr: "│                                                                                             │"
error_spam_test.go:96: unexpected stderr: "│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │"
error_spam_test.go:96: unexpected stderr: "│                                                                                             │"
error_spam_test.go:96: unexpected stderr: "╰─────────────────────────────────────────────────────────────────────────────────────────────╯"
error_spam_test.go:110: minikube stdout:
* [nospam-432000] minikube v1.33.0-beta.0 on Darwin 14.3.1 (arm64)
- MINIKUBE_LOCATION=18485
- KUBECONFIG=/Users/jenkins/minikube-integration/18485-6511/kubeconfig
- MINIKUBE_BIN=out/minikube-darwin-arm64
- MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
- MINIKUBE_HOME=/Users/jenkins/minikube-integration/18485-6511/.minikube
- MINIKUBE_FORCE_SYSTEMD=
* Using the qemu2 driver based on user configuration
* Automatically selected the socket_vmnet network
* Starting "nospam-432000" primary control-plane node in "nospam-432000" cluster
* Creating qemu2 VM (CPUs=2, Memory=2250MB, Disk=20000MB) ...
OUTPUT: 
ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused

* Deleting "nospam-432000" in qemu2 ...
* Creating qemu2 VM (CPUs=2, Memory=2250MB, Disk=20000MB) ...
OUTPUT: 
ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused

error_spam_test.go:111: minikube stderr:
! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
* Failed to start qemu2 VM. Running "minikube delete -p nospam-432000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
* 
╭─────────────────────────────────────────────────────────────────────────────────────────────╮
│                                                                                             │
│    * If the above advice does not help, please let us know:                                 │
│      https://github.com/kubernetes/minikube/issues/new/choose                               │
│                                                                                             │
│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
│                                                                                             │
╰─────────────────────────────────────────────────────────────────────────────────────────────╯
error_spam_test.go:121: missing kubeadm init sub-step "Generating certificates and keys ..."
error_spam_test.go:121: missing kubeadm init sub-step "Booting up control plane ..."
error_spam_test.go:121: missing kubeadm init sub-step "Configuring RBAC rules ..."
--- FAIL: TestErrorSpam/setup (9.79s)
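
This test fails twice over: the start itself exits 80, and every stderr line it produced is then flagged as spam, because error_spam_test.go:96 compares stderr against an allow-list. A rough sketch of that style of check (the function name and allow-list entries are illustrative, not minikube's actual test code):

package main

import (
	"fmt"
	"strings"
)

// unexpectedStderr returns every non-empty stderr line that does not match
// any allow-list entry, mirroring the "unexpected stderr" output above.
func unexpectedStderr(stderr string, allow []string) []string {
	var bad []string
	for _, line := range strings.Split(stderr, "\n") {
		if strings.TrimSpace(line) == "" {
			continue
		}
		allowed := false
		for _, a := range allow {
			if strings.Contains(line, a) {
				allowed = true
				break
			}
		}
		if !allowed {
			bad = append(bad, line)
		}
	}
	return bad
}

func main() {
	stderr := "! StartHost failed, but will try again: ...\n* some expected hint"
	for _, l := range unexpectedStderr(stderr, []string{"expected hint"}) {
		fmt.Printf("unexpected stderr: %q\n", l)
	}
}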

TestFunctional/serial/StartWithProxy (9.88s)

=== RUN   TestFunctional/serial/StartWithProxy
functional_test.go:2230: (dbg) Run:  out/minikube-darwin-arm64 start -p functional-746000 --memory=4000 --apiserver-port=8441 --wait=all --driver=qemu2 
functional_test.go:2230: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p functional-746000 --memory=4000 --apiserver-port=8441 --wait=all --driver=qemu2 : exit status 80 (9.802879458s)

-- stdout --
	* [functional-746000] minikube v1.33.0-beta.0 on Darwin 14.3.1 (arm64)
	  - MINIKUBE_LOCATION=18485
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18485-6511/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18485-6511/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "functional-746000" primary control-plane node in "functional-746000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "functional-746000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! Local proxy ignored: not passing HTTP_PROXY=localhost:51012 to docker env.
	! Local proxy ignored: not passing HTTP_PROXY=localhost:51012 to docker env.
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! Local proxy ignored: not passing HTTP_PROXY=localhost:51012 to docker env.
	* Failed to start qemu2 VM. Running "minikube delete -p functional-746000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
functional_test.go:2232: failed minikube start. args "out/minikube-darwin-arm64 start -p functional-746000 --memory=4000 --apiserver-port=8441 --wait=all --driver=qemu2 ": exit status 80
functional_test.go:2237: start stdout=* [functional-746000] minikube v1.33.0-beta.0 on Darwin 14.3.1 (arm64)
- MINIKUBE_LOCATION=18485
- KUBECONFIG=/Users/jenkins/minikube-integration/18485-6511/kubeconfig
- MINIKUBE_BIN=out/minikube-darwin-arm64
- MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
- MINIKUBE_HOME=/Users/jenkins/minikube-integration/18485-6511/.minikube
- MINIKUBE_FORCE_SYSTEMD=
* Using the qemu2 driver based on user configuration
* Automatically selected the socket_vmnet network
* Starting "functional-746000" primary control-plane node in "functional-746000" cluster
* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
OUTPUT: 
ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused

* Deleting "functional-746000" in qemu2 ...
* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
OUTPUT: 
ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused

, want: *Found network options:*
functional_test.go:2242: start stderr=! Local proxy ignored: not passing HTTP_PROXY=localhost:51012 to docker env.
! Local proxy ignored: not passing HTTP_PROXY=localhost:51012 to docker env.
! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
! Local proxy ignored: not passing HTTP_PROXY=localhost:51012 to docker env.
* Failed to start qemu2 VM. Running "minikube delete -p functional-746000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
* 
╭─────────────────────────────────────────────────────────────────────────────────────────────╮
│                                                                                             │
│    * If the above advice does not help, please let us know:                                 │
│      https://github.com/kubernetes/minikube/issues/new/choose                               │
│                                                                                             │
│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
│                                                                                             │
╰─────────────────────────────────────────────────────────────────────────────────────────────╯
, want: *You appear to be using a proxy*
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p functional-746000 -n functional-746000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p functional-746000 -n functional-746000: exit status 7 (69.450083ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "functional-746000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestFunctional/serial/StartWithProxy (9.88s)
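
The post-mortem helper runs out/minikube-darwin-arm64 status --format={{.Host}}; the --format value is a Go text/template rendered over a status struct, which is why the command prints the bare word "Stopped". A standalone sketch of that mechanism (the Status struct here is illustrative, not minikube's actual type):

package main

import (
	"os"
	"text/template"
)

// Status stands in for the struct minikube renders its --format template
// against; only the Host field is used by {{.Host}}.
type Status struct {
	Host      string
	Kubelet   string
	APIServer string
}

func main() {
	st := Status{Host: "Stopped", Kubelet: "Stopped", APIServer: "Stopped"}
	tmpl := template.Must(template.New("status").Parse("{{.Host}}\n"))
	// Prints "Stopped", matching the post-mortem output above.
	if err := tmpl.Execute(os.Stdout, st); err != nil {
		panic(err)
	}
}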

TestFunctional/serial/SoftStart (5.26s)

=== RUN   TestFunctional/serial/SoftStart
functional_test.go:655: (dbg) Run:  out/minikube-darwin-arm64 start -p functional-746000 --alsologtostderr -v=8
functional_test.go:655: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p functional-746000 --alsologtostderr -v=8: exit status 80 (5.187034333s)

-- stdout --
	* [functional-746000] minikube v1.33.0-beta.0 on Darwin 14.3.1 (arm64)
	  - MINIKUBE_LOCATION=18485
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18485-6511/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18485-6511/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting "functional-746000" primary control-plane node in "functional-746000" cluster
	* Restarting existing qemu2 VM for "functional-746000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "functional-746000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0327 16:28:25.276770    7223 out.go:291] Setting OutFile to fd 1 ...
	I0327 16:28:25.276919    7223 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0327 16:28:25.276922    7223 out.go:304] Setting ErrFile to fd 2...
	I0327 16:28:25.276924    7223 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0327 16:28:25.277055    7223 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18485-6511/.minikube/bin
	I0327 16:28:25.277966    7223 out.go:298] Setting JSON to false
	I0327 16:28:25.294074    7223 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":5276,"bootTime":1711576829,"procs":471,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.3.1","kernelVersion":"23.3.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0327 16:28:25.294135    7223 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0327 16:28:25.298037    7223 out.go:177] * [functional-746000] minikube v1.33.0-beta.0 on Darwin 14.3.1 (arm64)
	I0327 16:28:25.304858    7223 out.go:177]   - MINIKUBE_LOCATION=18485
	I0327 16:28:25.304938    7223 notify.go:220] Checking for updates...
	I0327 16:28:25.311790    7223 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/18485-6511/kubeconfig
	I0327 16:28:25.314895    7223 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0327 16:28:25.317934    7223 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0327 16:28:25.319345    7223 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18485-6511/.minikube
	I0327 16:28:25.322878    7223 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0327 16:28:25.326223    7223 config.go:182] Loaded profile config "functional-746000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.29.3
	I0327 16:28:25.326275    7223 driver.go:392] Setting default libvirt URI to qemu:///system
	I0327 16:28:25.330764    7223 out.go:177] * Using the qemu2 driver based on existing profile
	I0327 16:28:25.337879    7223 start.go:297] selected driver: qemu2
	I0327 16:28:25.337886    7223 start.go:901] validating driver "qemu2" against &{Name:functional-746000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18485/minikube-v1.33.0-1711559712-18485-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1711559786-18485@sha256:2dcab64da240d825290a528fa79ad3c32db45fe5f8be5150468234a7114eff82 Memory:4000 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.3 ClusterName:functional-746000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8441 KubernetesVersion:v1.29.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0327 16:28:25.337948    7223 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0327 16:28:25.340202    7223 cni.go:84] Creating CNI manager for ""
	I0327 16:28:25.340218    7223 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0327 16:28:25.340259    7223 start.go:340] cluster config:
	{Name:functional-746000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18485/minikube-v1.33.0-1711559712-18485-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1711559786-18485@sha256:2dcab64da240d825290a528fa79ad3c32db45fe5f8be5150468234a7114eff82 Memory:4000 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.3 ClusterName:functional-746000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8441 KubernetesVersion:v1.29.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0327 16:28:25.344554    7223 iso.go:125] acquiring lock: {Name:mk26a7f41e004242919e6d5076ec0f9645f820ad Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0327 16:28:25.352887    7223 out.go:177] * Starting "functional-746000" primary control-plane node in "functional-746000" cluster
	I0327 16:28:25.356893    7223 preload.go:132] Checking if preload exists for k8s version v1.29.3 and runtime docker
	I0327 16:28:25.356913    7223 preload.go:147] Found local preload: /Users/jenkins/minikube-integration/18485-6511/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.29.3-docker-overlay2-arm64.tar.lz4
	I0327 16:28:25.356922    7223 cache.go:56] Caching tarball of preloaded images
	I0327 16:28:25.356973    7223 preload.go:173] Found /Users/jenkins/minikube-integration/18485-6511/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.29.3-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0327 16:28:25.356978    7223 cache.go:59] Finished verifying existence of preloaded tar for v1.29.3 on docker
	I0327 16:28:25.357037    7223 profile.go:142] Saving config to /Users/jenkins/minikube-integration/18485-6511/.minikube/profiles/functional-746000/config.json ...
	I0327 16:28:25.357528    7223 start.go:360] acquireMachinesLock for functional-746000: {Name:mk6b67ca98ff36f470ae98389325f11bd4950dc8 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0327 16:28:25.357556    7223 start.go:364] duration metric: took 21.708µs to acquireMachinesLock for "functional-746000"
	I0327 16:28:25.357565    7223 start.go:96] Skipping create...Using existing machine configuration
	I0327 16:28:25.357570    7223 fix.go:54] fixHost starting: 
	I0327 16:28:25.357687    7223 fix.go:112] recreateIfNeeded on functional-746000: state=Stopped err=<nil>
	W0327 16:28:25.357695    7223 fix.go:138] unexpected machine state, will restart: <nil>
	I0327 16:28:25.364918    7223 out.go:177] * Restarting existing qemu2 VM for "functional-746000" ...
	I0327 16:28:25.367900    7223 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 4000 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18485-6511/.minikube/machines/functional-746000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18485-6511/.minikube/machines/functional-746000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18485-6511/.minikube/machines/functional-746000/qemu.pid -device virtio-net-pci,netdev=net0,mac=4e:ba:a4:21:63:1b -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18485-6511/.minikube/machines/functional-746000/disk.qcow2
	I0327 16:28:25.369753    7223 main.go:141] libmachine: STDOUT: 
	I0327 16:28:25.369778    7223 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0327 16:28:25.369809    7223 fix.go:56] duration metric: took 12.239792ms for fixHost
	I0327 16:28:25.369813    7223 start.go:83] releasing machines lock for "functional-746000", held for 12.253708ms
	W0327 16:28:25.369821    7223 start.go:713] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0327 16:28:25.369859    7223 out.go:239] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0327 16:28:25.369864    7223 start.go:728] Will try again in 5 seconds ...
	I0327 16:28:30.371819    7223 start.go:360] acquireMachinesLock for functional-746000: {Name:mk6b67ca98ff36f470ae98389325f11bd4950dc8 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0327 16:28:30.372131    7223 start.go:364] duration metric: took 257.125µs to acquireMachinesLock for "functional-746000"
	I0327 16:28:30.372262    7223 start.go:96] Skipping create...Using existing machine configuration
	I0327 16:28:30.372279    7223 fix.go:54] fixHost starting: 
	I0327 16:28:30.372990    7223 fix.go:112] recreateIfNeeded on functional-746000: state=Stopped err=<nil>
	W0327 16:28:30.373015    7223 fix.go:138] unexpected machine state, will restart: <nil>
	I0327 16:28:30.377584    7223 out.go:177] * Restarting existing qemu2 VM for "functional-746000" ...
	I0327 16:28:30.385566    7223 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 4000 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18485-6511/.minikube/machines/functional-746000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18485-6511/.minikube/machines/functional-746000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18485-6511/.minikube/machines/functional-746000/qemu.pid -device virtio-net-pci,netdev=net0,mac=4e:ba:a4:21:63:1b -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18485-6511/.minikube/machines/functional-746000/disk.qcow2
	I0327 16:28:30.395133    7223 main.go:141] libmachine: STDOUT: 
	I0327 16:28:30.395201    7223 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0327 16:28:30.395273    7223 fix.go:56] duration metric: took 22.995459ms for fixHost
	I0327 16:28:30.395288    7223 start.go:83] releasing machines lock for "functional-746000", held for 23.133542ms
	W0327 16:28:30.395450    7223 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p functional-746000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p functional-746000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0327 16:28:30.403381    7223 out.go:177] 
	W0327 16:28:30.407380    7223 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0327 16:28:30.407415    7223 out.go:239] * 
	* 
	W0327 16:28:30.410310    7223 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0327 16:28:30.417355    7223 out.go:177] 

** /stderr **
functional_test.go:657: failed to soft start minikube. args "out/minikube-darwin-arm64 start -p functional-746000 --alsologtostderr -v=8": exit status 80
functional_test.go:659: soft start took 5.188658459s for "functional-746000" cluster.
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p functional-746000 -n functional-746000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p functional-746000 -n functional-746000: exit status 7 (70.588459ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "functional-746000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestFunctional/serial/SoftStart (5.26s)
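
The soft start finds the existing VM in state=Stopped, re-executes the same qemu-system-aarch64 command, and after the first "Connection refused" backs off five seconds ("Will try again in 5 seconds ...") before failing identically and giving up. The retry shape visible in the log, as a generic sketch (ours, not minikube's actual start.go logic):

package main

import (
	"errors"
	"fmt"
	"time"
)

// startWithRetry retries a start function a fixed number of times with a
// fixed delay, echoing the warning format seen in the log above.
func startWithRetry(start func() error, attempts int, delay time.Duration) error {
	var err error
	for i := 0; i < attempts; i++ {
		if err = start(); err == nil {
			return nil
		}
		fmt.Printf("! StartHost failed, but will try again: %v\n", err)
		if i < attempts-1 {
			time.Sleep(delay)
		}
	}
	return fmt.Errorf("giving up after %d attempts: %w", attempts, err)
}

func main() {
	alwaysRefused := func() error {
		return errors.New(`Failed to connect to "/var/run/socket_vmnet": Connection refused`)
	}
	fmt.Println(startWithRetry(alwaysRefused, 2, 5*time.Second))
}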

TestFunctional/serial/KubeContext (0.06s)

=== RUN   TestFunctional/serial/KubeContext
functional_test.go:677: (dbg) Run:  kubectl config current-context
functional_test.go:677: (dbg) Non-zero exit: kubectl config current-context: exit status 1 (31.592166ms)

** stderr ** 
	error: current-context is not set

** /stderr **
functional_test.go:679: failed to get current-context. args "kubectl config current-context" : exit status 1
functional_test.go:683: expected current-context = "functional-746000", but got *""*
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p functional-746000 -n functional-746000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p functional-746000 -n functional-746000: exit status 7 (32.1455ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "functional-746000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestFunctional/serial/KubeContext (0.06s)
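
This check just shells out to kubectl config current-context and compares the result to the profile name; because the cluster never came up, nothing was written to the kubeconfig and kubectl exits 1 with "current-context is not set". The same check as a standalone sketch (the helper name is ours, not the test's):

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// currentContextIs runs `kubectl config current-context` and verifies the
// output matches the expected profile name.
func currentContextIs(want string) error {
	out, err := exec.Command("kubectl", "config", "current-context").Output()
	if err != nil {
		// With no context set, kubectl exits 1, as in the log above.
		return fmt.Errorf("kubectl failed: %w", err)
	}
	if got := strings.TrimSpace(string(out)); got != want {
		return fmt.Errorf("current-context = %q, want %q", got, want)
	}
	return nil
}

func main() {
	if err := currentContextIs("functional-746000"); err != nil {
		fmt.Println(err)
	}
}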

TestFunctional/serial/KubectlGetPods (0.06s)

=== RUN   TestFunctional/serial/KubectlGetPods
functional_test.go:692: (dbg) Run:  kubectl --context functional-746000 get po -A
functional_test.go:692: (dbg) Non-zero exit: kubectl --context functional-746000 get po -A: exit status 1 (27.426875ms)

** stderr ** 
	Error in configuration: context was not found for specified context: functional-746000

** /stderr **
functional_test.go:694: failed to get kubectl pods: args "kubectl --context functional-746000 get po -A" : exit status 1
functional_test.go:698: expected stderr to be empty but got *"Error in configuration: context was not found for specified context: functional-746000\n"*: args "kubectl --context functional-746000 get po -A"
functional_test.go:701: expected stdout to include *kube-system* but got *""*. args: "kubectl --context functional-746000 get po -A"
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p functional-746000 -n functional-746000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p functional-746000 -n functional-746000: exit status 7 (32.312458ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "functional-746000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestFunctional/serial/KubectlGetPods (0.06s)

TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.04s)

=== RUN   TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node
functional_test.go:1120: (dbg) Run:  out/minikube-darwin-arm64 -p functional-746000 ssh sudo crictl images
functional_test.go:1120: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-746000 ssh sudo crictl images: exit status 83 (42.8895ms)

-- stdout --
	* The control-plane node functional-746000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-746000"

-- /stdout --
functional_test.go:1122: failed to get images by "out/minikube-darwin-arm64 -p functional-746000 ssh sudo crictl images" ssh exit status 83
functional_test.go:1126: expected sha for pause:3.3 "3d18732f8686c" to be in the output but got *
-- stdout --
	* The control-plane node functional-746000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-746000"

-- /stdout --*
--- FAIL: TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.04s)

TestFunctional/serial/CacheCmd/cache/cache_reload (0.16s)

=== RUN   TestFunctional/serial/CacheCmd/cache/cache_reload
functional_test.go:1143: (dbg) Run:  out/minikube-darwin-arm64 -p functional-746000 ssh sudo docker rmi registry.k8s.io/pause:latest
functional_test.go:1143: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-746000 ssh sudo docker rmi registry.k8s.io/pause:latest: exit status 83 (39.857666ms)

-- stdout --
	* The control-plane node functional-746000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-746000"

-- /stdout --
functional_test.go:1146: failed to manually delete image "out/minikube-darwin-arm64 -p functional-746000 ssh sudo docker rmi registry.k8s.io/pause:latest" : exit status 83
functional_test.go:1149: (dbg) Run:  out/minikube-darwin-arm64 -p functional-746000 ssh sudo crictl inspecti registry.k8s.io/pause:latest
functional_test.go:1149: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-746000 ssh sudo crictl inspecti registry.k8s.io/pause:latest: exit status 83 (40.992667ms)

-- stdout --
	* The control-plane node functional-746000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-746000"

-- /stdout --
functional_test.go:1154: (dbg) Run:  out/minikube-darwin-arm64 -p functional-746000 cache reload
functional_test.go:1159: (dbg) Run:  out/minikube-darwin-arm64 -p functional-746000 ssh sudo crictl inspecti registry.k8s.io/pause:latest
functional_test.go:1159: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-746000 ssh sudo crictl inspecti registry.k8s.io/pause:latest: exit status 83 (41.8915ms)

-- stdout --
	* The control-plane node functional-746000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-746000"

-- /stdout --
functional_test.go:1161: expected "out/minikube-darwin-arm64 -p functional-746000 ssh sudo crictl inspecti registry.k8s.io/pause:latest" to run successfully but got error: exit status 83
--- FAIL: TestFunctional/serial/CacheCmd/cache/cache_reload (0.16s)

TestFunctional/serial/MinikubeKubectlCmd (0.69s)

=== RUN   TestFunctional/serial/MinikubeKubectlCmd
functional_test.go:712: (dbg) Run:  out/minikube-darwin-arm64 -p functional-746000 kubectl -- --context functional-746000 get pods
functional_test.go:712: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-746000 kubectl -- --context functional-746000 get pods: exit status 1 (657.0975ms)

** stderr ** 
	Error in configuration: 
	* context was not found for specified context: functional-746000
	* no server found for cluster "functional-746000"

** /stderr **
functional_test.go:715: failed to get pods. args "out/minikube-darwin-arm64 -p functional-746000 kubectl -- --context functional-746000 get pods": exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p functional-746000 -n functional-746000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p functional-746000 -n functional-746000: exit status 7 (33.839333ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "functional-746000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestFunctional/serial/MinikubeKubectlCmd (0.69s)
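Note: `minikube kubectl --` passes its arguments through to a kubectl matched to the cluster version, but it still resolves the cluster via the kubeconfig context, which is only written by a successful start. A minimal sketch of the expected passing sequence, assuming the VM can boot:

	out/minikube-darwin-arm64 start -p functional-746000                                          # writes the functional-746000 context
	out/minikube-darwin-arm64 -p functional-746000 kubectl -- --context functional-746000 get pods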

TestFunctional/serial/MinikubeKubectlCmdDirectly (1.77s)

=== RUN   TestFunctional/serial/MinikubeKubectlCmdDirectly
functional_test.go:737: (dbg) Run:  out/kubectl --context functional-746000 get pods
functional_test.go:737: (dbg) Non-zero exit: out/kubectl --context functional-746000 get pods: exit status 1 (899.721041ms)

** stderr ** 
	Error in configuration: 
	* context was not found for specified context: functional-746000
	* no server found for cluster "functional-746000"

** /stderr **
functional_test.go:740: failed to run kubectl directly. args "out/kubectl --context functional-746000 get pods": exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p functional-746000 -n functional-746000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p functional-746000 -n functional-746000: exit status 7 (867.292709ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "functional-746000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestFunctional/serial/MinikubeKubectlCmdDirectly (1.77s)
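Note: out/kubectl appears to be the kubectl binary staged by the test harness; it reads the same kubeconfig as any stock kubectl. One hedged way to confirm that the context is simply missing (the KUBECONFIG path is taken from this run's environment, not invented):

	KUBECONFIG=/Users/jenkins/minikube-integration/18485-6511/kubeconfig kubectl config get-contexts   # functional-746000 should be absent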

TestFunctional/serial/ExtraConfig (5.26s)

=== RUN   TestFunctional/serial/ExtraConfig
functional_test.go:753: (dbg) Run:  out/minikube-darwin-arm64 start -p functional-746000 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
functional_test.go:753: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p functional-746000 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all: exit status 80 (5.190988667s)

-- stdout --
	* [functional-746000] minikube v1.33.0-beta.0 on Darwin 14.3.1 (arm64)
	  - MINIKUBE_LOCATION=18485
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18485-6511/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18485-6511/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting "functional-746000" primary control-plane node in "functional-746000" cluster
	* Restarting existing qemu2 VM for "functional-746000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "functional-746000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p functional-746000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
functional_test.go:755: failed to restart minikube. args "out/minikube-darwin-arm64 start -p functional-746000 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all": exit status 80
functional_test.go:757: restart took 5.191650167s for "functional-746000" cluster.
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p functional-746000 -n functional-746000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p functional-746000 -n functional-746000: exit status 7 (68.419417ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "functional-746000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestFunctional/serial/ExtraConfig (5.26s)
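Note: the recurring root cause in this report is the qemu2 driver failing to reach the socket_vmnet daemon ("Failed to connect to /var/run/socket_vmnet: Connection refused"). A hedged diagnostic sketch for the CI agent; the service commands assume socket_vmnet is managed by launchd (e.g. as a Homebrew service), which this log does not confirm:

	ls -l /var/run/socket_vmnet                    # the daemon's listening socket should exist
	sudo launchctl list | grep -i socket_vmnet     # the daemon should be loaded
	sudo brew services restart socket_vmnet        # restart it, if installed via Homebrew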

TestFunctional/serial/ComponentHealth (0.06s)

=== RUN   TestFunctional/serial/ComponentHealth
functional_test.go:806: (dbg) Run:  kubectl --context functional-746000 get po -l tier=control-plane -n kube-system -o=json
functional_test.go:806: (dbg) Non-zero exit: kubectl --context functional-746000 get po -l tier=control-plane -n kube-system -o=json: exit status 1 (30.737292ms)

** stderr ** 
	error: context "functional-746000" does not exist

** /stderr **
functional_test.go:808: failed to get components. args "kubectl --context functional-746000 get po -l tier=control-plane -n kube-system -o=json": exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p functional-746000 -n functional-746000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p functional-746000 -n functional-746000: exit status 7 (32.17075ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "functional-746000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestFunctional/serial/ComponentHealth (0.06s)
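Note: ComponentHealth asserts on control-plane pod status; against a running cluster the same query can be issued directly (command taken verbatim from the test, selecting kube-system pods labeled tier=control-plane):

	kubectl --context functional-746000 get po -l tier=control-plane -n kube-system -o=json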

TestFunctional/serial/LogsCmd (0.08s)

=== RUN   TestFunctional/serial/LogsCmd
functional_test.go:1232: (dbg) Run:  out/minikube-darwin-arm64 -p functional-746000 logs
functional_test.go:1232: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-746000 logs: exit status 83 (79.661375ms)

-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------------------------------------------|----------------------|---------|----------------|---------------------|---------------------|
	| Command |                                   Args                                   |       Profile        |  User   |    Version     |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------------------------|----------------------|---------|----------------|---------------------|---------------------|
	| start   | -o=json --download-only                                                  | download-only-614000 | jenkins | v1.33.0-beta.0 | 27 Mar 24 16:26 PDT |                     |
	|         | -p download-only-614000                                                  |                      |         |                |                     |                     |
	|         | --force --alsologtostderr                                                |                      |         |                |                     |                     |
	|         | --kubernetes-version=v1.20.0                                             |                      |         |                |                     |                     |
	|         | --container-runtime=docker                                               |                      |         |                |                     |                     |
	|         | --driver=qemu2                                                           |                      |         |                |                     |                     |
	| delete  | --all                                                                    | minikube             | jenkins | v1.33.0-beta.0 | 27 Mar 24 16:26 PDT | 27 Mar 24 16:26 PDT |
	| delete  | -p download-only-614000                                                  | download-only-614000 | jenkins | v1.33.0-beta.0 | 27 Mar 24 16:26 PDT | 27 Mar 24 16:26 PDT |
	| start   | -o=json --download-only                                                  | download-only-652000 | jenkins | v1.33.0-beta.0 | 27 Mar 24 16:26 PDT |                     |
	|         | -p download-only-652000                                                  |                      |         |                |                     |                     |
	|         | --force --alsologtostderr                                                |                      |         |                |                     |                     |
	|         | --kubernetes-version=v1.29.3                                             |                      |         |                |                     |                     |
	|         | --container-runtime=docker                                               |                      |         |                |                     |                     |
	|         | --driver=qemu2                                                           |                      |         |                |                     |                     |
	| delete  | --all                                                                    | minikube             | jenkins | v1.33.0-beta.0 | 27 Mar 24 16:27 PDT | 27 Mar 24 16:27 PDT |
	| delete  | -p download-only-652000                                                  | download-only-652000 | jenkins | v1.33.0-beta.0 | 27 Mar 24 16:27 PDT | 27 Mar 24 16:27 PDT |
	| start   | -o=json --download-only                                                  | download-only-236000 | jenkins | v1.33.0-beta.0 | 27 Mar 24 16:27 PDT |                     |
	|         | -p download-only-236000                                                  |                      |         |                |                     |                     |
	|         | --force --alsologtostderr                                                |                      |         |                |                     |                     |
	|         | --kubernetes-version=v1.30.0-beta.0                                      |                      |         |                |                     |                     |
	|         | --container-runtime=docker                                               |                      |         |                |                     |                     |
	|         | --driver=qemu2                                                           |                      |         |                |                     |                     |
	| delete  | --all                                                                    | minikube             | jenkins | v1.33.0-beta.0 | 27 Mar 24 16:27 PDT | 27 Mar 24 16:27 PDT |
	| delete  | -p download-only-236000                                                  | download-only-236000 | jenkins | v1.33.0-beta.0 | 27 Mar 24 16:27 PDT | 27 Mar 24 16:27 PDT |
	| delete  | -p download-only-614000                                                  | download-only-614000 | jenkins | v1.33.0-beta.0 | 27 Mar 24 16:27 PDT | 27 Mar 24 16:27 PDT |
	| delete  | -p download-only-652000                                                  | download-only-652000 | jenkins | v1.33.0-beta.0 | 27 Mar 24 16:27 PDT | 27 Mar 24 16:27 PDT |
	| delete  | -p download-only-236000                                                  | download-only-236000 | jenkins | v1.33.0-beta.0 | 27 Mar 24 16:27 PDT | 27 Mar 24 16:27 PDT |
	| start   | --download-only -p                                                       | binary-mirror-029000 | jenkins | v1.33.0-beta.0 | 27 Mar 24 16:27 PDT |                     |
	|         | binary-mirror-029000                                                     |                      |         |                |                     |                     |
	|         | --alsologtostderr                                                        |                      |         |                |                     |                     |
	|         | --binary-mirror                                                          |                      |         |                |                     |                     |
	|         | http://127.0.0.1:50984                                                   |                      |         |                |                     |                     |
	|         | --driver=qemu2                                                           |                      |         |                |                     |                     |
	| delete  | -p binary-mirror-029000                                                  | binary-mirror-029000 | jenkins | v1.33.0-beta.0 | 27 Mar 24 16:27 PDT | 27 Mar 24 16:27 PDT |
	| addons  | enable dashboard -p                                                      | addons-295000        | jenkins | v1.33.0-beta.0 | 27 Mar 24 16:27 PDT |                     |
	|         | addons-295000                                                            |                      |         |                |                     |                     |
	| addons  | disable dashboard -p                                                     | addons-295000        | jenkins | v1.33.0-beta.0 | 27 Mar 24 16:27 PDT |                     |
	|         | addons-295000                                                            |                      |         |                |                     |                     |
	| start   | -p addons-295000 --wait=true                                             | addons-295000        | jenkins | v1.33.0-beta.0 | 27 Mar 24 16:27 PDT |                     |
	|         | --memory=4000 --alsologtostderr                                          |                      |         |                |                     |                     |
	|         | --addons=registry                                                        |                      |         |                |                     |                     |
	|         | --addons=metrics-server                                                  |                      |         |                |                     |                     |
	|         | --addons=volumesnapshots                                                 |                      |         |                |                     |                     |
	|         | --addons=csi-hostpath-driver                                             |                      |         |                |                     |                     |
	|         | --addons=gcp-auth                                                        |                      |         |                |                     |                     |
	|         | --addons=cloud-spanner                                                   |                      |         |                |                     |                     |
	|         | --addons=inspektor-gadget                                                |                      |         |                |                     |                     |
	|         | --addons=storage-provisioner-rancher                                     |                      |         |                |                     |                     |
	|         | --addons=nvidia-device-plugin                                            |                      |         |                |                     |                     |
	|         | --addons=yakd --driver=qemu2                                             |                      |         |                |                     |                     |
	|         |  --addons=ingress                                                        |                      |         |                |                     |                     |
	|         | --addons=ingress-dns                                                     |                      |         |                |                     |                     |
	| delete  | -p addons-295000                                                         | addons-295000        | jenkins | v1.33.0-beta.0 | 27 Mar 24 16:27 PDT | 27 Mar 24 16:27 PDT |
	| start   | -p nospam-432000 -n=1 --memory=2250 --wait=false                         | nospam-432000        | jenkins | v1.33.0-beta.0 | 27 Mar 24 16:27 PDT |                     |
	|         | --log_dir=/var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-432000 |                      |         |                |                     |                     |
	|         | --driver=qemu2                                                           |                      |         |                |                     |                     |
	| start   | nospam-432000 --log_dir                                                  | nospam-432000        | jenkins | v1.33.0-beta.0 | 27 Mar 24 16:28 PDT |                     |
	|         | /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-432000           |                      |         |                |                     |                     |
	|         | start --dry-run                                                          |                      |         |                |                     |                     |
	| start   | nospam-432000 --log_dir                                                  | nospam-432000        | jenkins | v1.33.0-beta.0 | 27 Mar 24 16:28 PDT |                     |
	|         | /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-432000           |                      |         |                |                     |                     |
	|         | start --dry-run                                                          |                      |         |                |                     |                     |
	| start   | nospam-432000 --log_dir                                                  | nospam-432000        | jenkins | v1.33.0-beta.0 | 27 Mar 24 16:28 PDT |                     |
	|         | /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-432000           |                      |         |                |                     |                     |
	|         | start --dry-run                                                          |                      |         |                |                     |                     |
	| pause   | nospam-432000 --log_dir                                                  | nospam-432000        | jenkins | v1.33.0-beta.0 | 27 Mar 24 16:28 PDT |                     |
	|         | /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-432000           |                      |         |                |                     |                     |
	|         | pause                                                                    |                      |         |                |                     |                     |
	| pause   | nospam-432000 --log_dir                                                  | nospam-432000        | jenkins | v1.33.0-beta.0 | 27 Mar 24 16:28 PDT |                     |
	|         | /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-432000           |                      |         |                |                     |                     |
	|         | pause                                                                    |                      |         |                |                     |                     |
	| pause   | nospam-432000 --log_dir                                                  | nospam-432000        | jenkins | v1.33.0-beta.0 | 27 Mar 24 16:28 PDT |                     |
	|         | /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-432000           |                      |         |                |                     |                     |
	|         | pause                                                                    |                      |         |                |                     |                     |
	| unpause | nospam-432000 --log_dir                                                  | nospam-432000        | jenkins | v1.33.0-beta.0 | 27 Mar 24 16:28 PDT |                     |
	|         | /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-432000           |                      |         |                |                     |                     |
	|         | unpause                                                                  |                      |         |                |                     |                     |
	| unpause | nospam-432000 --log_dir                                                  | nospam-432000        | jenkins | v1.33.0-beta.0 | 27 Mar 24 16:28 PDT |                     |
	|         | /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-432000           |                      |         |                |                     |                     |
	|         | unpause                                                                  |                      |         |                |                     |                     |
	| unpause | nospam-432000 --log_dir                                                  | nospam-432000        | jenkins | v1.33.0-beta.0 | 27 Mar 24 16:28 PDT |                     |
	|         | /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-432000           |                      |         |                |                     |                     |
	|         | unpause                                                                  |                      |         |                |                     |                     |
	| stop    | nospam-432000 --log_dir                                                  | nospam-432000        | jenkins | v1.33.0-beta.0 | 27 Mar 24 16:28 PDT | 27 Mar 24 16:28 PDT |
	|         | /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-432000           |                      |         |                |                     |                     |
	|         | stop                                                                     |                      |         |                |                     |                     |
	| stop    | nospam-432000 --log_dir                                                  | nospam-432000        | jenkins | v1.33.0-beta.0 | 27 Mar 24 16:28 PDT | 27 Mar 24 16:28 PDT |
	|         | /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-432000           |                      |         |                |                     |                     |
	|         | stop                                                                     |                      |         |                |                     |                     |
	| stop    | nospam-432000 --log_dir                                                  | nospam-432000        | jenkins | v1.33.0-beta.0 | 27 Mar 24 16:28 PDT | 27 Mar 24 16:28 PDT |
	|         | /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-432000           |                      |         |                |                     |                     |
	|         | stop                                                                     |                      |         |                |                     |                     |
	| delete  | -p nospam-432000                                                         | nospam-432000        | jenkins | v1.33.0-beta.0 | 27 Mar 24 16:28 PDT | 27 Mar 24 16:28 PDT |
	| start   | -p functional-746000                                                     | functional-746000    | jenkins | v1.33.0-beta.0 | 27 Mar 24 16:28 PDT |                     |
	|         | --memory=4000                                                            |                      |         |                |                     |                     |
	|         | --apiserver-port=8441                                                    |                      |         |                |                     |                     |
	|         | --wait=all --driver=qemu2                                                |                      |         |                |                     |                     |
	| start   | -p functional-746000                                                     | functional-746000    | jenkins | v1.33.0-beta.0 | 27 Mar 24 16:28 PDT |                     |
	|         | --alsologtostderr -v=8                                                   |                      |         |                |                     |                     |
	| cache   | functional-746000 cache add                                              | functional-746000    | jenkins | v1.33.0-beta.0 | 27 Mar 24 16:28 PDT | 27 Mar 24 16:28 PDT |
	|         | registry.k8s.io/pause:3.1                                                |                      |         |                |                     |                     |
	| cache   | functional-746000 cache add                                              | functional-746000    | jenkins | v1.33.0-beta.0 | 27 Mar 24 16:28 PDT | 27 Mar 24 16:28 PDT |
	|         | registry.k8s.io/pause:3.3                                                |                      |         |                |                     |                     |
	| cache   | functional-746000 cache add                                              | functional-746000    | jenkins | v1.33.0-beta.0 | 27 Mar 24 16:28 PDT | 27 Mar 24 16:28 PDT |
	|         | registry.k8s.io/pause:latest                                             |                      |         |                |                     |                     |
	| cache   | functional-746000 cache add                                              | functional-746000    | jenkins | v1.33.0-beta.0 | 27 Mar 24 16:28 PDT | 27 Mar 24 16:28 PDT |
	|         | minikube-local-cache-test:functional-746000                              |                      |         |                |                     |                     |
	| cache   | functional-746000 cache delete                                           | functional-746000    | jenkins | v1.33.0-beta.0 | 27 Mar 24 16:28 PDT | 27 Mar 24 16:28 PDT |
	|         | minikube-local-cache-test:functional-746000                              |                      |         |                |                     |                     |
	| cache   | delete                                                                   | minikube             | jenkins | v1.33.0-beta.0 | 27 Mar 24 16:28 PDT | 27 Mar 24 16:28 PDT |
	|         | registry.k8s.io/pause:3.3                                                |                      |         |                |                     |                     |
	| cache   | list                                                                     | minikube             | jenkins | v1.33.0-beta.0 | 27 Mar 24 16:28 PDT | 27 Mar 24 16:28 PDT |
	| ssh     | functional-746000 ssh sudo                                               | functional-746000    | jenkins | v1.33.0-beta.0 | 27 Mar 24 16:28 PDT |                     |
	|         | crictl images                                                            |                      |         |                |                     |                     |
	| ssh     | functional-746000                                                        | functional-746000    | jenkins | v1.33.0-beta.0 | 27 Mar 24 16:28 PDT |                     |
	|         | ssh sudo docker rmi                                                      |                      |         |                |                     |                     |
	|         | registry.k8s.io/pause:latest                                             |                      |         |                |                     |                     |
	| ssh     | functional-746000 ssh                                                    | functional-746000    | jenkins | v1.33.0-beta.0 | 27 Mar 24 16:28 PDT |                     |
	|         | sudo crictl inspecti                                                     |                      |         |                |                     |                     |
	|         | registry.k8s.io/pause:latest                                             |                      |         |                |                     |                     |
	| cache   | functional-746000 cache reload                                           | functional-746000    | jenkins | v1.33.0-beta.0 | 27 Mar 24 16:28 PDT | 27 Mar 24 16:28 PDT |
	| ssh     | functional-746000 ssh                                                    | functional-746000    | jenkins | v1.33.0-beta.0 | 27 Mar 24 16:28 PDT |                     |
	|         | sudo crictl inspecti                                                     |                      |         |                |                     |                     |
	|         | registry.k8s.io/pause:latest                                             |                      |         |                |                     |                     |
	| cache   | delete                                                                   | minikube             | jenkins | v1.33.0-beta.0 | 27 Mar 24 16:28 PDT | 27 Mar 24 16:28 PDT |
	|         | registry.k8s.io/pause:3.1                                                |                      |         |                |                     |                     |
	| cache   | delete                                                                   | minikube             | jenkins | v1.33.0-beta.0 | 27 Mar 24 16:28 PDT | 27 Mar 24 16:28 PDT |
	|         | registry.k8s.io/pause:latest                                             |                      |         |                |                     |                     |
	| kubectl | functional-746000 kubectl --                                             | functional-746000    | jenkins | v1.33.0-beta.0 | 27 Mar 24 16:28 PDT |                     |
	|         | --context functional-746000                                              |                      |         |                |                     |                     |
	|         | get pods                                                                 |                      |         |                |                     |                     |
	| start   | -p functional-746000                                                     | functional-746000    | jenkins | v1.33.0-beta.0 | 27 Mar 24 16:28 PDT |                     |
	|         | --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision |                      |         |                |                     |                     |
	|         | --wait=all                                                               |                      |         |                |                     |                     |
	|---------|--------------------------------------------------------------------------|----------------------|---------|----------------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/03/27 16:28:40
	Running on machine: MacOS-M1-Agent-2
	Binary: Built with gc go1.22.1 for darwin/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0327 16:28:40.911984    7308 out.go:291] Setting OutFile to fd 1 ...
	I0327 16:28:40.912120    7308 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0327 16:28:40.912122    7308 out.go:304] Setting ErrFile to fd 2...
	I0327 16:28:40.912124    7308 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0327 16:28:40.912244    7308 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18485-6511/.minikube/bin
	I0327 16:28:40.913250    7308 out.go:298] Setting JSON to false
	I0327 16:28:40.929214    7308 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":5291,"bootTime":1711576829,"procs":472,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.3.1","kernelVersion":"23.3.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0327 16:28:40.929271    7308 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0327 16:28:40.935389    7308 out.go:177] * [functional-746000] minikube v1.33.0-beta.0 on Darwin 14.3.1 (arm64)
	I0327 16:28:40.943374    7308 out.go:177]   - MINIKUBE_LOCATION=18485
	I0327 16:28:40.948268    7308 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/18485-6511/kubeconfig
	I0327 16:28:40.943436    7308 notify.go:220] Checking for updates...
	I0327 16:28:40.952692    7308 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0327 16:28:40.955281    7308 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0327 16:28:40.958369    7308 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18485-6511/.minikube
	I0327 16:28:40.961320    7308 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0327 16:28:40.964628    7308 config.go:182] Loaded profile config "functional-746000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.29.3
	I0327 16:28:40.964676    7308 driver.go:392] Setting default libvirt URI to qemu:///system
	I0327 16:28:40.969306    7308 out.go:177] * Using the qemu2 driver based on existing profile
	I0327 16:28:40.976268    7308 start.go:297] selected driver: qemu2
	I0327 16:28:40.976270    7308 start.go:901] validating driver "qemu2" against &{Name:functional-746000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18485/minikube-v1.33.0-1711559712-18485-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1711559786-18485@sha256:2dcab64da240d825290a528fa79ad3c32db45fe5f8be5150468234a7114eff82 Memory:4000 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kub
ernetesVersion:v1.29.3 ClusterName:functional-746000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8441 KubernetesVersion:v1.29.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[
] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0327 16:28:40.976339    7308 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0327 16:28:40.978598    7308 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0327 16:28:40.978642    7308 cni.go:84] Creating CNI manager for ""
	I0327 16:28:40.978649    7308 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0327 16:28:40.978707    7308 start.go:340] cluster config:
	{Name:functional-746000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18485/minikube-v1.33.0-1711559712-18485-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1711559786-18485@sha256:2dcab64da240d825290a528fa79ad3c32db45fe5f8be5150468234a7114eff82 Memory:4000 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.3 ClusterName:functional-746000 Namespace:default APIServe
rHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8441 KubernetesVersion:v1.29.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[
] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0327 16:28:40.983007    7308 iso.go:125] acquiring lock: {Name:mk26a7f41e004242919e6d5076ec0f9645f820ad Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0327 16:28:40.990271    7308 out.go:177] * Starting "functional-746000" primary control-plane node in "functional-746000" cluster
	I0327 16:28:40.994311    7308 preload.go:132] Checking if preload exists for k8s version v1.29.3 and runtime docker
	I0327 16:28:40.994330    7308 preload.go:147] Found local preload: /Users/jenkins/minikube-integration/18485-6511/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.29.3-docker-overlay2-arm64.tar.lz4
	I0327 16:28:40.994343    7308 cache.go:56] Caching tarball of preloaded images
	I0327 16:28:40.994407    7308 preload.go:173] Found /Users/jenkins/minikube-integration/18485-6511/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.29.3-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0327 16:28:40.994416    7308 cache.go:59] Finished verifying existence of preloaded tar for v1.29.3 on docker
	I0327 16:28:40.994477    7308 profile.go:142] Saving config to /Users/jenkins/minikube-integration/18485-6511/.minikube/profiles/functional-746000/config.json ...
	I0327 16:28:40.994930    7308 start.go:360] acquireMachinesLock for functional-746000: {Name:mk6b67ca98ff36f470ae98389325f11bd4950dc8 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0327 16:28:40.994963    7308 start.go:364] duration metric: took 28.166µs to acquireMachinesLock for "functional-746000"
	I0327 16:28:40.994975    7308 start.go:96] Skipping create...Using existing machine configuration
	I0327 16:28:40.994979    7308 fix.go:54] fixHost starting: 
	I0327 16:28:40.995101    7308 fix.go:112] recreateIfNeeded on functional-746000: state=Stopped err=<nil>
	W0327 16:28:40.995110    7308 fix.go:138] unexpected machine state, will restart: <nil>
	I0327 16:28:41.002366    7308 out.go:177] * Restarting existing qemu2 VM for "functional-746000" ...
	I0327 16:28:41.006315    7308 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 4000 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18485-6511/.minikube/machines/functional-746000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18485-6511/.minikube/machines/functional-746000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18485-6511/.minikube/machines/functional-746000/qemu.pid -device virtio-net-pci,netdev=net0,mac=4e:ba:a4:21:63:1b -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18485-6511/.minikube/machines/functional-746000/disk.qcow2
	I0327 16:28:41.008486    7308 main.go:141] libmachine: STDOUT: 
	I0327 16:28:41.008512    7308 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0327 16:28:41.008544    7308 fix.go:56] duration metric: took 13.564458ms for fixHost
	I0327 16:28:41.008556    7308 start.go:83] releasing machines lock for "functional-746000", held for 13.582209ms
	W0327 16:28:41.008562    7308 start.go:713] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0327 16:28:41.008597    7308 out.go:239] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0327 16:28:41.008602    7308 start.go:728] Will try again in 5 seconds ...
	I0327 16:28:46.009558    7308 start.go:360] acquireMachinesLock for functional-746000: {Name:mk6b67ca98ff36f470ae98389325f11bd4950dc8 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0327 16:28:46.009950    7308 start.go:364] duration metric: took 292.084µs to acquireMachinesLock for "functional-746000"
	I0327 16:28:46.010092    7308 start.go:96] Skipping create...Using existing machine configuration
	I0327 16:28:46.010102    7308 fix.go:54] fixHost starting: 
	I0327 16:28:46.010768    7308 fix.go:112] recreateIfNeeded on functional-746000: state=Stopped err=<nil>
	W0327 16:28:46.010789    7308 fix.go:138] unexpected machine state, will restart: <nil>
	I0327 16:28:46.015111    7308 out.go:177] * Restarting existing qemu2 VM for "functional-746000" ...
	I0327 16:28:46.023282    7308 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 4000 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18485-6511/.minikube/machines/functional-746000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18485-6511/.minikube/machines/functional-746000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18485-6511/.minikube/machines/functional-746000/qemu.pid -device virtio-net-pci,netdev=net0,mac=4e:ba:a4:21:63:1b -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18485-6511/.minikube/machines/functional-746000/disk.qcow2
	I0327 16:28:46.032746    7308 main.go:141] libmachine: STDOUT: 
	I0327 16:28:46.032804    7308 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0327 16:28:46.032879    7308 fix.go:56] duration metric: took 22.777584ms for fixHost
	I0327 16:28:46.032894    7308 start.go:83] releasing machines lock for "functional-746000", held for 22.909834ms
	W0327 16:28:46.033118    7308 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p functional-746000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0327 16:28:46.040140    7308 out.go:177] 
	W0327 16:28:46.044001    7308 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0327 16:28:46.044026    7308 out.go:239] * 
	W0327 16:28:46.046235    7308 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0327 16:28:46.054076    7308 out.go:177] 
	
	
	* The control-plane node functional-746000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-746000"

-- /stdout --
functional_test.go:1234: out/minikube-darwin-arm64 -p functional-746000 logs failed: exit status 83
functional_test.go:1224: expected minikube logs to include word: -"Linux"- but got 
***
==> Audit <==
|---------|--------------------------------------------------------------------------|----------------------|---------|----------------|---------------------|---------------------|
| Command |                                   Args                                   |       Profile        |  User   |    Version     |     Start Time      |      End Time       |
|---------|--------------------------------------------------------------------------|----------------------|---------|----------------|---------------------|---------------------|
| start   | -o=json --download-only                                                  | download-only-614000 | jenkins | v1.33.0-beta.0 | 27 Mar 24 16:26 PDT |                     |
|         | -p download-only-614000                                                  |                      |         |                |                     |                     |
|         | --force --alsologtostderr                                                |                      |         |                |                     |                     |
|         | --kubernetes-version=v1.20.0                                             |                      |         |                |                     |                     |
|         | --container-runtime=docker                                               |                      |         |                |                     |                     |
|         | --driver=qemu2                                                           |                      |         |                |                     |                     |
| delete  | --all                                                                    | minikube             | jenkins | v1.33.0-beta.0 | 27 Mar 24 16:26 PDT | 27 Mar 24 16:26 PDT |
| delete  | -p download-only-614000                                                  | download-only-614000 | jenkins | v1.33.0-beta.0 | 27 Mar 24 16:26 PDT | 27 Mar 24 16:26 PDT |
| start   | -o=json --download-only                                                  | download-only-652000 | jenkins | v1.33.0-beta.0 | 27 Mar 24 16:26 PDT |                     |
|         | -p download-only-652000                                                  |                      |         |                |                     |                     |
|         | --force --alsologtostderr                                                |                      |         |                |                     |                     |
|         | --kubernetes-version=v1.29.3                                             |                      |         |                |                     |                     |
|         | --container-runtime=docker                                               |                      |         |                |                     |                     |
|         | --driver=qemu2                                                           |                      |         |                |                     |                     |
| delete  | --all                                                                    | minikube             | jenkins | v1.33.0-beta.0 | 27 Mar 24 16:27 PDT | 27 Mar 24 16:27 PDT |
| delete  | -p download-only-652000                                                  | download-only-652000 | jenkins | v1.33.0-beta.0 | 27 Mar 24 16:27 PDT | 27 Mar 24 16:27 PDT |
| start   | -o=json --download-only                                                  | download-only-236000 | jenkins | v1.33.0-beta.0 | 27 Mar 24 16:27 PDT |                     |
|         | -p download-only-236000                                                  |                      |         |                |                     |                     |
|         | --force --alsologtostderr                                                |                      |         |                |                     |                     |
|         | --kubernetes-version=v1.30.0-beta.0                                      |                      |         |                |                     |                     |
|         | --container-runtime=docker                                               |                      |         |                |                     |                     |
|         | --driver=qemu2                                                           |                      |         |                |                     |                     |
| delete  | --all                                                                    | minikube             | jenkins | v1.33.0-beta.0 | 27 Mar 24 16:27 PDT | 27 Mar 24 16:27 PDT |
| delete  | -p download-only-236000                                                  | download-only-236000 | jenkins | v1.33.0-beta.0 | 27 Mar 24 16:27 PDT | 27 Mar 24 16:27 PDT |
| delete  | -p download-only-614000                                                  | download-only-614000 | jenkins | v1.33.0-beta.0 | 27 Mar 24 16:27 PDT | 27 Mar 24 16:27 PDT |
| delete  | -p download-only-652000                                                  | download-only-652000 | jenkins | v1.33.0-beta.0 | 27 Mar 24 16:27 PDT | 27 Mar 24 16:27 PDT |
| delete  | -p download-only-236000                                                  | download-only-236000 | jenkins | v1.33.0-beta.0 | 27 Mar 24 16:27 PDT | 27 Mar 24 16:27 PDT |
| start   | --download-only -p                                                       | binary-mirror-029000 | jenkins | v1.33.0-beta.0 | 27 Mar 24 16:27 PDT |                     |
|         | binary-mirror-029000                                                     |                      |         |                |                     |                     |
|         | --alsologtostderr                                                        |                      |         |                |                     |                     |
|         | --binary-mirror                                                          |                      |         |                |                     |                     |
|         | http://127.0.0.1:50984                                                   |                      |         |                |                     |                     |
|         | --driver=qemu2                                                           |                      |         |                |                     |                     |
| delete  | -p binary-mirror-029000                                                  | binary-mirror-029000 | jenkins | v1.33.0-beta.0 | 27 Mar 24 16:27 PDT | 27 Mar 24 16:27 PDT |
| addons  | enable dashboard -p                                                      | addons-295000        | jenkins | v1.33.0-beta.0 | 27 Mar 24 16:27 PDT |                     |
|         | addons-295000                                                            |                      |         |                |                     |                     |
| addons  | disable dashboard -p                                                     | addons-295000        | jenkins | v1.33.0-beta.0 | 27 Mar 24 16:27 PDT |                     |
|         | addons-295000                                                            |                      |         |                |                     |                     |
| start   | -p addons-295000 --wait=true                                             | addons-295000        | jenkins | v1.33.0-beta.0 | 27 Mar 24 16:27 PDT |                     |
|         | --memory=4000 --alsologtostderr                                          |                      |         |                |                     |                     |
|         | --addons=registry                                                        |                      |         |                |                     |                     |
|         | --addons=metrics-server                                                  |                      |         |                |                     |                     |
|         | --addons=volumesnapshots                                                 |                      |         |                |                     |                     |
|         | --addons=csi-hostpath-driver                                             |                      |         |                |                     |                     |
|         | --addons=gcp-auth                                                        |                      |         |                |                     |                     |
|         | --addons=cloud-spanner                                                   |                      |         |                |                     |                     |
|         | --addons=inspektor-gadget                                                |                      |         |                |                     |                     |
|         | --addons=storage-provisioner-rancher                                     |                      |         |                |                     |                     |
|         | --addons=nvidia-device-plugin                                            |                      |         |                |                     |                     |
|         | --addons=yakd --driver=qemu2                                             |                      |         |                |                     |                     |
|         |  --addons=ingress                                                        |                      |         |                |                     |                     |
|         | --addons=ingress-dns                                                     |                      |         |                |                     |                     |
| delete  | -p addons-295000                                                         | addons-295000        | jenkins | v1.33.0-beta.0 | 27 Mar 24 16:27 PDT | 27 Mar 24 16:27 PDT |
| start   | -p nospam-432000 -n=1 --memory=2250 --wait=false                         | nospam-432000        | jenkins | v1.33.0-beta.0 | 27 Mar 24 16:27 PDT |                     |
|         | --log_dir=/var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-432000 |                      |         |                |                     |                     |
|         | --driver=qemu2                                                           |                      |         |                |                     |                     |
| start   | nospam-432000 --log_dir                                                  | nospam-432000        | jenkins | v1.33.0-beta.0 | 27 Mar 24 16:28 PDT |                     |
|         | /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-432000           |                      |         |                |                     |                     |
|         | start --dry-run                                                          |                      |         |                |                     |                     |
| start   | nospam-432000 --log_dir                                                  | nospam-432000        | jenkins | v1.33.0-beta.0 | 27 Mar 24 16:28 PDT |                     |
|         | /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-432000           |                      |         |                |                     |                     |
|         | start --dry-run                                                          |                      |         |                |                     |                     |
| start   | nospam-432000 --log_dir                                                  | nospam-432000        | jenkins | v1.33.0-beta.0 | 27 Mar 24 16:28 PDT |                     |
|         | /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-432000           |                      |         |                |                     |                     |
|         | start --dry-run                                                          |                      |         |                |                     |                     |
| pause   | nospam-432000 --log_dir                                                  | nospam-432000        | jenkins | v1.33.0-beta.0 | 27 Mar 24 16:28 PDT |                     |
|         | /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-432000           |                      |         |                |                     |                     |
|         | pause                                                                    |                      |         |                |                     |                     |
| pause   | nospam-432000 --log_dir                                                  | nospam-432000        | jenkins | v1.33.0-beta.0 | 27 Mar 24 16:28 PDT |                     |
|         | /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-432000           |                      |         |                |                     |                     |
|         | pause                                                                    |                      |         |                |                     |                     |
| pause   | nospam-432000 --log_dir                                                  | nospam-432000        | jenkins | v1.33.0-beta.0 | 27 Mar 24 16:28 PDT |                     |
|         | /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-432000           |                      |         |                |                     |                     |
|         | pause                                                                    |                      |         |                |                     |                     |
| unpause | nospam-432000 --log_dir                                                  | nospam-432000        | jenkins | v1.33.0-beta.0 | 27 Mar 24 16:28 PDT |                     |
|         | /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-432000           |                      |         |                |                     |                     |
|         | unpause                                                                  |                      |         |                |                     |                     |
| unpause | nospam-432000 --log_dir                                                  | nospam-432000        | jenkins | v1.33.0-beta.0 | 27 Mar 24 16:28 PDT |                     |
|         | /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-432000           |                      |         |                |                     |                     |
|         | unpause                                                                  |                      |         |                |                     |                     |
| unpause | nospam-432000 --log_dir                                                  | nospam-432000        | jenkins | v1.33.0-beta.0 | 27 Mar 24 16:28 PDT |                     |
|         | /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-432000           |                      |         |                |                     |                     |
|         | unpause                                                                  |                      |         |                |                     |                     |
| stop    | nospam-432000 --log_dir                                                  | nospam-432000        | jenkins | v1.33.0-beta.0 | 27 Mar 24 16:28 PDT | 27 Mar 24 16:28 PDT |
|         | /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-432000           |                      |         |                |                     |                     |
|         | stop                                                                     |                      |         |                |                     |                     |
| stop    | nospam-432000 --log_dir                                                  | nospam-432000        | jenkins | v1.33.0-beta.0 | 27 Mar 24 16:28 PDT | 27 Mar 24 16:28 PDT |
|         | /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-432000           |                      |         |                |                     |                     |
|         | stop                                                                     |                      |         |                |                     |                     |
| stop    | nospam-432000 --log_dir                                                  | nospam-432000        | jenkins | v1.33.0-beta.0 | 27 Mar 24 16:28 PDT | 27 Mar 24 16:28 PDT |
|         | /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-432000           |                      |         |                |                     |                     |
|         | stop                                                                     |                      |         |                |                     |                     |
| delete  | -p nospam-432000                                                         | nospam-432000        | jenkins | v1.33.0-beta.0 | 27 Mar 24 16:28 PDT | 27 Mar 24 16:28 PDT |
| start   | -p functional-746000                                                     | functional-746000    | jenkins | v1.33.0-beta.0 | 27 Mar 24 16:28 PDT |                     |
|         | --memory=4000                                                            |                      |         |                |                     |                     |
|         | --apiserver-port=8441                                                    |                      |         |                |                     |                     |
|         | --wait=all --driver=qemu2                                                |                      |         |                |                     |                     |
| start   | -p functional-746000                                                     | functional-746000    | jenkins | v1.33.0-beta.0 | 27 Mar 24 16:28 PDT |                     |
|         | --alsologtostderr -v=8                                                   |                      |         |                |                     |                     |
| cache   | functional-746000 cache add                                              | functional-746000    | jenkins | v1.33.0-beta.0 | 27 Mar 24 16:28 PDT | 27 Mar 24 16:28 PDT |
|         | registry.k8s.io/pause:3.1                                                |                      |         |                |                     |                     |
| cache   | functional-746000 cache add                                              | functional-746000    | jenkins | v1.33.0-beta.0 | 27 Mar 24 16:28 PDT | 27 Mar 24 16:28 PDT |
|         | registry.k8s.io/pause:3.3                                                |                      |         |                |                     |                     |
| cache   | functional-746000 cache add                                              | functional-746000    | jenkins | v1.33.0-beta.0 | 27 Mar 24 16:28 PDT | 27 Mar 24 16:28 PDT |
|         | registry.k8s.io/pause:latest                                             |                      |         |                |                     |                     |
| cache   | functional-746000 cache add                                              | functional-746000    | jenkins | v1.33.0-beta.0 | 27 Mar 24 16:28 PDT | 27 Mar 24 16:28 PDT |
|         | minikube-local-cache-test:functional-746000                              |                      |         |                |                     |                     |
| cache   | functional-746000 cache delete                                           | functional-746000    | jenkins | v1.33.0-beta.0 | 27 Mar 24 16:28 PDT | 27 Mar 24 16:28 PDT |
|         | minikube-local-cache-test:functional-746000                              |                      |         |                |                     |                     |
| cache   | delete                                                                   | minikube             | jenkins | v1.33.0-beta.0 | 27 Mar 24 16:28 PDT | 27 Mar 24 16:28 PDT |
|         | registry.k8s.io/pause:3.3                                                |                      |         |                |                     |                     |
| cache   | list                                                                     | minikube             | jenkins | v1.33.0-beta.0 | 27 Mar 24 16:28 PDT | 27 Mar 24 16:28 PDT |
| ssh     | functional-746000 ssh sudo                                               | functional-746000    | jenkins | v1.33.0-beta.0 | 27 Mar 24 16:28 PDT |                     |
|         | crictl images                                                            |                      |         |                |                     |                     |
| ssh     | functional-746000                                                        | functional-746000    | jenkins | v1.33.0-beta.0 | 27 Mar 24 16:28 PDT |                     |
|         | ssh sudo docker rmi                                                      |                      |         |                |                     |                     |
|         | registry.k8s.io/pause:latest                                             |                      |         |                |                     |                     |
| ssh     | functional-746000 ssh                                                    | functional-746000    | jenkins | v1.33.0-beta.0 | 27 Mar 24 16:28 PDT |                     |
|         | sudo crictl inspecti                                                     |                      |         |                |                     |                     |
|         | registry.k8s.io/pause:latest                                             |                      |         |                |                     |                     |
| cache   | functional-746000 cache reload                                           | functional-746000    | jenkins | v1.33.0-beta.0 | 27 Mar 24 16:28 PDT | 27 Mar 24 16:28 PDT |
| ssh     | functional-746000 ssh                                                    | functional-746000    | jenkins | v1.33.0-beta.0 | 27 Mar 24 16:28 PDT |                     |
|         | sudo crictl inspecti                                                     |                      |         |                |                     |                     |
|         | registry.k8s.io/pause:latest                                             |                      |         |                |                     |                     |
| cache   | delete                                                                   | minikube             | jenkins | v1.33.0-beta.0 | 27 Mar 24 16:28 PDT | 27 Mar 24 16:28 PDT |
|         | registry.k8s.io/pause:3.1                                                |                      |         |                |                     |                     |
| cache   | delete                                                                   | minikube             | jenkins | v1.33.0-beta.0 | 27 Mar 24 16:28 PDT | 27 Mar 24 16:28 PDT |
|         | registry.k8s.io/pause:latest                                             |                      |         |                |                     |                     |
| kubectl | functional-746000 kubectl --                                             | functional-746000    | jenkins | v1.33.0-beta.0 | 27 Mar 24 16:28 PDT |                     |
|         | --context functional-746000                                              |                      |         |                |                     |                     |
|         | get pods                                                                 |                      |         |                |                     |                     |
| start   | -p functional-746000                                                     | functional-746000    | jenkins | v1.33.0-beta.0 | 27 Mar 24 16:28 PDT |                     |
|         | --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision |                      |         |                |                     |                     |
|         | --wait=all                                                               |                      |         |                |                     |                     |
|---------|--------------------------------------------------------------------------|----------------------|---------|----------------|---------------------|---------------------|

==> Last Start <==
Log file created at: 2024/03/27 16:28:40
Running on machine: MacOS-M1-Agent-2
Binary: Built with gc go1.22.1 for darwin/arm64
Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
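Decoded against the format line above, the first entry that follows — I0327 16:28:40.911984    7308 out.go:291] — reads as: severity I (info), date 03/27, time 16:28:40.911984, thread id 7308, emitted from out.go line 291.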
I0327 16:28:40.911984    7308 out.go:291] Setting OutFile to fd 1 ...
I0327 16:28:40.912120    7308 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0327 16:28:40.912122    7308 out.go:304] Setting ErrFile to fd 2...
I0327 16:28:40.912124    7308 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0327 16:28:40.912244    7308 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18485-6511/.minikube/bin
I0327 16:28:40.913250    7308 out.go:298] Setting JSON to false
I0327 16:28:40.929214    7308 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":5291,"bootTime":1711576829,"procs":472,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.3.1","kernelVersion":"23.3.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
W0327 16:28:40.929271    7308 start.go:137] gopshost.Virtualization returned error: not implemented yet
I0327 16:28:40.935389    7308 out.go:177] * [functional-746000] minikube v1.33.0-beta.0 on Darwin 14.3.1 (arm64)
I0327 16:28:40.943374    7308 out.go:177]   - MINIKUBE_LOCATION=18485
I0327 16:28:40.948268    7308 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/18485-6511/kubeconfig
I0327 16:28:40.943436    7308 notify.go:220] Checking for updates...
I0327 16:28:40.952692    7308 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
I0327 16:28:40.955281    7308 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
I0327 16:28:40.958369    7308 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18485-6511/.minikube
I0327 16:28:40.961320    7308 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
I0327 16:28:40.964628    7308 config.go:182] Loaded profile config "functional-746000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.29.3
I0327 16:28:40.964676    7308 driver.go:392] Setting default libvirt URI to qemu:///system
I0327 16:28:40.969306    7308 out.go:177] * Using the qemu2 driver based on existing profile
I0327 16:28:40.976268    7308 start.go:297] selected driver: qemu2
I0327 16:28:40.976270    7308 start.go:901] validating driver "qemu2" against &{Name:functional-746000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18485/minikube-v1.33.0-1711559712-18485-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1711559786-18485@sha256:2dcab64da240d825290a528fa79ad3c32db45fe5f8be5150468234a7114eff82 Memory:4000 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kube
rnetesVersion:v1.29.3 ClusterName:functional-746000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8441 KubernetesVersion:v1.29.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[]
MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
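The settings in this config that matter for what follows are Driver:qemu2, Network:socket_vmnet, SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client, and SocketVMnetPath:/var/run/socket_vmnet: with this combination minikube launches the VM through the socket_vmnet client, so the daemon behind /var/run/socket_vmnet must be up before any qemu2 start can succeed.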
I0327 16:28:40.976339    7308 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
I0327 16:28:40.978598    7308 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
I0327 16:28:40.978642    7308 cni.go:84] Creating CNI manager for ""
I0327 16:28:40.978649    7308 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
I0327 16:28:40.978707    7308 start.go:340] cluster config:
{Name:functional-746000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18485/minikube-v1.33.0-1711559712-18485-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1711559786-18485@sha256:2dcab64da240d825290a528fa79ad3c32db45fe5f8be5150468234a7114eff82 Memory:4000 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.3 ClusterName:functional-746000 Namespace:default APIServer
HAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8441 KubernetesVersion:v1.29.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[]
MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
I0327 16:28:40.983007    7308 iso.go:125] acquiring lock: {Name:mk26a7f41e004242919e6d5076ec0f9645f820ad Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
I0327 16:28:40.990271    7308 out.go:177] * Starting "functional-746000" primary control-plane node in "functional-746000" cluster
I0327 16:28:40.994311    7308 preload.go:132] Checking if preload exists for k8s version v1.29.3 and runtime docker
I0327 16:28:40.994330    7308 preload.go:147] Found local preload: /Users/jenkins/minikube-integration/18485-6511/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.29.3-docker-overlay2-arm64.tar.lz4
I0327 16:28:40.994343    7308 cache.go:56] Caching tarball of preloaded images
I0327 16:28:40.994407    7308 preload.go:173] Found /Users/jenkins/minikube-integration/18485-6511/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.29.3-docker-overlay2-arm64.tar.lz4 in cache, skipping download
I0327 16:28:40.994416    7308 cache.go:59] Finished verifying existence of preloaded tar for v1.29.3 on docker
I0327 16:28:40.994477    7308 profile.go:142] Saving config to /Users/jenkins/minikube-integration/18485-6511/.minikube/profiles/functional-746000/config.json ...
I0327 16:28:40.994930    7308 start.go:360] acquireMachinesLock for functional-746000: {Name:mk6b67ca98ff36f470ae98389325f11bd4950dc8 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
I0327 16:28:40.994963    7308 start.go:364] duration metric: took 28.166µs to acquireMachinesLock for "functional-746000"
I0327 16:28:40.994975    7308 start.go:96] Skipping create...Using existing machine configuration
I0327 16:28:40.994979    7308 fix.go:54] fixHost starting: 
I0327 16:28:40.995101    7308 fix.go:112] recreateIfNeeded on functional-746000: state=Stopped err=<nil>
W0327 16:28:40.995110    7308 fix.go:138] unexpected machine state, will restart: <nil>
I0327 16:28:41.002366    7308 out.go:177] * Restarting existing qemu2 VM for "functional-746000" ...
I0327 16:28:41.006315    7308 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 4000 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18485-6511/.minikube/machines/functional-746000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18485-6511/.minikube/machines/functional-746000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18485-6511/.minikube/machines/functional-746000/qemu.pid -device virtio-net-pci,netdev=net0,mac=4e:ba:a4:21:63:1b -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18485-6511/.minikube/machines/functional-746000/disk.qcow2
I0327 16:28:41.008486    7308 main.go:141] libmachine: STDOUT: 
I0327 16:28:41.008512    7308 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
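The libmachine invocation above is easier to audit reflowed one option per line (same flags and values as logged; the long /Users/jenkins/minikube-integration/18485-6511/.minikube/machines paths are abbreviated to "..." here):

  /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet \
    qemu-system-aarch64 \
    -M virt -cpu host -accel hvf -m 4000 -smp 2 \
    -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash \
    -boot d -cdrom .../functional-746000/boot2docker.iso \
    -qmp unix:.../functional-746000/monitor,server,nowait \
    -pidfile .../functional-746000/qemu.pid \
    -device virtio-net-pci,netdev=net0,mac=4e:ba:a4:21:63:1b \
    -netdev socket,id=net0,fd=3 \
    -display none -daemonize \
    .../functional-746000/disk.qcow2

The decisive option is -netdev socket,id=net0,fd=3: the network file descriptor is supposed to be handed over by socket_vmnet_client, which is why the start fails at the connection step, before QEMU itself runs.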

I0327 16:28:41.008544    7308 fix.go:56] duration metric: took 13.564458ms for fixHost
I0327 16:28:41.008556    7308 start.go:83] releasing machines lock for "functional-746000", held for 13.582209ms
W0327 16:28:41.008562    7308 start.go:713] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
W0327 16:28:41.008597    7308 out.go:239] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
I0327 16:28:41.008602    7308 start.go:728] Will try again in 5 seconds ...
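The "Connection refused" on /var/run/socket_vmnet means nothing is listening on that unix socket, i.e. the socket_vmnet daemon is not running on this agent, and the retry five seconds later (below) hits the same wall. A quick host-side check, as a sketch (how to restart the daemon depends on how socket_vmnet was installed on this machine, e.g. as a launchd or Homebrew service):

  ls -l /var/run/socket_vmnet    # does the unix socket exist?
  pgrep -fl socket_vmnet         # is the daemon process running?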
I0327 16:28:46.009558    7308 start.go:360] acquireMachinesLock for functional-746000: {Name:mk6b67ca98ff36f470ae98389325f11bd4950dc8 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
I0327 16:28:46.009950    7308 start.go:364] duration metric: took 292.084µs to acquireMachinesLock for "functional-746000"
I0327 16:28:46.010092    7308 start.go:96] Skipping create...Using existing machine configuration
I0327 16:28:46.010102    7308 fix.go:54] fixHost starting: 
I0327 16:28:46.010768    7308 fix.go:112] recreateIfNeeded on functional-746000: state=Stopped err=<nil>
W0327 16:28:46.010789    7308 fix.go:138] unexpected machine state, will restart: <nil>
I0327 16:28:46.015111    7308 out.go:177] * Restarting existing qemu2 VM for "functional-746000" ...
I0327 16:28:46.023282    7308 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 4000 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18485-6511/.minikube/machines/functional-746000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18485-6511/.minikube/machines/functional-746000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18485-6511/.minikube/machines/functional-746000/qemu.pid -device virtio-net-pci,netdev=net0,mac=4e:ba:a4:21:63:1b -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18485-6511/.minikube/machines/functional-746000/disk.qcow2
I0327 16:28:46.032746    7308 main.go:141] libmachine: STDOUT: 
I0327 16:28:46.032804    7308 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused

I0327 16:28:46.032879    7308 fix.go:56] duration metric: took 22.777584ms for fixHost
I0327 16:28:46.032894    7308 start.go:83] releasing machines lock for "functional-746000", held for 22.909834ms
W0327 16:28:46.033118    7308 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p functional-746000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
I0327 16:28:46.040140    7308 out.go:177] 
W0327 16:28:46.044001    7308 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
W0327 16:28:46.044026    7308 out.go:239] * 
W0327 16:28:46.046235    7308 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
│                                                                                             │
│    * If the above advice does not help, please let us know:                                 │
│      https://github.com/kubernetes/minikube/issues/new/choose                               │
│                                                                                             │
│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
│                                                                                             │
╰─────────────────────────────────────────────────────────────────────────────────────────────╯
I0327 16:28:46.054076    7308 out.go:177] 

* The control-plane node functional-746000 host is not running: state=Stopped
To start a cluster, run: "minikube start -p functional-746000"
***
--- FAIL: TestFunctional/serial/LogsCmd (0.08s)
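The recovery path minikube itself suggests above is the right one once socket_vmnet is reachable again: delete the stale profile and start fresh. A sketch using the same binary and the flags recorded for this profile in the audit table:

  out/minikube-darwin-arm64 delete -p functional-746000
  out/minikube-darwin-arm64 start -p functional-746000 --memory=4000 --apiserver-port=8441 --wait=all --driver=qemu2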

TestFunctional/serial/LogsFileCmd (0.07s)
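This test writes the logs to a file and greps it for the word "Linux", which the logs of a booted node would normally contain; because the VM never started, the file holds only the audit table and start history shown below, so the match fails. A minimal manual reproduction with the same binary (the output path here is arbitrary):

  out/minikube-darwin-arm64 -p functional-746000 logs --file /tmp/logs.txt
  grep -c Linux /tmp/logs.txt    # the test expects at least one match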

=== RUN   TestFunctional/serial/LogsFileCmd
functional_test.go:1246: (dbg) Run:  out/minikube-darwin-arm64 -p functional-746000 logs --file /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestFunctionalserialLogsFileCmd3853423499/001/logs.txt
functional_test.go:1224: expected minikube logs to include word: -"Linux"- but got 
***
==> Audit <==
|---------|--------------------------------------------------------------------------|----------------------|---------|----------------|---------------------|---------------------|
| Command |                                   Args                                   |       Profile        |  User   |    Version     |     Start Time      |      End Time       |
|---------|--------------------------------------------------------------------------|----------------------|---------|----------------|---------------------|---------------------|
| start   | -o=json --download-only                                                  | download-only-614000 | jenkins | v1.33.0-beta.0 | 27 Mar 24 16:26 PDT |                     |
|         | -p download-only-614000                                                  |                      |         |                |                     |                     |
|         | --force --alsologtostderr                                                |                      |         |                |                     |                     |
|         | --kubernetes-version=v1.20.0                                             |                      |         |                |                     |                     |
|         | --container-runtime=docker                                               |                      |         |                |                     |                     |
|         | --driver=qemu2                                                           |                      |         |                |                     |                     |
| delete  | --all                                                                    | minikube             | jenkins | v1.33.0-beta.0 | 27 Mar 24 16:26 PDT | 27 Mar 24 16:26 PDT |
| delete  | -p download-only-614000                                                  | download-only-614000 | jenkins | v1.33.0-beta.0 | 27 Mar 24 16:26 PDT | 27 Mar 24 16:26 PDT |
| start   | -o=json --download-only                                                  | download-only-652000 | jenkins | v1.33.0-beta.0 | 27 Mar 24 16:26 PDT |                     |
|         | -p download-only-652000                                                  |                      |         |                |                     |                     |
|         | --force --alsologtostderr                                                |                      |         |                |                     |                     |
|         | --kubernetes-version=v1.29.3                                             |                      |         |                |                     |                     |
|         | --container-runtime=docker                                               |                      |         |                |                     |                     |
|         | --driver=qemu2                                                           |                      |         |                |                     |                     |
| delete  | --all                                                                    | minikube             | jenkins | v1.33.0-beta.0 | 27 Mar 24 16:27 PDT | 27 Mar 24 16:27 PDT |
| delete  | -p download-only-652000                                                  | download-only-652000 | jenkins | v1.33.0-beta.0 | 27 Mar 24 16:27 PDT | 27 Mar 24 16:27 PDT |
| start   | -o=json --download-only                                                  | download-only-236000 | jenkins | v1.33.0-beta.0 | 27 Mar 24 16:27 PDT |                     |
|         | -p download-only-236000                                                  |                      |         |                |                     |                     |
|         | --force --alsologtostderr                                                |                      |         |                |                     |                     |
|         | --kubernetes-version=v1.30.0-beta.0                                      |                      |         |                |                     |                     |
|         | --container-runtime=docker                                               |                      |         |                |                     |                     |
|         | --driver=qemu2                                                           |                      |         |                |                     |                     |
| delete  | --all                                                                    | minikube             | jenkins | v1.33.0-beta.0 | 27 Mar 24 16:27 PDT | 27 Mar 24 16:27 PDT |
| delete  | -p download-only-236000                                                  | download-only-236000 | jenkins | v1.33.0-beta.0 | 27 Mar 24 16:27 PDT | 27 Mar 24 16:27 PDT |
| delete  | -p download-only-614000                                                  | download-only-614000 | jenkins | v1.33.0-beta.0 | 27 Mar 24 16:27 PDT | 27 Mar 24 16:27 PDT |
| delete  | -p download-only-652000                                                  | download-only-652000 | jenkins | v1.33.0-beta.0 | 27 Mar 24 16:27 PDT | 27 Mar 24 16:27 PDT |
| delete  | -p download-only-236000                                                  | download-only-236000 | jenkins | v1.33.0-beta.0 | 27 Mar 24 16:27 PDT | 27 Mar 24 16:27 PDT |
| start   | --download-only -p                                                       | binary-mirror-029000 | jenkins | v1.33.0-beta.0 | 27 Mar 24 16:27 PDT |                     |
|         | binary-mirror-029000                                                     |                      |         |                |                     |                     |
|         | --alsologtostderr                                                        |                      |         |                |                     |                     |
|         | --binary-mirror                                                          |                      |         |                |                     |                     |
|         | http://127.0.0.1:50984                                                   |                      |         |                |                     |                     |
|         | --driver=qemu2                                                           |                      |         |                |                     |                     |
| delete  | -p binary-mirror-029000                                                  | binary-mirror-029000 | jenkins | v1.33.0-beta.0 | 27 Mar 24 16:27 PDT | 27 Mar 24 16:27 PDT |
| addons  | enable dashboard -p                                                      | addons-295000        | jenkins | v1.33.0-beta.0 | 27 Mar 24 16:27 PDT |                     |
|         | addons-295000                                                            |                      |         |                |                     |                     |
| addons  | disable dashboard -p                                                     | addons-295000        | jenkins | v1.33.0-beta.0 | 27 Mar 24 16:27 PDT |                     |
|         | addons-295000                                                            |                      |         |                |                     |                     |
| start   | -p addons-295000 --wait=true                                             | addons-295000        | jenkins | v1.33.0-beta.0 | 27 Mar 24 16:27 PDT |                     |
|         | --memory=4000 --alsologtostderr                                          |                      |         |                |                     |                     |
|         | --addons=registry                                                        |                      |         |                |                     |                     |
|         | --addons=metrics-server                                                  |                      |         |                |                     |                     |
|         | --addons=volumesnapshots                                                 |                      |         |                |                     |                     |
|         | --addons=csi-hostpath-driver                                             |                      |         |                |                     |                     |
|         | --addons=gcp-auth                                                        |                      |         |                |                     |                     |
|         | --addons=cloud-spanner                                                   |                      |         |                |                     |                     |
|         | --addons=inspektor-gadget                                                |                      |         |                |                     |                     |
|         | --addons=storage-provisioner-rancher                                     |                      |         |                |                     |                     |
|         | --addons=nvidia-device-plugin                                            |                      |         |                |                     |                     |
|         | --addons=yakd --driver=qemu2                                             |                      |         |                |                     |                     |
|         |  --addons=ingress                                                        |                      |         |                |                     |                     |
|         | --addons=ingress-dns                                                     |                      |         |                |                     |                     |
| delete  | -p addons-295000                                                         | addons-295000        | jenkins | v1.33.0-beta.0 | 27 Mar 24 16:27 PDT | 27 Mar 24 16:27 PDT |
| start   | -p nospam-432000 -n=1 --memory=2250 --wait=false                         | nospam-432000        | jenkins | v1.33.0-beta.0 | 27 Mar 24 16:27 PDT |                     |
|         | --log_dir=/var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-432000 |                      |         |                |                     |                     |
|         | --driver=qemu2                                                           |                      |         |                |                     |                     |
| start   | nospam-432000 --log_dir                                                  | nospam-432000        | jenkins | v1.33.0-beta.0 | 27 Mar 24 16:28 PDT |                     |
|         | /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-432000           |                      |         |                |                     |                     |
|         | start --dry-run                                                          |                      |         |                |                     |                     |
| start   | nospam-432000 --log_dir                                                  | nospam-432000        | jenkins | v1.33.0-beta.0 | 27 Mar 24 16:28 PDT |                     |
|         | /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-432000           |                      |         |                |                     |                     |
|         | start --dry-run                                                          |                      |         |                |                     |                     |
| start   | nospam-432000 --log_dir                                                  | nospam-432000        | jenkins | v1.33.0-beta.0 | 27 Mar 24 16:28 PDT |                     |
|         | /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-432000           |                      |         |                |                     |                     |
|         | start --dry-run                                                          |                      |         |                |                     |                     |
| pause   | nospam-432000 --log_dir                                                  | nospam-432000        | jenkins | v1.33.0-beta.0 | 27 Mar 24 16:28 PDT |                     |
|         | /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-432000           |                      |         |                |                     |                     |
|         | pause                                                                    |                      |         |                |                     |                     |
| pause   | nospam-432000 --log_dir                                                  | nospam-432000        | jenkins | v1.33.0-beta.0 | 27 Mar 24 16:28 PDT |                     |
|         | /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-432000           |                      |         |                |                     |                     |
|         | pause                                                                    |                      |         |                |                     |                     |
| pause   | nospam-432000 --log_dir                                                  | nospam-432000        | jenkins | v1.33.0-beta.0 | 27 Mar 24 16:28 PDT |                     |
|         | /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-432000           |                      |         |                |                     |                     |
|         | pause                                                                    |                      |         |                |                     |                     |
| unpause | nospam-432000 --log_dir                                                  | nospam-432000        | jenkins | v1.33.0-beta.0 | 27 Mar 24 16:28 PDT |                     |
|         | /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-432000           |                      |         |                |                     |                     |
|         | unpause                                                                  |                      |         |                |                     |                     |
| unpause | nospam-432000 --log_dir                                                  | nospam-432000        | jenkins | v1.33.0-beta.0 | 27 Mar 24 16:28 PDT |                     |
|         | /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-432000           |                      |         |                |                     |                     |
|         | unpause                                                                  |                      |         |                |                     |                     |
| unpause | nospam-432000 --log_dir                                                  | nospam-432000        | jenkins | v1.33.0-beta.0 | 27 Mar 24 16:28 PDT |                     |
|         | /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-432000           |                      |         |                |                     |                     |
|         | unpause                                                                  |                      |         |                |                     |                     |
| stop    | nospam-432000 --log_dir                                                  | nospam-432000        | jenkins | v1.33.0-beta.0 | 27 Mar 24 16:28 PDT | 27 Mar 24 16:28 PDT |
|         | /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-432000           |                      |         |                |                     |                     |
|         | stop                                                                     |                      |         |                |                     |                     |
| stop    | nospam-432000 --log_dir                                                  | nospam-432000        | jenkins | v1.33.0-beta.0 | 27 Mar 24 16:28 PDT | 27 Mar 24 16:28 PDT |
|         | /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-432000           |                      |         |                |                     |                     |
|         | stop                                                                     |                      |         |                |                     |                     |
| stop    | nospam-432000 --log_dir                                                  | nospam-432000        | jenkins | v1.33.0-beta.0 | 27 Mar 24 16:28 PDT | 27 Mar 24 16:28 PDT |
|         | /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-432000           |                      |         |                |                     |                     |
|         | stop                                                                     |                      |         |                |                     |                     |
| delete  | -p nospam-432000                                                         | nospam-432000        | jenkins | v1.33.0-beta.0 | 27 Mar 24 16:28 PDT | 27 Mar 24 16:28 PDT |
| start   | -p functional-746000                                                     | functional-746000    | jenkins | v1.33.0-beta.0 | 27 Mar 24 16:28 PDT |                     |
|         | --memory=4000                                                            |                      |         |                |                     |                     |
|         | --apiserver-port=8441                                                    |                      |         |                |                     |                     |
|         | --wait=all --driver=qemu2                                                |                      |         |                |                     |                     |
| start   | -p functional-746000                                                     | functional-746000    | jenkins | v1.33.0-beta.0 | 27 Mar 24 16:28 PDT |                     |
|         | --alsologtostderr -v=8                                                   |                      |         |                |                     |                     |
| cache   | functional-746000 cache add                                              | functional-746000    | jenkins | v1.33.0-beta.0 | 27 Mar 24 16:28 PDT | 27 Mar 24 16:28 PDT |
|         | registry.k8s.io/pause:3.1                                                |                      |         |                |                     |                     |
| cache   | functional-746000 cache add                                              | functional-746000    | jenkins | v1.33.0-beta.0 | 27 Mar 24 16:28 PDT | 27 Mar 24 16:28 PDT |
|         | registry.k8s.io/pause:3.3                                                |                      |         |                |                     |                     |
| cache   | functional-746000 cache add                                              | functional-746000    | jenkins | v1.33.0-beta.0 | 27 Mar 24 16:28 PDT | 27 Mar 24 16:28 PDT |
|         | registry.k8s.io/pause:latest                                             |                      |         |                |                     |                     |
| cache   | functional-746000 cache add                                              | functional-746000    | jenkins | v1.33.0-beta.0 | 27 Mar 24 16:28 PDT | 27 Mar 24 16:28 PDT |
|         | minikube-local-cache-test:functional-746000                              |                      |         |                |                     |                     |
| cache   | functional-746000 cache delete                                           | functional-746000    | jenkins | v1.33.0-beta.0 | 27 Mar 24 16:28 PDT | 27 Mar 24 16:28 PDT |
|         | minikube-local-cache-test:functional-746000                              |                      |         |                |                     |                     |
| cache   | delete                                                                   | minikube             | jenkins | v1.33.0-beta.0 | 27 Mar 24 16:28 PDT | 27 Mar 24 16:28 PDT |
|         | registry.k8s.io/pause:3.3                                                |                      |         |                |                     |                     |
| cache   | list                                                                     | minikube             | jenkins | v1.33.0-beta.0 | 27 Mar 24 16:28 PDT | 27 Mar 24 16:28 PDT |
| ssh     | functional-746000 ssh sudo                                               | functional-746000    | jenkins | v1.33.0-beta.0 | 27 Mar 24 16:28 PDT |                     |
|         | crictl images                                                            |                      |         |                |                     |                     |
| ssh     | functional-746000                                                        | functional-746000    | jenkins | v1.33.0-beta.0 | 27 Mar 24 16:28 PDT |                     |
|         | ssh sudo docker rmi                                                      |                      |         |                |                     |                     |
|         | registry.k8s.io/pause:latest                                             |                      |         |                |                     |                     |
| ssh     | functional-746000 ssh                                                    | functional-746000    | jenkins | v1.33.0-beta.0 | 27 Mar 24 16:28 PDT |                     |
|         | sudo crictl inspecti                                                     |                      |         |                |                     |                     |
|         | registry.k8s.io/pause:latest                                             |                      |         |                |                     |                     |
| cache   | functional-746000 cache reload                                           | functional-746000    | jenkins | v1.33.0-beta.0 | 27 Mar 24 16:28 PDT | 27 Mar 24 16:28 PDT |
| ssh     | functional-746000 ssh                                                    | functional-746000    | jenkins | v1.33.0-beta.0 | 27 Mar 24 16:28 PDT |                     |
|         | sudo crictl inspecti                                                     |                      |         |                |                     |                     |
|         | registry.k8s.io/pause:latest                                             |                      |         |                |                     |                     |
| cache   | delete                                                                   | minikube             | jenkins | v1.33.0-beta.0 | 27 Mar 24 16:28 PDT | 27 Mar 24 16:28 PDT |
|         | registry.k8s.io/pause:3.1                                                |                      |         |                |                     |                     |
| cache   | delete                                                                   | minikube             | jenkins | v1.33.0-beta.0 | 27 Mar 24 16:28 PDT | 27 Mar 24 16:28 PDT |
|         | registry.k8s.io/pause:latest                                             |                      |         |                |                     |                     |
| kubectl | functional-746000 kubectl --                                             | functional-746000    | jenkins | v1.33.0-beta.0 | 27 Mar 24 16:28 PDT |                     |
|         | --context functional-746000                                              |                      |         |                |                     |                     |
|         | get pods                                                                 |                      |         |                |                     |                     |
| start   | -p functional-746000                                                     | functional-746000    | jenkins | v1.33.0-beta.0 | 27 Mar 24 16:28 PDT |                     |
|         | --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision |                      |         |                |                     |                     |
|         | --wait=all                                                               |                      |         |                |                     |                     |
|---------|--------------------------------------------------------------------------|----------------------|---------|----------------|---------------------|---------------------|

==> Last Start <==
Log file created at: 2024/03/27 16:28:40
Running on machine: MacOS-M1-Agent-2
Binary: Built with gc go1.22.1 for darwin/arm64
Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
I0327 16:28:40.911984    7308 out.go:291] Setting OutFile to fd 1 ...
I0327 16:28:40.912120    7308 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0327 16:28:40.912122    7308 out.go:304] Setting ErrFile to fd 2...
I0327 16:28:40.912124    7308 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0327 16:28:40.912244    7308 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18485-6511/.minikube/bin
I0327 16:28:40.913250    7308 out.go:298] Setting JSON to false
I0327 16:28:40.929214    7308 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":5291,"bootTime":1711576829,"procs":472,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.3.1","kernelVersion":"23.3.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
W0327 16:28:40.929271    7308 start.go:137] gopshost.Virtualization returned error: not implemented yet
I0327 16:28:40.935389    7308 out.go:177] * [functional-746000] minikube v1.33.0-beta.0 on Darwin 14.3.1 (arm64)
I0327 16:28:40.943374    7308 out.go:177]   - MINIKUBE_LOCATION=18485
I0327 16:28:40.948268    7308 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/18485-6511/kubeconfig
I0327 16:28:40.943436    7308 notify.go:220] Checking for updates...
I0327 16:28:40.952692    7308 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
I0327 16:28:40.955281    7308 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
I0327 16:28:40.958369    7308 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18485-6511/.minikube
I0327 16:28:40.961320    7308 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
I0327 16:28:40.964628    7308 config.go:182] Loaded profile config "functional-746000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.29.3
I0327 16:28:40.964676    7308 driver.go:392] Setting default libvirt URI to qemu:///system
I0327 16:28:40.969306    7308 out.go:177] * Using the qemu2 driver based on existing profile
I0327 16:28:40.976268    7308 start.go:297] selected driver: qemu2
I0327 16:28:40.976270    7308 start.go:901] validating driver "qemu2" against &{Name:functional-746000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18485/minikube-v1.33.0-1711559712-18485-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1711559786-18485@sha256:2dcab64da240d825290a528fa79ad3c32db45fe5f8be5150468234a7114eff82 Memory:4000 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.3 ClusterName:functional-746000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8441 KubernetesVersion:v1.29.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
I0327 16:28:40.976339    7308 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
I0327 16:28:40.978598    7308 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
I0327 16:28:40.978642    7308 cni.go:84] Creating CNI manager for ""
I0327 16:28:40.978649    7308 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
I0327 16:28:40.978707    7308 start.go:340] cluster config:
{Name:functional-746000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18485/minikube-v1.33.0-1711559712-18485-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1711559786-18485@sha256:2dcab64da240d825290a528fa79ad3c32db45fe5f8be5150468234a7114eff82 Memory:4000 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.3 ClusterName:functional-746000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8441 KubernetesVersion:v1.29.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
I0327 16:28:40.983007    7308 iso.go:125] acquiring lock: {Name:mk26a7f41e004242919e6d5076ec0f9645f820ad Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
I0327 16:28:40.990271    7308 out.go:177] * Starting "functional-746000" primary control-plane node in "functional-746000" cluster
I0327 16:28:40.994311    7308 preload.go:132] Checking if preload exists for k8s version v1.29.3 and runtime docker
I0327 16:28:40.994330    7308 preload.go:147] Found local preload: /Users/jenkins/minikube-integration/18485-6511/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.29.3-docker-overlay2-arm64.tar.lz4
I0327 16:28:40.994343    7308 cache.go:56] Caching tarball of preloaded images
I0327 16:28:40.994407    7308 preload.go:173] Found /Users/jenkins/minikube-integration/18485-6511/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.29.3-docker-overlay2-arm64.tar.lz4 in cache, skipping download
I0327 16:28:40.994416    7308 cache.go:59] Finished verifying existence of preloaded tar for v1.29.3 on docker
I0327 16:28:40.994477    7308 profile.go:142] Saving config to /Users/jenkins/minikube-integration/18485-6511/.minikube/profiles/functional-746000/config.json ...
I0327 16:28:40.994930    7308 start.go:360] acquireMachinesLock for functional-746000: {Name:mk6b67ca98ff36f470ae98389325f11bd4950dc8 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
I0327 16:28:40.994963    7308 start.go:364] duration metric: took 28.166µs to acquireMachinesLock for "functional-746000"
I0327 16:28:40.994975    7308 start.go:96] Skipping create...Using existing machine configuration
I0327 16:28:40.994979    7308 fix.go:54] fixHost starting: 
I0327 16:28:40.995101    7308 fix.go:112] recreateIfNeeded on functional-746000: state=Stopped err=<nil>
W0327 16:28:40.995110    7308 fix.go:138] unexpected machine state, will restart: <nil>
I0327 16:28:41.002366    7308 out.go:177] * Restarting existing qemu2 VM for "functional-746000" ...
I0327 16:28:41.006315    7308 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 4000 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18485-6511/.minikube/machines/functional-746000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18485-6511/.minikube/machines/functional-746000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18485-6511/.minikube/machines/functional-746000/qemu.pid -device virtio-net-pci,netdev=net0,mac=4e:ba:a4:21:63:1b -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18485-6511/.minikube/machines/functional-746000/disk.qcow2
I0327 16:28:41.008486    7308 main.go:141] libmachine: STDOUT: 
I0327 16:28:41.008512    7308 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused

I0327 16:28:41.008544    7308 fix.go:56] duration metric: took 13.564458ms for fixHost
I0327 16:28:41.008556    7308 start.go:83] releasing machines lock for "functional-746000", held for 13.582209ms
W0327 16:28:41.008562    7308 start.go:713] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
W0327 16:28:41.008597    7308 out.go:239] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
I0327 16:28:41.008602    7308 start.go:728] Will try again in 5 seconds ...
I0327 16:28:46.009558    7308 start.go:360] acquireMachinesLock for functional-746000: {Name:mk6b67ca98ff36f470ae98389325f11bd4950dc8 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
I0327 16:28:46.009950    7308 start.go:364] duration metric: took 292.084µs to acquireMachinesLock for "functional-746000"
I0327 16:28:46.010092    7308 start.go:96] Skipping create...Using existing machine configuration
I0327 16:28:46.010102    7308 fix.go:54] fixHost starting: 
I0327 16:28:46.010768    7308 fix.go:112] recreateIfNeeded on functional-746000: state=Stopped err=<nil>
W0327 16:28:46.010789    7308 fix.go:138] unexpected machine state, will restart: <nil>
I0327 16:28:46.015111    7308 out.go:177] * Restarting existing qemu2 VM for "functional-746000" ...
I0327 16:28:46.023282    7308 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 4000 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18485-6511/.minikube/machines/functional-746000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18485-6511/.minikube/machines/functional-746000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18485-6511/.minikube/machines/functional-746000/qemu.pid -device virtio-net-pci,netdev=net0,mac=4e:ba:a4:21:63:1b -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18485-6511/.minikube/machines/functional-746000/disk.qcow2
I0327 16:28:46.032746    7308 main.go:141] libmachine: STDOUT: 
I0327 16:28:46.032804    7308 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused

I0327 16:28:46.032879    7308 fix.go:56] duration metric: took 22.777584ms for fixHost
I0327 16:28:46.032894    7308 start.go:83] releasing machines lock for "functional-746000", held for 22.909834ms
W0327 16:28:46.033118    7308 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p functional-746000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
I0327 16:28:46.040140    7308 out.go:177] 
W0327 16:28:46.044001    7308 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
W0327 16:28:46.044026    7308 out.go:239] * 
W0327 16:28:46.046235    7308 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
│                                                                                             │
│    * If the above advice does not help, please let us know:                                 │
│      https://github.com/kubernetes/minikube/issues/new/choose                               │
│                                                                                             │
│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
│                                                                                             │
╰─────────────────────────────────────────────────────────────────────────────────────────────╯
I0327 16:28:46.054076    7308 out.go:177] 

--- FAIL: TestFunctional/serial/LogsFileCmd (0.07s)
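
Both restart attempts in the log above die at the same point: socket_vmnet_client cannot reach the unix socket /var/run/socket_vmnet, so QEMU never receives its network file descriptor and the guest is never started. A minimal sketch of that failure mode, assuming only the socket path shown in the libmachine lines; with no socket_vmnet daemon listening, dialing the socket reproduces the captured "Connection refused":

    package main

    import (
        "fmt"
        "net"
        "time"
    )

    func main() {
        // Socket path taken verbatim from the libmachine log lines above.
        const sock = "/var/run/socket_vmnet"

        conn, err := net.DialTimeout("unix", sock, 2*time.Second)
        if err != nil {
            // With nothing listening this prints
            // "dial unix /var/run/socket_vmnet: connect: connection refused",
            // the same condition libmachine reported on STDERR.
            fmt.Println("probe failed:", err)
            return
        }
        conn.Close()
        fmt.Println("socket_vmnet is accepting connections")
    }

The log does not show why the daemon is down on this host; every qemu2 test in this report that needs guest networking fails on this same dial.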

TestFunctional/serial/InvalidService (0.03s)

=== RUN   TestFunctional/serial/InvalidService
functional_test.go:2317: (dbg) Run:  kubectl --context functional-746000 apply -f testdata/invalidsvc.yaml
functional_test.go:2317: (dbg) Non-zero exit: kubectl --context functional-746000 apply -f testdata/invalidsvc.yaml: exit status 1 (27.564375ms)

** stderr ** 
	error: context "functional-746000" does not exist

** /stderr **
functional_test.go:2319: kubectl --context functional-746000 apply -f testdata/invalidsvc.yaml failed: exit status 1
--- FAIL: TestFunctional/serial/InvalidService (0.03s)
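
The error above is kubectl giving up before any API call: the profile's context was never written to the kubeconfig because the cluster never started. A sketch of the same lookup using client-go's clientcmd loader (the hard-coded context name is just this profile's; the printed text mirrors kubectl's message):

    package main

    import (
        "fmt"

        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        // Load $KUBECONFIG (or ~/.kube/config), the same file kubectl reads.
        rules := clientcmd.NewDefaultClientConfigLoadingRules()
        cfg, err := rules.Load()
        if err != nil {
            fmt.Println("load kubeconfig:", err)
            return
        }
        if _, ok := cfg.Contexts["functional-746000"]; !ok {
            // The condition behind every kubectl failure in this report.
            fmt.Println(`context "functional-746000" does not exist`)
        }
    }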

TestFunctional/parallel/DashboardCmd (0.2s)

=== RUN   TestFunctional/parallel/DashboardCmd
=== PAUSE TestFunctional/parallel/DashboardCmd

=== CONT  TestFunctional/parallel/DashboardCmd
functional_test.go:901: (dbg) daemon: [out/minikube-darwin-arm64 dashboard --url --port 36195 -p functional-746000 --alsologtostderr -v=1]
functional_test.go:914: output didn't produce a URL
functional_test.go:906: (dbg) stopping [out/minikube-darwin-arm64 dashboard --url --port 36195 -p functional-746000 --alsologtostderr -v=1] ...
functional_test.go:906: (dbg) [out/minikube-darwin-arm64 dashboard --url --port 36195 -p functional-746000 --alsologtostderr -v=1] stdout:
functional_test.go:906: (dbg) [out/minikube-darwin-arm64 dashboard --url --port 36195 -p functional-746000 --alsologtostderr -v=1] stderr:
I0327 16:29:40.441840    7628 out.go:291] Setting OutFile to fd 1 ...
I0327 16:29:40.442218    7628 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0327 16:29:40.442222    7628 out.go:304] Setting ErrFile to fd 2...
I0327 16:29:40.442225    7628 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0327 16:29:40.442387    7628 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18485-6511/.minikube/bin
I0327 16:29:40.442609    7628 mustload.go:65] Loading cluster: functional-746000
I0327 16:29:40.442810    7628 config.go:182] Loaded profile config "functional-746000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.29.3
I0327 16:29:40.447844    7628 out.go:177] * The control-plane node functional-746000 host is not running: state=Stopped
I0327 16:29:40.451685    7628 out.go:177]   To start a cluster, run: "minikube start -p functional-746000"
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p functional-746000 -n functional-746000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p functional-746000 -n functional-746000: exit status 7 (43.8135ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "functional-746000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestFunctional/parallel/DashboardCmd (0.20s)
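
"output didn't produce a URL" means the harness watched the dashboard command's stdout and never saw a URL line before stopping the process. A sketch of that kind of check; the real helper lives in functional_test.go, so this scanner is an illustration, not its code:

    package main

    import (
        "bufio"
        "fmt"
        "io"
        "strings"
    )

    // firstURL returns the first output line that looks like a dashboard URL.
    func firstURL(r io.Reader) (string, bool) {
        sc := bufio.NewScanner(r)
        for sc.Scan() {
            line := strings.TrimSpace(sc.Text())
            if strings.HasPrefix(line, "http://") || strings.HasPrefix(line, "https://") {
                return line, true
            }
        }
        return "", false
    }

    func main() {
        // With the host stopped, stdout carries only the advice text, so no URL appears.
        out := strings.NewReader("* The control-plane node functional-746000 host is not running: state=Stopped\n")
        if url, ok := firstURL(out); ok {
            fmt.Println("dashboard at", url)
        } else {
            fmt.Println("output didn't produce a URL")
        }
    }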

TestFunctional/parallel/StatusCmd (0.13s)

=== RUN   TestFunctional/parallel/StatusCmd
=== PAUSE TestFunctional/parallel/StatusCmd

=== CONT  TestFunctional/parallel/StatusCmd
functional_test.go:850: (dbg) Run:  out/minikube-darwin-arm64 -p functional-746000 status
functional_test.go:850: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-746000 status: exit status 7 (31.81ms)

-- stdout --
	functional-746000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
functional_test.go:852: failed to run minikube status. args "out/minikube-darwin-arm64 -p functional-746000 status" : exit status 7
functional_test.go:856: (dbg) Run:  out/minikube-darwin-arm64 -p functional-746000 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}
functional_test.go:856: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-746000 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}: exit status 7 (32.053959ms)

-- stdout --
	host:Stopped,kublet:Stopped,apiserver:Stopped,kubeconfig:Stopped

-- /stdout --
functional_test.go:858: failed to run minikube status with custom format: args "out/minikube-darwin-arm64 -p functional-746000 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}": exit status 7
functional_test.go:868: (dbg) Run:  out/minikube-darwin-arm64 -p functional-746000 status -o json
functional_test.go:868: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-746000 status -o json: exit status 7 (31.738542ms)

-- stdout --
	{"Name":"functional-746000","Host":"Stopped","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Stopped","Worker":false}

-- /stdout --
functional_test.go:870: failed to run minikube status with json output. args "out/minikube-darwin-arm64 -p functional-746000 status -o json" : exit status 7
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p functional-746000 -n functional-746000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p functional-746000 -n functional-746000: exit status 7 (32.439208ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "functional-746000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestFunctional/parallel/StatusCmd (0.13s)
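
The three status calls above differ only in rendering: the default table, a custom Go template passed with -f, and JSON via -o json; all read the same status struct, and exit status 7 encodes stopped components rather than a crashed command (the helper itself notes "may be ok"). A sketch of the -f path against the JSON shape shown above; the struct mirrors that output, and the test's own format string spells "kublet", kept here so the result matches the log:

    package main

    import (
        "os"
        "text/template"
    )

    // Status mirrors the fields of the -o json output above.
    type Status struct {
        Name, Host, Kubelet, APIServer, Kubeconfig string
        Worker                                     bool
    }

    func main() {
        s := Status{Name: "functional-746000", Host: "Stopped",
            Kubelet: "Stopped", APIServer: "Stopped", Kubeconfig: "Stopped"}

        // The test's custom format, misspelling included.
        const f = "host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}\n"
        template.Must(template.New("status").Parse(f)).Execute(os.Stdout, s)
        // Prints: host:Stopped,kublet:Stopped,apiserver:Stopped,kubeconfig:Stopped
    }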

TestFunctional/parallel/ServiceCmdConnect (0.14s)

=== RUN   TestFunctional/parallel/ServiceCmdConnect
=== PAUSE TestFunctional/parallel/ServiceCmdConnect

=== CONT  TestFunctional/parallel/ServiceCmdConnect
functional_test.go:1623: (dbg) Run:  kubectl --context functional-746000 create deployment hello-node-connect --image=registry.k8s.io/echoserver-arm:1.8
functional_test.go:1623: (dbg) Non-zero exit: kubectl --context functional-746000 create deployment hello-node-connect --image=registry.k8s.io/echoserver-arm:1.8: exit status 1 (26.903125ms)

** stderr ** 
	error: context "functional-746000" does not exist

** /stderr **
functional_test.go:1629: failed to create hello-node deployment with this command "kubectl --context functional-746000 create deployment hello-node-connect --image=registry.k8s.io/echoserver-arm:1.8": exit status 1.
functional_test.go:1594: service test failed - dumping debug information
functional_test.go:1595: -----------------------service failure post-mortem--------------------------------
functional_test.go:1598: (dbg) Run:  kubectl --context functional-746000 describe po hello-node-connect
functional_test.go:1598: (dbg) Non-zero exit: kubectl --context functional-746000 describe po hello-node-connect: exit status 1 (26.738208ms)

** stderr ** 
	Error in configuration: context was not found for specified context: functional-746000

** /stderr **
functional_test.go:1600: "kubectl --context functional-746000 describe po hello-node-connect" failed: exit status 1
functional_test.go:1602: hello-node pod describe:
functional_test.go:1604: (dbg) Run:  kubectl --context functional-746000 logs -l app=hello-node-connect
functional_test.go:1604: (dbg) Non-zero exit: kubectl --context functional-746000 logs -l app=hello-node-connect: exit status 1 (27.188333ms)

** stderr ** 
	Error in configuration: context was not found for specified context: functional-746000

** /stderr **
functional_test.go:1606: "kubectl --context functional-746000 logs -l app=hello-node-connect" failed: exit status 1
functional_test.go:1608: hello-node logs:
functional_test.go:1610: (dbg) Run:  kubectl --context functional-746000 describe svc hello-node-connect
functional_test.go:1610: (dbg) Non-zero exit: kubectl --context functional-746000 describe svc hello-node-connect: exit status 1 (26.79775ms)

** stderr ** 
	Error in configuration: context was not found for specified context: functional-746000

** /stderr **
functional_test.go:1612: "kubectl --context functional-746000 describe svc hello-node-connect" failed: exit status 1
functional_test.go:1614: hello-node svc describe:
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p functional-746000 -n functional-746000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p functional-746000 -n functional-746000: exit status 7 (31.929958ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "functional-746000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestFunctional/parallel/ServiceCmdConnect (0.14s)

TestFunctional/parallel/PersistentVolumeClaim (0.03s)

=== RUN   TestFunctional/parallel/PersistentVolumeClaim
=== PAUSE TestFunctional/parallel/PersistentVolumeClaim

=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:45: failed waiting for storage-provisioner: client config: context "functional-746000" does not exist
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p functional-746000 -n functional-746000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p functional-746000 -n functional-746000: exit status 7 (32.078583ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "functional-746000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestFunctional/parallel/PersistentVolumeClaim (0.03s)

TestFunctional/parallel/SSHCmd (0.12s)

=== RUN   TestFunctional/parallel/SSHCmd
=== PAUSE TestFunctional/parallel/SSHCmd

=== CONT  TestFunctional/parallel/SSHCmd
functional_test.go:1721: (dbg) Run:  out/minikube-darwin-arm64 -p functional-746000 ssh "echo hello"
functional_test.go:1721: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-746000 ssh "echo hello": exit status 83 (46.935125ms)

-- stdout --
	* The control-plane node functional-746000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-746000"

-- /stdout --
functional_test.go:1726: failed to run an ssh command. args "out/minikube-darwin-arm64 -p functional-746000 ssh \"echo hello\"" : exit status 83
functional_test.go:1730: expected minikube ssh command output to be -"hello"- but got *"* The control-plane node functional-746000 host is not running: state=Stopped\n  To start a cluster, run: \"minikube start -p functional-746000\"\n"*. args "out/minikube-darwin-arm64 -p functional-746000 ssh \"echo hello\""
functional_test.go:1738: (dbg) Run:  out/minikube-darwin-arm64 -p functional-746000 ssh "cat /etc/hostname"
functional_test.go:1738: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-746000 ssh "cat /etc/hostname": exit status 83 (41.618916ms)

-- stdout --
	* The control-plane node functional-746000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-746000"

-- /stdout --
functional_test.go:1744: failed to run an ssh command. args "out/minikube-darwin-arm64 -p functional-746000 ssh \"cat /etc/hostname\"" : exit status 83
functional_test.go:1748: expected minikube ssh command output to be -"functional-746000"- but got *"* The control-plane node functional-746000 host is not running: state=Stopped\n  To start a cluster, run: \"minikube start -p functional-746000\"\n"*. args "out/minikube-darwin-arm64 -p functional-746000 ssh \"cat /etc/hostname\""
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p functional-746000 -n functional-746000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p functional-746000 -n functional-746000: exit status 7 (32.968167ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "functional-746000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestFunctional/parallel/SSHCmd (0.12s)
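
Each "(dbg) Non-zero exit" line above is the harness running the binary, capturing combined output, and recording the exit code; here exit status 83 accompanies the stopped-host advice rather than an ssh transport error. A sketch of how such a code is captured with os/exec, mirroring the invocation from the log (the binary path is the build under test):

    package main

    import (
        "errors"
        "fmt"
        "os/exec"
    )

    func main() {
        cmd := exec.Command("out/minikube-darwin-arm64", "-p", "functional-746000", "ssh", "echo hello")
        out, err := cmd.CombinedOutput()

        var ee *exec.ExitError
        if errors.As(err, &ee) {
            // Against the stopped profile above this reports 83.
            fmt.Printf("exit status %d\n%s", ee.ExitCode(), out)
            return
        }
        fmt.Printf("%s", out)
    }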

TestFunctional/parallel/CpCmd (0.28s)

=== RUN   TestFunctional/parallel/CpCmd
=== PAUSE TestFunctional/parallel/CpCmd

=== CONT  TestFunctional/parallel/CpCmd
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-arm64 -p functional-746000 cp testdata/cp-test.txt /home/docker/cp-test.txt
helpers_test.go:556: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-746000 cp testdata/cp-test.txt /home/docker/cp-test.txt: exit status 83 (55.224125ms)

-- stdout --
	* The control-plane node functional-746000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-746000"

-- /stdout --
helpers_test.go:561: failed to run an cp command. args "out/minikube-darwin-arm64 -p functional-746000 cp testdata/cp-test.txt /home/docker/cp-test.txt" : exit status 83
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p functional-746000 ssh -n functional-746000 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-746000 ssh -n functional-746000 "sudo cat /home/docker/cp-test.txt": exit status 83 (43.173708ms)

-- stdout --
	* The control-plane node functional-746000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-746000"

-- /stdout --
helpers_test.go:539: failed to run an cp command. args "out/minikube-darwin-arm64 -p functional-746000 ssh -n functional-746000 \"sudo cat /home/docker/cp-test.txt\"" : exit status 83
helpers_test.go:573: /testdata/cp-test.txt content mismatch (-want +got):
  strings.Join({
+ 	"* ",
  	"T",
- 	"est file for checking file cp process",
+ 	"he control-plane node functional-746000 host is not running: sta",
+ 	"te=Stopped\n  To start a cluster, run: \"minikube start -p functio",
+ 	"nal-746000\"\n",
  }, "")
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-arm64 -p functional-746000 cp functional-746000:/home/docker/cp-test.txt /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestFunctionalparallelCpCmd574005683/001/cp-test.txt
helpers_test.go:556: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-746000 cp functional-746000:/home/docker/cp-test.txt /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestFunctionalparallelCpCmd574005683/001/cp-test.txt: exit status 83 (41.261709ms)

-- stdout --
	* The control-plane node functional-746000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-746000"

-- /stdout --
helpers_test.go:561: failed to run an cp command. args "out/minikube-darwin-arm64 -p functional-746000 cp functional-746000:/home/docker/cp-test.txt /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestFunctionalparallelCpCmd574005683/001/cp-test.txt" : exit status 83
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p functional-746000 ssh -n functional-746000 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-746000 ssh -n functional-746000 "sudo cat /home/docker/cp-test.txt": exit status 83 (43.888458ms)

-- stdout --
	* The control-plane node functional-746000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-746000"

-- /stdout --
helpers_test.go:539: failed to run an cp command. args "out/minikube-darwin-arm64 -p functional-746000 ssh -n functional-746000 \"sudo cat /home/docker/cp-test.txt\"" : exit status 83
helpers_test.go:528: failed to read test file 'testdata/cp-test.txt' : open /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestFunctionalparallelCpCmd574005683/001/cp-test.txt: no such file or directory
helpers_test.go:573: /testdata/cp-test.txt content mismatch (-want +got):
  string(
- 	"* The control-plane node functional-746000 host is not running: state=Stopped\n  To start a cluster, run: \"minikube start -p functional-746000\"\n",
+ 	"",
  )
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-arm64 -p functional-746000 cp testdata/cp-test.txt /tmp/does/not/exist/cp-test.txt
helpers_test.go:556: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-746000 cp testdata/cp-test.txt /tmp/does/not/exist/cp-test.txt: exit status 83 (43.904833ms)

-- stdout --
	* The control-plane node functional-746000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-746000"

-- /stdout --
helpers_test.go:561: failed to run an cp command. args "out/minikube-darwin-arm64 -p functional-746000 cp testdata/cp-test.txt /tmp/does/not/exist/cp-test.txt" : exit status 83
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p functional-746000 ssh -n functional-746000 "sudo cat /tmp/does/not/exist/cp-test.txt"
helpers_test.go:534: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-746000 ssh -n functional-746000 "sudo cat /tmp/does/not/exist/cp-test.txt": exit status 83 (46.86075ms)

-- stdout --
	* The control-plane node functional-746000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-746000"

-- /stdout --
helpers_test.go:539: failed to run an cp command. args "out/minikube-darwin-arm64 -p functional-746000 ssh -n functional-746000 \"sudo cat /tmp/does/not/exist/cp-test.txt\"" : exit status 83
helpers_test.go:573: /testdata/cp-test.txt content mismatch (-want +got):
  strings.Join({
+ 	"* ",
  	"T",
- 	"est file for checking file cp process",
+ 	"he control-plane node functional-746000 host is not running: sta",
+ 	"te=Stopped\n  To start a cluster, run: \"minikube start -p functio",
+ 	"nal-746000\"\n",
  }, "")
--- FAIL: TestFunctional/parallel/CpCmd (0.28s)
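
The "(-want +got)" hunks above are go-cmp diffs: "-" lines are the expected testdata/cp-test.txt content, "+" lines are what the cp/ssh round-trip actually returned (the stopped-host advice). A minimal reproduction of that diff shape, assuming github.com/google/go-cmp, the library this convention comes from:

    package main

    import (
        "fmt"

        "github.com/google/go-cmp/cmp"
    )

    func main() {
        want := "Test file for checking file cp process"
        got := "* The control-plane node functional-746000 host is not running: state=Stopped\n" +
            "  To start a cluster, run: \"minikube start -p functional-746000\"\n"

        if diff := cmp.Diff(want, got); diff != "" {
            // Same shape as the mismatch output above.
            fmt.Printf("/testdata/cp-test.txt content mismatch (-want +got):\n%s", diff)
        }
    }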

TestFunctional/parallel/FileSync (0.07s)

=== RUN   TestFunctional/parallel/FileSync
=== PAUSE TestFunctional/parallel/FileSync

=== CONT  TestFunctional/parallel/FileSync
functional_test.go:1925: Checking for existence of /etc/test/nested/copy/6926/hosts within VM
functional_test.go:1927: (dbg) Run:  out/minikube-darwin-arm64 -p functional-746000 ssh "sudo cat /etc/test/nested/copy/6926/hosts"
functional_test.go:1927: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-746000 ssh "sudo cat /etc/test/nested/copy/6926/hosts": exit status 83 (40.611709ms)

-- stdout --
	* The control-plane node functional-746000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-746000"

-- /stdout --
functional_test.go:1929: out/minikube-darwin-arm64 -p functional-746000 ssh "sudo cat /etc/test/nested/copy/6926/hosts" failed: exit status 83
functional_test.go:1932: file sync test content: * The control-plane node functional-746000 host is not running: state=Stopped
To start a cluster, run: "minikube start -p functional-746000"
functional_test.go:1942: /etc/sync.test content mismatch (-want +got):
  strings.Join({
+ 	"* ",
  	"T",
- 	"est file for checking file sync process",
+ 	"he control-plane node functional-746000 host is not running: sta",
+ 	"te=Stopped\n  To start a cluster, run: \"minikube start -p functio",
+ 	"nal-746000\"\n",
  }, "")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p functional-746000 -n functional-746000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p functional-746000 -n functional-746000: exit status 7 (32.581291ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "functional-746000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestFunctional/parallel/FileSync (0.07s)

TestFunctional/parallel/CertSync (0.3s)

=== RUN   TestFunctional/parallel/CertSync
=== PAUSE TestFunctional/parallel/CertSync

=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1968: Checking for existence of /etc/ssl/certs/6926.pem within VM
functional_test.go:1969: (dbg) Run:  out/minikube-darwin-arm64 -p functional-746000 ssh "sudo cat /etc/ssl/certs/6926.pem"
functional_test.go:1969: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-746000 ssh "sudo cat /etc/ssl/certs/6926.pem": exit status 83 (43.698667ms)

-- stdout --
	* The control-plane node functional-746000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-746000"

-- /stdout --
functional_test.go:1971: failed to check existence of "/etc/ssl/certs/6926.pem" inside minikube. args "out/minikube-darwin-arm64 -p functional-746000 ssh \"sudo cat /etc/ssl/certs/6926.pem\"": exit status 83
functional_test.go:1977: failed verify pem file. minikube_test.pem -> /etc/ssl/certs/6926.pem mismatch (-want +got):
  (
  	"""
- 	-----BEGIN CERTIFICATE-----
- 	MIIDsDCCApgCCQD5n0OIsOYIjDANBgkqhkiG9w0BAQsFADCBmTELMAkGA1UEBhMC
- 	VVMxEzARBgNVBAgMCkNhbGlmb3JuaWExFjAUBgNVBAcMDVNhbiBGcmFuY2lzY28x
- 	ETAPBgNVBAoMCG1pbmlrdWJlMRYwFAYDVQQLDA1QYXJ0eSBQYXJyb3RzMREwDwYD
- 	VQQDDAhtaW5pa3ViZTEfMB0GCSqGSIb3DQEJARYQbWluaWt1YmVAY25jZi5pbzAe
- 	Fw0yMDAzMDQyMTU2MjZaFw0yMTAzMDQyMTU2MjZaMIGZMQswCQYDVQQGEwJVUzET
- 	MBEGA1UECAwKQ2FsaWZvcm5pYTEWMBQGA1UEBwwNU2FuIEZyYW5jaXNjbzERMA8G
- 	A1UECgwIbWluaWt1YmUxFjAUBgNVBAsMDVBhcnR5IFBhcnJvdHMxETAPBgNVBAMM
- 	CG1pbmlrdWJlMR8wHQYJKoZIhvcNAQkBFhBtaW5pa3ViZUBjbmNmLmlvMIIBIjAN
- 	BgkqhkiG9w0BAQEFAAOCAQ8AMIIBCgKCAQEA/qVMQ/234ul5yWI1yaHvV4pZ5Ffy
- 	M0bSMjzZUwlsvzerXzF3WrdpeZs5GzBNBWL/Db9KziGHCtfX9j5okJqPvB2lxdL5
- 	d5hFIYSORSemYLX2kdlnlykY5fzmFLKIUO9xXs0YNF4JUMEBgGK6n/BdLvXDUULZ
- 	26QOKs6+iH7TAL4RtozxQ8YXKQArdmpeAvxy2PSZGvVk1htKtyuKQsiFqH3oRleK
- 	3mljXfC5LsoIJHqd/8lAsckH87+IfwYnJ1CNJM2gueaCf+HmudVrvXfHaszh1Wh1
- 	9HKPE95Azi6CKoBGlRGFxt8UR72YIcTjC/lYxzbHeCpU7RCiXfsC0iMTlQIDAQAB
- 	MA0GCSqGSIb3DQEBCwUAA4IBAQBhsKnghyBki4NOnK5gHm7ow+7S+xvkjJhXBQ6i
- 	/xQD4/GCZ1tH5iFHXmo+bt4jB9hvKLyN5M5a8TlDwwqTLIoPDQJh37UpSCwbY/6z
- 	nE2aP3N2ue1/DeY60tgAh1c1uJDMeTiFbalJqSkneaHpNfvEQhUORFoN4yQSVEYg
- 	+T9mzTAWQ55TeBgbRevmA25nXHdPAae1MvJWDeG+aJfhq1I2hCwaitJ3iSwgn2ew
- 	637It/aBkMLhsCKTHxlXDGUX401ddbc0ZiC308cyMbis3iBeh4RBjkFxP8eIWFmK
- 	sos/dyqdua742L1cOKYFbLJfjA1VyxJQUxQvWKkbaq0xi7ao
- 	-----END CERTIFICATE-----
+ 	* The control-plane node functional-746000 host is not running: state=Stopped
+ 	  To start a cluster, run: "minikube start -p functional-746000"
  	"""
  )
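
CertSync copies minikube_test.pem into the guest and expects byte-identical content both at the plain paths and at hashed names like /etc/ssl/certs/51391683.0 (the eight-hex-digit ".0" form follows OpenSSL's c_rehash subject-hash convention). A sketch that checks the expected file is a parseable certificate before any byte comparison; the file name assumes the test's testdata layout:

    package main

    import (
        "crypto/x509"
        "encoding/pem"
        "fmt"
        "os"
    )

    func main() {
        data, err := os.ReadFile("minikube_test.pem")
        if err != nil {
            fmt.Println(err)
            return
        }
        block, _ := pem.Decode(data)
        if block == nil || block.Type != "CERTIFICATE" {
            fmt.Println("not a PEM certificate")
            return
        }
        cert, err := x509.ParseCertificate(block.Bytes)
        if err != nil {
            fmt.Println(err)
            return
        }
        // Subject of the -want certificate above (O=minikube, OU=Party Parrots).
        fmt.Println(cert.Subject.String())
    }
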
functional_test.go:1968: Checking for existence of /usr/share/ca-certificates/6926.pem within VM
functional_test.go:1969: (dbg) Run:  out/minikube-darwin-arm64 -p functional-746000 ssh "sudo cat /usr/share/ca-certificates/6926.pem"
functional_test.go:1969: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-746000 ssh "sudo cat /usr/share/ca-certificates/6926.pem": exit status 83 (47.610916ms)

-- stdout --
	* The control-plane node functional-746000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-746000"

-- /stdout --
functional_test.go:1971: failed to check existence of "/usr/share/ca-certificates/6926.pem" inside minikube. args "out/minikube-darwin-arm64 -p functional-746000 ssh \"sudo cat /usr/share/ca-certificates/6926.pem\"": exit status 83
functional_test.go:1977: failed verify pem file. minikube_test.pem -> /usr/share/ca-certificates/6926.pem mismatch (-want +got):
  (
  	"""
- 	-----BEGIN CERTIFICATE-----
- 	MIIDsDCCApgCCQD5n0OIsOYIjDANBgkqhkiG9w0BAQsFADCBmTELMAkGA1UEBhMC
- 	VVMxEzARBgNVBAgMCkNhbGlmb3JuaWExFjAUBgNVBAcMDVNhbiBGcmFuY2lzY28x
- 	ETAPBgNVBAoMCG1pbmlrdWJlMRYwFAYDVQQLDA1QYXJ0eSBQYXJyb3RzMREwDwYD
- 	VQQDDAhtaW5pa3ViZTEfMB0GCSqGSIb3DQEJARYQbWluaWt1YmVAY25jZi5pbzAe
- 	Fw0yMDAzMDQyMTU2MjZaFw0yMTAzMDQyMTU2MjZaMIGZMQswCQYDVQQGEwJVUzET
- 	MBEGA1UECAwKQ2FsaWZvcm5pYTEWMBQGA1UEBwwNU2FuIEZyYW5jaXNjbzERMA8G
- 	A1UECgwIbWluaWt1YmUxFjAUBgNVBAsMDVBhcnR5IFBhcnJvdHMxETAPBgNVBAMM
- 	CG1pbmlrdWJlMR8wHQYJKoZIhvcNAQkBFhBtaW5pa3ViZUBjbmNmLmlvMIIBIjAN
- 	BgkqhkiG9w0BAQEFAAOCAQ8AMIIBCgKCAQEA/qVMQ/234ul5yWI1yaHvV4pZ5Ffy
- 	M0bSMjzZUwlsvzerXzF3WrdpeZs5GzBNBWL/Db9KziGHCtfX9j5okJqPvB2lxdL5
- 	d5hFIYSORSemYLX2kdlnlykY5fzmFLKIUO9xXs0YNF4JUMEBgGK6n/BdLvXDUULZ
- 	26QOKs6+iH7TAL4RtozxQ8YXKQArdmpeAvxy2PSZGvVk1htKtyuKQsiFqH3oRleK
- 	3mljXfC5LsoIJHqd/8lAsckH87+IfwYnJ1CNJM2gueaCf+HmudVrvXfHaszh1Wh1
- 	9HKPE95Azi6CKoBGlRGFxt8UR72YIcTjC/lYxzbHeCpU7RCiXfsC0iMTlQIDAQAB
- 	MA0GCSqGSIb3DQEBCwUAA4IBAQBhsKnghyBki4NOnK5gHm7ow+7S+xvkjJhXBQ6i
- 	/xQD4/GCZ1tH5iFHXmo+bt4jB9hvKLyN5M5a8TlDwwqTLIoPDQJh37UpSCwbY/6z
- 	nE2aP3N2ue1/DeY60tgAh1c1uJDMeTiFbalJqSkneaHpNfvEQhUORFoN4yQSVEYg
- 	+T9mzTAWQ55TeBgbRevmA25nXHdPAae1MvJWDeG+aJfhq1I2hCwaitJ3iSwgn2ew
- 	637It/aBkMLhsCKTHxlXDGUX401ddbc0ZiC308cyMbis3iBeh4RBjkFxP8eIWFmK
- 	sos/dyqdua742L1cOKYFbLJfjA1VyxJQUxQvWKkbaq0xi7ao
- 	-----END CERTIFICATE-----
+ 	* The control-plane node functional-746000 host is not running: state=Stopped
+ 	  To start a cluster, run: "minikube start -p functional-746000"
  	"""
  )
functional_test.go:1968: Checking for existence of /etc/ssl/certs/51391683.0 within VM
functional_test.go:1969: (dbg) Run:  out/minikube-darwin-arm64 -p functional-746000 ssh "sudo cat /etc/ssl/certs/51391683.0"
functional_test.go:1969: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-746000 ssh "sudo cat /etc/ssl/certs/51391683.0": exit status 83 (44.789125ms)

-- stdout --
	* The control-plane node functional-746000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-746000"

-- /stdout --
functional_test.go:1971: failed to check existence of "/etc/ssl/certs/51391683.0" inside minikube. args "out/minikube-darwin-arm64 -p functional-746000 ssh \"sudo cat /etc/ssl/certs/51391683.0\"": exit status 83
functional_test.go:1977: failed verify pem file. minikube_test.pem -> /etc/ssl/certs/51391683.0 mismatch (-want +got):
  (
  	"""
- 	-----BEGIN CERTIFICATE-----
- 	MIIDsDCCApgCCQD5n0OIsOYIjDANBgkqhkiG9w0BAQsFADCBmTELMAkGA1UEBhMC
- 	VVMxEzARBgNVBAgMCkNhbGlmb3JuaWExFjAUBgNVBAcMDVNhbiBGcmFuY2lzY28x
- 	ETAPBgNVBAoMCG1pbmlrdWJlMRYwFAYDVQQLDA1QYXJ0eSBQYXJyb3RzMREwDwYD
- 	VQQDDAhtaW5pa3ViZTEfMB0GCSqGSIb3DQEJARYQbWluaWt1YmVAY25jZi5pbzAe
- 	Fw0yMDAzMDQyMTU2MjZaFw0yMTAzMDQyMTU2MjZaMIGZMQswCQYDVQQGEwJVUzET
- 	MBEGA1UECAwKQ2FsaWZvcm5pYTEWMBQGA1UEBwwNU2FuIEZyYW5jaXNjbzERMA8G
- 	A1UECgwIbWluaWt1YmUxFjAUBgNVBAsMDVBhcnR5IFBhcnJvdHMxETAPBgNVBAMM
- 	CG1pbmlrdWJlMR8wHQYJKoZIhvcNAQkBFhBtaW5pa3ViZUBjbmNmLmlvMIIBIjAN
- 	BgkqhkiG9w0BAQEFAAOCAQ8AMIIBCgKCAQEA/qVMQ/234ul5yWI1yaHvV4pZ5Ffy
- 	M0bSMjzZUwlsvzerXzF3WrdpeZs5GzBNBWL/Db9KziGHCtfX9j5okJqPvB2lxdL5
- 	d5hFIYSORSemYLX2kdlnlykY5fzmFLKIUO9xXs0YNF4JUMEBgGK6n/BdLvXDUULZ
- 	26QOKs6+iH7TAL4RtozxQ8YXKQArdmpeAvxy2PSZGvVk1htKtyuKQsiFqH3oRleK
- 	3mljXfC5LsoIJHqd/8lAsckH87+IfwYnJ1CNJM2gueaCf+HmudVrvXfHaszh1Wh1
- 	9HKPE95Azi6CKoBGlRGFxt8UR72YIcTjC/lYxzbHeCpU7RCiXfsC0iMTlQIDAQAB
- 	MA0GCSqGSIb3DQEBCwUAA4IBAQBhsKnghyBki4NOnK5gHm7ow+7S+xvkjJhXBQ6i
- 	/xQD4/GCZ1tH5iFHXmo+bt4jB9hvKLyN5M5a8TlDwwqTLIoPDQJh37UpSCwbY/6z
- 	nE2aP3N2ue1/DeY60tgAh1c1uJDMeTiFbalJqSkneaHpNfvEQhUORFoN4yQSVEYg
- 	+T9mzTAWQ55TeBgbRevmA25nXHdPAae1MvJWDeG+aJfhq1I2hCwaitJ3iSwgn2ew
- 	637It/aBkMLhsCKTHxlXDGUX401ddbc0ZiC308cyMbis3iBeh4RBjkFxP8eIWFmK
- 	sos/dyqdua742L1cOKYFbLJfjA1VyxJQUxQvWKkbaq0xi7ao
- 	-----END CERTIFICATE-----
+ 	* The control-plane node functional-746000 host is not running: state=Stopped
+ 	  To start a cluster, run: "minikube start -p functional-746000"
  	"""
  )
functional_test.go:1995: Checking for existence of /etc/ssl/certs/69262.pem within VM
functional_test.go:1996: (dbg) Run:  out/minikube-darwin-arm64 -p functional-746000 ssh "sudo cat /etc/ssl/certs/69262.pem"
functional_test.go:1996: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-746000 ssh "sudo cat /etc/ssl/certs/69262.pem": exit status 83 (41.646625ms)

-- stdout --
	* The control-plane node functional-746000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-746000"

-- /stdout --
functional_test.go:1998: failed to check existence of "/etc/ssl/certs/69262.pem" inside minikube. args "out/minikube-darwin-arm64 -p functional-746000 ssh \"sudo cat /etc/ssl/certs/69262.pem\"": exit status 83
functional_test.go:2004: failed verify pem file. minikube_test2.pem -> /etc/ssl/certs/69262.pem mismatch (-want +got):
  (
  	"""
- 	-----BEGIN CERTIFICATE-----
- 	MIIEwDCCAqgCCQCUeXrVemI4eTANBgkqhkiG9w0BAQsFADAiMQswCQYDVQQGEwJV
- 	UzETMBEGA1UECAwKQ2FsaWZvcm5pYTAeFw0yMTA3MjEyMDM4MDdaFw0yMTA4MjAy
- 	MDM4MDdaMCIxCzAJBgNVBAYTAlVTMRMwEQYDVQQIDApDYWxpZm9ybmlhMIICIjAN
- 	BgkqhkiG9w0BAQEFAAOCAg8AMIICCgKCAgEAu1+sCiosrYIy83a+KtPdoGsKun+O
- 	jKhETWQrum5FGxqKyQzhHN8q6iZBI94m1sZb2xIJDcKaAsHpKf1z/5pkrWQW4JeL
- 	riqJ5U0kENy6mWGoFzPLQHkKMXSYUEkRjrgJphh5zLkWDzx6uwxHIrs5yaIwn71Z
- 	enYezHL8NyMd10+up1lNvH+xGqnSJ3ge7Rj+r5XbA3avvaJ9AVdiJMU5ArMw4Snu
- 	dLDSUueWFedmXoQud083EvDbMY61blNKUR6BKBJLPopH+9NnUP7FRU25lvTsA8qT
- 	zz/KertMFhCfwvK4y7a7+GbIPD3fItpm23GORuViEGgeRMH7GGSNVfm4VtK5mI4c
- 	XK9BrCE+FXowHOWU3MTH1qsvTigd3JZc8BKTaePJGMIDT1HIAaoK7SOoZOLka7bi
- 	IiLaojrIWY6tE5IIl46cQWLg09G+kjKVfEEvEHNe0t22I9etGeUUEwlcMITHmEdE
- 	WrXytC8dSNFObSG5B2MX2Ovm+yNblcK7TI7lW/tkbxtKs56J1mNmi4LXXBM8FdF8
- 	w9MpJc+ySVj2+f+eBE08amKjC9VVbBzNSw7MRaI9fFY5AAifJ8g55F3/KCNq5aTd
- 	rBADtAa5kQkTVjfMBhG++0Ow4f55hm73oJAy/qxb09OY7Vk9ky/K3l8GfWv8ozIF
- 	w+Oq6vdsspvtVJ8CAwEAATANBgkqhkiG9w0BAQsFAAOCAgEAGKVxsf13kYGaQJ+J
- 	6eyRZXlV5Bp+9EGtMPGsuVv2HJa4oxMBn7Xc/bUhjY9Is/ZwfOpPUPO/nQtSSPmO
- 	aozQj/27p8HDTW201fwLNiZMcppBdJvIQdDzCh6e2ikg3lqsw2BoLX1vbgc9HPml
- 	P8QCHEz2lricGdTuMRtBgH5x/ZkZGLbADQBeyoPTsPaQceRt5hPYXWifqiHhcJoL
- 	2T+XgbaHJ4lEhCU0IXJG0vlLuAyxQzO3gMeHK8BlLt/h/JCiDndo63a4XCkenmY8
- 	8/6Y9Lgh+O3954YgwdXBIS33CzhY7c+tfpag1hwpDHro/zsyLwdN2JxZqWymfg8T
- 	RyIeJ5VpY+CGm1fVTx84twQbiM241eZDYaW6Ap5/ZuxPbzY++KDMZHMuJYURorAU
- 	JE/SE6WltXpKTvtzyPNKt8fgPQmFzlBosDZCf/5EiyVqbLzideF1l+Rd//5gRBg0
- 	B63fOlCdxGtDeX1gKzekV4sXQIsWKW1BSCEInJt5lOS8Ex4JTXy8crwKo7hv0zPc
- 	sOjAXbtDYlPf/jPFGKzkgFACB87Bx4ZUZMp/ShOzjMt20MLGLJvCGCQIHetUz+GG
- 	/LTwSNhgWCheYkbDwXhCmqbN249xE3fNHC6zQ/IMJ30v/UWN4RldGfFzmoVBRUUX
- 	eQ7g5kXER7H/Lh/2V9FyaRhPpZM=
- 	-----END CERTIFICATE-----
+ 	* The control-plane node functional-746000 host is not running: state=Stopped
+ 	  To start a cluster, run: "minikube start -p functional-746000"
  	"""
  )
functional_test.go:1995: Checking for existence of /usr/share/ca-certificates/69262.pem within VM
functional_test.go:1996: (dbg) Run:  out/minikube-darwin-arm64 -p functional-746000 ssh "sudo cat /usr/share/ca-certificates/69262.pem"
functional_test.go:1996: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-746000 ssh "sudo cat /usr/share/ca-certificates/69262.pem": exit status 83 (42.791125ms)

-- stdout --
	* The control-plane node functional-746000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-746000"

-- /stdout --
functional_test.go:1998: failed to check existence of "/usr/share/ca-certificates/69262.pem" inside minikube. args "out/minikube-darwin-arm64 -p functional-746000 ssh \"sudo cat /usr/share/ca-certificates/69262.pem\"": exit status 83
functional_test.go:2004: failed verify pem file. minikube_test2.pem -> /usr/share/ca-certificates/69262.pem mismatch (-want +got):
  (
  	"""
- 	-----BEGIN CERTIFICATE-----
- 	MIIEwDCCAqgCCQCUeXrVemI4eTANBgkqhkiG9w0BAQsFADAiMQswCQYDVQQGEwJV
- 	UzETMBEGA1UECAwKQ2FsaWZvcm5pYTAeFw0yMTA3MjEyMDM4MDdaFw0yMTA4MjAy
- 	MDM4MDdaMCIxCzAJBgNVBAYTAlVTMRMwEQYDVQQIDApDYWxpZm9ybmlhMIICIjAN
- 	BgkqhkiG9w0BAQEFAAOCAg8AMIICCgKCAgEAu1+sCiosrYIy83a+KtPdoGsKun+O
- 	jKhETWQrum5FGxqKyQzhHN8q6iZBI94m1sZb2xIJDcKaAsHpKf1z/5pkrWQW4JeL
- 	riqJ5U0kENy6mWGoFzPLQHkKMXSYUEkRjrgJphh5zLkWDzx6uwxHIrs5yaIwn71Z
- 	enYezHL8NyMd10+up1lNvH+xGqnSJ3ge7Rj+r5XbA3avvaJ9AVdiJMU5ArMw4Snu
- 	dLDSUueWFedmXoQud083EvDbMY61blNKUR6BKBJLPopH+9NnUP7FRU25lvTsA8qT
- 	zz/KertMFhCfwvK4y7a7+GbIPD3fItpm23GORuViEGgeRMH7GGSNVfm4VtK5mI4c
- 	XK9BrCE+FXowHOWU3MTH1qsvTigd3JZc8BKTaePJGMIDT1HIAaoK7SOoZOLka7bi
- 	IiLaojrIWY6tE5IIl46cQWLg09G+kjKVfEEvEHNe0t22I9etGeUUEwlcMITHmEdE
- 	WrXytC8dSNFObSG5B2MX2Ovm+yNblcK7TI7lW/tkbxtKs56J1mNmi4LXXBM8FdF8
- 	w9MpJc+ySVj2+f+eBE08amKjC9VVbBzNSw7MRaI9fFY5AAifJ8g55F3/KCNq5aTd
- 	rBADtAa5kQkTVjfMBhG++0Ow4f55hm73oJAy/qxb09OY7Vk9ky/K3l8GfWv8ozIF
- 	w+Oq6vdsspvtVJ8CAwEAATANBgkqhkiG9w0BAQsFAAOCAgEAGKVxsf13kYGaQJ+J
- 	6eyRZXlV5Bp+9EGtMPGsuVv2HJa4oxMBn7Xc/bUhjY9Is/ZwfOpPUPO/nQtSSPmO
- 	aozQj/27p8HDTW201fwLNiZMcppBdJvIQdDzCh6e2ikg3lqsw2BoLX1vbgc9HPml
- 	P8QCHEz2lricGdTuMRtBgH5x/ZkZGLbADQBeyoPTsPaQceRt5hPYXWifqiHhcJoL
- 	2T+XgbaHJ4lEhCU0IXJG0vlLuAyxQzO3gMeHK8BlLt/h/JCiDndo63a4XCkenmY8
- 	8/6Y9Lgh+O3954YgwdXBIS33CzhY7c+tfpag1hwpDHro/zsyLwdN2JxZqWymfg8T
- 	RyIeJ5VpY+CGm1fVTx84twQbiM241eZDYaW6Ap5/ZuxPbzY++KDMZHMuJYURorAU
- 	JE/SE6WltXpKTvtzyPNKt8fgPQmFzlBosDZCf/5EiyVqbLzideF1l+Rd//5gRBg0
- 	B63fOlCdxGtDeX1gKzekV4sXQIsWKW1BSCEInJt5lOS8Ex4JTXy8crwKo7hv0zPc
- 	sOjAXbtDYlPf/jPFGKzkgFACB87Bx4ZUZMp/ShOzjMt20MLGLJvCGCQIHetUz+GG
- 	/LTwSNhgWCheYkbDwXhCmqbN249xE3fNHC6zQ/IMJ30v/UWN4RldGfFzmoVBRUUX
- 	eQ7g5kXER7H/Lh/2V9FyaRhPpZM=
- 	-----END CERTIFICATE-----
+ 	* The control-plane node functional-746000 host is not running: state=Stopped
+ 	  To start a cluster, run: "minikube start -p functional-746000"
  	"""
  )
functional_test.go:1995: Checking for existence of /etc/ssl/certs/3ec20f2e.0 within VM
functional_test.go:1996: (dbg) Run:  out/minikube-darwin-arm64 -p functional-746000 ssh "sudo cat /etc/ssl/certs/3ec20f2e.0"
functional_test.go:1996: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-746000 ssh "sudo cat /etc/ssl/certs/3ec20f2e.0": exit status 83 (42.709208ms)

-- stdout --
	* The control-plane node functional-746000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-746000"

-- /stdout --
functional_test.go:1998: failed to check existence of "/etc/ssl/certs/3ec20f2e.0" inside minikube. args "out/minikube-darwin-arm64 -p functional-746000 ssh \"sudo cat /etc/ssl/certs/3ec20f2e.0\"": exit status 83
functional_test.go:2004: failed verify pem file. minikube_test2.pem -> /etc/ssl/certs/3ec20f2e.0 mismatch (-want +got):
  (
  	"""
- 	-----BEGIN CERTIFICATE-----
- 	MIIEwDCCAqgCCQCUeXrVemI4eTANBgkqhkiG9w0BAQsFADAiMQswCQYDVQQGEwJV
- 	UzETMBEGA1UECAwKQ2FsaWZvcm5pYTAeFw0yMTA3MjEyMDM4MDdaFw0yMTA4MjAy
- 	MDM4MDdaMCIxCzAJBgNVBAYTAlVTMRMwEQYDVQQIDApDYWxpZm9ybmlhMIICIjAN
- 	BgkqhkiG9w0BAQEFAAOCAg8AMIICCgKCAgEAu1+sCiosrYIy83a+KtPdoGsKun+O
- 	jKhETWQrum5FGxqKyQzhHN8q6iZBI94m1sZb2xIJDcKaAsHpKf1z/5pkrWQW4JeL
- 	riqJ5U0kENy6mWGoFzPLQHkKMXSYUEkRjrgJphh5zLkWDzx6uwxHIrs5yaIwn71Z
- 	enYezHL8NyMd10+up1lNvH+xGqnSJ3ge7Rj+r5XbA3avvaJ9AVdiJMU5ArMw4Snu
- 	dLDSUueWFedmXoQud083EvDbMY61blNKUR6BKBJLPopH+9NnUP7FRU25lvTsA8qT
- 	zz/KertMFhCfwvK4y7a7+GbIPD3fItpm23GORuViEGgeRMH7GGSNVfm4VtK5mI4c
- 	XK9BrCE+FXowHOWU3MTH1qsvTigd3JZc8BKTaePJGMIDT1HIAaoK7SOoZOLka7bi
- 	IiLaojrIWY6tE5IIl46cQWLg09G+kjKVfEEvEHNe0t22I9etGeUUEwlcMITHmEdE
- 	WrXytC8dSNFObSG5B2MX2Ovm+yNblcK7TI7lW/tkbxtKs56J1mNmi4LXXBM8FdF8
- 	w9MpJc+ySVj2+f+eBE08amKjC9VVbBzNSw7MRaI9fFY5AAifJ8g55F3/KCNq5aTd
- 	rBADtAa5kQkTVjfMBhG++0Ow4f55hm73oJAy/qxb09OY7Vk9ky/K3l8GfWv8ozIF
- 	w+Oq6vdsspvtVJ8CAwEAATANBgkqhkiG9w0BAQsFAAOCAgEAGKVxsf13kYGaQJ+J
- 	6eyRZXlV5Bp+9EGtMPGsuVv2HJa4oxMBn7Xc/bUhjY9Is/ZwfOpPUPO/nQtSSPmO
- 	aozQj/27p8HDTW201fwLNiZMcppBdJvIQdDzCh6e2ikg3lqsw2BoLX1vbgc9HPml
- 	P8QCHEz2lricGdTuMRtBgH5x/ZkZGLbADQBeyoPTsPaQceRt5hPYXWifqiHhcJoL
- 	2T+XgbaHJ4lEhCU0IXJG0vlLuAyxQzO3gMeHK8BlLt/h/JCiDndo63a4XCkenmY8
- 	8/6Y9Lgh+O3954YgwdXBIS33CzhY7c+tfpag1hwpDHro/zsyLwdN2JxZqWymfg8T
- 	RyIeJ5VpY+CGm1fVTx84twQbiM241eZDYaW6Ap5/ZuxPbzY++KDMZHMuJYURorAU
- 	JE/SE6WltXpKTvtzyPNKt8fgPQmFzlBosDZCf/5EiyVqbLzideF1l+Rd//5gRBg0
- 	B63fOlCdxGtDeX1gKzekV4sXQIsWKW1BSCEInJt5lOS8Ex4JTXy8crwKo7hv0zPc
- 	sOjAXbtDYlPf/jPFGKzkgFACB87Bx4ZUZMp/ShOzjMt20MLGLJvCGCQIHetUz+GG
- 	/LTwSNhgWCheYkbDwXhCmqbN249xE3fNHC6zQ/IMJ30v/UWN4RldGfFzmoVBRUUX
- 	eQ7g5kXER7H/Lh/2V9FyaRhPpZM=
- 	-----END CERTIFICATE-----
+ 	* The control-plane node functional-746000 host is not running: state=Stopped
+ 	  To start a cluster, run: "minikube start -p functional-746000"
  	"""
  )
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p functional-746000 -n functional-746000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p functional-746000 -n functional-746000: exit status 7 (31.98ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "functional-746000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestFunctional/parallel/CertSync (0.30s)
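All three CertSync probes above fail the same way: the guest is stopped, so "ssh" returns minikube's start hint (exit status 83) instead of the file contents, and the want/got diff degenerates into cert-vs-hint. A manual re-check once the profile is running could look like this (a sketch; the profile name, binary, and paths are taken from the log above):

    out/minikube-darwin-arm64 start -p functional-746000
    out/minikube-darwin-arm64 -p functional-746000 ssh "sudo cat /usr/share/ca-certificates/69262.pem" > /tmp/got.pem
    diff /tmp/got.pem minikube_test2.pem   # no output means the cert was synced verbatim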

TestFunctional/parallel/NodeLabels (0.06s)

=== RUN   TestFunctional/parallel/NodeLabels
=== PAUSE TestFunctional/parallel/NodeLabels

=== CONT  TestFunctional/parallel/NodeLabels
functional_test.go:218: (dbg) Run:  kubectl --context functional-746000 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'"
functional_test.go:218: (dbg) Non-zero exit: kubectl --context functional-746000 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'": exit status 1 (26.26725ms)

** stderr ** 
	Error in configuration: context was not found for specified context: functional-746000

** /stderr **
functional_test.go:220: failed to 'kubectl get nodes' with args "kubectl --context functional-746000 get nodes --output=go-template \"--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'\"": exit status 1
functional_test.go:226: expected to have label "minikube.k8s.io/commit" in node labels but got : 
** stderr ** 
	Error in configuration: context was not found for specified context: functional-746000

** /stderr **
functional_test.go:226: expected to have label "minikube.k8s.io/version" in node labels but got : 
** stderr ** 
	Error in configuration: context was not found for specified context: functional-746000

** /stderr **
functional_test.go:226: expected to have label "minikube.k8s.io/updated_at" in node labels but got : 
** stderr ** 
	Error in configuration: context was not found for specified context: functional-746000

** /stderr **
functional_test.go:226: expected to have label "minikube.k8s.io/name" in node labels but got : 
** stderr ** 
	Error in configuration: context was not found for specified context: functional-746000

** /stderr **
functional_test.go:226: expected to have label "minikube.k8s.io/primary" in node labels but got : 
** stderr ** 
	Error in configuration: context was not found for specified context: functional-746000

** /stderr **
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p functional-746000 -n functional-746000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p functional-746000 -n functional-746000: exit status 7 (31.929ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "functional-746000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestFunctional/parallel/NodeLabels (0.06s)
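The kubectl errors here are downstream of the same stopped host: since the cluster never started, no functional-746000 entry was ever written to the kubeconfig. A quick way to confirm that before chasing the label assertions (sketch):

    kubectl config get-contexts                                   # functional-746000 should appear here
    kubectl --context functional-746000 get nodes --show-labels   # shows the minikube.k8s.io/* labels the test greps for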

TestFunctional/parallel/NonActiveRuntimeDisabled (0.04s)

=== RUN   TestFunctional/parallel/NonActiveRuntimeDisabled
=== PAUSE TestFunctional/parallel/NonActiveRuntimeDisabled

=== CONT  TestFunctional/parallel/NonActiveRuntimeDisabled
functional_test.go:2023: (dbg) Run:  out/minikube-darwin-arm64 -p functional-746000 ssh "sudo systemctl is-active crio"
functional_test.go:2023: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-746000 ssh "sudo systemctl is-active crio": exit status 83 (42.32675ms)

-- stdout --
	* The control-plane node functional-746000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-746000"

-- /stdout --
functional_test.go:2026: output of 
-- stdout --
	* The control-plane node functional-746000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-746000"

-- /stdout --: exit status 83
functional_test.go:2029: For runtime "docker": expected "crio" to be inactive but got "* The control-plane node functional-746000 host is not running: state=Stopped\n  To start a cluster, run: \"minikube start -p functional-746000\"\n" 
--- FAIL: TestFunctional/parallel/NonActiveRuntimeDisabled (0.04s)
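With the docker runtime active, the test expects "systemctl is-active crio" inside the guest to report inactive; exit status 83 means the probe never reached the guest at all. A manual version of the same check (sketch, same profile):

    out/minikube-darwin-arm64 -p functional-746000 ssh "sudo systemctl is-active docker"   # want: active
    out/minikube-darwin-arm64 -p functional-746000 ssh "sudo systemctl is-active crio"     # want: inactive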

TestFunctional/parallel/Version/components (0.04s)

=== RUN   TestFunctional/parallel/Version/components
=== PAUSE TestFunctional/parallel/Version/components

=== CONT  TestFunctional/parallel/Version/components
functional_test.go:2266: (dbg) Run:  out/minikube-darwin-arm64 -p functional-746000 version -o=json --components
functional_test.go:2266: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-746000 version -o=json --components: exit status 83 (42.828833ms)

-- stdout --
	* The control-plane node functional-746000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-746000"

-- /stdout --
functional_test.go:2268: error version: exit status 83
functional_test.go:2273: expected to see "buildctl" in the minikube version --components but got:
* The control-plane node functional-746000 host is not running: state=Stopped
To start a cluster, run: "minikube start -p functional-746000"
functional_test.go:2273: expected to see "commit" in the minikube version --components but got:
* The control-plane node functional-746000 host is not running: state=Stopped
To start a cluster, run: "minikube start -p functional-746000"
functional_test.go:2273: expected to see "containerd" in the minikube version --components but got:
* The control-plane node functional-746000 host is not running: state=Stopped
To start a cluster, run: "minikube start -p functional-746000"
functional_test.go:2273: expected to see "crictl" in the minikube version --components but got:
* The control-plane node functional-746000 host is not running: state=Stopped
To start a cluster, run: "minikube start -p functional-746000"
functional_test.go:2273: expected to see "crio" in the minikube version --components but got:
* The control-plane node functional-746000 host is not running: state=Stopped
To start a cluster, run: "minikube start -p functional-746000"
functional_test.go:2273: expected to see "ctr" in the minikube version --components but got:
* The control-plane node functional-746000 host is not running: state=Stopped
To start a cluster, run: "minikube start -p functional-746000"
functional_test.go:2273: expected to see "docker" in the minikube version --components but got:
* The control-plane node functional-746000 host is not running: state=Stopped
To start a cluster, run: "minikube start -p functional-746000"
functional_test.go:2273: expected to see "minikubeVersion" in the minikube version --components but got:
* The control-plane node functional-746000 host is not running: state=Stopped
To start a cluster, run: "minikube start -p functional-746000"
functional_test.go:2273: expected to see "podman" in the minikube version --components but got:
* The control-plane node functional-746000 host is not running: state=Stopped
To start a cluster, run: "minikube start -p functional-746000"
functional_test.go:2273: expected to see "crun" in the minikube version --components but got:
* The control-plane node functional-746000 host is not running: state=Stopped
To start a cluster, run: "minikube start -p functional-746000"
--- FAIL: TestFunctional/parallel/Version/components (0.04s)
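version --components has to query the node's binaries (buildctl, containerd, crictl, docker, and so on), so against a stopped host it can only print the start hint. The check the test performs, condensed for manual use (sketch):

    out/minikube-darwin-arm64 -p functional-746000 version -o=json --components | grep -E 'buildctl|containerd|crictl|docker'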

TestFunctional/parallel/ImageCommands/ImageListShort (0.04s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListShort
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListShort

=== CONT  TestFunctional/parallel/ImageCommands/ImageListShort
functional_test.go:260: (dbg) Run:  out/minikube-darwin-arm64 -p functional-746000 image ls --format short --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-darwin-arm64 -p functional-746000 image ls --format short --alsologtostderr:

functional_test.go:268: (dbg) Stderr: out/minikube-darwin-arm64 -p functional-746000 image ls --format short --alsologtostderr:
I0327 16:29:40.861150    7643 out.go:291] Setting OutFile to fd 1 ...
I0327 16:29:40.861323    7643 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0327 16:29:40.861326    7643 out.go:304] Setting ErrFile to fd 2...
I0327 16:29:40.861328    7643 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0327 16:29:40.861456    7643 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18485-6511/.minikube/bin
I0327 16:29:40.861874    7643 config.go:182] Loaded profile config "functional-746000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.29.3
I0327 16:29:40.861929    7643 config.go:182] Loaded profile config "functional-746000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.29.3
functional_test.go:274: expected registry.k8s.io/pause to be listed with minikube but the image is not there
--- FAIL: TestFunctional/parallel/ImageCommands/ImageListShort (0.04s)

TestFunctional/parallel/ImageCommands/ImageListTable (0.04s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListTable
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListTable

=== CONT  TestFunctional/parallel/ImageCommands/ImageListTable
functional_test.go:260: (dbg) Run:  out/minikube-darwin-arm64 -p functional-746000 image ls --format table --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-darwin-arm64 -p functional-746000 image ls --format table --alsologtostderr:
|-------|-----|----------|------|
| Image | Tag | Image ID | Size |
|-------|-----|----------|------|
|-------|-----|----------|------|
functional_test.go:268: (dbg) Stderr: out/minikube-darwin-arm64 -p functional-746000 image ls --format table --alsologtostderr:
I0327 16:29:41.092960    7655 out.go:291] Setting OutFile to fd 1 ...
I0327 16:29:41.093104    7655 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0327 16:29:41.093107    7655 out.go:304] Setting ErrFile to fd 2...
I0327 16:29:41.093110    7655 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0327 16:29:41.093240    7655 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18485-6511/.minikube/bin
I0327 16:29:41.093630    7655 config.go:182] Loaded profile config "functional-746000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.29.3
I0327 16:29:41.093689    7655 config.go:182] Loaded profile config "functional-746000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.29.3
functional_test.go:274: expected | registry.k8s.io/pause to be listed with minikube but the image is not there
--- FAIL: TestFunctional/parallel/ImageCommands/ImageListTable (0.04s)

TestFunctional/parallel/ImageCommands/ImageListJson (0.04s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListJson
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListJson

=== CONT  TestFunctional/parallel/ImageCommands/ImageListJson
functional_test.go:260: (dbg) Run:  out/minikube-darwin-arm64 -p functional-746000 image ls --format json --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-darwin-arm64 -p functional-746000 image ls --format json --alsologtostderr:
[]
functional_test.go:268: (dbg) Stderr: out/minikube-darwin-arm64 -p functional-746000 image ls --format json --alsologtostderr:
I0327 16:29:41.055683    7653 out.go:291] Setting OutFile to fd 1 ...
I0327 16:29:41.055820    7653 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0327 16:29:41.055823    7653 out.go:304] Setting ErrFile to fd 2...
I0327 16:29:41.055829    7653 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0327 16:29:41.055974    7653 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18485-6511/.minikube/bin
I0327 16:29:41.056380    7653 config.go:182] Loaded profile config "functional-746000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.29.3
I0327 16:29:41.056445    7653 config.go:182] Loaded profile config "functional-746000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.29.3
functional_test.go:274: expected ["registry.k8s.io/pause to be listed with minikube but the image is not there
--- FAIL: TestFunctional/parallel/ImageCommands/ImageListJson (0.04s)

TestFunctional/parallel/ImageCommands/ImageListYaml (0.04s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListYaml
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListYaml

=== CONT  TestFunctional/parallel/ImageCommands/ImageListYaml
functional_test.go:260: (dbg) Run:  out/minikube-darwin-arm64 -p functional-746000 image ls --format yaml --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-darwin-arm64 -p functional-746000 image ls --format yaml --alsologtostderr:
[]

functional_test.go:268: (dbg) Stderr: out/minikube-darwin-arm64 -p functional-746000 image ls --format yaml --alsologtostderr:
I0327 16:29:40.898394    7645 out.go:291] Setting OutFile to fd 1 ...
I0327 16:29:40.898542    7645 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0327 16:29:40.898545    7645 out.go:304] Setting ErrFile to fd 2...
I0327 16:29:40.898547    7645 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0327 16:29:40.898666    7645 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18485-6511/.minikube/bin
I0327 16:29:40.899065    7645 config.go:182] Loaded profile config "functional-746000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.29.3
I0327 16:29:40.899134    7645 config.go:182] Loaded profile config "functional-746000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.29.3
functional_test.go:274: expected - registry.k8s.io/pause to be listed with minikube but the image is not there
--- FAIL: TestFunctional/parallel/ImageCommands/ImageListYaml (0.04s)

TestFunctional/parallel/ImageCommands/ImageBuild (0.12s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageBuild
=== PAUSE TestFunctional/parallel/ImageCommands/ImageBuild

=== CONT  TestFunctional/parallel/ImageCommands/ImageBuild
functional_test.go:307: (dbg) Run:  out/minikube-darwin-arm64 -p functional-746000 ssh pgrep buildkitd
functional_test.go:307: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-746000 ssh pgrep buildkitd: exit status 83 (43.9215ms)

-- stdout --
	* The control-plane node functional-746000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-746000"

-- /stdout --
functional_test.go:314: (dbg) Run:  out/minikube-darwin-arm64 -p functional-746000 image build -t localhost/my-image:functional-746000 testdata/build --alsologtostderr
functional_test.go:322: (dbg) Stderr: out/minikube-darwin-arm64 -p functional-746000 image build -t localhost/my-image:functional-746000 testdata/build --alsologtostderr:
I0327 16:29:40.979303    7649 out.go:291] Setting OutFile to fd 1 ...
I0327 16:29:40.979876    7649 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0327 16:29:40.979885    7649 out.go:304] Setting ErrFile to fd 2...
I0327 16:29:40.979888    7649 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0327 16:29:40.980253    7649 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18485-6511/.minikube/bin
I0327 16:29:40.980811    7649 config.go:182] Loaded profile config "functional-746000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.29.3
I0327 16:29:40.981248    7649 config.go:182] Loaded profile config "functional-746000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.29.3
I0327 16:29:40.981492    7649 build_images.go:133] succeeded building to: 
I0327 16:29:40.981496    7649 build_images.go:134] failed building to: 
functional_test.go:447: (dbg) Run:  out/minikube-darwin-arm64 -p functional-746000 image ls
functional_test.go:442: expected "localhost/my-image:functional-746000" to be loaded into minikube but the image is not there
--- FAIL: TestFunctional/parallel/ImageCommands/ImageBuild (0.12s)
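Note the vacuous success in the stderr above: "succeeded building to:" with an empty target list, because there was no runtime to build into. The build-then-verify flow the test automates (sketch, testdata path from the log):

    out/minikube-darwin-arm64 -p functional-746000 image build -t localhost/my-image:functional-746000 testdata/build
    out/minikube-darwin-arm64 -p functional-746000 image ls | grep my-image   # the freshly built tag should be listed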

TestFunctional/parallel/DockerEnv/bash (0.05s)

=== RUN   TestFunctional/parallel/DockerEnv/bash
functional_test.go:495: (dbg) Run:  /bin/bash -c "eval $(out/minikube-darwin-arm64 -p functional-746000 docker-env) && out/minikube-darwin-arm64 status -p functional-746000"
functional_test.go:495: (dbg) Non-zero exit: /bin/bash -c "eval $(out/minikube-darwin-arm64 -p functional-746000 docker-env) && out/minikube-darwin-arm64 status -p functional-746000": exit status 1 (46.046458ms)
functional_test.go:501: failed to do status after eval-ing docker-env. error: exit status 1
--- FAIL: TestFunctional/parallel/DockerEnv/bash (0.05s)

TestFunctional/parallel/UpdateContextCmd/no_changes (0.04s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_changes
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_changes

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_changes
functional_test.go:2115: (dbg) Run:  out/minikube-darwin-arm64 -p functional-746000 update-context --alsologtostderr -v=2
functional_test.go:2115: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-746000 update-context --alsologtostderr -v=2: exit status 83 (42.858292ms)

-- stdout --
	* The control-plane node functional-746000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-746000"

-- /stdout --
** stderr ** 
	I0327 16:29:40.725766    7637 out.go:291] Setting OutFile to fd 1 ...
	I0327 16:29:40.726138    7637 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0327 16:29:40.726141    7637 out.go:304] Setting ErrFile to fd 2...
	I0327 16:29:40.726144    7637 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0327 16:29:40.726310    7637 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18485-6511/.minikube/bin
	I0327 16:29:40.726524    7637 mustload.go:65] Loading cluster: functional-746000
	I0327 16:29:40.726731    7637 config.go:182] Loaded profile config "functional-746000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.29.3
	I0327 16:29:40.729685    7637 out.go:177] * The control-plane node functional-746000 host is not running: state=Stopped
	I0327 16:29:40.733586    7637 out.go:177]   To start a cluster, run: "minikube start -p functional-746000"

** /stderr **
functional_test.go:2117: failed to run minikube update-context: args "out/minikube-darwin-arm64 -p functional-746000 update-context --alsologtostderr -v=2": exit status 83
functional_test.go:2122: update-context: got="* The control-plane node functional-746000 host is not running: state=Stopped\n  To start a cluster, run: \"minikube start -p functional-746000\"\n", want=*"No changes"*
--- FAIL: TestFunctional/parallel/UpdateContextCmd/no_changes (0.04s)

TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.05s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
functional_test.go:2115: (dbg) Run:  out/minikube-darwin-arm64 -p functional-746000 update-context --alsologtostderr -v=2
functional_test.go:2115: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-746000 update-context --alsologtostderr -v=2: exit status 83 (46.645417ms)

-- stdout --
	* The control-plane node functional-746000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-746000"

-- /stdout --
** stderr ** 
	I0327 16:29:40.815051    7641 out.go:291] Setting OutFile to fd 1 ...
	I0327 16:29:40.815215    7641 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0327 16:29:40.815218    7641 out.go:304] Setting ErrFile to fd 2...
	I0327 16:29:40.815221    7641 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0327 16:29:40.815343    7641 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18485-6511/.minikube/bin
	I0327 16:29:40.815588    7641 mustload.go:65] Loading cluster: functional-746000
	I0327 16:29:40.815827    7641 config.go:182] Loaded profile config "functional-746000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.29.3
	I0327 16:29:40.821557    7641 out.go:177] * The control-plane node functional-746000 host is not running: state=Stopped
	I0327 16:29:40.825542    7641 out.go:177]   To start a cluster, run: "minikube start -p functional-746000"

** /stderr **
functional_test.go:2117: failed to run minikube update-context: args "out/minikube-darwin-arm64 -p functional-746000 update-context --alsologtostderr -v=2": exit status 83
functional_test.go:2122: update-context: got="* The control-plane node functional-746000 host is not running: state=Stopped\n  To start a cluster, run: \"minikube start -p functional-746000\"\n", want=*"context has been updated"*
--- FAIL: TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.05s)

TestFunctional/parallel/UpdateContextCmd/no_clusters (0.04s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_clusters
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_clusters

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_clusters
functional_test.go:2115: (dbg) Run:  out/minikube-darwin-arm64 -p functional-746000 update-context --alsologtostderr -v=2
functional_test.go:2115: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-746000 update-context --alsologtostderr -v=2: exit status 83 (44.556959ms)

-- stdout --
	* The control-plane node functional-746000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-746000"

-- /stdout --
** stderr ** 
	I0327 16:29:40.769193    7639 out.go:291] Setting OutFile to fd 1 ...
	I0327 16:29:40.769363    7639 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0327 16:29:40.769366    7639 out.go:304] Setting ErrFile to fd 2...
	I0327 16:29:40.769369    7639 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0327 16:29:40.769488    7639 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18485-6511/.minikube/bin
	I0327 16:29:40.769728    7639 mustload.go:65] Loading cluster: functional-746000
	I0327 16:29:40.769914    7639 config.go:182] Loaded profile config "functional-746000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.29.3
	I0327 16:29:40.774614    7639 out.go:177] * The control-plane node functional-746000 host is not running: state=Stopped
	I0327 16:29:40.778641    7639 out.go:177]   To start a cluster, run: "minikube start -p functional-746000"

** /stderr **
functional_test.go:2117: failed to run minikube update-context: args "out/minikube-darwin-arm64 -p functional-746000 update-context --alsologtostderr -v=2": exit status 83
functional_test.go:2122: update-context: got="* The control-plane node functional-746000 host is not running: state=Stopped\n  To start a cluster, run: \"minikube start -p functional-746000\"\n", want=*"context has been updated"*
--- FAIL: TestFunctional/parallel/UpdateContextCmd/no_clusters (0.04s)
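All three update-context variants stop at the same mustload guard before touching the kubeconfig. Against a running profile the command rewrites the cluster's server address and prints one of the strings the tests grep for ("No changes" / "context has been updated"); a manual equivalent (sketch, the jsonpath filter is illustrative):

    out/minikube-darwin-arm64 -p functional-746000 update-context
    kubectl config view -o jsonpath='{.clusters[?(@.name=="functional-746000")].cluster.server}'   # should match the VM's current address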

TestFunctional/parallel/ServiceCmd/DeployApp (0.03s)

=== RUN   TestFunctional/parallel/ServiceCmd/DeployApp
functional_test.go:1433: (dbg) Run:  kubectl --context functional-746000 create deployment hello-node --image=registry.k8s.io/echoserver-arm:1.8
functional_test.go:1433: (dbg) Non-zero exit: kubectl --context functional-746000 create deployment hello-node --image=registry.k8s.io/echoserver-arm:1.8: exit status 1 (26.957708ms)

** stderr ** 
	error: context "functional-746000" does not exist

** /stderr **
functional_test.go:1439: failed to create hello-node deployment with this command "kubectl --context functional-746000 create deployment hello-node --image=registry.k8s.io/echoserver-arm:1.8": exit status 1.
--- FAIL: TestFunctional/parallel/ServiceCmd/DeployApp (0.03s)

TestFunctional/parallel/ServiceCmd/List (0.05s)

=== RUN   TestFunctional/parallel/ServiceCmd/List
functional_test.go:1455: (dbg) Run:  out/minikube-darwin-arm64 -p functional-746000 service list
functional_test.go:1455: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-746000 service list: exit status 83 (47.8265ms)

-- stdout --
	* The control-plane node functional-746000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-746000"

-- /stdout --
functional_test.go:1457: failed to do service list. args "out/minikube-darwin-arm64 -p functional-746000 service list" : exit status 83
functional_test.go:1460: expected 'service list' to contain *hello-node* but got -"* The control-plane node functional-746000 host is not running: state=Stopped\n  To start a cluster, run: \"minikube start -p functional-746000\"\n"-
--- FAIL: TestFunctional/parallel/ServiceCmd/List (0.05s)

TestFunctional/parallel/ServiceCmd/JSONOutput (0.04s)

=== RUN   TestFunctional/parallel/ServiceCmd/JSONOutput
functional_test.go:1485: (dbg) Run:  out/minikube-darwin-arm64 -p functional-746000 service list -o json
functional_test.go:1485: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-746000 service list -o json: exit status 83 (42.791125ms)

-- stdout --
	* The control-plane node functional-746000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-746000"

-- /stdout --
functional_test.go:1487: failed to list services with json format. args "out/minikube-darwin-arm64 -p functional-746000 service list -o json": exit status 83
--- FAIL: TestFunctional/parallel/ServiceCmd/JSONOutput (0.04s)

TestFunctional/parallel/ServiceCmd/HTTPS (0.04s)

=== RUN   TestFunctional/parallel/ServiceCmd/HTTPS
functional_test.go:1505: (dbg) Run:  out/minikube-darwin-arm64 -p functional-746000 service --namespace=default --https --url hello-node
functional_test.go:1505: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-746000 service --namespace=default --https --url hello-node: exit status 83 (43.901167ms)

-- stdout --
	* The control-plane node functional-746000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-746000"

-- /stdout --
functional_test.go:1507: failed to get service url. args "out/minikube-darwin-arm64 -p functional-746000 service --namespace=default --https --url hello-node" : exit status 83
--- FAIL: TestFunctional/parallel/ServiceCmd/HTTPS (0.04s)

TestFunctional/parallel/ServiceCmd/Format (0.04s)

=== RUN   TestFunctional/parallel/ServiceCmd/Format
functional_test.go:1536: (dbg) Run:  out/minikube-darwin-arm64 -p functional-746000 service hello-node --url --format={{.IP}}
functional_test.go:1536: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-746000 service hello-node --url --format={{.IP}}: exit status 83 (43.846041ms)

-- stdout --
	* The control-plane node functional-746000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-746000"

-- /stdout --
functional_test.go:1538: failed to get service url with custom format. args "out/minikube-darwin-arm64 -p functional-746000 service hello-node --url --format={{.IP}}": exit status 83
functional_test.go:1544: "* The control-plane node functional-746000 host is not running: state=Stopped\n  To start a cluster, run: \"minikube start -p functional-746000\"" is not a valid IP
--- FAIL: TestFunctional/parallel/ServiceCmd/Format (0.04s)

TestFunctional/parallel/ServiceCmd/URL (0.04s)

=== RUN   TestFunctional/parallel/ServiceCmd/URL
functional_test.go:1555: (dbg) Run:  out/minikube-darwin-arm64 -p functional-746000 service hello-node --url
functional_test.go:1555: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-746000 service hello-node --url: exit status 83 (44.7965ms)

-- stdout --
	* The control-plane node functional-746000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-746000"

-- /stdout --
functional_test.go:1557: failed to get service url. args: "out/minikube-darwin-arm64 -p functional-746000 service hello-node --url": exit status 83
functional_test.go:1561: found endpoint for hello-node: * The control-plane node functional-746000 host is not running: state=Stopped
To start a cluster, run: "minikube start -p functional-746000"
functional_test.go:1565: failed to parse "* The control-plane node functional-746000 host is not running: state=Stopped\n  To start a cluster, run: \"minikube start -p functional-746000\"": parse "* The control-plane node functional-746000 host is not running: state=Stopped\n  To start a cluster, run: \"minikube start -p functional-746000\"": net/url: invalid control character in URL
--- FAIL: TestFunctional/parallel/ServiceCmd/URL (0.04s)
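Every ServiceCmd subtest builds on the hello-node deployment from DeployApp, which already failed for lack of a context, so the later list/HTTPS/format/URL checks all collapse into the same start hint. The intended flow, condensed (a sketch; the expose step is implied rather than shown in this excerpt):

    kubectl --context functional-746000 create deployment hello-node --image=registry.k8s.io/echoserver-arm:1.8
    kubectl --context functional-746000 expose deployment hello-node --type=NodePort --port=8080
    out/minikube-darwin-arm64 -p functional-746000 service hello-node --url   # want: a parseable http://host:port URL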

TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.08s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-darwin-arm64 -p functional-746000 tunnel --alsologtostderr]
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-darwin-arm64 -p functional-746000 tunnel --alsologtostderr]
functional_test_tunnel_test.go:190: tunnel command failed with unexpected error: exit code 83. stderr: I0327 16:28:49.072783    7427 out.go:291] Setting OutFile to fd 1 ...
I0327 16:28:49.073116    7427 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0327 16:28:49.073120    7427 out.go:304] Setting ErrFile to fd 2...
I0327 16:28:49.073122    7427 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0327 16:28:49.073319    7427 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18485-6511/.minikube/bin
I0327 16:28:49.073547    7427 mustload.go:65] Loading cluster: functional-746000
I0327 16:28:49.073747    7427 config.go:182] Loaded profile config "functional-746000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.29.3
I0327 16:28:49.077394    7427 out.go:177] * The control-plane node functional-746000 host is not running: state=Stopped
I0327 16:28:49.085427    7427 out.go:177]   To start a cluster, run: "minikube start -p functional-746000"

stdout: * The control-plane node functional-746000 host is not running: state=Stopped
To start a cluster, run: "minikube start -p functional-746000"
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-darwin-arm64 -p functional-746000 tunnel --alsologtostderr] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_tunnel_test.go:194: read stdout failed: read |0: file already closed
functional_test_tunnel_test.go:194: (dbg) [out/minikube-darwin-arm64 -p functional-746000 tunnel --alsologtostderr] stdout:
functional_test_tunnel_test.go:194: read stderr failed: read |0: file already closed
functional_test_tunnel_test.go:194: (dbg) [out/minikube-darwin-arm64 -p functional-746000 tunnel --alsologtostderr] stderr:
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-darwin-arm64 -p functional-746000 tunnel --alsologtostderr] ...
helpers_test.go:502: unable to terminate pid 7426: os: process already finished
functional_test_tunnel_test.go:194: read stdout failed: read |0: file already closed
functional_test_tunnel_test.go:194: (dbg) [out/minikube-darwin-arm64 -p functional-746000 tunnel --alsologtostderr] stdout:
functional_test_tunnel_test.go:194: read stderr failed: read |0: file already closed
functional_test_tunnel_test.go:194: (dbg) [out/minikube-darwin-arm64 -p functional-746000 tunnel --alsologtostderr] stderr:
--- FAIL: TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.08s)
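minikube tunnel hits the same mustload guard, so both daemonized tunnels exit immediately and the harness then fails to find processes to stop (the "file already closed" reads are fallout, not the cause). A manual check against a running profile (sketch; nginx-svc is the service the later tunnel tests query):

    out/minikube-darwin-arm64 -p functional-746000 tunnel --alsologtostderr &
    kubectl --context functional-746000 get svc nginx-svc -w   # wait for an external IP to be assigned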

TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup
functional_test_tunnel_test.go:208: failed to get Kubernetes client for "functional-746000": client config: context "functional-746000" does not exist
--- FAIL: TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (0.00s)

TestFunctional/parallel/TunnelCmd/serial/AccessDirect (108.97s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessDirect
functional_test_tunnel_test.go:288: failed to hit nginx at "http://": Temporary Error: Get "http:": http: no Host in request URL
functional_test_tunnel_test.go:290: (dbg) Run:  kubectl --context functional-746000 get svc nginx-svc
functional_test_tunnel_test.go:290: (dbg) Non-zero exit: kubectl --context functional-746000 get svc nginx-svc: exit status 1 (69.119459ms)

** stderr ** 
	Error in configuration: context was not found for specified context: functional-746000

** /stderr **
functional_test_tunnel_test.go:292: kubectl --context functional-746000 get svc nginx-svc failed: exit status 1
functional_test_tunnel_test.go:294: failed to kubectl get svc nginx-svc:
functional_test_tunnel_test.go:301: expected body to contain "Welcome to nginx!", but got *""*
--- FAIL: TestFunctional/parallel/TunnelCmd/serial/AccessDirect (108.97s)

TestFunctional/parallel/ImageCommands/ImageLoadDaemon (1.31s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadDaemon
functional_test.go:354: (dbg) Run:  out/minikube-darwin-arm64 -p functional-746000 image load --daemon gcr.io/google-containers/addon-resizer:functional-746000 --alsologtostderr
functional_test.go:354: (dbg) Done: out/minikube-darwin-arm64 -p functional-746000 image load --daemon gcr.io/google-containers/addon-resizer:functional-746000 --alsologtostderr: (1.275844666s)
functional_test.go:447: (dbg) Run:  out/minikube-darwin-arm64 -p functional-746000 image ls
functional_test.go:442: expected "gcr.io/google-containers/addon-resizer:functional-746000" to be loaded into minikube but the image is not there
--- FAIL: TestFunctional/parallel/ImageCommands/ImageLoadDaemon (1.31s)

TestFunctional/parallel/ImageCommands/ImageReloadDaemon (1.4s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageReloadDaemon
functional_test.go:364: (dbg) Run:  out/minikube-darwin-arm64 -p functional-746000 image load --daemon gcr.io/google-containers/addon-resizer:functional-746000 --alsologtostderr
functional_test.go:364: (dbg) Done: out/minikube-darwin-arm64 -p functional-746000 image load --daemon gcr.io/google-containers/addon-resizer:functional-746000 --alsologtostderr: (1.360208875s)
functional_test.go:447: (dbg) Run:  out/minikube-darwin-arm64 -p functional-746000 image ls
functional_test.go:442: expected "gcr.io/google-containers/addon-resizer:functional-746000" to be loaded into minikube but the image is not there
--- FAIL: TestFunctional/parallel/ImageCommands/ImageReloadDaemon (1.40s)

TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (6.72s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon
functional_test.go:234: (dbg) Run:  docker pull gcr.io/google-containers/addon-resizer:1.8.9
functional_test.go:234: (dbg) Done: docker pull gcr.io/google-containers/addon-resizer:1.8.9: (5.46898325s)
functional_test.go:239: (dbg) Run:  docker tag gcr.io/google-containers/addon-resizer:1.8.9 gcr.io/google-containers/addon-resizer:functional-746000
functional_test.go:244: (dbg) Run:  out/minikube-darwin-arm64 -p functional-746000 image load --daemon gcr.io/google-containers/addon-resizer:functional-746000 --alsologtostderr
functional_test.go:244: (dbg) Done: out/minikube-darwin-arm64 -p functional-746000 image load --daemon gcr.io/google-containers/addon-resizer:functional-746000 --alsologtostderr: (1.174292416s)
functional_test.go:447: (dbg) Run:  out/minikube-darwin-arm64 -p functional-746000 image ls
functional_test.go:442: expected "gcr.io/google-containers/addon-resizer:functional-746000" to be loaded into minikube but the image is not there
--- FAIL: TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (6.72s)

TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.04s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveToFile
functional_test.go:379: (dbg) Run:  out/minikube-darwin-arm64 -p functional-746000 image save gcr.io/google-containers/addon-resizer:functional-746000 /Users/jenkins/workspace/addon-resizer-save.tar --alsologtostderr
functional_test.go:385: expected "/Users/jenkins/workspace/addon-resizer-save.tar" to exist after `image save`, but doesn't exist
--- FAIL: TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.04s)

TestFunctional/parallel/ImageCommands/ImageLoadFromFile (0.08s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadFromFile
functional_test.go:408: (dbg) Run:  out/minikube-darwin-arm64 -p functional-746000 image load /Users/jenkins/workspace/addon-resizer-save.tar --alsologtostderr
functional_test.go:447: (dbg) Run:  out/minikube-darwin-arm64 -p functional-746000 image ls
functional_test.go:442: expected "gcr.io/google-containers/addon-resizer:functional-746000" to be loaded into minikube but the image is not there
--- FAIL: TestFunctional/parallel/ImageCommands/ImageLoadFromFile (0.08s)

TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (15.07s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig
functional_test_tunnel_test.go:319: (dbg) Run:  dig +time=5 +tries=3 @10.96.0.10 nginx-svc.default.svc.cluster.local. A
functional_test_tunnel_test.go:319: (dbg) Non-zero exit: dig +time=5 +tries=3 @10.96.0.10 nginx-svc.default.svc.cluster.local. A: exit status 9 (15.036277292s)

-- stdout --
	
	; <<>> DiG 9.10.6 <<>> +time=5 +tries=3 @10.96.0.10 nginx-svc.default.svc.cluster.local. A
	; (1 server found)
	;; global options: +cmd
	;; connection timed out; no servers could be reached

-- /stdout --
functional_test_tunnel_test.go:322: failed to resolve DNS name: exit status 9
functional_test_tunnel_test.go:329: expected body to contain "ANSWER: 1", but got *"\n; <<>> DiG 9.10.6 <<>> +time=5 +tries=3 @10.96.0.10 nginx-svc.default.svc.cluster.local. A\n; (1 server found)\n;; global options: +cmd\n;; connection timed out; no servers could be reached\n"*
functional_test_tunnel_test.go:332: (dbg) Run:  scutil --dns
functional_test_tunnel_test.go:336: debug for DNS configuration:
DNS configuration

resolver #1
nameserver[0] : 207.254.72.253
nameserver[1] : 207.254.72.254
nameserver[2] : 8.8.8.8
flags    : Request A records
reach    : 0x00000002 (Reachable)

resolver #2
domain   : local
options  : mdns
timeout  : 5
flags    : Request A records
reach    : 0x00000000 (Not Reachable)
order    : 300000

resolver #3
domain   : 254.169.in-addr.arpa
options  : mdns
timeout  : 5
flags    : Request A records
reach    : 0x00000000 (Not Reachable)
order    : 300200

resolver #4
domain   : 8.e.f.ip6.arpa
options  : mdns
timeout  : 5
flags    : Request A records
reach    : 0x00000000 (Not Reachable)
order    : 300400

resolver #5
domain   : 9.e.f.ip6.arpa
options  : mdns
timeout  : 5
flags    : Request A records
reach    : 0x00000000 (Not Reachable)
order    : 300600

resolver #6
domain   : a.e.f.ip6.arpa
options  : mdns
timeout  : 5
flags    : Request A records
reach    : 0x00000000 (Not Reachable)
order    : 300800

resolver #7
domain   : b.e.f.ip6.arpa
options  : mdns
timeout  : 5
flags    : Request A records
reach    : 0x00000000 (Not Reachable)
order    : 301000

resolver #8
domain   : cluster.local
nameserver[0] : 10.96.0.10
flags    : Request A records
reach    : 0x00000002 (Reachable)
order    : 1

DNS configuration (for scoped queries)

resolver #1
nameserver[0] : 207.254.72.253
nameserver[1] : 207.254.72.254
nameserver[2] : 8.8.8.8
if_index : 16 (en0)
flags    : Scoped, Request A records
reach    : 0x00000002 (Reachable)
--- FAIL: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (15.07s)
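The scutil dump confirms the test-installed resolver for cluster.local (nameserver 10.96.0.10) is in place; the queries time out because nothing is listening behind it without a working tunnel. The probe itself, reusable by hand (taken from the log):

    dig +time=5 +tries=3 @10.96.0.10 nginx-svc.default.svc.cluster.local. A   # want an ANSWER section with 1 record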

TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (22.22s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS
functional_test_tunnel_test.go:419: failed to hit nginx with DNS forwarded "http://nginx-svc.default.svc.cluster.local.": Temporary Error: Get "http://nginx-svc.default.svc.cluster.local.": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
functional_test_tunnel_test.go:426: expected body to contain "Welcome to nginx!", but got *""*
--- FAIL: TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (22.22s)
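
This is the HTTP-level counterpart of the dig failure: the GET against the cluster-local name never receives response headers before the client timeout fires. A rough Go equivalent of the check, assuming the same tunnel setup; the 10-second timeout is a guess, not the test's actual setting:

package main

import (
	"fmt"
	"io"
	"net/http"
	"strings"
	"time"
)

func main() {
	// A hard client timeout produces exactly the "context deadline exceeded
	// (Client.Timeout exceeded while awaiting headers)" error seen above.
	client := &http.Client{Timeout: 10 * time.Second}
	resp, err := client.Get("http://nginx-svc.default.svc.cluster.local.")
	if err != nil {
		fmt.Println("request failed:", err)
		return
	}
	defer resp.Body.Close()
	body, _ := io.ReadAll(resp.Body)
	fmt.Println("welcome page served:", strings.Contains(string(body), "Welcome to nginx!"))
}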

TestMultiControlPlane/serial/StartCluster (9.94s)

=== RUN   TestMultiControlPlane/serial/StartCluster
ha_test.go:101: (dbg) Run:  out/minikube-darwin-arm64 start -p ha-772000 --wait=true --memory=2200 --ha -v=7 --alsologtostderr --driver=qemu2 
ha_test.go:101: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p ha-772000 --wait=true --memory=2200 --ha -v=7 --alsologtostderr --driver=qemu2 : exit status 80 (9.863721041s)

-- stdout --
	* [ha-772000] minikube v1.33.0-beta.0 on Darwin 14.3.1 (arm64)
	  - MINIKUBE_LOCATION=18485
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18485-6511/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18485-6511/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "ha-772000" primary control-plane node in "ha-772000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "ha-772000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0327 16:31:25.971480    7711 out.go:291] Setting OutFile to fd 1 ...
	I0327 16:31:25.971595    7711 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0327 16:31:25.971599    7711 out.go:304] Setting ErrFile to fd 2...
	I0327 16:31:25.971601    7711 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0327 16:31:25.971738    7711 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18485-6511/.minikube/bin
	I0327 16:31:25.972840    7711 out.go:298] Setting JSON to false
	I0327 16:31:25.989020    7711 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":5456,"bootTime":1711576829,"procs":477,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.3.1","kernelVersion":"23.3.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0327 16:31:25.989081    7711 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0327 16:31:25.993808    7711 out.go:177] * [ha-772000] minikube v1.33.0-beta.0 on Darwin 14.3.1 (arm64)
	I0327 16:31:25.999671    7711 out.go:177]   - MINIKUBE_LOCATION=18485
	I0327 16:31:26.003709    7711 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/18485-6511/kubeconfig
	I0327 16:31:25.999731    7711 notify.go:220] Checking for updates...
	I0327 16:31:26.009649    7711 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0327 16:31:26.012699    7711 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0327 16:31:26.014186    7711 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18485-6511/.minikube
	I0327 16:31:26.017689    7711 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0327 16:31:26.020793    7711 driver.go:392] Setting default libvirt URI to qemu:///system
	I0327 16:31:26.024465    7711 out.go:177] * Using the qemu2 driver based on user configuration
	I0327 16:31:26.031689    7711 start.go:297] selected driver: qemu2
	I0327 16:31:26.031694    7711 start.go:901] validating driver "qemu2" against <nil>
	I0327 16:31:26.031700    7711 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0327 16:31:26.033986    7711 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0327 16:31:26.037685    7711 out.go:177] * Automatically selected the socket_vmnet network
	I0327 16:31:26.040762    7711 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0327 16:31:26.040818    7711 cni.go:84] Creating CNI manager for ""
	I0327 16:31:26.040824    7711 cni.go:136] multinode detected (0 nodes found), recommending kindnet
	I0327 16:31:26.040829    7711 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I0327 16:31:26.040857    7711 start.go:340] cluster config:
	{Name:ha-772000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1711559786-18485@sha256:2dcab64da240d825290a528fa79ad3c32db45fe5f8be5150468234a7114eff82 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.3 ClusterName:ha-772000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0327 16:31:26.045274    7711 iso.go:125] acquiring lock: {Name:mk26a7f41e004242919e6d5076ec0f9645f820ad Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0327 16:31:26.051670    7711 out.go:177] * Starting "ha-772000" primary control-plane node in "ha-772000" cluster
	I0327 16:31:26.055668    7711 preload.go:132] Checking if preload exists for k8s version v1.29.3 and runtime docker
	I0327 16:31:26.055684    7711 preload.go:147] Found local preload: /Users/jenkins/minikube-integration/18485-6511/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.29.3-docker-overlay2-arm64.tar.lz4
	I0327 16:31:26.055697    7711 cache.go:56] Caching tarball of preloaded images
	I0327 16:31:26.055768    7711 preload.go:173] Found /Users/jenkins/minikube-integration/18485-6511/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.29.3-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0327 16:31:26.055774    7711 cache.go:59] Finished verifying existence of preloaded tar for v1.29.3 on docker
	I0327 16:31:26.056040    7711 profile.go:142] Saving config to /Users/jenkins/minikube-integration/18485-6511/.minikube/profiles/ha-772000/config.json ...
	I0327 16:31:26.056052    7711 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/18485-6511/.minikube/profiles/ha-772000/config.json: {Name:mk78b4105e315974cd1ceec3003e3dee8c8b4e0a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0327 16:31:26.056273    7711 start.go:360] acquireMachinesLock for ha-772000: {Name:mk6b67ca98ff36f470ae98389325f11bd4950dc8 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0327 16:31:26.056306    7711 start.go:364] duration metric: took 27.417µs to acquireMachinesLock for "ha-772000"
	I0327 16:31:26.056319    7711 start.go:93] Provisioning new machine with config: &{Name:ha-772000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18485/minikube-v1.33.0-1711559712-18485-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1711559786-18485@sha256:2dcab64da240d825290a528fa79ad3c32db45fe5f8be5150468234a7114eff82 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.3 ClusterName:ha-772000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0327 16:31:26.056354    7711 start.go:125] createHost starting for "" (driver="qemu2")
	I0327 16:31:26.064685    7711 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0327 16:31:26.082087    7711 start.go:159] libmachine.API.Create for "ha-772000" (driver="qemu2")
	I0327 16:31:26.082139    7711 client.go:168] LocalClient.Create starting
	I0327 16:31:26.082194    7711 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18485-6511/.minikube/certs/ca.pem
	I0327 16:31:26.082226    7711 main.go:141] libmachine: Decoding PEM data...
	I0327 16:31:26.082238    7711 main.go:141] libmachine: Parsing certificate...
	I0327 16:31:26.082281    7711 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18485-6511/.minikube/certs/cert.pem
	I0327 16:31:26.082304    7711 main.go:141] libmachine: Decoding PEM data...
	I0327 16:31:26.082310    7711 main.go:141] libmachine: Parsing certificate...
	I0327 16:31:26.082701    7711 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/18485-6511/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/18485-6511/.minikube/cache/iso/arm64/minikube-v1.33.0-1711559712-18485-arm64.iso...
	I0327 16:31:26.271818    7711 main.go:141] libmachine: Creating SSH key...
	I0327 16:31:26.346363    7711 main.go:141] libmachine: Creating Disk image...
	I0327 16:31:26.346370    7711 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0327 16:31:26.346539    7711 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/18485-6511/.minikube/machines/ha-772000/disk.qcow2.raw /Users/jenkins/minikube-integration/18485-6511/.minikube/machines/ha-772000/disk.qcow2
	I0327 16:31:26.359014    7711 main.go:141] libmachine: STDOUT: 
	I0327 16:31:26.359039    7711 main.go:141] libmachine: STDERR: 
	I0327 16:31:26.359104    7711 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/18485-6511/.minikube/machines/ha-772000/disk.qcow2 +20000M
	I0327 16:31:26.369942    7711 main.go:141] libmachine: STDOUT: Image resized.
	
	I0327 16:31:26.369957    7711 main.go:141] libmachine: STDERR: 
	I0327 16:31:26.369981    7711 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/18485-6511/.minikube/machines/ha-772000/disk.qcow2.raw and /Users/jenkins/minikube-integration/18485-6511/.minikube/machines/ha-772000/disk.qcow2
	I0327 16:31:26.369985    7711 main.go:141] libmachine: Starting QEMU VM...
	I0327 16:31:26.370012    7711 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18485-6511/.minikube/machines/ha-772000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18485-6511/.minikube/machines/ha-772000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18485-6511/.minikube/machines/ha-772000/qemu.pid -device virtio-net-pci,netdev=net0,mac=c6:d0:ea:68:78:4f -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18485-6511/.minikube/machines/ha-772000/disk.qcow2
	I0327 16:31:26.371785    7711 main.go:141] libmachine: STDOUT: 
	I0327 16:31:26.371801    7711 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0327 16:31:26.371827    7711 client.go:171] duration metric: took 289.682292ms to LocalClient.Create
	I0327 16:31:28.374034    7711 start.go:128] duration metric: took 2.3177175s to createHost
	I0327 16:31:28.374085    7711 start.go:83] releasing machines lock for "ha-772000", held for 2.317838833s
	W0327 16:31:28.374136    7711 start.go:713] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0327 16:31:28.385146    7711 out.go:177] * Deleting "ha-772000" in qemu2 ...
	W0327 16:31:28.411986    7711 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0327 16:31:28.412016    7711 start.go:728] Will try again in 5 seconds ...
	I0327 16:31:33.414096    7711 start.go:360] acquireMachinesLock for ha-772000: {Name:mk6b67ca98ff36f470ae98389325f11bd4950dc8 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0327 16:31:33.414553    7711 start.go:364] duration metric: took 308.583µs to acquireMachinesLock for "ha-772000"
	I0327 16:31:33.414698    7711 start.go:93] Provisioning new machine with config: &{Name:ha-772000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18485/minikube-v1.33.0-1711559712-18485-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1711559786-18485@sha256:2dcab64da240d825290a528fa79ad3c32db45fe5f8be5150468234a7114eff82 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.3 ClusterName:ha-772000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0327 16:31:33.414987    7711 start.go:125] createHost starting for "" (driver="qemu2")
	I0327 16:31:33.424531    7711 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0327 16:31:33.474075    7711 start.go:159] libmachine.API.Create for "ha-772000" (driver="qemu2")
	I0327 16:31:33.474125    7711 client.go:168] LocalClient.Create starting
	I0327 16:31:33.474235    7711 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18485-6511/.minikube/certs/ca.pem
	I0327 16:31:33.474305    7711 main.go:141] libmachine: Decoding PEM data...
	I0327 16:31:33.474326    7711 main.go:141] libmachine: Parsing certificate...
	I0327 16:31:33.474391    7711 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18485-6511/.minikube/certs/cert.pem
	I0327 16:31:33.474434    7711 main.go:141] libmachine: Decoding PEM data...
	I0327 16:31:33.474449    7711 main.go:141] libmachine: Parsing certificate...
	I0327 16:31:33.474958    7711 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/18485-6511/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/18485-6511/.minikube/cache/iso/arm64/minikube-v1.33.0-1711559712-18485-arm64.iso...
	I0327 16:31:33.619715    7711 main.go:141] libmachine: Creating SSH key...
	I0327 16:31:33.727317    7711 main.go:141] libmachine: Creating Disk image...
	I0327 16:31:33.727322    7711 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0327 16:31:33.727766    7711 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/18485-6511/.minikube/machines/ha-772000/disk.qcow2.raw /Users/jenkins/minikube-integration/18485-6511/.minikube/machines/ha-772000/disk.qcow2
	I0327 16:31:33.741247    7711 main.go:141] libmachine: STDOUT: 
	I0327 16:31:33.741269    7711 main.go:141] libmachine: STDERR: 
	I0327 16:31:33.741329    7711 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/18485-6511/.minikube/machines/ha-772000/disk.qcow2 +20000M
	I0327 16:31:33.752068    7711 main.go:141] libmachine: STDOUT: Image resized.
	
	I0327 16:31:33.752083    7711 main.go:141] libmachine: STDERR: 
	I0327 16:31:33.752095    7711 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/18485-6511/.minikube/machines/ha-772000/disk.qcow2.raw and /Users/jenkins/minikube-integration/18485-6511/.minikube/machines/ha-772000/disk.qcow2
	I0327 16:31:33.752101    7711 main.go:141] libmachine: Starting QEMU VM...
	I0327 16:31:33.752155    7711 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18485-6511/.minikube/machines/ha-772000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18485-6511/.minikube/machines/ha-772000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18485-6511/.minikube/machines/ha-772000/qemu.pid -device virtio-net-pci,netdev=net0,mac=76:5c:d0:48:85:57 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18485-6511/.minikube/machines/ha-772000/disk.qcow2
	I0327 16:31:33.753859    7711 main.go:141] libmachine: STDOUT: 
	I0327 16:31:33.753875    7711 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0327 16:31:33.753888    7711 client.go:171] duration metric: took 279.766125ms to LocalClient.Create
	I0327 16:31:35.756047    7711 start.go:128] duration metric: took 2.341089458s to createHost
	I0327 16:31:35.756147    7711 start.go:83] releasing machines lock for "ha-772000", held for 2.341634542s
	W0327 16:31:35.756612    7711 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p ha-772000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p ha-772000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0327 16:31:35.771283    7711 out.go:177] 
	W0327 16:31:35.774444    7711 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0327 16:31:35.774487    7711 out.go:239] * 
	* 
	W0327 16:31:35.776924    7711 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0327 16:31:35.789234    7711 out.go:177] 

** /stderr **
ha_test.go:103: failed to fresh-start ha (multi-control plane) cluster. args "out/minikube-darwin-arm64 start -p ha-772000 --wait=true --memory=2200 --ha -v=7 --alsologtostderr --driver=qemu2 " : exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-772000 -n ha-772000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-772000 -n ha-772000: exit status 7 (70.689958ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "ha-772000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/StartCluster (9.94s)
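
Both start attempts die at the same step: QEMU is launched through socket_vmnet_client, which gets "Connection refused" on the socket_vmnet daemon's unix socket, so no VM ever boots. A small host-side probe for that precondition; the socket path comes from the log, but the probe itself is illustrative tooling, not part of minikube:

package main

import (
	"fmt"
	"net"
	"time"
)

func main() {
	// If nothing is listening here, minikube's qemu2 driver fails the
	// same way the log above shows.
	const sock = "/var/run/socket_vmnet"
	conn, err := net.DialTimeout("unix", sock, 2*time.Second)
	if err != nil {
		fmt.Println("socket_vmnet not reachable:", err)
		return
	}
	conn.Close()
	fmt.Println("socket_vmnet is listening")
}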

TestMultiControlPlane/serial/DeployApp (110.01s)

=== RUN   TestMultiControlPlane/serial/DeployApp
ha_test.go:128: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-772000 -- apply -f ./testdata/ha/ha-pod-dns-test.yaml
ha_test.go:128: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p ha-772000 -- apply -f ./testdata/ha/ha-pod-dns-test.yaml: exit status 1 (61.787542ms)

** stderr ** 
	error: cluster "ha-772000" does not exist

** /stderr **
ha_test.go:130: failed to create busybox deployment to ha (multi-control plane) cluster
ha_test.go:133: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-772000 -- rollout status deployment/busybox
ha_test.go:133: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p ha-772000 -- rollout status deployment/busybox: exit status 1 (59.36225ms)

** stderr ** 
	error: no server found for cluster "ha-772000"

** /stderr **
ha_test.go:135: failed to deploy busybox to ha (multi-control plane) cluster
ha_test.go:140: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-772000 -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:140: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p ha-772000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (59.188916ms)

** stderr ** 
	error: no server found for cluster "ha-772000"

** /stderr **
ha_test.go:143: failed to retrieve Pod IPs (may be temporary): exit status 1
ha_test.go:140: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-772000 -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:140: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p ha-772000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (106.391583ms)

** stderr ** 
	error: no server found for cluster "ha-772000"

** /stderr **
ha_test.go:143: failed to retrieve Pod IPs (may be temporary): exit status 1
ha_test.go:140: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-772000 -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:140: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p ha-772000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (109.371833ms)

** stderr ** 
	error: no server found for cluster "ha-772000"

** /stderr **
ha_test.go:143: failed to retrieve Pod IPs (may be temporary): exit status 1
ha_test.go:140: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-772000 -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:140: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p ha-772000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (106.614958ms)

** stderr ** 
	error: no server found for cluster "ha-772000"

** /stderr **
ha_test.go:143: failed to retrieve Pod IPs (may be temporary): exit status 1
ha_test.go:140: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-772000 -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:140: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p ha-772000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (103.249875ms)

** stderr ** 
	error: no server found for cluster "ha-772000"

** /stderr **
ha_test.go:143: failed to retrieve Pod IPs (may be temporary): exit status 1
ha_test.go:140: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-772000 -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:140: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p ha-772000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (107.984083ms)

** stderr ** 
	error: no server found for cluster "ha-772000"

** /stderr **
ha_test.go:143: failed to retrieve Pod IPs (may be temporary): exit status 1
ha_test.go:140: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-772000 -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:140: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p ha-772000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (105.222625ms)

** stderr ** 
	error: no server found for cluster "ha-772000"

** /stderr **
ha_test.go:143: failed to retrieve Pod IPs (may be temporary): exit status 1
ha_test.go:140: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-772000 -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:140: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p ha-772000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (105.682917ms)

** stderr ** 
	error: no server found for cluster "ha-772000"

** /stderr **
ha_test.go:143: failed to retrieve Pod IPs (may be temporary): exit status 1
ha_test.go:140: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-772000 -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:140: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p ha-772000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (107.414875ms)

** stderr ** 
	error: no server found for cluster "ha-772000"

** /stderr **
ha_test.go:143: failed to retrieve Pod IPs (may be temporary): exit status 1
ha_test.go:140: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-772000 -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:140: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p ha-772000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (105.587209ms)

** stderr ** 
	error: no server found for cluster "ha-772000"

** /stderr **
ha_test.go:143: failed to retrieve Pod IPs (may be temporary): exit status 1
ha_test.go:140: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-772000 -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:140: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p ha-772000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (107.851625ms)

** stderr ** 
	error: no server found for cluster "ha-772000"

** /stderr **
ha_test.go:143: failed to retrieve Pod IPs (may be temporary): exit status 1
ha_test.go:159: failed to resolve pod IPs: failed to retrieve Pod IPs (may be temporary): exit status 1
ha_test.go:163: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-772000 -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:163: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p ha-772000 -- get pods -o jsonpath='{.items[*].metadata.name}': exit status 1 (59.138916ms)

** stderr ** 
	error: no server found for cluster "ha-772000"

** /stderr **
ha_test.go:165: failed get Pod names
ha_test.go:171: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-772000 -- exec  -- nslookup kubernetes.io
ha_test.go:171: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p ha-772000 -- exec  -- nslookup kubernetes.io: exit status 1 (58.891458ms)

** stderr ** 
	error: no server found for cluster "ha-772000"

** /stderr **
ha_test.go:173: Pod  could not resolve 'kubernetes.io': exit status 1
ha_test.go:181: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-772000 -- exec  -- nslookup kubernetes.default
ha_test.go:181: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p ha-772000 -- exec  -- nslookup kubernetes.default: exit status 1 (59.286041ms)

** stderr ** 
	error: no server found for cluster "ha-772000"

** /stderr **
ha_test.go:183: Pod  could not resolve 'kubernetes.default': exit status 1
ha_test.go:189: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-772000 -- exec  -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p ha-772000 -- exec  -- nslookup kubernetes.default.svc.cluster.local: exit status 1 (58.761375ms)

** stderr ** 
	error: no server found for cluster "ha-772000"

** /stderr **
ha_test.go:191: Pod  could not resolve local service (kubernetes.default.svc.cluster.local): exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-772000 -n ha-772000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-772000 -n ha-772000: exit status 7 (32.496875ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "ha-772000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/DeployApp (110.01s)
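
The block above is a bounded retry loop: the test polls pod IPs through kubectl, logs each failure as "may be temporary", and only gives up once its attempts are exhausted. A sketch of that pattern, with the kubectl invocation adapted from the log and the attempt count and interval assumed rather than taken from ha_test.go:

package main

import (
	"fmt"
	"os/exec"
	"time"
)

func main() {
	args := []string{"--context", "ha-772000", "get", "pods",
		"-o", "jsonpath={.items[*].status.podIP}"}
	for attempt := 1; attempt <= 12; attempt++ {
		out, err := exec.Command("kubectl", args...).CombinedOutput()
		if err == nil {
			fmt.Printf("pod IPs: %s\n", out)
			return
		}
		// Matches the repeated "failed to retrieve Pod IPs (may be temporary)" lines above.
		fmt.Printf("attempt %d failed (may be temporary): %v\n", attempt, err)
		time.Sleep(10 * time.Second)
	}
	fmt.Println("giving up: failed to resolve pod IPs")
}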

TestMultiControlPlane/serial/PingHostFromPods (0.09s)

=== RUN   TestMultiControlPlane/serial/PingHostFromPods
ha_test.go:199: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-772000 -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:199: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p ha-772000 -- get pods -o jsonpath='{.items[*].metadata.name}': exit status 1 (58.645125ms)

** stderr ** 
	error: no server found for cluster "ha-772000"

** /stderr **
ha_test.go:201: failed to get Pod names: exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-772000 -n ha-772000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-772000 -n ha-772000: exit status 7 (32.043667ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "ha-772000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/PingHostFromPods (0.09s)

TestMultiControlPlane/serial/AddWorkerNode (0.08s)

=== RUN   TestMultiControlPlane/serial/AddWorkerNode
ha_test.go:228: (dbg) Run:  out/minikube-darwin-arm64 node add -p ha-772000 -v=7 --alsologtostderr
ha_test.go:228: (dbg) Non-zero exit: out/minikube-darwin-arm64 node add -p ha-772000 -v=7 --alsologtostderr: exit status 83 (43.360667ms)

-- stdout --
	* The control-plane node ha-772000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p ha-772000"

-- /stdout --
** stderr ** 
	I0327 16:33:26.000811    7813 out.go:291] Setting OutFile to fd 1 ...
	I0327 16:33:26.001154    7813 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0327 16:33:26.001158    7813 out.go:304] Setting ErrFile to fd 2...
	I0327 16:33:26.001160    7813 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0327 16:33:26.001301    7813 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18485-6511/.minikube/bin
	I0327 16:33:26.001528    7813 mustload.go:65] Loading cluster: ha-772000
	I0327 16:33:26.001725    7813 config.go:182] Loaded profile config "ha-772000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.29.3
	I0327 16:33:26.005350    7813 out.go:177] * The control-plane node ha-772000 host is not running: state=Stopped
	I0327 16:33:26.009251    7813 out.go:177]   To start a cluster, run: "minikube start -p ha-772000"

** /stderr **
ha_test.go:230: failed to add worker node to current ha (multi-control plane) cluster. args "out/minikube-darwin-arm64 node add -p ha-772000 -v=7 --alsologtostderr" : exit status 83
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-772000 -n ha-772000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-772000 -n ha-772000: exit status 7 (32.159375ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "ha-772000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/AddWorkerNode (0.08s)

TestMultiControlPlane/serial/NodeLabels (0.06s)

=== RUN   TestMultiControlPlane/serial/NodeLabels
ha_test.go:255: (dbg) Run:  kubectl --context ha-772000 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
ha_test.go:255: (dbg) Non-zero exit: kubectl --context ha-772000 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]": exit status 1 (27.460458ms)

** stderr ** 
	Error in configuration: context was not found for specified context: ha-772000

** /stderr **
ha_test.go:257: failed to 'kubectl get nodes' with args "kubectl --context ha-772000 get nodes -o \"jsonpath=[{range .items[*]}{.metadata.labels},{end}]\"": exit status 1
ha_test.go:264: failed to decode json from label list: args "kubectl --context ha-772000 get nodes -o \"jsonpath=[{range .items[*]}{.metadata.labels},{end}]\"": unexpected end of JSON input
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-772000 -n ha-772000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-772000 -n ha-772000: exit status 7 (32.306125ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "ha-772000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/NodeLabels (0.06s)

TestMultiControlPlane/serial/HAppyAfterClusterStart (0.1s)

=== RUN   TestMultiControlPlane/serial/HAppyAfterClusterStart
ha_test.go:281: (dbg) Run:  out/minikube-darwin-arm64 profile list --output json
ha_test.go:304: expected profile "ha-772000" in json of 'profile list' to include 4 nodes but have 1 nodes. got *"{\"invalid\":[],\"valid\":[{\"Name\":\"ha-772000\",\"Status\":\"Stopped\",\"Config\":{\"Name\":\"ha-772000\",\"KeepContext\":false,\"EmbedCerts\":false,\"MinikubeISO\":\"https://storage.googleapis.com/minikube-builds/iso/18485/minikube-v1.33.0-1711559712-18485-arm64.iso\",\"KicBaseImage\":\"gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1711559786-18485@sha256:2dcab64da240d825290a528fa79ad3c32db45fe5f8be5150468234a7114eff82\",\"Memory\":2200,\"CPUs\":2,\"DiskSize\":20000,\"Driver\":\"qemu2\",\"HyperkitVpnKitSock\":\"\",\"HyperkitVSockPorts\":[],\"DockerEnv\":null,\"ContainerVolumeMounts\":null,\"InsecureRegistry\":null,\"RegistryMirror\":[],\"HostOnlyCIDR\":\"192.168.59.1/24\",\"HypervVirtualSwitch\":\"\",\"HypervUseExternalSwitch\":false,\"HypervExternalAdapter\":\"\",\"KVMNetwork\":\"default\",\"KVMQemuURI\":\"qemu:///system\",\"KVMGPU\":false,\"KVMHidden\":false,\"KVMNUMACount\":1,\"APIServerPort\":8443,\"DockerOpt\":null,\"DisableDriverMounts\":false,\"NFSShare\":[],\"NFSSharesRoot\":\"/nfsshares\",\"UUID\":\"\",\"NoVTXCheck\":false,\"DNSProxy\":false,\"HostDNSResolver\":true,\"HostOnlyNicType\":\"virtio\",\"NatNicType\":\"virtio\",\"SSHIPAddress\":\"\",\"SSHUser\":\"root\",\"SSHKey\":\"\",\"SSHPort\":22,\"KubernetesConfig\":{\"KubernetesVersion\":\"v1.29.3\",\"ClusterName\":\"ha-772000\",\"Namespace\":\"default\",\"APIServerHAVIP\":\"\",\"APIServerName\":\"minikubeCA\",\"APIServerNames\":null,\"APIServerIPs\":null,\"DNSDomain\":\"cluster.local\",\"ContainerRuntime\":\"docker\",\"CRISocket\":\"\",\"NetworkPlugin\":\"cni\",\"FeatureGates\":\"\",\"ServiceCIDR\":\"10.96.0.0/12\",\"ImageRepository\":\"\",\"LoadBalancerStartIP\":\"\",\"LoadBalancerEndIP\":\"\",\"CustomIngressCert\":\"\",\"RegistryAliases\":\"\",\"ExtraOptions\":null,\"ShouldLoadCachedImages\":true,\"EnableDefaultCNI\":false,\"CNI\":\"\"},\"Nodes\":[{\"Name\":\"\",\"IP\":\"\",\"Port\":8443,\"KubernetesVersion\":\"v1.29.3\",\"ContainerRuntime\":\"docker\",\"ControlPlane\":true,\"Worker\":true}],\"Addons\":null,\"CustomAddonImages\":null,\"CustomAddonRegistries\":null,\"VerifyComponents\":{\"apiserver\":true,\"apps_running\":true,\"default_sa\":true,\"extra\":true,\"kubelet\":true,\"node_ready\":true,\"system_pods\":true},\"StartHostTimeout\":360000000000,\"ScheduledStop\":null,\"ExposedPorts\":[],\"ListenAddress\":\"\",\"Network\":\"socket_vmnet\",\"Subnet\":\"\",\"MultiNodeRequested\":true,\"ExtraDisks\":0,\"CertExpiration\":94608000000000000,\"Mount\":false,\"MountString\":\"/Users:/minikube-host\",\"Mount9PVersion\":\"9p2000.L\",\"MountGID\":\"docker\",\"MountIP\":\"\",\"MountMSize\":262144,\"MountOptions\":[],\"MountPort\":0,\"MountType\":\"9p\",\"MountUID\":\"docker\",\"BinaryMirror\":\"\",\"DisableOptimizations\":false,\"DisableMetrics\":false,\"CustomQemuFirmwarePath\":\"\",\"SocketVMnetClientPath\":\"/opt/socket_vmnet/bin/socket_vmnet_client\",\"SocketVMnetPath\":\"/var/run/socket_vmnet\",\"StaticIP\":\"\",\"SSHAuthSock\":\"\",\"SSHAgentPID\":0,\"GPUs\":\"\",\"AutoPauseInterval\":60000000000},\"Active\":false}]}"*. args: "out/minikube-darwin-arm64 profile list --output json"
ha_test.go:307: expected profile "ha-772000" in json of 'profile list' to have "HAppy" status but have "Stopped" status. got *"{\"invalid\":[],\"valid\":[{\"Name\":\"ha-772000\",\"Status\":\"Stopped\",\"Config\":{\"Name\":\"ha-772000\",\"KeepContext\":false,\"EmbedCerts\":false,\"MinikubeISO\":\"https://storage.googleapis.com/minikube-builds/iso/18485/minikube-v1.33.0-1711559712-18485-arm64.iso\",\"KicBaseImage\":\"gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1711559786-18485@sha256:2dcab64da240d825290a528fa79ad3c32db45fe5f8be5150468234a7114eff82\",\"Memory\":2200,\"CPUs\":2,\"DiskSize\":20000,\"Driver\":\"qemu2\",\"HyperkitVpnKitSock\":\"\",\"HyperkitVSockPorts\":[],\"DockerEnv\":null,\"ContainerVolumeMounts\":null,\"InsecureRegistry\":null,\"RegistryMirror\":[],\"HostOnlyCIDR\":\"192.168.59.1/24\",\"HypervVirtualSwitch\":\"\",\"HypervUseExternalSwitch\":false,\"HypervExternalAdapter\":\"\",\"KVMNetwork\":\"default\",\"KVMQemuURI\":\"qemu:///system\",\"KVMGPU\":false,\"KVMHidden\":false,\"KVMNUMACount\":1,\"APIServerPort\":8443,\"DockerOpt\":null,\"DisableDriverMounts\":false,\"NFSShare\":[],\"NFSSharesRoot\":\"/nfsshares\",\"UUID\":\"\",\"NoVTXCheck\":false,\"DNSProxy\":false,\"HostDNSResolver\":true,\"HostOnlyNicType\":\"virtio\",\"NatNicType\":\"virtio\",\"SSHIPAddress\":\"\",\"SSHUser\":\"root\",\"SSHKey\":\"\",\"SSHPort\":22,\"KubernetesConfig\":{\"KubernetesVersion\":\"v1.29.3\",\"ClusterName\":\"ha-772000\",\"Namespace\":\"default\",\"APIServerHAVIP\":\"\",\"APIServerName\":\"minikubeCA\",\"APIServerNames\":null,\"APIServerIPs\":null,\"DNSDomain\":\"cluster.local\",\"ContainerRuntime\":\"docker\",\"CRISocket\":\"\",\"NetworkPlugin\":\"cni\",\"FeatureGates\":\"\",\"ServiceCIDR\":\"10.96.0.0/12\",\"ImageRepository\":\"\",\"LoadBalancerStartIP\":\"\",\"LoadBalancerEndIP\":\"\",\"CustomIngressCert\":\"\",\"RegistryAliases\":\"\",\"ExtraOptions\":null,\"ShouldLoadCachedImages\":true,\"EnableDefaultCNI\":false,\"CNI\":\"\"},\"Nodes\":[{\"Name\":\"\",\"IP\":\"\",\"Port\":8443,\"KubernetesVersion\":\"v1.29.3\",\"ContainerRuntime\":\"docker\",\"ControlPlane\":true,\"Worker\":true}],\"Addons\":null,\"CustomAddonImages\":null,\"CustomAddonRegistries\":null,\"VerifyComponents\":{\"apiserver\":true,\"apps_running\":true,\"default_sa\":true,\"extra\":true,\"kubelet\":true,\"node_ready\":true,\"system_pods\":true},\"StartHostTimeout\":360000000000,\"ScheduledStop\":null,\"ExposedPorts\":[],\"ListenAddress\":\"\",\"Network\":\"socket_vmnet\",\"Subnet\":\"\",\"MultiNodeRequested\":true,\"ExtraDisks\":0,\"CertExpiration\":94608000000000000,\"Mount\":false,\"MountString\":\"/Users:/minikube-host\",\"Mount9PVersion\":\"9p2000.L\",\"MountGID\":\"docker\",\"MountIP\":\"\",\"MountMSize\":262144,\"MountOptions\":[],\"MountPort\":0,\"MountType\":\"9p\",\"MountUID\":\"docker\",\"BinaryMirror\":\"\",\"DisableOptimizations\":false,\"DisableMetrics\":false,\"CustomQemuFirmwarePath\":\"\",\"SocketVMnetClientPath\":\"/opt/socket_vmnet/bin/socket_vmnet_client\",\"SocketVMnetPath\":\"/var/run/socket_vmnet\",\"StaticIP\":\"\",\"SSHAuthSock\":\"\",\"SSHAgentPID\":0,\"GPUs\":\"\",\"AutoPauseInterval\":60000000000},\"Active\":false}]}"*. args: "out/minikube-darwin-arm64 profile list --output json"
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-772000 -n ha-772000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-772000 -n ha-772000: exit status 7 (31.834667ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "ha-772000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/HAppyAfterClusterStart (0.10s)
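
The two assertions above decode the quoted 'profile list' JSON and check node count and status for ha-772000: an HA start should report 4 nodes and "HAppy", while this run reports 1 node and "Stopped". A minimal decoder for that JSON shape; the struct fields are inferred from the quoted output, not taken from minikube's source:

package main

import (
	"encoding/json"
	"fmt"
	"os/exec"
)

// profileList mirrors only the fields this check needs.
type profileList struct {
	Valid []struct {
		Name   string `json:"Name"`
		Status string `json:"Status"`
		Config struct {
			Nodes []struct {
				ControlPlane bool `json:"ControlPlane"`
			} `json:"Nodes"`
		} `json:"Config"`
	} `json:"valid"`
}

func main() {
	out, err := exec.Command("out/minikube-darwin-arm64",
		"profile", "list", "--output", "json").Output()
	if err != nil {
		fmt.Println("profile list failed:", err)
		return
	}
	var pl profileList
	if err := json.Unmarshal(out, &pl); err != nil {
		fmt.Println("decode failed:", err)
		return
	}
	for _, p := range pl.Valid {
		// A healthy HA cluster should show 4 nodes; the failed run above shows 1.
		fmt.Printf("%s: status=%s nodes=%d\n", p.Name, p.Status, len(p.Config.Nodes))
	}
}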

TestMultiControlPlane/serial/CopyFile (0.06s)

=== RUN   TestMultiControlPlane/serial/CopyFile
ha_test.go:326: (dbg) Run:  out/minikube-darwin-arm64 -p ha-772000 status --output json -v=7 --alsologtostderr
ha_test.go:326: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p ha-772000 status --output json -v=7 --alsologtostderr: exit status 7 (31.606167ms)

-- stdout --
	{"Name":"ha-772000","Host":"Stopped","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Stopped","Worker":false}

-- /stdout --
** stderr ** 
	I0327 16:33:26.241279    7826 out.go:291] Setting OutFile to fd 1 ...
	I0327 16:33:26.241430    7826 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0327 16:33:26.241433    7826 out.go:304] Setting ErrFile to fd 2...
	I0327 16:33:26.241435    7826 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0327 16:33:26.241571    7826 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18485-6511/.minikube/bin
	I0327 16:33:26.241692    7826 out.go:298] Setting JSON to true
	I0327 16:33:26.241703    7826 mustload.go:65] Loading cluster: ha-772000
	I0327 16:33:26.241757    7826 notify.go:220] Checking for updates...
	I0327 16:33:26.241897    7826 config.go:182] Loaded profile config "ha-772000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.29.3
	I0327 16:33:26.241903    7826 status.go:255] checking status of ha-772000 ...
	I0327 16:33:26.242106    7826 status.go:330] ha-772000 host status = "Stopped" (err=<nil>)
	I0327 16:33:26.242109    7826 status.go:343] host is not running, skipping remaining checks
	I0327 16:33:26.242112    7826 status.go:257] ha-772000 status: &{Name:ha-772000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
ha_test.go:333: failed to decode json from status: args "out/minikube-darwin-arm64 -p ha-772000 status --output json -v=7 --alsologtostderr": json: cannot unmarshal object into Go value of type []cmd.Status
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-772000 -n ha-772000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-772000 -n ha-772000: exit status 7 (31.680208ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "ha-772000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/CopyFile (0.06s)
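
The unmarshal error above is a shape mismatch: with a single stopped node, "minikube status --output json" emits one JSON object, while the HA test decodes into a slice ([]cmd.Status). An illustrative tolerant decoder that accepts either shape; this Status struct is a hypothetical subset of the fields visible in the log, not minikube's cmd.Status:

package main

import (
	"encoding/json"
	"fmt"
)

type Status struct {
	Name string `json:"Name"`
	Host string `json:"Host"`
}

// decodeStatuses first tries the multi-node array form, then falls back to
// the single-object form that tripped the test above.
func decodeStatuses(data []byte) ([]Status, error) {
	var many []Status
	if err := json.Unmarshal(data, &many); err == nil {
		return many, nil
	}
	var one Status
	if err := json.Unmarshal(data, &one); err != nil {
		return nil, err
	}
	return []Status{one}, nil
}

func main() {
	raw := []byte(`{"Name":"ha-772000","Host":"Stopped"}`) // shape from the log
	sts, err := decodeStatuses(raw)
	fmt.Println(sts, err)
}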

TestMultiControlPlane/serial/StopSecondaryNode (0.12s)

=== RUN   TestMultiControlPlane/serial/StopSecondaryNode
ha_test.go:363: (dbg) Run:  out/minikube-darwin-arm64 -p ha-772000 node stop m02 -v=7 --alsologtostderr
ha_test.go:363: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p ha-772000 node stop m02 -v=7 --alsologtostderr: exit status 85 (52.878334ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	I0327 16:33:26.305708    7830 out.go:291] Setting OutFile to fd 1 ...
	I0327 16:33:26.306118    7830 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0327 16:33:26.306122    7830 out.go:304] Setting ErrFile to fd 2...
	I0327 16:33:26.306124    7830 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0327 16:33:26.306272    7830 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18485-6511/.minikube/bin
	I0327 16:33:26.306545    7830 mustload.go:65] Loading cluster: ha-772000
	I0327 16:33:26.306741    7830 config.go:182] Loaded profile config "ha-772000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.29.3
	I0327 16:33:26.311178    7830 out.go:177] 
	W0327 16:33:26.314266    7830 out.go:239] X Exiting due to GUEST_NODE_RETRIEVE: retrieving node: Could not find node m02
	X Exiting due to GUEST_NODE_RETRIEVE: retrieving node: Could not find node m02
	W0327 16:33:26.314271    7830 out.go:239] * 
	* 
	W0327 16:33:26.317505    7830 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                         │
	│    * If the above advice does not help, please let us know:                                                             │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
	│                                                                                                                         │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
	│    * Please also attach the following file to the GitHub issue:                                                         │
	│    * - /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/minikube_node_8ce24bb09be8aab84076d51946735f62cbf80299_0.log    │
	│                                                                                                                         │
	╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                         │
	│    * If the above advice does not help, please let us know:                                                             │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
	│                                                                                                                         │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
	│    * Please also attach the following file to the GitHub issue:                                                         │
	│    * - /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/minikube_node_8ce24bb09be8aab84076d51946735f62cbf80299_0.log    │
	│                                                                                                                         │
	╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
	I0327 16:33:26.322277    7830 out.go:177] 

** /stderr **
ha_test.go:365: secondary control-plane node stop returned an error. args "out/minikube-darwin-arm64 -p ha-772000 node stop m02 -v=7 --alsologtostderr": exit status 85
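Note on `Could not find node m02` (exit status 85): StartCluster never got past the socket_vmnet failure, so the saved profile contains a single, unnamed node (see the "Nodes":[{"Name":"",...}] entry in the profile JSON later in this report). A lookup along these lines (an assumed shape, not minikube's actual code) then finds no m02 to stop:

	package main

	import "fmt"

	// Node models just the fields relevant to the lookup.
	type Node struct {
		Name         string
		ControlPlane bool
	}

	// find scans the profile's node list for a node by name.
	func find(nodes []Node, name string) (Node, bool) {
		for _, n := range nodes {
			if n.Name == name {
				return n, true
			}
		}
		return Node{}, false
	}

	func main() {
		nodes := []Node{{Name: "", ControlPlane: true}} // only the primary was ever saved
		if _, ok := find(nodes, "m02"); !ok {
			fmt.Println("Could not find node m02") // hence GUEST_NODE_RETRIEVE, exit status 85
		}
	}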
ha_test.go:369: (dbg) Run:  out/minikube-darwin-arm64 -p ha-772000 status -v=7 --alsologtostderr
ha_test.go:369: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p ha-772000 status -v=7 --alsologtostderr: exit status 7 (31.945291ms)

-- stdout --
	ha-772000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
** stderr ** 
	I0327 16:33:26.358483    7832 out.go:291] Setting OutFile to fd 1 ...
	I0327 16:33:26.358626    7832 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0327 16:33:26.358629    7832 out.go:304] Setting ErrFile to fd 2...
	I0327 16:33:26.358631    7832 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0327 16:33:26.358745    7832 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18485-6511/.minikube/bin
	I0327 16:33:26.358876    7832 out.go:298] Setting JSON to false
	I0327 16:33:26.358887    7832 mustload.go:65] Loading cluster: ha-772000
	I0327 16:33:26.358946    7832 notify.go:220] Checking for updates...
	I0327 16:33:26.359110    7832 config.go:182] Loaded profile config "ha-772000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.29.3
	I0327 16:33:26.359115    7832 status.go:255] checking status of ha-772000 ...
	I0327 16:33:26.359310    7832 status.go:330] ha-772000 host status = "Stopped" (err=<nil>)
	I0327 16:33:26.359313    7832 status.go:343] host is not running, skipping remaining checks
	I0327 16:33:26.359316    7832 status.go:257] ha-772000 status: &{Name:ha-772000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
ha_test.go:375: status says not all three control-plane nodes are present: args "out/minikube-darwin-arm64 -p ha-772000 status -v=7 --alsologtostderr": ha-772000
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped

ha_test.go:378: status says not three hosts are running: args "out/minikube-darwin-arm64 -p ha-772000 status -v=7 --alsologtostderr": ha-772000
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped

ha_test.go:381: status says not three kubelets are running: args "out/minikube-darwin-arm64 -p ha-772000 status -v=7 --alsologtostderr": ha-772000
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped

ha_test.go:384: status says not two apiservers are running: args "out/minikube-darwin-arm64 -p ha-772000 status -v=7 --alsologtostderr": ha-772000
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped

helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-772000 -n ha-772000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-772000 -n ha-772000: exit status 7 (32.370916ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "ha-772000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/StopSecondaryNode (0.12s)

TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (0.11s)

=== RUN   TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop
ha_test.go:390: (dbg) Run:  out/minikube-darwin-arm64 profile list --output json
ha_test.go:413: expected profile "ha-772000" in json of 'profile list' to have "Degraded" status but have "Stopped" status. got *"{\"invalid\":[],\"valid\":[{\"Name\":\"ha-772000\",\"Status\":\"Stopped\",\"Config\":{\"Name\":\"ha-772000\",\"KeepContext\":false,\"EmbedCerts\":false,\"MinikubeISO\":\"https://storage.googleapis.com/minikube-builds/iso/18485/minikube-v1.33.0-1711559712-18485-arm64.iso\",\"KicBaseImage\":\"gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1711559786-18485@sha256:2dcab64da240d825290a528fa79ad3c32db45fe5f8be5150468234a7114eff82\",\"Memory\":2200,\"CPUs\":2,\"DiskSize\":20000,\"Driver\":\"qemu2\",\"HyperkitVpnKitSock\":\"\",\"HyperkitVSockPorts\":[],\"DockerEnv\":null,\"ContainerVolumeMounts\":null,\"InsecureRegistry\":null,\"RegistryMirror\":[],\"HostOnlyCIDR\":\"192.168.59.1/24\",\"HypervVirtualSwitch\":\"\",\"HypervUseExternalSwitch\":false,\"HypervExternalAdapter\":\"\",\"KVMNetwork\":\"default\",\"KVMQemuURI\":\"qemu:///system\",\"KVMGPU\":false,\"KVMHidden\":false,\"KVMNUMACount\"
:1,\"APIServerPort\":8443,\"DockerOpt\":null,\"DisableDriverMounts\":false,\"NFSShare\":[],\"NFSSharesRoot\":\"/nfsshares\",\"UUID\":\"\",\"NoVTXCheck\":false,\"DNSProxy\":false,\"HostDNSResolver\":true,\"HostOnlyNicType\":\"virtio\",\"NatNicType\":\"virtio\",\"SSHIPAddress\":\"\",\"SSHUser\":\"root\",\"SSHKey\":\"\",\"SSHPort\":22,\"KubernetesConfig\":{\"KubernetesVersion\":\"v1.29.3\",\"ClusterName\":\"ha-772000\",\"Namespace\":\"default\",\"APIServerHAVIP\":\"\",\"APIServerName\":\"minikubeCA\",\"APIServerNames\":null,\"APIServerIPs\":null,\"DNSDomain\":\"cluster.local\",\"ContainerRuntime\":\"docker\",\"CRISocket\":\"\",\"NetworkPlugin\":\"cni\",\"FeatureGates\":\"\",\"ServiceCIDR\":\"10.96.0.0/12\",\"ImageRepository\":\"\",\"LoadBalancerStartIP\":\"\",\"LoadBalancerEndIP\":\"\",\"CustomIngressCert\":\"\",\"RegistryAliases\":\"\",\"ExtraOptions\":null,\"ShouldLoadCachedImages\":true,\"EnableDefaultCNI\":false,\"CNI\":\"\"},\"Nodes\":[{\"Name\":\"\",\"IP\":\"\",\"Port\":8443,\"KubernetesVersion\":\"v1.29.3
\",\"ContainerRuntime\":\"docker\",\"ControlPlane\":true,\"Worker\":true}],\"Addons\":null,\"CustomAddonImages\":null,\"CustomAddonRegistries\":null,\"VerifyComponents\":{\"apiserver\":true,\"apps_running\":true,\"default_sa\":true,\"extra\":true,\"kubelet\":true,\"node_ready\":true,\"system_pods\":true},\"StartHostTimeout\":360000000000,\"ScheduledStop\":null,\"ExposedPorts\":[],\"ListenAddress\":\"\",\"Network\":\"socket_vmnet\",\"Subnet\":\"\",\"MultiNodeRequested\":true,\"ExtraDisks\":0,\"CertExpiration\":94608000000000000,\"Mount\":false,\"MountString\":\"/Users:/minikube-host\",\"Mount9PVersion\":\"9p2000.L\",\"MountGID\":\"docker\",\"MountIP\":\"\",\"MountMSize\":262144,\"MountOptions\":[],\"MountPort\":0,\"MountType\":\"9p\",\"MountUID\":\"docker\",\"BinaryMirror\":\"\",\"DisableOptimizations\":false,\"DisableMetrics\":false,\"CustomQemuFirmwarePath\":\"\",\"SocketVMnetClientPath\":\"/opt/socket_vmnet/bin/socket_vmnet_client\",\"SocketVMnetPath\":\"/var/run/socket_vmnet\",\"StaticIP\":\"\",\"SSHAuthSo
ck\":\"\",\"SSHAgentPID\":0,\"GPUs\":\"\",\"AutoPauseInterval\":60000000000},\"Active\":false}]}"*. args: "out/minikube-darwin-arm64 profile list --output json"
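Note on the assertion above: the test derives the expected "Degraded" status from the `profile list --output json` blob, and with a single stopped node the profile can only ever report "Stopped". A hedged sketch of that read, using only the JSON shape visible in the failure message (not minikube's own types):

	package main

	import (
		"encoding/json"
		"fmt"
	)

	// profileList models just the fields the test asserts on.
	type profileList struct {
		Valid []struct {
			Name   string
			Status string
		} `json:"valid"`
	}

	func main() {
		// Trimmed to the relevant fields of the JSON in the failure message above.
		raw := []byte(`{"invalid":[],"valid":[{"Name":"ha-772000","Status":"Stopped"}]}`)

		var pl profileList
		if err := json.Unmarshal(raw, &pl); err != nil {
			panic(err)
		}
		for _, p := range pl.Valid {
			fmt.Printf("%s: %s\n", p.Name, p.Status) // wanted "Degraded", got "Stopped"
		}
	}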
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-772000 -n ha-772000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-772000 -n ha-772000: exit status 7 (31.829125ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "ha-772000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (0.11s)

TestMultiControlPlane/serial/RestartSecondaryNode (54.61s)

=== RUN   TestMultiControlPlane/serial/RestartSecondaryNode
ha_test.go:420: (dbg) Run:  out/minikube-darwin-arm64 -p ha-772000 node start m02 -v=7 --alsologtostderr
ha_test.go:420: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p ha-772000 node start m02 -v=7 --alsologtostderr: exit status 85 (48.55975ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	I0327 16:33:26.529513    7842 out.go:291] Setting OutFile to fd 1 ...
	I0327 16:33:26.529902    7842 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0327 16:33:26.529905    7842 out.go:304] Setting ErrFile to fd 2...
	I0327 16:33:26.529908    7842 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0327 16:33:26.530034    7842 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18485-6511/.minikube/bin
	I0327 16:33:26.530253    7842 mustload.go:65] Loading cluster: ha-772000
	I0327 16:33:26.530436    7842 config.go:182] Loaded profile config "ha-772000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.29.3
	I0327 16:33:26.534830    7842 out.go:177] 
	W0327 16:33:26.537858    7842 out.go:239] X Exiting due to GUEST_NODE_RETRIEVE: retrieving node: Could not find node m02
	X Exiting due to GUEST_NODE_RETRIEVE: retrieving node: Could not find node m02
	W0327 16:33:26.537863    7842 out.go:239] * 
	* 
	W0327 16:33:26.539641    7842 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                         │
	│    * If the above advice does not help, please let us know:                                                             │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
	│                                                                                                                         │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
	│    * Please also attach the following file to the GitHub issue:                                                         │
	│    * - /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/minikube_node_6a758bccf1d363a5d0799efcdea444172a621e97_0.log    │
	│                                                                                                                         │
	╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                         │
	│    * If the above advice does not help, please let us know:                                                             │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
	│                                                                                                                         │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
	│    * Please also attach the following file to the GitHub issue:                                                         │
	│    * - /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/minikube_node_6a758bccf1d363a5d0799efcdea444172a621e97_0.log    │
	│                                                                                                                         │
	╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
	I0327 16:33:26.542723    7842 out.go:177] 

** /stderr **
ha_test.go:422: I0327 16:33:26.529513    7842 out.go:291] Setting OutFile to fd 1 ...
I0327 16:33:26.529902    7842 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0327 16:33:26.529905    7842 out.go:304] Setting ErrFile to fd 2...
I0327 16:33:26.529908    7842 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0327 16:33:26.530034    7842 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18485-6511/.minikube/bin
I0327 16:33:26.530253    7842 mustload.go:65] Loading cluster: ha-772000
I0327 16:33:26.530436    7842 config.go:182] Loaded profile config "ha-772000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.29.3
I0327 16:33:26.534830    7842 out.go:177] 
W0327 16:33:26.537858    7842 out.go:239] X Exiting due to GUEST_NODE_RETRIEVE: retrieving node: Could not find node m02
X Exiting due to GUEST_NODE_RETRIEVE: retrieving node: Could not find node m02
W0327 16:33:26.537863    7842 out.go:239] * 
* 
W0327 16:33:26.539641    7842 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
│                                                                                                                         │
│    * If the above advice does not help, please let us know:                                                             │
│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
│                                                                                                                         │
│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
│    * Please also attach the following file to the GitHub issue:                                                         │
│    * - /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/minikube_node_6a758bccf1d363a5d0799efcdea444172a621e97_0.log    │
│                                                                                                                         │
╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
│                                                                                                                         │
│    * If the above advice does not help, please let us know:                                                             │
│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
│                                                                                                                         │
│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
│    * Please also attach the following file to the GitHub issue:                                                         │
│    * - /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/minikube_node_6a758bccf1d363a5d0799efcdea444172a621e97_0.log    │
│                                                                                                                         │
╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
I0327 16:33:26.542723    7842 out.go:177] 
ha_test.go:423: secondary control-plane node start returned an error. args "out/minikube-darwin-arm64 -p ha-772000 node start m02 -v=7 --alsologtostderr": exit status 85
ha_test.go:428: (dbg) Run:  out/minikube-darwin-arm64 -p ha-772000 status -v=7 --alsologtostderr
ha_test.go:428: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p ha-772000 status -v=7 --alsologtostderr: exit status 7 (32.190208ms)

-- stdout --
	ha-772000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
** stderr ** 
	I0327 16:33:26.578331    7844 out.go:291] Setting OutFile to fd 1 ...
	I0327 16:33:26.578469    7844 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0327 16:33:26.578477    7844 out.go:304] Setting ErrFile to fd 2...
	I0327 16:33:26.578481    7844 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0327 16:33:26.578602    7844 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18485-6511/.minikube/bin
	I0327 16:33:26.578721    7844 out.go:298] Setting JSON to false
	I0327 16:33:26.578732    7844 mustload.go:65] Loading cluster: ha-772000
	I0327 16:33:26.578795    7844 notify.go:220] Checking for updates...
	I0327 16:33:26.578942    7844 config.go:182] Loaded profile config "ha-772000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.29.3
	I0327 16:33:26.578947    7844 status.go:255] checking status of ha-772000 ...
	I0327 16:33:26.579145    7844 status.go:330] ha-772000 host status = "Stopped" (err=<nil>)
	I0327 16:33:26.579149    7844 status.go:343] host is not running, skipping remaining checks
	I0327 16:33:26.579151    7844 status.go:257] ha-772000 status: &{Name:ha-772000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
ha_test.go:428: (dbg) Run:  out/minikube-darwin-arm64 -p ha-772000 status -v=7 --alsologtostderr
ha_test.go:428: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p ha-772000 status -v=7 --alsologtostderr: exit status 7 (79.192583ms)

-- stdout --
	ha-772000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
** stderr ** 
	I0327 16:33:27.913685    7846 out.go:291] Setting OutFile to fd 1 ...
	I0327 16:33:27.913841    7846 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0327 16:33:27.913846    7846 out.go:304] Setting ErrFile to fd 2...
	I0327 16:33:27.913849    7846 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0327 16:33:27.914021    7846 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18485-6511/.minikube/bin
	I0327 16:33:27.914190    7846 out.go:298] Setting JSON to false
	I0327 16:33:27.914204    7846 mustload.go:65] Loading cluster: ha-772000
	I0327 16:33:27.914243    7846 notify.go:220] Checking for updates...
	I0327 16:33:27.914452    7846 config.go:182] Loaded profile config "ha-772000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.29.3
	I0327 16:33:27.914459    7846 status.go:255] checking status of ha-772000 ...
	I0327 16:33:27.914718    7846 status.go:330] ha-772000 host status = "Stopped" (err=<nil>)
	I0327 16:33:27.914722    7846 status.go:343] host is not running, skipping remaining checks
	I0327 16:33:27.914725    7846 status.go:257] ha-772000 status: &{Name:ha-772000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
ha_test.go:428: (dbg) Run:  out/minikube-darwin-arm64 -p ha-772000 status -v=7 --alsologtostderr
ha_test.go:428: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p ha-772000 status -v=7 --alsologtostderr: exit status 7 (78.399333ms)

-- stdout --
	ha-772000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
** stderr ** 
	I0327 16:33:29.512394    7848 out.go:291] Setting OutFile to fd 1 ...
	I0327 16:33:29.512561    7848 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0327 16:33:29.512565    7848 out.go:304] Setting ErrFile to fd 2...
	I0327 16:33:29.512568    7848 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0327 16:33:29.512772    7848 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18485-6511/.minikube/bin
	I0327 16:33:29.512942    7848 out.go:298] Setting JSON to false
	I0327 16:33:29.512958    7848 mustload.go:65] Loading cluster: ha-772000
	I0327 16:33:29.512991    7848 notify.go:220] Checking for updates...
	I0327 16:33:29.513222    7848 config.go:182] Loaded profile config "ha-772000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.29.3
	I0327 16:33:29.513229    7848 status.go:255] checking status of ha-772000 ...
	I0327 16:33:29.513493    7848 status.go:330] ha-772000 host status = "Stopped" (err=<nil>)
	I0327 16:33:29.513498    7848 status.go:343] host is not running, skipping remaining checks
	I0327 16:33:29.513501    7848 status.go:257] ha-772000 status: &{Name:ha-772000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
ha_test.go:428: (dbg) Run:  out/minikube-darwin-arm64 -p ha-772000 status -v=7 --alsologtostderr
ha_test.go:428: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p ha-772000 status -v=7 --alsologtostderr: exit status 7 (76.294125ms)

-- stdout --
	ha-772000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
** stderr ** 
	I0327 16:33:31.519707    7850 out.go:291] Setting OutFile to fd 1 ...
	I0327 16:33:31.519841    7850 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0327 16:33:31.519845    7850 out.go:304] Setting ErrFile to fd 2...
	I0327 16:33:31.519849    7850 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0327 16:33:31.520014    7850 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18485-6511/.minikube/bin
	I0327 16:33:31.520168    7850 out.go:298] Setting JSON to false
	I0327 16:33:31.520182    7850 mustload.go:65] Loading cluster: ha-772000
	I0327 16:33:31.520227    7850 notify.go:220] Checking for updates...
	I0327 16:33:31.520457    7850 config.go:182] Loaded profile config "ha-772000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.29.3
	I0327 16:33:31.520464    7850 status.go:255] checking status of ha-772000 ...
	I0327 16:33:31.520707    7850 status.go:330] ha-772000 host status = "Stopped" (err=<nil>)
	I0327 16:33:31.520712    7850 status.go:343] host is not running, skipping remaining checks
	I0327 16:33:31.520714    7850 status.go:257] ha-772000 status: &{Name:ha-772000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
ha_test.go:428: (dbg) Run:  out/minikube-darwin-arm64 -p ha-772000 status -v=7 --alsologtostderr
ha_test.go:428: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p ha-772000 status -v=7 --alsologtostderr: exit status 7 (76.852458ms)

-- stdout --
	ha-772000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
** stderr ** 
	I0327 16:33:33.736653    7852 out.go:291] Setting OutFile to fd 1 ...
	I0327 16:33:33.737048    7852 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0327 16:33:33.737054    7852 out.go:304] Setting ErrFile to fd 2...
	I0327 16:33:33.737057    7852 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0327 16:33:33.737305    7852 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18485-6511/.minikube/bin
	I0327 16:33:33.737529    7852 out.go:298] Setting JSON to false
	I0327 16:33:33.737545    7852 mustload.go:65] Loading cluster: ha-772000
	I0327 16:33:33.737730    7852 notify.go:220] Checking for updates...
	I0327 16:33:33.738181    7852 config.go:182] Loaded profile config "ha-772000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.29.3
	I0327 16:33:33.738201    7852 status.go:255] checking status of ha-772000 ...
	I0327 16:33:33.738458    7852 status.go:330] ha-772000 host status = "Stopped" (err=<nil>)
	I0327 16:33:33.738463    7852 status.go:343] host is not running, skipping remaining checks
	I0327 16:33:33.738466    7852 status.go:257] ha-772000 status: &{Name:ha-772000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
ha_test.go:428: (dbg) Run:  out/minikube-darwin-arm64 -p ha-772000 status -v=7 --alsologtostderr
ha_test.go:428: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p ha-772000 status -v=7 --alsologtostderr: exit status 7 (75.75725ms)

-- stdout --
	ha-772000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
** stderr ** 
	I0327 16:33:41.126520    7857 out.go:291] Setting OutFile to fd 1 ...
	I0327 16:33:41.126717    7857 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0327 16:33:41.126722    7857 out.go:304] Setting ErrFile to fd 2...
	I0327 16:33:41.126725    7857 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0327 16:33:41.126894    7857 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18485-6511/.minikube/bin
	I0327 16:33:41.127057    7857 out.go:298] Setting JSON to false
	I0327 16:33:41.127072    7857 mustload.go:65] Loading cluster: ha-772000
	I0327 16:33:41.127098    7857 notify.go:220] Checking for updates...
	I0327 16:33:41.127383    7857 config.go:182] Loaded profile config "ha-772000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.29.3
	I0327 16:33:41.127390    7857 status.go:255] checking status of ha-772000 ...
	I0327 16:33:41.127670    7857 status.go:330] ha-772000 host status = "Stopped" (err=<nil>)
	I0327 16:33:41.127675    7857 status.go:343] host is not running, skipping remaining checks
	I0327 16:33:41.127678    7857 status.go:257] ha-772000 status: &{Name:ha-772000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
ha_test.go:428: (dbg) Run:  out/minikube-darwin-arm64 -p ha-772000 status -v=7 --alsologtostderr
ha_test.go:428: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p ha-772000 status -v=7 --alsologtostderr: exit status 7 (76.768209ms)

-- stdout --
	ha-772000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
** stderr ** 
	I0327 16:33:51.044738    7864 out.go:291] Setting OutFile to fd 1 ...
	I0327 16:33:51.044893    7864 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0327 16:33:51.044898    7864 out.go:304] Setting ErrFile to fd 2...
	I0327 16:33:51.044901    7864 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0327 16:33:51.045061    7864 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18485-6511/.minikube/bin
	I0327 16:33:51.045225    7864 out.go:298] Setting JSON to false
	I0327 16:33:51.045243    7864 mustload.go:65] Loading cluster: ha-772000
	I0327 16:33:51.045274    7864 notify.go:220] Checking for updates...
	I0327 16:33:51.045490    7864 config.go:182] Loaded profile config "ha-772000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.29.3
	I0327 16:33:51.045497    7864 status.go:255] checking status of ha-772000 ...
	I0327 16:33:51.045763    7864 status.go:330] ha-772000 host status = "Stopped" (err=<nil>)
	I0327 16:33:51.045768    7864 status.go:343] host is not running, skipping remaining checks
	I0327 16:33:51.045771    7864 status.go:257] ha-772000 status: &{Name:ha-772000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
ha_test.go:428: (dbg) Run:  out/minikube-darwin-arm64 -p ha-772000 status -v=7 --alsologtostderr
ha_test.go:428: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p ha-772000 status -v=7 --alsologtostderr: exit status 7 (78.12125ms)

-- stdout --
	ha-772000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
** stderr ** 
	I0327 16:34:05.810610    7867 out.go:291] Setting OutFile to fd 1 ...
	I0327 16:34:05.810795    7867 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0327 16:34:05.810799    7867 out.go:304] Setting ErrFile to fd 2...
	I0327 16:34:05.810802    7867 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0327 16:34:05.810967    7867 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18485-6511/.minikube/bin
	I0327 16:34:05.811144    7867 out.go:298] Setting JSON to false
	I0327 16:34:05.811161    7867 mustload.go:65] Loading cluster: ha-772000
	I0327 16:34:05.811203    7867 notify.go:220] Checking for updates...
	I0327 16:34:05.811442    7867 config.go:182] Loaded profile config "ha-772000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.29.3
	I0327 16:34:05.811449    7867 status.go:255] checking status of ha-772000 ...
	I0327 16:34:05.811721    7867 status.go:330] ha-772000 host status = "Stopped" (err=<nil>)
	I0327 16:34:05.811725    7867 status.go:343] host is not running, skipping remaining checks
	I0327 16:34:05.811728    7867 status.go:257] ha-772000 status: &{Name:ha-772000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
ha_test.go:428: (dbg) Run:  out/minikube-darwin-arm64 -p ha-772000 status -v=7 --alsologtostderr
ha_test.go:428: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p ha-772000 status -v=7 --alsologtostderr: exit status 7 (75.817875ms)

-- stdout --
	ha-772000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
** stderr ** 
	I0327 16:34:21.073050    7871 out.go:291] Setting OutFile to fd 1 ...
	I0327 16:34:21.073241    7871 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0327 16:34:21.073245    7871 out.go:304] Setting ErrFile to fd 2...
	I0327 16:34:21.073249    7871 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0327 16:34:21.073401    7871 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18485-6511/.minikube/bin
	I0327 16:34:21.073561    7871 out.go:298] Setting JSON to false
	I0327 16:34:21.073575    7871 mustload.go:65] Loading cluster: ha-772000
	I0327 16:34:21.073617    7871 notify.go:220] Checking for updates...
	I0327 16:34:21.073838    7871 config.go:182] Loaded profile config "ha-772000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.29.3
	I0327 16:34:21.073847    7871 status.go:255] checking status of ha-772000 ...
	I0327 16:34:21.074134    7871 status.go:330] ha-772000 host status = "Stopped" (err=<nil>)
	I0327 16:34:21.074139    7871 status.go:343] host is not running, skipping remaining checks
	I0327 16:34:21.074141    7871 status.go:257] ha-772000 status: &{Name:ha-772000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
ha_test.go:432: failed to run minikube status. args "out/minikube-darwin-arm64 -p ha-772000 status -v=7 --alsologtostderr" : exit status 7
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-772000 -n ha-772000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-772000 -n ha-772000: exit status 7 (34.205459ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "ha-772000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/RestartSecondaryNode (54.61s)

TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (0.11s)

=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart
ha_test.go:281: (dbg) Run:  out/minikube-darwin-arm64 profile list --output json
ha_test.go:304: expected profile "ha-772000" in json of 'profile list' to include 4 nodes but have 1 nodes. got *"{\"invalid\":[],\"valid\":[{\"Name\":\"ha-772000\",\"Status\":\"Stopped\",\"Config\":{\"Name\":\"ha-772000\",\"KeepContext\":false,\"EmbedCerts\":false,\"MinikubeISO\":\"https://storage.googleapis.com/minikube-builds/iso/18485/minikube-v1.33.0-1711559712-18485-arm64.iso\",\"KicBaseImage\":\"gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1711559786-18485@sha256:2dcab64da240d825290a528fa79ad3c32db45fe5f8be5150468234a7114eff82\",\"Memory\":2200,\"CPUs\":2,\"DiskSize\":20000,\"Driver\":\"qemu2\",\"HyperkitVpnKitSock\":\"\",\"HyperkitVSockPorts\":[],\"DockerEnv\":null,\"ContainerVolumeMounts\":null,\"InsecureRegistry\":null,\"RegistryMirror\":[],\"HostOnlyCIDR\":\"192.168.59.1/24\",\"HypervVirtualSwitch\":\"\",\"HypervUseExternalSwitch\":false,\"HypervExternalAdapter\":\"\",\"KVMNetwork\":\"default\",\"KVMQemuURI\":\"qemu:///system\",\"KVMGPU\":false,\"KVMHidden\":false,\"KVMNUMACount\":1,\"APIServerPo
rt\":8443,\"DockerOpt\":null,\"DisableDriverMounts\":false,\"NFSShare\":[],\"NFSSharesRoot\":\"/nfsshares\",\"UUID\":\"\",\"NoVTXCheck\":false,\"DNSProxy\":false,\"HostDNSResolver\":true,\"HostOnlyNicType\":\"virtio\",\"NatNicType\":\"virtio\",\"SSHIPAddress\":\"\",\"SSHUser\":\"root\",\"SSHKey\":\"\",\"SSHPort\":22,\"KubernetesConfig\":{\"KubernetesVersion\":\"v1.29.3\",\"ClusterName\":\"ha-772000\",\"Namespace\":\"default\",\"APIServerHAVIP\":\"\",\"APIServerName\":\"minikubeCA\",\"APIServerNames\":null,\"APIServerIPs\":null,\"DNSDomain\":\"cluster.local\",\"ContainerRuntime\":\"docker\",\"CRISocket\":\"\",\"NetworkPlugin\":\"cni\",\"FeatureGates\":\"\",\"ServiceCIDR\":\"10.96.0.0/12\",\"ImageRepository\":\"\",\"LoadBalancerStartIP\":\"\",\"LoadBalancerEndIP\":\"\",\"CustomIngressCert\":\"\",\"RegistryAliases\":\"\",\"ExtraOptions\":null,\"ShouldLoadCachedImages\":true,\"EnableDefaultCNI\":false,\"CNI\":\"\"},\"Nodes\":[{\"Name\":\"\",\"IP\":\"\",\"Port\":8443,\"KubernetesVersion\":\"v1.29.3\",\"ContainerRu
ntime\":\"docker\",\"ControlPlane\":true,\"Worker\":true}],\"Addons\":null,\"CustomAddonImages\":null,\"CustomAddonRegistries\":null,\"VerifyComponents\":{\"apiserver\":true,\"apps_running\":true,\"default_sa\":true,\"extra\":true,\"kubelet\":true,\"node_ready\":true,\"system_pods\":true},\"StartHostTimeout\":360000000000,\"ScheduledStop\":null,\"ExposedPorts\":[],\"ListenAddress\":\"\",\"Network\":\"socket_vmnet\",\"Subnet\":\"\",\"MultiNodeRequested\":true,\"ExtraDisks\":0,\"CertExpiration\":94608000000000000,\"Mount\":false,\"MountString\":\"/Users:/minikube-host\",\"Mount9PVersion\":\"9p2000.L\",\"MountGID\":\"docker\",\"MountIP\":\"\",\"MountMSize\":262144,\"MountOptions\":[],\"MountPort\":0,\"MountType\":\"9p\",\"MountUID\":\"docker\",\"BinaryMirror\":\"\",\"DisableOptimizations\":false,\"DisableMetrics\":false,\"CustomQemuFirmwarePath\":\"\",\"SocketVMnetClientPath\":\"/opt/socket_vmnet/bin/socket_vmnet_client\",\"SocketVMnetPath\":\"/var/run/socket_vmnet\",\"StaticIP\":\"\",\"SSHAuthSock\":\"\",\"SSHA
gentPID\":0,\"GPUs\":\"\",\"AutoPauseInterval\":60000000000},\"Active\":false}]}"*. args: "out/minikube-darwin-arm64 profile list --output json"
ha_test.go:307: expected profile "ha-772000" in json of 'profile list' to have "HAppy" status but have "Stopped" status. got *"{\"invalid\":[],\"valid\":[{\"Name\":\"ha-772000\",\"Status\":\"Stopped\",\"Config\":{\"Name\":\"ha-772000\",\"KeepContext\":false,\"EmbedCerts\":false,\"MinikubeISO\":\"https://storage.googleapis.com/minikube-builds/iso/18485/minikube-v1.33.0-1711559712-18485-arm64.iso\",\"KicBaseImage\":\"gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1711559786-18485@sha256:2dcab64da240d825290a528fa79ad3c32db45fe5f8be5150468234a7114eff82\",\"Memory\":2200,\"CPUs\":2,\"DiskSize\":20000,\"Driver\":\"qemu2\",\"HyperkitVpnKitSock\":\"\",\"HyperkitVSockPorts\":[],\"DockerEnv\":null,\"ContainerVolumeMounts\":null,\"InsecureRegistry\":null,\"RegistryMirror\":[],\"HostOnlyCIDR\":\"192.168.59.1/24\",\"HypervVirtualSwitch\":\"\",\"HypervUseExternalSwitch\":false,\"HypervExternalAdapter\":\"\",\"KVMNetwork\":\"default\",\"KVMQemuURI\":\"qemu:///system\",\"KVMGPU\":false,\"KVMHidden\":false,\"KVMNUMACount\":1,
\"APIServerPort\":8443,\"DockerOpt\":null,\"DisableDriverMounts\":false,\"NFSShare\":[],\"NFSSharesRoot\":\"/nfsshares\",\"UUID\":\"\",\"NoVTXCheck\":false,\"DNSProxy\":false,\"HostDNSResolver\":true,\"HostOnlyNicType\":\"virtio\",\"NatNicType\":\"virtio\",\"SSHIPAddress\":\"\",\"SSHUser\":\"root\",\"SSHKey\":\"\",\"SSHPort\":22,\"KubernetesConfig\":{\"KubernetesVersion\":\"v1.29.3\",\"ClusterName\":\"ha-772000\",\"Namespace\":\"default\",\"APIServerHAVIP\":\"\",\"APIServerName\":\"minikubeCA\",\"APIServerNames\":null,\"APIServerIPs\":null,\"DNSDomain\":\"cluster.local\",\"ContainerRuntime\":\"docker\",\"CRISocket\":\"\",\"NetworkPlugin\":\"cni\",\"FeatureGates\":\"\",\"ServiceCIDR\":\"10.96.0.0/12\",\"ImageRepository\":\"\",\"LoadBalancerStartIP\":\"\",\"LoadBalancerEndIP\":\"\",\"CustomIngressCert\":\"\",\"RegistryAliases\":\"\",\"ExtraOptions\":null,\"ShouldLoadCachedImages\":true,\"EnableDefaultCNI\":false,\"CNI\":\"\"},\"Nodes\":[{\"Name\":\"\",\"IP\":\"\",\"Port\":8443,\"KubernetesVersion\":\"v1.29.3\",
\"ContainerRuntime\":\"docker\",\"ControlPlane\":true,\"Worker\":true}],\"Addons\":null,\"CustomAddonImages\":null,\"CustomAddonRegistries\":null,\"VerifyComponents\":{\"apiserver\":true,\"apps_running\":true,\"default_sa\":true,\"extra\":true,\"kubelet\":true,\"node_ready\":true,\"system_pods\":true},\"StartHostTimeout\":360000000000,\"ScheduledStop\":null,\"ExposedPorts\":[],\"ListenAddress\":\"\",\"Network\":\"socket_vmnet\",\"Subnet\":\"\",\"MultiNodeRequested\":true,\"ExtraDisks\":0,\"CertExpiration\":94608000000000000,\"Mount\":false,\"MountString\":\"/Users:/minikube-host\",\"Mount9PVersion\":\"9p2000.L\",\"MountGID\":\"docker\",\"MountIP\":\"\",\"MountMSize\":262144,\"MountOptions\":[],\"MountPort\":0,\"MountType\":\"9p\",\"MountUID\":\"docker\",\"BinaryMirror\":\"\",\"DisableOptimizations\":false,\"DisableMetrics\":false,\"CustomQemuFirmwarePath\":\"\",\"SocketVMnetClientPath\":\"/opt/socket_vmnet/bin/socket_vmnet_client\",\"SocketVMnetPath\":\"/var/run/socket_vmnet\",\"StaticIP\":\"\",\"SSHAuthSock\
":\"\",\"SSHAgentPID\":0,\"GPUs\":\"\",\"AutoPauseInterval\":60000000000},\"Active\":false}]}"*. args: "out/minikube-darwin-arm64 profile list --output json"
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-772000 -n ha-772000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-772000 -n ha-772000: exit status 7 (32.160125ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "ha-772000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (0.11s)

TestMultiControlPlane/serial/RestartClusterKeepsNodes (7.33s)

=== RUN   TestMultiControlPlane/serial/RestartClusterKeepsNodes
ha_test.go:456: (dbg) Run:  out/minikube-darwin-arm64 node list -p ha-772000 -v=7 --alsologtostderr
ha_test.go:462: (dbg) Run:  out/minikube-darwin-arm64 stop -p ha-772000 -v=7 --alsologtostderr
ha_test.go:462: (dbg) Done: out/minikube-darwin-arm64 stop -p ha-772000 -v=7 --alsologtostderr: (1.965297833s)
ha_test.go:467: (dbg) Run:  out/minikube-darwin-arm64 start -p ha-772000 --wait=true -v=7 --alsologtostderr
ha_test.go:467: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p ha-772000 --wait=true -v=7 --alsologtostderr: exit status 80 (5.228981542s)

-- stdout --
	* [ha-772000] minikube v1.33.0-beta.0 on Darwin 14.3.1 (arm64)
	  - MINIKUBE_LOCATION=18485
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18485-6511/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18485-6511/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting "ha-772000" primary control-plane node in "ha-772000" cluster
	* Restarting existing qemu2 VM for "ha-772000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "ha-772000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
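The stderr below shows the root failure repeated throughout this report: libmachine launches qemu-system-aarch64 through socket_vmnet_client, and the client cannot reach /var/run/socket_vmnet, so the VM never starts. A standalone probe for that condition (a plain unix-socket dial against the path from the log; not a minikube tool):

	package main

	import (
		"fmt"
		"net"
		"time"
	)

	func main() {
		// Dial the socket_vmnet control socket the qemu2 driver uses (path from the log).
		conn, err := net.DialTimeout("unix", "/var/run/socket_vmnet", 2*time.Second)
		if err != nil {
			// With no socket_vmnet daemon listening, this mirrors the
			// "Connection refused" seen in the restart output below.
			fmt.Println("socket_vmnet unreachable:", err)
			return
		}
		defer conn.Close()
		fmt.Println("socket_vmnet is accepting connections")
	}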
** stderr ** 
	I0327 16:34:23.280700    7893 out.go:291] Setting OutFile to fd 1 ...
	I0327 16:34:23.280852    7893 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0327 16:34:23.280857    7893 out.go:304] Setting ErrFile to fd 2...
	I0327 16:34:23.280860    7893 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0327 16:34:23.281028    7893 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18485-6511/.minikube/bin
	I0327 16:34:23.282259    7893 out.go:298] Setting JSON to false
	I0327 16:34:23.301049    7893 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":5634,"bootTime":1711576829,"procs":478,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.3.1","kernelVersion":"23.3.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0327 16:34:23.301110    7893 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0327 16:34:23.305370    7893 out.go:177] * [ha-772000] minikube v1.33.0-beta.0 on Darwin 14.3.1 (arm64)
	I0327 16:34:23.312229    7893 out.go:177]   - MINIKUBE_LOCATION=18485
	I0327 16:34:23.312268    7893 notify.go:220] Checking for updates...
	I0327 16:34:23.317663    7893 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/18485-6511/kubeconfig
	I0327 16:34:23.325186    7893 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0327 16:34:23.328244    7893 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0327 16:34:23.329780    7893 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18485-6511/.minikube
	I0327 16:34:23.333182    7893 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0327 16:34:23.336571    7893 config.go:182] Loaded profile config "ha-772000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.29.3
	I0327 16:34:23.336632    7893 driver.go:392] Setting default libvirt URI to qemu:///system
	I0327 16:34:23.341066    7893 out.go:177] * Using the qemu2 driver based on existing profile
	I0327 16:34:23.348215    7893 start.go:297] selected driver: qemu2
	I0327 16:34:23.348222    7893 start.go:901] validating driver "qemu2" against &{Name:ha-772000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18485/minikube-v1.33.0-1711559712-18485-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1711559786-18485@sha256:2dcab64da240d825290a528fa79ad3c32db45fe5f8be5150468234a7114eff82 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesV
ersion:v1.29.3 ClusterName:ha-772000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 Mou
ntType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0327 16:34:23.348284    7893 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0327 16:34:23.350684    7893 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0327 16:34:23.350730    7893 cni.go:84] Creating CNI manager for ""
	I0327 16:34:23.350735    7893 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I0327 16:34:23.350781    7893 start.go:340] cluster config:
	{Name:ha-772000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18485/minikube-v1.33.0-1711559712-18485-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1711559786-18485@sha256:2dcab64da240d825290a528fa79ad3c32db45fe5f8be5150468234a7114eff82 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.3 ClusterName:ha-772000 Namespace:default APIServerHAVIP: APIServe
rName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetri
cs:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0327 16:34:23.355512    7893 iso.go:125] acquiring lock: {Name:mk26a7f41e004242919e6d5076ec0f9645f820ad Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0327 16:34:23.363191    7893 out.go:177] * Starting "ha-772000" primary control-plane node in "ha-772000" cluster
	I0327 16:34:23.367260    7893 preload.go:132] Checking if preload exists for k8s version v1.29.3 and runtime docker
	I0327 16:34:23.367279    7893 preload.go:147] Found local preload: /Users/jenkins/minikube-integration/18485-6511/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.29.3-docker-overlay2-arm64.tar.lz4
	I0327 16:34:23.367291    7893 cache.go:56] Caching tarball of preloaded images
	I0327 16:34:23.367348    7893 preload.go:173] Found /Users/jenkins/minikube-integration/18485-6511/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.29.3-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0327 16:34:23.367353    7893 cache.go:59] Finished verifying existence of preloaded tar for v1.29.3 on docker
	I0327 16:34:23.367425    7893 profile.go:142] Saving config to /Users/jenkins/minikube-integration/18485-6511/.minikube/profiles/ha-772000/config.json ...
	I0327 16:34:23.367899    7893 start.go:360] acquireMachinesLock for ha-772000: {Name:mk6b67ca98ff36f470ae98389325f11bd4950dc8 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0327 16:34:23.367939    7893 start.go:364] duration metric: took 32.542µs to acquireMachinesLock for "ha-772000"
	I0327 16:34:23.367950    7893 start.go:96] Skipping create...Using existing machine configuration
	I0327 16:34:23.367957    7893 fix.go:54] fixHost starting: 
	I0327 16:34:23.368092    7893 fix.go:112] recreateIfNeeded on ha-772000: state=Stopped err=<nil>
	W0327 16:34:23.368102    7893 fix.go:138] unexpected machine state, will restart: <nil>
	I0327 16:34:23.376186    7893 out.go:177] * Restarting existing qemu2 VM for "ha-772000" ...
	I0327 16:34:23.380196    7893 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18485-6511/.minikube/machines/ha-772000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18485-6511/.minikube/machines/ha-772000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18485-6511/.minikube/machines/ha-772000/qemu.pid -device virtio-net-pci,netdev=net0,mac=76:5c:d0:48:85:57 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18485-6511/.minikube/machines/ha-772000/disk.qcow2
	I0327 16:34:23.382388    7893 main.go:141] libmachine: STDOUT: 
	I0327 16:34:23.382409    7893 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0327 16:34:23.382440    7893 fix.go:56] duration metric: took 14.483917ms for fixHost
	I0327 16:34:23.382446    7893 start.go:83] releasing machines lock for "ha-772000", held for 14.502958ms
	W0327 16:34:23.382453    7893 start.go:713] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0327 16:34:23.382493    7893 out.go:239] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0327 16:34:23.382499    7893 start.go:728] Will try again in 5 seconds ...
	I0327 16:34:28.384544    7893 start.go:360] acquireMachinesLock for ha-772000: {Name:mk6b67ca98ff36f470ae98389325f11bd4950dc8 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0327 16:34:28.384907    7893 start.go:364] duration metric: took 262.375µs to acquireMachinesLock for "ha-772000"
	I0327 16:34:28.385045    7893 start.go:96] Skipping create...Using existing machine configuration
	I0327 16:34:28.385068    7893 fix.go:54] fixHost starting: 
	I0327 16:34:28.385728    7893 fix.go:112] recreateIfNeeded on ha-772000: state=Stopped err=<nil>
	W0327 16:34:28.385753    7893 fix.go:138] unexpected machine state, will restart: <nil>
	I0327 16:34:28.391207    7893 out.go:177] * Restarting existing qemu2 VM for "ha-772000" ...
	I0327 16:34:28.395347    7893 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18485-6511/.minikube/machines/ha-772000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18485-6511/.minikube/machines/ha-772000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18485-6511/.minikube/machines/ha-772000/qemu.pid -device virtio-net-pci,netdev=net0,mac=76:5c:d0:48:85:57 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18485-6511/.minikube/machines/ha-772000/disk.qcow2
	I0327 16:34:28.405380    7893 main.go:141] libmachine: STDOUT: 
	I0327 16:34:28.405464    7893 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0327 16:34:28.405578    7893 fix.go:56] duration metric: took 20.511ms for fixHost
	I0327 16:34:28.405602    7893 start.go:83] releasing machines lock for "ha-772000", held for 20.667333ms
	W0327 16:34:28.405830    7893 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p ha-772000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p ha-772000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0327 16:34:28.415143    7893 out.go:177] 
	W0327 16:34:28.419178    7893 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0327 16:34:28.419279    7893 out.go:239] * 
	* 
	W0327 16:34:28.421596    7893 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0327 16:34:28.429105    7893 out.go:177] 
** /stderr **
ha_test.go:469: failed to run minikube start. args "out/minikube-darwin-arm64 node list -p ha-772000 -v=7 --alsologtostderr" : exit status 80
ha_test.go:472: (dbg) Run:  out/minikube-darwin-arm64 node list -p ha-772000
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-772000 -n ha-772000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-772000 -n ha-772000: exit status 7 (34.348917ms)
-- stdout --
	Stopped
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "ha-772000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/RestartClusterKeepsNodes (7.33s)
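
Every failure that follows in this report reduces to the same root cause captured in the stderr above: the qemu2 driver cannot reach the socket_vmnet socket ("Failed to connect to \"/var/run/socket_vmnet\": Connection refused"). A minimal diagnostic sketch for the build host is below; the socket and binary paths come from this log, while the launchd check is an assumption based on the upstream lima-vm/socket_vmnet install instructions, not something shown here:

    # Does the socket exist, and is a daemon actually listening on it?
    ls -l /var/run/socket_vmnet
    ps aux | grep -v grep | grep socket_vmnet

    # If socket_vmnet was installed as a launchd service (assumed setup,
    # not recorded in this log), confirm the job is loaded:
    sudo launchctl list | grep socket_vmnet

With the daemon down, no VM start in this suite can succeed, so the later failures are consequences of this one condition rather than independent bugs.
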
TestMultiControlPlane/serial/DeleteSecondaryNode (0.11s)
=== RUN   TestMultiControlPlane/serial/DeleteSecondaryNode
ha_test.go:487: (dbg) Run:  out/minikube-darwin-arm64 -p ha-772000 node delete m03 -v=7 --alsologtostderr
ha_test.go:487: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p ha-772000 node delete m03 -v=7 --alsologtostderr: exit status 83 (42.232417ms)
-- stdout --
	* The control-plane node ha-772000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p ha-772000"
-- /stdout --
** stderr ** 
	I0327 16:34:28.578131    7908 out.go:291] Setting OutFile to fd 1 ...
	I0327 16:34:28.578528    7908 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0327 16:34:28.578532    7908 out.go:304] Setting ErrFile to fd 2...
	I0327 16:34:28.578534    7908 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0327 16:34:28.578706    7908 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18485-6511/.minikube/bin
	I0327 16:34:28.578914    7908 mustload.go:65] Loading cluster: ha-772000
	I0327 16:34:28.579096    7908 config.go:182] Loaded profile config "ha-772000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.29.3
	I0327 16:34:28.583199    7908 out.go:177] * The control-plane node ha-772000 host is not running: state=Stopped
	I0327 16:34:28.586381    7908 out.go:177]   To start a cluster, run: "minikube start -p ha-772000"
** /stderr **
ha_test.go:489: node delete returned an error. args "out/minikube-darwin-arm64 -p ha-772000 node delete m03 -v=7 --alsologtostderr": exit status 83
ha_test.go:493: (dbg) Run:  out/minikube-darwin-arm64 -p ha-772000 status -v=7 --alsologtostderr
ha_test.go:493: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p ha-772000 status -v=7 --alsologtostderr: exit status 7 (32.238083ms)
-- stdout --
	ha-772000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
-- /stdout --
** stderr ** 
	I0327 16:34:28.620851    7910 out.go:291] Setting OutFile to fd 1 ...
	I0327 16:34:28.620994    7910 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0327 16:34:28.620997    7910 out.go:304] Setting ErrFile to fd 2...
	I0327 16:34:28.620999    7910 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0327 16:34:28.621137    7910 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18485-6511/.minikube/bin
	I0327 16:34:28.621268    7910 out.go:298] Setting JSON to false
	I0327 16:34:28.621279    7910 mustload.go:65] Loading cluster: ha-772000
	I0327 16:34:28.621333    7910 notify.go:220] Checking for updates...
	I0327 16:34:28.621517    7910 config.go:182] Loaded profile config "ha-772000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.29.3
	I0327 16:34:28.621523    7910 status.go:255] checking status of ha-772000 ...
	I0327 16:34:28.621709    7910 status.go:330] ha-772000 host status = "Stopped" (err=<nil>)
	I0327 16:34:28.621712    7910 status.go:343] host is not running, skipping remaining checks
	I0327 16:34:28.621714    7910 status.go:257] ha-772000 status: &{Name:ha-772000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
** /stderr **
ha_test.go:495: failed to run minikube status. args "out/minikube-darwin-arm64 -p ha-772000 status -v=7 --alsologtostderr" : exit status 7
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-772000 -n ha-772000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-772000 -n ha-772000: exit status 7 (32.271333ms)
-- stdout --
	Stopped
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "ha-772000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/DeleteSecondaryNode (0.11s)
TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.11s)
=== RUN   TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete
ha_test.go:390: (dbg) Run:  out/minikube-darwin-arm64 profile list --output json
ha_test.go:413: expected profile "ha-772000" in json of 'profile list' to have "Degraded" status but have "Stopped" status. got *"{\"invalid\":[],\"valid\":[{\"Name\":\"ha-772000\",\"Status\":\"Stopped\",\"Config\":{\"Name\":\"ha-772000\",\"KeepContext\":false,\"EmbedCerts\":false,\"MinikubeISO\":\"https://storage.googleapis.com/minikube-builds/iso/18485/minikube-v1.33.0-1711559712-18485-arm64.iso\",\"KicBaseImage\":\"gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1711559786-18485@sha256:2dcab64da240d825290a528fa79ad3c32db45fe5f8be5150468234a7114eff82\",\"Memory\":2200,\"CPUs\":2,\"DiskSize\":20000,\"Driver\":\"qemu2\",\"HyperkitVpnKitSock\":\"\",\"HyperkitVSockPorts\":[],\"DockerEnv\":null,\"ContainerVolumeMounts\":null,\"InsecureRegistry\":null,\"RegistryMirror\":[],\"HostOnlyCIDR\":\"192.168.59.1/24\",\"HypervVirtualSwitch\":\"\",\"HypervUseExternalSwitch\":false,\"HypervExternalAdapter\":\"\",\"KVMNetwork\":\"default\",\"KVMQemuURI\":\"qemu:///system\",\"KVMGPU\":false,\"KVMHidden\":false,\"KVMNUMACount\":1,\"APIServerPort\":8443,\"DockerOpt\":null,\"DisableDriverMounts\":false,\"NFSShare\":[],\"NFSSharesRoot\":\"/nfsshares\",\"UUID\":\"\",\"NoVTXCheck\":false,\"DNSProxy\":false,\"HostDNSResolver\":true,\"HostOnlyNicType\":\"virtio\",\"NatNicType\":\"virtio\",\"SSHIPAddress\":\"\",\"SSHUser\":\"root\",\"SSHKey\":\"\",\"SSHPort\":22,\"KubernetesConfig\":{\"KubernetesVersion\":\"v1.29.3\",\"ClusterName\":\"ha-772000\",\"Namespace\":\"default\",\"APIServerHAVIP\":\"\",\"APIServerName\":\"minikubeCA\",\"APIServerNames\":null,\"APIServerIPs\":null,\"DNSDomain\":\"cluster.local\",\"ContainerRuntime\":\"docker\",\"CRISocket\":\"\",\"NetworkPlugin\":\"cni\",\"FeatureGates\":\"\",\"ServiceCIDR\":\"10.96.0.0/12\",\"ImageRepository\":\"\",\"LoadBalancerStartIP\":\"\",\"LoadBalancerEndIP\":\"\",\"CustomIngressCert\":\"\",\"RegistryAliases\":\"\",\"ExtraOptions\":null,\"ShouldLoadCachedImages\":true,\"EnableDefaultCNI\":false,\"CNI\":\"\"},\"Nodes\":[{\"Name\":\"\",\"IP\":\"\",\"Port\":8443,\"KubernetesVersion\":\"v1.29.3\",\"ContainerRuntime\":\"docker\",\"ControlPlane\":true,\"Worker\":true}],\"Addons\":null,\"CustomAddonImages\":null,\"CustomAddonRegistries\":null,\"VerifyComponents\":{\"apiserver\":true,\"apps_running\":true,\"default_sa\":true,\"extra\":true,\"kubelet\":true,\"node_ready\":true,\"system_pods\":true},\"StartHostTimeout\":360000000000,\"ScheduledStop\":null,\"ExposedPorts\":[],\"ListenAddress\":\"\",\"Network\":\"socket_vmnet\",\"Subnet\":\"\",\"MultiNodeRequested\":true,\"ExtraDisks\":0,\"CertExpiration\":94608000000000000,\"Mount\":false,\"MountString\":\"/Users:/minikube-host\",\"Mount9PVersion\":\"9p2000.L\",\"MountGID\":\"docker\",\"MountIP\":\"\",\"MountMSize\":262144,\"MountOptions\":[],\"MountPort\":0,\"MountType\":\"9p\",\"MountUID\":\"docker\",\"BinaryMirror\":\"\",\"DisableOptimizations\":false,\"DisableMetrics\":false,\"CustomQemuFirmwarePath\":\"\",\"SocketVMnetClientPath\":\"/opt/socket_vmnet/bin/socket_vmnet_client\",\"SocketVMnetPath\":\"/var/run/socket_vmnet\",\"StaticIP\":\"\",\"SSHAuthSock\":\"\",\"SSHAgentPID\":0,\"GPUs\":\"\",\"AutoPauseInterval\":60000000000},\"Active\":false}]}"*. args: "out/minikube-darwin-arm64 profile list --output json"
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-772000 -n ha-772000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-772000 -n ha-772000: exit status 7 (31.498333ms)
-- stdout --
	Stopped
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "ha-772000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.11s)
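
The assertion above prints the entire profile configuration as a single JSON string, which buries the one field under test. Assuming jq is available on the host (it is not part of this harness), the same check can be reproduced readably:

    # Show only the status the test asserts on; it reports "Stopped" here
    # instead of the expected "Degraded" because the VM never started
    out/minikube-darwin-arm64 profile list --output json | jq -r '.valid[].Status'
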
TestMultiControlPlane/serial/StopCluster (3.54s)
=== RUN   TestMultiControlPlane/serial/StopCluster
ha_test.go:531: (dbg) Run:  out/minikube-darwin-arm64 -p ha-772000 stop -v=7 --alsologtostderr
ha_test.go:531: (dbg) Done: out/minikube-darwin-arm64 -p ha-772000 stop -v=7 --alsologtostderr: (3.435852584s)
ha_test.go:537: (dbg) Run:  out/minikube-darwin-arm64 -p ha-772000 status -v=7 --alsologtostderr
ha_test.go:537: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p ha-772000 status -v=7 --alsologtostderr: exit status 7 (65.368583ms)
-- stdout --
	ha-772000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
-- /stdout --
** stderr ** 
	I0327 16:34:32.260072    7940 out.go:291] Setting OutFile to fd 1 ...
	I0327 16:34:32.260238    7940 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0327 16:34:32.260242    7940 out.go:304] Setting ErrFile to fd 2...
	I0327 16:34:32.260245    7940 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0327 16:34:32.260402    7940 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18485-6511/.minikube/bin
	I0327 16:34:32.260538    7940 out.go:298] Setting JSON to false
	I0327 16:34:32.260554    7940 mustload.go:65] Loading cluster: ha-772000
	I0327 16:34:32.260578    7940 notify.go:220] Checking for updates...
	I0327 16:34:32.260777    7940 config.go:182] Loaded profile config "ha-772000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.29.3
	I0327 16:34:32.260784    7940 status.go:255] checking status of ha-772000 ...
	I0327 16:34:32.261016    7940 status.go:330] ha-772000 host status = "Stopped" (err=<nil>)
	I0327 16:34:32.261020    7940 status.go:343] host is not running, skipping remaining checks
	I0327 16:34:32.261023    7940 status.go:257] ha-772000 status: &{Name:ha-772000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
** /stderr **
ha_test.go:543: status says not two control-plane nodes are present: args "out/minikube-darwin-arm64 -p ha-772000 status -v=7 --alsologtostderr": ha-772000
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped
ha_test.go:549: status says not three kubelets are stopped: args "out/minikube-darwin-arm64 -p ha-772000 status -v=7 --alsologtostderr": ha-772000
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped
ha_test.go:552: status says not two apiservers are stopped: args "out/minikube-darwin-arm64 -p ha-772000 status -v=7 --alsologtostderr": ha-772000
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-772000 -n ha-772000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-772000 -n ha-772000: exit status 7 (34.025375ms)
-- stdout --
	Stopped
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "ha-772000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/StopCluster (3.54s)
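
The "(may be ok)" note reflects that minikube status encodes component state in its exit code rather than treating a stopped cluster as a command error; exit status 7 here accompanies a host that is simply stopped. A quick manual check, reusing the profile name from this log (the exit-code interpretation is the harness's, not independently verified here):

    # Status text plus the exit code the harness is interpreting
    out/minikube-darwin-arm64 status -p ha-772000 --format='{{.Host}}'
    echo $?   # 7 while the host (and with it kubelet and apiserver) is down
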
TestMultiControlPlane/serial/RestartCluster (5.26s)
=== RUN   TestMultiControlPlane/serial/RestartCluster
ha_test.go:560: (dbg) Run:  out/minikube-darwin-arm64 start -p ha-772000 --wait=true -v=7 --alsologtostderr --driver=qemu2 
ha_test.go:560: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p ha-772000 --wait=true -v=7 --alsologtostderr --driver=qemu2 : exit status 80 (5.185966959s)
-- stdout --
	* [ha-772000] minikube v1.33.0-beta.0 on Darwin 14.3.1 (arm64)
	  - MINIKUBE_LOCATION=18485
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18485-6511/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18485-6511/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting "ha-772000" primary control-plane node in "ha-772000" cluster
	* Restarting existing qemu2 VM for "ha-772000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "ha-772000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	
-- /stdout --
** stderr ** 
	I0327 16:34:32.326225    7944 out.go:291] Setting OutFile to fd 1 ...
	I0327 16:34:32.326357    7944 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0327 16:34:32.326360    7944 out.go:304] Setting ErrFile to fd 2...
	I0327 16:34:32.326363    7944 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0327 16:34:32.326500    7944 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18485-6511/.minikube/bin
	I0327 16:34:32.327522    7944 out.go:298] Setting JSON to false
	I0327 16:34:32.343467    7944 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":5643,"bootTime":1711576829,"procs":479,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.3.1","kernelVersion":"23.3.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0327 16:34:32.343536    7944 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0327 16:34:32.348079    7944 out.go:177] * [ha-772000] minikube v1.33.0-beta.0 on Darwin 14.3.1 (arm64)
	I0327 16:34:32.354940    7944 out.go:177]   - MINIKUBE_LOCATION=18485
	I0327 16:34:32.355004    7944 notify.go:220] Checking for updates...
	I0327 16:34:32.359013    7944 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/18485-6511/kubeconfig
	I0327 16:34:32.360567    7944 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0327 16:34:32.363983    7944 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0327 16:34:32.367010    7944 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18485-6511/.minikube
	I0327 16:34:32.368472    7944 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0327 16:34:32.372267    7944 config.go:182] Loaded profile config "ha-772000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.29.3
	I0327 16:34:32.372508    7944 driver.go:392] Setting default libvirt URI to qemu:///system
	I0327 16:34:32.376973    7944 out.go:177] * Using the qemu2 driver based on existing profile
	I0327 16:34:32.382976    7944 start.go:297] selected driver: qemu2
	I0327 16:34:32.382983    7944 start.go:901] validating driver "qemu2" against &{Name:ha-772000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18485/minikube-v1.33.0-1711559712-18485-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1711559786-18485@sha256:2dcab64da240d825290a528fa79ad3c32db45fe5f8be5150468234a7114eff82 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.3 ClusterName:ha-772000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0327 16:34:32.383049    7944 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0327 16:34:32.385163    7944 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0327 16:34:32.385209    7944 cni.go:84] Creating CNI manager for ""
	I0327 16:34:32.385215    7944 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I0327 16:34:32.385264    7944 start.go:340] cluster config:
	{Name:ha-772000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18485/minikube-v1.33.0-1711559712-18485-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1711559786-18485@sha256:2dcab64da240d825290a528fa79ad3c32db45fe5f8be5150468234a7114eff82 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.3 ClusterName:ha-772000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0327 16:34:32.389379    7944 iso.go:125] acquiring lock: {Name:mk26a7f41e004242919e6d5076ec0f9645f820ad Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0327 16:34:32.396979    7944 out.go:177] * Starting "ha-772000" primary control-plane node in "ha-772000" cluster
	I0327 16:34:32.401028    7944 preload.go:132] Checking if preload exists for k8s version v1.29.3 and runtime docker
	I0327 16:34:32.401045    7944 preload.go:147] Found local preload: /Users/jenkins/minikube-integration/18485-6511/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.29.3-docker-overlay2-arm64.tar.lz4
	I0327 16:34:32.401052    7944 cache.go:56] Caching tarball of preloaded images
	I0327 16:34:32.401105    7944 preload.go:173] Found /Users/jenkins/minikube-integration/18485-6511/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.29.3-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0327 16:34:32.401111    7944 cache.go:59] Finished verifying existence of preloaded tar for v1.29.3 on docker
	I0327 16:34:32.401174    7944 profile.go:142] Saving config to /Users/jenkins/minikube-integration/18485-6511/.minikube/profiles/ha-772000/config.json ...
	I0327 16:34:32.401612    7944 start.go:360] acquireMachinesLock for ha-772000: {Name:mk6b67ca98ff36f470ae98389325f11bd4950dc8 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0327 16:34:32.401635    7944 start.go:364] duration metric: took 18.375µs to acquireMachinesLock for "ha-772000"
	I0327 16:34:32.401644    7944 start.go:96] Skipping create...Using existing machine configuration
	I0327 16:34:32.401650    7944 fix.go:54] fixHost starting: 
	I0327 16:34:32.401757    7944 fix.go:112] recreateIfNeeded on ha-772000: state=Stopped err=<nil>
	W0327 16:34:32.401766    7944 fix.go:138] unexpected machine state, will restart: <nil>
	I0327 16:34:32.409960    7944 out.go:177] * Restarting existing qemu2 VM for "ha-772000" ...
	I0327 16:34:32.414053    7944 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18485-6511/.minikube/machines/ha-772000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18485-6511/.minikube/machines/ha-772000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18485-6511/.minikube/machines/ha-772000/qemu.pid -device virtio-net-pci,netdev=net0,mac=76:5c:d0:48:85:57 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18485-6511/.minikube/machines/ha-772000/disk.qcow2
	I0327 16:34:32.415999    7944 main.go:141] libmachine: STDOUT: 
	I0327 16:34:32.416021    7944 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0327 16:34:32.416057    7944 fix.go:56] duration metric: took 14.407291ms for fixHost
	I0327 16:34:32.416062    7944 start.go:83] releasing machines lock for "ha-772000", held for 14.423667ms
	W0327 16:34:32.416068    7944 start.go:713] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0327 16:34:32.416104    7944 out.go:239] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0327 16:34:32.416109    7944 start.go:728] Will try again in 5 seconds ...
	I0327 16:34:37.418156    7944 start.go:360] acquireMachinesLock for ha-772000: {Name:mk6b67ca98ff36f470ae98389325f11bd4950dc8 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0327 16:34:37.418484    7944 start.go:364] duration metric: took 234.584µs to acquireMachinesLock for "ha-772000"
	I0327 16:34:37.418635    7944 start.go:96] Skipping create...Using existing machine configuration
	I0327 16:34:37.418654    7944 fix.go:54] fixHost starting: 
	I0327 16:34:37.419349    7944 fix.go:112] recreateIfNeeded on ha-772000: state=Stopped err=<nil>
	W0327 16:34:37.419380    7944 fix.go:138] unexpected machine state, will restart: <nil>
	I0327 16:34:37.429821    7944 out.go:177] * Restarting existing qemu2 VM for "ha-772000" ...
	I0327 16:34:37.434904    7944 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18485-6511/.minikube/machines/ha-772000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18485-6511/.minikube/machines/ha-772000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18485-6511/.minikube/machines/ha-772000/qemu.pid -device virtio-net-pci,netdev=net0,mac=76:5c:d0:48:85:57 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18485-6511/.minikube/machines/ha-772000/disk.qcow2
	I0327 16:34:37.444385    7944 main.go:141] libmachine: STDOUT: 
	I0327 16:34:37.444504    7944 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0327 16:34:37.444593    7944 fix.go:56] duration metric: took 25.939417ms for fixHost
	I0327 16:34:37.444620    7944 start.go:83] releasing machines lock for "ha-772000", held for 26.110125ms
	W0327 16:34:37.444819    7944 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p ha-772000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p ha-772000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0327 16:34:37.452888    7944 out.go:177] 
	W0327 16:34:37.456866    7944 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0327 16:34:37.456910    7944 out.go:239] * 
	* 
	W0327 16:34:37.459500    7944 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0327 16:34:37.467825    7944 out.go:177] 
** /stderr **
ha_test.go:562: failed to start cluster. args "out/minikube-darwin-arm64 start -p ha-772000 --wait=true -v=7 --alsologtostderr --driver=qemu2 " : exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-772000 -n ha-772000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-772000 -n ha-772000: exit status 7 (69.015541ms)
-- stdout --
	Stopped
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "ha-772000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/RestartCluster (5.26s)
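
The restart path fails twice and then prints its own suggested recovery. Carried out by hand it would look like the following; these commands simply mirror the advice in the log, and they can only help once socket_vmnet itself is reachable again:

    # Recovery suggested by the log output itself
    out/minikube-darwin-arm64 delete -p ha-772000
    out/minikube-darwin-arm64 start -p ha-772000 --driver=qemu2

With the daemon still down, even a fresh create fails the same way, as TestImageBuild/serial/Setup below demonstrates.
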
TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.11s)
=== RUN   TestMultiControlPlane/serial/DegradedAfterClusterRestart
ha_test.go:390: (dbg) Run:  out/minikube-darwin-arm64 profile list --output json
ha_test.go:413: expected profile "ha-772000" in json of 'profile list' to have "Degraded" status but have "Stopped" status. got *"{\"invalid\":[],\"valid\":[{\"Name\":\"ha-772000\",\"Status\":\"Stopped\",\"Config\":{\"Name\":\"ha-772000\",\"KeepContext\":false,\"EmbedCerts\":false,\"MinikubeISO\":\"https://storage.googleapis.com/minikube-builds/iso/18485/minikube-v1.33.0-1711559712-18485-arm64.iso\",\"KicBaseImage\":\"gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1711559786-18485@sha256:2dcab64da240d825290a528fa79ad3c32db45fe5f8be5150468234a7114eff82\",\"Memory\":2200,\"CPUs\":2,\"DiskSize\":20000,\"Driver\":\"qemu2\",\"HyperkitVpnKitSock\":\"\",\"HyperkitVSockPorts\":[],\"DockerEnv\":null,\"ContainerVolumeMounts\":null,\"InsecureRegistry\":null,\"RegistryMirror\":[],\"HostOnlyCIDR\":\"192.168.59.1/24\",\"HypervVirtualSwitch\":\"\",\"HypervUseExternalSwitch\":false,\"HypervExternalAdapter\":\"\",\"KVMNetwork\":\"default\",\"KVMQemuURI\":\"qemu:///system\",\"KVMGPU\":false,\"KVMHidden\":false,\"KVMNUMACount\":1,\"APIServerPort\":8443,\"DockerOpt\":null,\"DisableDriverMounts\":false,\"NFSShare\":[],\"NFSSharesRoot\":\"/nfsshares\",\"UUID\":\"\",\"NoVTXCheck\":false,\"DNSProxy\":false,\"HostDNSResolver\":true,\"HostOnlyNicType\":\"virtio\",\"NatNicType\":\"virtio\",\"SSHIPAddress\":\"\",\"SSHUser\":\"root\",\"SSHKey\":\"\",\"SSHPort\":22,\"KubernetesConfig\":{\"KubernetesVersion\":\"v1.29.3\",\"ClusterName\":\"ha-772000\",\"Namespace\":\"default\",\"APIServerHAVIP\":\"\",\"APIServerName\":\"minikubeCA\",\"APIServerNames\":null,\"APIServerIPs\":null,\"DNSDomain\":\"cluster.local\",\"ContainerRuntime\":\"docker\",\"CRISocket\":\"\",\"NetworkPlugin\":\"cni\",\"FeatureGates\":\"\",\"ServiceCIDR\":\"10.96.0.0/12\",\"ImageRepository\":\"\",\"LoadBalancerStartIP\":\"\",\"LoadBalancerEndIP\":\"\",\"CustomIngressCert\":\"\",\"RegistryAliases\":\"\",\"ExtraOptions\":null,\"ShouldLoadCachedImages\":true,\"EnableDefaultCNI\":false,\"CNI\":\"\"},\"Nodes\":[{\"Name\":\"\",\"IP\":\"\",\"Port\":8443,\"KubernetesVersion\":\"v1.29.3\",\"ContainerRuntime\":\"docker\",\"ControlPlane\":true,\"Worker\":true}],\"Addons\":null,\"CustomAddonImages\":null,\"CustomAddonRegistries\":null,\"VerifyComponents\":{\"apiserver\":true,\"apps_running\":true,\"default_sa\":true,\"extra\":true,\"kubelet\":true,\"node_ready\":true,\"system_pods\":true},\"StartHostTimeout\":360000000000,\"ScheduledStop\":null,\"ExposedPorts\":[],\"ListenAddress\":\"\",\"Network\":\"socket_vmnet\",\"Subnet\":\"\",\"MultiNodeRequested\":true,\"ExtraDisks\":0,\"CertExpiration\":94608000000000000,\"Mount\":false,\"MountString\":\"/Users:/minikube-host\",\"Mount9PVersion\":\"9p2000.L\",\"MountGID\":\"docker\",\"MountIP\":\"\",\"MountMSize\":262144,\"MountOptions\":[],\"MountPort\":0,\"MountType\":\"9p\",\"MountUID\":\"docker\",\"BinaryMirror\":\"\",\"DisableOptimizations\":false,\"DisableMetrics\":false,\"CustomQemuFirmwarePath\":\"\",\"SocketVMnetClientPath\":\"/opt/socket_vmnet/bin/socket_vmnet_client\",\"SocketVMnetPath\":\"/var/run/socket_vmnet\",\"StaticIP\":\"\",\"SSHAuthSock\":\"\",\"SSHAgentPID\":0,\"GPUs\":\"\",\"AutoPauseInterval\":60000000000},\"Active\":false}]}"*. args: "out/minikube-darwin-arm64 profile list --output json"
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-772000 -n ha-772000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-772000 -n ha-772000: exit status 7 (32.204417ms)
-- stdout --
	Stopped
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "ha-772000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.11s)
TestMultiControlPlane/serial/AddSecondaryNode (0.08s)
=== RUN   TestMultiControlPlane/serial/AddSecondaryNode
ha_test.go:605: (dbg) Run:  out/minikube-darwin-arm64 node add -p ha-772000 --control-plane -v=7 --alsologtostderr
ha_test.go:605: (dbg) Non-zero exit: out/minikube-darwin-arm64 node add -p ha-772000 --control-plane -v=7 --alsologtostderr: exit status 83 (43.196792ms)
-- stdout --
	* The control-plane node ha-772000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p ha-772000"
-- /stdout --
** stderr ** 
	I0327 16:34:37.692561    7963 out.go:291] Setting OutFile to fd 1 ...
	I0327 16:34:37.692725    7963 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0327 16:34:37.692728    7963 out.go:304] Setting ErrFile to fd 2...
	I0327 16:34:37.692730    7963 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0327 16:34:37.692862    7963 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18485-6511/.minikube/bin
	I0327 16:34:37.693107    7963 mustload.go:65] Loading cluster: ha-772000
	I0327 16:34:37.693291    7963 config.go:182] Loaded profile config "ha-772000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.29.3
	I0327 16:34:37.697672    7963 out.go:177] * The control-plane node ha-772000 host is not running: state=Stopped
	I0327 16:34:37.701676    7963 out.go:177]   To start a cluster, run: "minikube start -p ha-772000"
** /stderr **
ha_test.go:607: failed to add control-plane node to current ha (multi-control plane) cluster. args "out/minikube-darwin-arm64 node add -p ha-772000 --control-plane -v=7 --alsologtostderr" : exit status 83
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-772000 -n ha-772000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-772000 -n ha-772000: exit status 7 (31.952125ms)
-- stdout --
	Stopped
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "ha-772000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/AddSecondaryNode (0.08s)
TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (0.1s)
=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd
ha_test.go:281: (dbg) Run:  out/minikube-darwin-arm64 profile list --output json
ha_test.go:304: expected profile "ha-772000" in json of 'profile list' to include 4 nodes but have 1 nodes. got *"{\"invalid\":[],\"valid\":[{\"Name\":\"ha-772000\",\"Status\":\"Stopped\",\"Config\":{\"Name\":\"ha-772000\",\"KeepContext\":false,\"EmbedCerts\":false,\"MinikubeISO\":\"https://storage.googleapis.com/minikube-builds/iso/18485/minikube-v1.33.0-1711559712-18485-arm64.iso\",\"KicBaseImage\":\"gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1711559786-18485@sha256:2dcab64da240d825290a528fa79ad3c32db45fe5f8be5150468234a7114eff82\",\"Memory\":2200,\"CPUs\":2,\"DiskSize\":20000,\"Driver\":\"qemu2\",\"HyperkitVpnKitSock\":\"\",\"HyperkitVSockPorts\":[],\"DockerEnv\":null,\"ContainerVolumeMounts\":null,\"InsecureRegistry\":null,\"RegistryMirror\":[],\"HostOnlyCIDR\":\"192.168.59.1/24\",\"HypervVirtualSwitch\":\"\",\"HypervUseExternalSwitch\":false,\"HypervExternalAdapter\":\"\",\"KVMNetwork\":\"default\",\"KVMQemuURI\":\"qemu:///system\",\"KVMGPU\":false,\"KVMHidden\":false,\"KVMNUMACount\":1,\"APIServerPort\":8443,\"DockerOpt\":null,\"DisableDriverMounts\":false,\"NFSShare\":[],\"NFSSharesRoot\":\"/nfsshares\",\"UUID\":\"\",\"NoVTXCheck\":false,\"DNSProxy\":false,\"HostDNSResolver\":true,\"HostOnlyNicType\":\"virtio\",\"NatNicType\":\"virtio\",\"SSHIPAddress\":\"\",\"SSHUser\":\"root\",\"SSHKey\":\"\",\"SSHPort\":22,\"KubernetesConfig\":{\"KubernetesVersion\":\"v1.29.3\",\"ClusterName\":\"ha-772000\",\"Namespace\":\"default\",\"APIServerHAVIP\":\"\",\"APIServerName\":\"minikubeCA\",\"APIServerNames\":null,\"APIServerIPs\":null,\"DNSDomain\":\"cluster.local\",\"ContainerRuntime\":\"docker\",\"CRISocket\":\"\",\"NetworkPlugin\":\"cni\",\"FeatureGates\":\"\",\"ServiceCIDR\":\"10.96.0.0/12\",\"ImageRepository\":\"\",\"LoadBalancerStartIP\":\"\",\"LoadBalancerEndIP\":\"\",\"CustomIngressCert\":\"\",\"RegistryAliases\":\"\",\"ExtraOptions\":null,\"ShouldLoadCachedImages\":true,\"EnableDefaultCNI\":false,\"CNI\":\"\"},\"Nodes\":[{\"Name\":\"\",\"IP\":\"\",\"Port\":8443,\"KubernetesVersion\":\"v1.29.3\",\"ContainerRuntime\":\"docker\",\"ControlPlane\":true,\"Worker\":true}],\"Addons\":null,\"CustomAddonImages\":null,\"CustomAddonRegistries\":null,\"VerifyComponents\":{\"apiserver\":true,\"apps_running\":true,\"default_sa\":true,\"extra\":true,\"kubelet\":true,\"node_ready\":true,\"system_pods\":true},\"StartHostTimeout\":360000000000,\"ScheduledStop\":null,\"ExposedPorts\":[],\"ListenAddress\":\"\",\"Network\":\"socket_vmnet\",\"Subnet\":\"\",\"MultiNodeRequested\":true,\"ExtraDisks\":0,\"CertExpiration\":94608000000000000,\"Mount\":false,\"MountString\":\"/Users:/minikube-host\",\"Mount9PVersion\":\"9p2000.L\",\"MountGID\":\"docker\",\"MountIP\":\"\",\"MountMSize\":262144,\"MountOptions\":[],\"MountPort\":0,\"MountType\":\"9p\",\"MountUID\":\"docker\",\"BinaryMirror\":\"\",\"DisableOptimizations\":false,\"DisableMetrics\":false,\"CustomQemuFirmwarePath\":\"\",\"SocketVMnetClientPath\":\"/opt/socket_vmnet/bin/socket_vmnet_client\",\"SocketVMnetPath\":\"/var/run/socket_vmnet\",\"StaticIP\":\"\",\"SSHAuthSock\":\"\",\"SSHAgentPID\":0,\"GPUs\":\"\",\"AutoPauseInterval\":60000000000},\"Active\":false}]}"*. args: "out/minikube-darwin-arm64 profile list --output json"
ha_test.go:307: expected profile "ha-772000" in json of 'profile list' to have "HAppy" status but have "Stopped" status. got *"{\"invalid\":[],\"valid\":[{\"Name\":\"ha-772000\",\"Status\":\"Stopped\",\"Config\":{\"Name\":\"ha-772000\",\"KeepContext\":false,\"EmbedCerts\":false,\"MinikubeISO\":\"https://storage.googleapis.com/minikube-builds/iso/18485/minikube-v1.33.0-1711559712-18485-arm64.iso\",\"KicBaseImage\":\"gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1711559786-18485@sha256:2dcab64da240d825290a528fa79ad3c32db45fe5f8be5150468234a7114eff82\",\"Memory\":2200,\"CPUs\":2,\"DiskSize\":20000,\"Driver\":\"qemu2\",\"HyperkitVpnKitSock\":\"\",\"HyperkitVSockPorts\":[],\"DockerEnv\":null,\"ContainerVolumeMounts\":null,\"InsecureRegistry\":null,\"RegistryMirror\":[],\"HostOnlyCIDR\":\"192.168.59.1/24\",\"HypervVirtualSwitch\":\"\",\"HypervUseExternalSwitch\":false,\"HypervExternalAdapter\":\"\",\"KVMNetwork\":\"default\",\"KVMQemuURI\":\"qemu:///system\",\"KVMGPU\":false,\"KVMHidden\":false,\"KVMNUMACount\":1,\"APIServerPort\":8443,\"DockerOpt\":null,\"DisableDriverMounts\":false,\"NFSShare\":[],\"NFSSharesRoot\":\"/nfsshares\",\"UUID\":\"\",\"NoVTXCheck\":false,\"DNSProxy\":false,\"HostDNSResolver\":true,\"HostOnlyNicType\":\"virtio\",\"NatNicType\":\"virtio\",\"SSHIPAddress\":\"\",\"SSHUser\":\"root\",\"SSHKey\":\"\",\"SSHPort\":22,\"KubernetesConfig\":{\"KubernetesVersion\":\"v1.29.3\",\"ClusterName\":\"ha-772000\",\"Namespace\":\"default\",\"APIServerHAVIP\":\"\",\"APIServerName\":\"minikubeCA\",\"APIServerNames\":null,\"APIServerIPs\":null,\"DNSDomain\":\"cluster.local\",\"ContainerRuntime\":\"docker\",\"CRISocket\":\"\",\"NetworkPlugin\":\"cni\",\"FeatureGates\":\"\",\"ServiceCIDR\":\"10.96.0.0/12\",\"ImageRepository\":\"\",\"LoadBalancerStartIP\":\"\",\"LoadBalancerEndIP\":\"\",\"CustomIngressCert\":\"\",\"RegistryAliases\":\"\",\"ExtraOptions\":null,\"ShouldLoadCachedImages\":true,\"EnableDefaultCNI\":false,\"CNI\":\"\"},\"Nodes\":[{\"Name\":\"\",\"IP\":\"\",\"Port\":8443,\"KubernetesVersion\":\"v1.29.3\",\"ContainerRuntime\":\"docker\",\"ControlPlane\":true,\"Worker\":true}],\"Addons\":null,\"CustomAddonImages\":null,\"CustomAddonRegistries\":null,\"VerifyComponents\":{\"apiserver\":true,\"apps_running\":true,\"default_sa\":true,\"extra\":true,\"kubelet\":true,\"node_ready\":true,\"system_pods\":true},\"StartHostTimeout\":360000000000,\"ScheduledStop\":null,\"ExposedPorts\":[],\"ListenAddress\":\"\",\"Network\":\"socket_vmnet\",\"Subnet\":\"\",\"MultiNodeRequested\":true,\"ExtraDisks\":0,\"CertExpiration\":94608000000000000,\"Mount\":false,\"MountString\":\"/Users:/minikube-host\",\"Mount9PVersion\":\"9p2000.L\",\"MountGID\":\"docker\",\"MountIP\":\"\",\"MountMSize\":262144,\"MountOptions\":[],\"MountPort\":0,\"MountType\":\"9p\",\"MountUID\":\"docker\",\"BinaryMirror\":\"\",\"DisableOptimizations\":false,\"DisableMetrics\":false,\"CustomQemuFirmwarePath\":\"\",\"SocketVMnetClientPath\":\"/opt/socket_vmnet/bin/socket_vmnet_client\",\"SocketVMnetPath\":\"/var/run/socket_vmnet\",\"StaticIP\":\"\",\"SSHAuthSock\":\"\",\"SSHAgentPID\":0,\"GPUs\":\"\",\"AutoPauseInterval\":60000000000},\"Active\":false}]}"*. args: "out/minikube-darwin-arm64 profile list --output json"
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-772000 -n ha-772000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-772000 -n ha-772000: exit status 7 (32.338417ms)
-- stdout --
	Stopped
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "ha-772000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (0.10s)
TestImageBuild/serial/Setup (9.96s)
=== RUN   TestImageBuild/serial/Setup
image_test.go:69: (dbg) Run:  out/minikube-darwin-arm64 start -p image-062000 --driver=qemu2 
image_test.go:69: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p image-062000 --driver=qemu2 : exit status 80 (9.887197s)
-- stdout --
	* [image-062000] minikube v1.33.0-beta.0 on Darwin 14.3.1 (arm64)
	  - MINIKUBE_LOCATION=18485
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18485-6511/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18485-6511/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "image-062000" primary control-plane node in "image-062000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "image-062000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	
-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p image-062000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
** /stderr **
image_test.go:70: failed to start minikube with args: "out/minikube-darwin-arm64 start -p image-062000 --driver=qemu2 " : exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p image-062000 -n image-062000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p image-062000 -n image-062000: exit status 7 (71.748125ms)
-- stdout --
	Stopped
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "image-062000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestImageBuild/serial/Setup (9.96s)
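Every start failure in this report reduces to the same root cause: 'Failed to connect to "/var/run/socket_vmnet": Connection refused', i.e. nothing is accepting connections on the socket_vmnet Unix socket that the qemu2 driver needs. A quick standalone probe, as a sketch (the socket path is taken from the SocketVMnetPath value in the logs; this only checks reachability and is not part of the test suite):

package main

import (
	"fmt"
	"net"
	"time"
)

func main() {
	// Dial the same Unix socket that socket_vmnet_client connects to.
	conn, err := net.DialTimeout("unix", "/var/run/socket_vmnet", 2*time.Second)
	if err != nil {
		// "connection refused" here reproduces the failure mode above:
		// the socket_vmnet daemon is not running or not listening on this path.
		fmt.Println("socket_vmnet not reachable:", err)
		return
	}
	defer conn.Close()
	fmt.Println("socket_vmnet is accepting connections")
}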
TestJSONOutput/start/Command (9.71s)
=== RUN   TestJSONOutput/start/Command
json_output_test.go:63: (dbg) Run:  out/minikube-darwin-arm64 start -p json-output-483000 --output=json --user=testUser --memory=2200 --wait=true --driver=qemu2 
json_output_test.go:63: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p json-output-483000 --output=json --user=testUser --memory=2200 --wait=true --driver=qemu2 : exit status 80 (9.709712458s)
-- stdout --
	{"specversion":"1.0","id":"a80d28ac-a649-4ab7-b258-a71610f3ff85","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[json-output-483000] minikube v1.33.0-beta.0 on Darwin 14.3.1 (arm64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"9db04d7e-77e8-4de0-a2af-c7fab95c81a0","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=18485"}}
	{"specversion":"1.0","id":"df57caa8-8173-40df-971c-3fe9113582fa","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/Users/jenkins/minikube-integration/18485-6511/kubeconfig"}}
	{"specversion":"1.0","id":"25d8a26f-9bde-4630-9920-b33c340b758a","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-darwin-arm64"}}
	{"specversion":"1.0","id":"b5de3d1f-f359-4509-ac32-93026c142aed","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"f0204494-69cb-4a53-93b1-c007fd4390dc","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/Users/jenkins/minikube-integration/18485-6511/.minikube"}}
	{"specversion":"1.0","id":"5ad8cb09-8ca8-4733-823b-e444a79262b8","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"89471356-5f69-4da7-901b-fab3cb33436f","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"1","message":"Using the qemu2 driver based on user configuration","name":"Selecting Driver","totalsteps":"19"}}
	{"specversion":"1.0","id":"68410ded-04a1-4ea3-a244-71b7c8e1be0f","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"Automatically selected the socket_vmnet network"}}
	{"specversion":"1.0","id":"7995a6d0-ca75-4837-8a7a-b35f76011604","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"3","message":"Starting \"json-output-483000\" primary control-plane node in \"json-output-483000\" cluster","name":"Starting Node","totalsteps":"19"}}
	{"specversion":"1.0","id":"fd43f464-a420-4c62-beb2-3cffafd1c0e1","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"9","message":"Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...","name":"Creating VM","totalsteps":"19"}}
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	{"specversion":"1.0","id":"15bfa539-01f3-47d1-9f69-832b01a0d04b","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"9","message":"Deleting \"json-output-483000\" in qemu2 ...","name":"Creating VM","totalsteps":"19"}}
	{"specversion":"1.0","id":"bc761335-05c5-44e8-b869-fc1b1fb2a896","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"message":"StartHost failed, but will try again: creating host: create: creating: Failed to connect to \"/var/run/socket_vmnet\": Connection refused: exit status 1"}}
	{"specversion":"1.0","id":"eb89f893-d5c0-4223-9ba0-db341848f99f","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"9","message":"Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...","name":"Creating VM","totalsteps":"19"}}
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	{"specversion":"1.0","id":"7570d155-d559-4da6-bc8b-8563df45fe90","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"message":"Failed to start qemu2 VM. Running \"minikube delete -p json-output-483000\" may fix it: creating host: create: creating: Failed to connect to \"/var/run/socket_vmnet\": Connection refused: exit status 1"}}
	{"specversion":"1.0","id":"1b6242ad-c00d-47b6-9a4b-3af53f814e25","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"80","issues":"","message":"error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to \"/var/run/socket_vmnet\": Connection refused: exit status 1","name":"GUEST_PROVISION","url":""}}
	{"specversion":"1.0","id":"a548dbe6-fc1d-4f62-b8dc-b5be630caffa","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"message":"╭───────────────────────────────────────────────────────────────────────────────────────────╮\n│                                                                                           │\n│    If the above advice does not help, please let us know:                                 │\n│    https://github.com/kubernetes/minikube/issues/new/choose                               │\n│                                                                                           │\n│    Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │\n│
│\n╰───────────────────────────────────────────────────────────────────────────────────────────╯"}}
-- /stdout --
json_output_test.go:65: failed to clean up: args "out/minikube-darwin-arm64 start -p json-output-483000 --output=json --user=testUser --memory=2200 --wait=true --driver=qemu2 ": exit status 80
json_output_test.go:213: unable to marshal output: OUTPUT: 
json_output_test.go:70: converting to cloud events: invalid character 'O' looking for beginning of value
--- FAIL: TestJSONOutput/start/Command (9.71s)
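The marshalling error at json_output_test.go:70 comes from the mixed stream above: minikube's --output=json mode emits one CloudEvent JSON object per line, but the raw "OUTPUT:" and "ERROR:" lines from socket_vmnet_client are interleaved verbatim, so a strict line-by-line JSON decode dies on the leading 'O'. A minimal sketch of that parse, with event fields limited to what the log shows (this is an illustration, not the test's actual code):

package main

import (
	"bufio"
	"encoding/json"
	"fmt"
	"strings"
)

type cloudEvent struct {
	Type string          `json:"type"`
	Data json.RawMessage `json:"data"`
}

func main() {
	stream := `{"specversion":"1.0","type":"io.k8s.sigs.minikube.step","data":{"currentstep":"9"}}
OUTPUT: 
ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused`

	sc := bufio.NewScanner(strings.NewReader(stream))
	for sc.Scan() {
		var ev cloudEvent
		if err := json.Unmarshal(sc.Bytes(), &ev); err != nil {
			// Reproduces the failure above: "invalid character 'O'
			// looking for beginning of value" on the OUTPUT: line.
			fmt.Println("not a cloud event:", err)
			continue
		}
		fmt.Println("event type:", ev.Type)
	}
}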
TestJSONOutput/pause/Command (0.09s)
=== RUN   TestJSONOutput/pause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-darwin-arm64 pause -p json-output-483000 --output=json --user=testUser
json_output_test.go:63: (dbg) Non-zero exit: out/minikube-darwin-arm64 pause -p json-output-483000 --output=json --user=testUser: exit status 83 (84.861083ms)
-- stdout --
	{"specversion":"1.0","id":"452a6bb4-c9d9-4b22-a913-3ad5f958db54","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"The control-plane node json-output-483000 host is not running: state=Stopped"}}
	{"specversion":"1.0","id":"325382f5-d6cf-4015-a58e-d23f8e07bc54","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"To start a cluster, run: \"minikube start -p json-output-483000\""}}
-- /stdout --
json_output_test.go:65: failed to clean up: args "out/minikube-darwin-arm64 pause -p json-output-483000 --output=json --user=testUser": exit status 83
--- FAIL: TestJSONOutput/pause/Command (0.09s)
TestJSONOutput/unpause/Command (0.05s)
=== RUN   TestJSONOutput/unpause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-darwin-arm64 unpause -p json-output-483000 --output=json --user=testUser
json_output_test.go:63: (dbg) Non-zero exit: out/minikube-darwin-arm64 unpause -p json-output-483000 --output=json --user=testUser: exit status 83 (45.17925ms)
-- stdout --
	* The control-plane node json-output-483000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p json-output-483000"
-- /stdout --
json_output_test.go:65: failed to clean up: args "out/minikube-darwin-arm64 unpause -p json-output-483000 --output=json --user=testUser": exit status 83
json_output_test.go:213: unable to marshal output: * The control-plane node json-output-483000 host is not running: state=Stopped
json_output_test.go:70: converting to cloud events: invalid character '*' looking for beginning of value
--- FAIL: TestJSONOutput/unpause/Command (0.05s)
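Both pause and unpause bail out with exit status 83 when the control-plane host is stopped, and the unpause path additionally prints plain "*"-prefixed text despite --output=json, which is what trips the CloudEvents check above. A small sketch of capturing that exit code the way a harness might (binary path and args copied from the failing invocation; the meaning of 83 is only what this report shows, not a documented table):

package main

import (
	"errors"
	"fmt"
	"os/exec"
)

func main() {
	cmd := exec.Command("out/minikube-darwin-arm64", "unpause",
		"-p", "json-output-483000", "--output=json", "--user=testUser")
	out, err := cmd.CombinedOutput()
	var ee *exec.ExitError
	if errors.As(err, &ee) {
		// In this report, exit 83 accompanies "host is not running: state=Stopped".
		fmt.Printf("exit %d\n%s", ee.ExitCode(), out)
		return
	}
	fmt.Printf("ok\n%s", out)
}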
TestMinikubeProfile (10.25s)
=== RUN   TestMinikubeProfile
minikube_profile_test.go:44: (dbg) Run:  out/minikube-darwin-arm64 start -p first-497000 --driver=qemu2 
minikube_profile_test.go:44: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p first-497000 --driver=qemu2 : exit status 80 (9.808579416s)
-- stdout --
	* [first-497000] minikube v1.33.0-beta.0 on Darwin 14.3.1 (arm64)
	  - MINIKUBE_LOCATION=18485
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18485-6511/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18485-6511/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "first-497000" primary control-plane node in "first-497000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "first-497000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	
-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p first-497000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
** /stderr **
minikube_profile_test.go:46: test pre-condition failed. args "out/minikube-darwin-arm64 start -p first-497000 --driver=qemu2 ": exit status 80
panic.go:626: *** TestMinikubeProfile FAILED at 2024-03-27 16:35:11.671398 -0700 PDT m=+533.146894043
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p second-499000 -n second-499000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p second-499000 -n second-499000: exit status 85 (82.261834ms)
-- stdout --
	* Profile "second-499000" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p second-499000"
-- /stdout --
helpers_test.go:239: status error: exit status 85 (may be ok)
helpers_test.go:241: "second-499000" host is not running, skipping log retrieval (state="* Profile \"second-499000\" not found. Run \"minikube profile list\" to view all profiles.\n  To start a cluster, run: \"minikube start -p second-499000\"")
helpers_test.go:175: Cleaning up "second-499000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p second-499000
panic.go:626: *** TestMinikubeProfile FAILED at 2024-03-27 16:35:11.983759 -0700 PDT m=+533.459264626
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p first-497000 -n first-497000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p first-497000 -n first-497000: exit status 7 (32.106166ms)
-- stdout --
	Stopped
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "first-497000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:175: Cleaning up "first-497000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p first-497000
--- FAIL: TestMinikubeProfile (10.25s)
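The post-mortem helper above tolerates two different non-zero status codes: 7 when the profile exists but the host is "Stopped", and 85 when the profile was never created. A sketch of that check (the command line is copied from helpers_test.go:239 above; the interpretation of the codes is inferred from this report only):

package main

import (
	"errors"
	"fmt"
	"os/exec"
	"strings"
)

// hostState runs the same status query the post-mortem uses and returns the
// printed state plus the process exit code.
func hostState(profile string) (string, int) {
	cmd := exec.Command("out/minikube-darwin-arm64", "status",
		"--format={{.Host}}", "-p", profile, "-n", profile)
	out, err := cmd.Output() // stdout is still returned on a non-zero exit
	var ee *exec.ExitError
	if errors.As(err, &ee) {
		return strings.TrimSpace(string(out)), ee.ExitCode()
	}
	return strings.TrimSpace(string(out)), 0
}

func main() {
	for _, p := range []string{"first-497000", "second-499000"} {
		state, code := hostState(p)
		// In this report: exit 7 with "Stopped", exit 85 with "Profile ... not found".
		fmt.Printf("%s: state=%q exit=%d\n", p, state, code)
	}
}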
TestMountStart/serial/StartWithMountFirst (11.13s)
=== RUN   TestMountStart/serial/StartWithMountFirst
mount_start_test.go:98: (dbg) Run:  out/minikube-darwin-arm64 start -p mount-start-1-709000 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=qemu2 
mount_start_test.go:98: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p mount-start-1-709000 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=qemu2 : exit status 80 (11.061476958s)
-- stdout --
	* [mount-start-1-709000] minikube v1.33.0-beta.0 on Darwin 14.3.1 (arm64)
	  - MINIKUBE_LOCATION=18485
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18485-6511/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18485-6511/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting minikube without Kubernetes in cluster mount-start-1-709000
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "mount-start-1-709000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	
-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p mount-start-1-709000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
** /stderr **
mount_start_test.go:100: failed to start minikube with args: "out/minikube-darwin-arm64 start -p mount-start-1-709000 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=qemu2 " : exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p mount-start-1-709000 -n mount-start-1-709000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p mount-start-1-709000 -n mount-start-1-709000: exit status 7 (68.974042ms)
-- stdout --
	Stopped
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "mount-start-1-709000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMountStart/serial/StartWithMountFirst (11.13s)
TestMultiNode/serial/FreshStart2Nodes (9.83s)
=== RUN   TestMultiNode/serial/FreshStart2Nodes
multinode_test.go:96: (dbg) Run:  out/minikube-darwin-arm64 start -p multinode-266000 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=qemu2 
multinode_test.go:96: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p multinode-266000 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=qemu2 : exit status 80 (9.760162458s)
-- stdout --
	* [multinode-266000] minikube v1.33.0-beta.0 on Darwin 14.3.1 (arm64)
	  - MINIKUBE_LOCATION=18485
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18485-6511/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18485-6511/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "multinode-266000" primary control-plane node in "multinode-266000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "multinode-266000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	
-- /stdout --
** stderr ** 
	I0327 16:35:23.608367    8132 out.go:291] Setting OutFile to fd 1 ...
	I0327 16:35:23.608506    8132 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0327 16:35:23.608509    8132 out.go:304] Setting ErrFile to fd 2...
	I0327 16:35:23.608511    8132 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0327 16:35:23.608651    8132 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18485-6511/.minikube/bin
	I0327 16:35:23.609787    8132 out.go:298] Setting JSON to false
	I0327 16:35:23.625885    8132 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":5694,"bootTime":1711576829,"procs":478,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.3.1","kernelVersion":"23.3.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0327 16:35:23.625939    8132 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0327 16:35:23.632300    8132 out.go:177] * [multinode-266000] minikube v1.33.0-beta.0 on Darwin 14.3.1 (arm64)
	I0327 16:35:23.639241    8132 out.go:177]   - MINIKUBE_LOCATION=18485
	I0327 16:35:23.643226    8132 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/18485-6511/kubeconfig
	I0327 16:35:23.639301    8132 notify.go:220] Checking for updates...
	I0327 16:35:23.649209    8132 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0327 16:35:23.652216    8132 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0327 16:35:23.655288    8132 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18485-6511/.minikube
	I0327 16:35:23.658228    8132 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0327 16:35:23.661425    8132 driver.go:392] Setting default libvirt URI to qemu:///system
	I0327 16:35:23.665241    8132 out.go:177] * Using the qemu2 driver based on user configuration
	I0327 16:35:23.672228    8132 start.go:297] selected driver: qemu2
	I0327 16:35:23.672234    8132 start.go:901] validating driver "qemu2" against <nil>
	I0327 16:35:23.672240    8132 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0327 16:35:23.674476    8132 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0327 16:35:23.677222    8132 out.go:177] * Automatically selected the socket_vmnet network
	I0327 16:35:23.678645    8132 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0327 16:35:23.678678    8132 cni.go:84] Creating CNI manager for ""
	I0327 16:35:23.678681    8132 cni.go:136] multinode detected (0 nodes found), recommending kindnet
	I0327 16:35:23.678690    8132 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I0327 16:35:23.678718    8132 start.go:340] cluster config:
	{Name:multinode-266000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1711559786-18485@sha256:2dcab64da240d825290a528fa79ad3c32db45fe5f8be5150468234a7114eff82 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.3 ClusterName:multinode-266000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRu
ntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vm
net_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0327 16:35:23.683141    8132 iso.go:125] acquiring lock: {Name:mk26a7f41e004242919e6d5076ec0f9645f820ad Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0327 16:35:23.690228    8132 out.go:177] * Starting "multinode-266000" primary control-plane node in "multinode-266000" cluster
	I0327 16:35:23.694201    8132 preload.go:132] Checking if preload exists for k8s version v1.29.3 and runtime docker
	I0327 16:35:23.694234    8132 preload.go:147] Found local preload: /Users/jenkins/minikube-integration/18485-6511/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.29.3-docker-overlay2-arm64.tar.lz4
	I0327 16:35:23.694246    8132 cache.go:56] Caching tarball of preloaded images
	I0327 16:35:23.694307    8132 preload.go:173] Found /Users/jenkins/minikube-integration/18485-6511/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.29.3-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0327 16:35:23.694312    8132 cache.go:59] Finished verifying existence of preloaded tar for v1.29.3 on docker
	I0327 16:35:23.694546    8132 profile.go:142] Saving config to /Users/jenkins/minikube-integration/18485-6511/.minikube/profiles/multinode-266000/config.json ...
	I0327 16:35:23.694561    8132 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/18485-6511/.minikube/profiles/multinode-266000/config.json: {Name:mkdf03e56dc2e28f1916427d8e1453eee48532b2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0327 16:35:23.694794    8132 start.go:360] acquireMachinesLock for multinode-266000: {Name:mk6b67ca98ff36f470ae98389325f11bd4950dc8 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0327 16:35:23.694828    8132 start.go:364] duration metric: took 28.125µs to acquireMachinesLock for "multinode-266000"
	I0327 16:35:23.694841    8132 start.go:93] Provisioning new machine with config: &{Name:multinode-266000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18485/minikube-v1.33.0-1711559712-18485-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1711559786-18485@sha256:2dcab64da240d825290a528fa79ad3c32db45fe5f8be5150468234a7114eff82 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{K
ubernetesVersion:v1.29.3 ClusterName:multinode-266000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[
] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0327 16:35:23.694873    8132 start.go:125] createHost starting for "" (driver="qemu2")
	I0327 16:35:23.702298    8132 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0327 16:35:23.720102    8132 start.go:159] libmachine.API.Create for "multinode-266000" (driver="qemu2")
	I0327 16:35:23.720127    8132 client.go:168] LocalClient.Create starting
	I0327 16:35:23.720198    8132 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18485-6511/.minikube/certs/ca.pem
	I0327 16:35:23.720226    8132 main.go:141] libmachine: Decoding PEM data...
	I0327 16:35:23.720237    8132 main.go:141] libmachine: Parsing certificate...
	I0327 16:35:23.720275    8132 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18485-6511/.minikube/certs/cert.pem
	I0327 16:35:23.720297    8132 main.go:141] libmachine: Decoding PEM data...
	I0327 16:35:23.720305    8132 main.go:141] libmachine: Parsing certificate...
	I0327 16:35:23.720679    8132 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/18485-6511/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/18485-6511/.minikube/cache/iso/arm64/minikube-v1.33.0-1711559712-18485-arm64.iso...
	I0327 16:35:23.855591    8132 main.go:141] libmachine: Creating SSH key...
	I0327 16:35:23.889871    8132 main.go:141] libmachine: Creating Disk image...
	I0327 16:35:23.889875    8132 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0327 16:35:23.890017    8132 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/18485-6511/.minikube/machines/multinode-266000/disk.qcow2.raw /Users/jenkins/minikube-integration/18485-6511/.minikube/machines/multinode-266000/disk.qcow2
	I0327 16:35:23.902528    8132 main.go:141] libmachine: STDOUT: 
	I0327 16:35:23.902554    8132 main.go:141] libmachine: STDERR: 
	I0327 16:35:23.902609    8132 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/18485-6511/.minikube/machines/multinode-266000/disk.qcow2 +20000M
	I0327 16:35:23.913126    8132 main.go:141] libmachine: STDOUT: Image resized.
	
	I0327 16:35:23.913141    8132 main.go:141] libmachine: STDERR: 
	I0327 16:35:23.913161    8132 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/18485-6511/.minikube/machines/multinode-266000/disk.qcow2.raw and /Users/jenkins/minikube-integration/18485-6511/.minikube/machines/multinode-266000/disk.qcow2
	I0327 16:35:23.913167    8132 main.go:141] libmachine: Starting QEMU VM...
	I0327 16:35:23.913192    8132 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18485-6511/.minikube/machines/multinode-266000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18485-6511/.minikube/machines/multinode-266000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18485-6511/.minikube/machines/multinode-266000/qemu.pid -device virtio-net-pci,netdev=net0,mac=be:17:93:1f:7a:3c -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18485-6511/.minikube/machines/multinode-266000/disk.qcow2
	I0327 16:35:23.914906    8132 main.go:141] libmachine: STDOUT: 
	I0327 16:35:23.914920    8132 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0327 16:35:23.914937    8132 client.go:171] duration metric: took 194.81175ms to LocalClient.Create
	I0327 16:35:25.917086    8132 start.go:128] duration metric: took 2.22224075s to createHost
	I0327 16:35:25.917141    8132 start.go:83] releasing machines lock for "multinode-266000", held for 2.222369292s
	W0327 16:35:25.917231    8132 start.go:713] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0327 16:35:25.931541    8132 out.go:177] * Deleting "multinode-266000" in qemu2 ...
	W0327 16:35:25.955908    8132 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0327 16:35:25.955939    8132 start.go:728] Will try again in 5 seconds ...
	I0327 16:35:30.958034    8132 start.go:360] acquireMachinesLock for multinode-266000: {Name:mk6b67ca98ff36f470ae98389325f11bd4950dc8 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0327 16:35:30.958507    8132 start.go:364] duration metric: took 342.042µs to acquireMachinesLock for "multinode-266000"
	I0327 16:35:30.958643    8132 start.go:93] Provisioning new machine with config: &{Name:multinode-266000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18485/minikube-v1.33.0-1711559712-18485-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1711559786-18485@sha256:2dcab64da240d825290a528fa79ad3c32db45fe5f8be5150468234a7114eff82 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{K
ubernetesVersion:v1.29.3 ClusterName:multinode-266000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[
] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0327 16:35:30.958942    8132 start.go:125] createHost starting for "" (driver="qemu2")
	I0327 16:35:30.964664    8132 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0327 16:35:31.014770    8132 start.go:159] libmachine.API.Create for "multinode-266000" (driver="qemu2")
	I0327 16:35:31.014811    8132 client.go:168] LocalClient.Create starting
	I0327 16:35:31.014909    8132 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18485-6511/.minikube/certs/ca.pem
	I0327 16:35:31.014970    8132 main.go:141] libmachine: Decoding PEM data...
	I0327 16:35:31.014986    8132 main.go:141] libmachine: Parsing certificate...
	I0327 16:35:31.015048    8132 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18485-6511/.minikube/certs/cert.pem
	I0327 16:35:31.015089    8132 main.go:141] libmachine: Decoding PEM data...
	I0327 16:35:31.015103    8132 main.go:141] libmachine: Parsing certificate...
	I0327 16:35:31.015644    8132 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/18485-6511/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/18485-6511/.minikube/cache/iso/arm64/minikube-v1.33.0-1711559712-18485-arm64.iso...
	I0327 16:35:31.163015    8132 main.go:141] libmachine: Creating SSH key...
	I0327 16:35:31.270761    8132 main.go:141] libmachine: Creating Disk image...
	I0327 16:35:31.270766    8132 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0327 16:35:31.270936    8132 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/18485-6511/.minikube/machines/multinode-266000/disk.qcow2.raw /Users/jenkins/minikube-integration/18485-6511/.minikube/machines/multinode-266000/disk.qcow2
	I0327 16:35:31.283089    8132 main.go:141] libmachine: STDOUT: 
	I0327 16:35:31.283111    8132 main.go:141] libmachine: STDERR: 
	I0327 16:35:31.283162    8132 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/18485-6511/.minikube/machines/multinode-266000/disk.qcow2 +20000M
	I0327 16:35:31.293746    8132 main.go:141] libmachine: STDOUT: Image resized.
	
	I0327 16:35:31.293762    8132 main.go:141] libmachine: STDERR: 
	I0327 16:35:31.293773    8132 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/18485-6511/.minikube/machines/multinode-266000/disk.qcow2.raw and /Users/jenkins/minikube-integration/18485-6511/.minikube/machines/multinode-266000/disk.qcow2
	I0327 16:35:31.293780    8132 main.go:141] libmachine: Starting QEMU VM...
	I0327 16:35:31.293824    8132 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18485-6511/.minikube/machines/multinode-266000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18485-6511/.minikube/machines/multinode-266000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18485-6511/.minikube/machines/multinode-266000/qemu.pid -device virtio-net-pci,netdev=net0,mac=c6:be:4d:47:a9:5c -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18485-6511/.minikube/machines/multinode-266000/disk.qcow2
	I0327 16:35:31.295502    8132 main.go:141] libmachine: STDOUT: 
	I0327 16:35:31.295519    8132 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0327 16:35:31.295531    8132 client.go:171] duration metric: took 280.723375ms to LocalClient.Create
	I0327 16:35:33.297643    8132 start.go:128] duration metric: took 2.338740958s to createHost
	I0327 16:35:33.297691    8132 start.go:83] releasing machines lock for "multinode-266000", held for 2.339230875s
	W0327 16:35:33.298047    8132 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p multinode-266000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p multinode-266000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0327 16:35:33.308145    8132 out.go:177] 
	W0327 16:35:33.312209    8132 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0327 16:35:33.312288    8132 out.go:239] * 
	* 
	W0327 16:35:33.314959    8132 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0327 16:35:33.324171    8132 out.go:177] 
** /stderr **
multinode_test.go:98: failed to start cluster. args "out/minikube-darwin-arm64 start -p multinode-266000 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=qemu2 " : exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-266000 -n multinode-266000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-266000 -n multinode-266000: exit status 7 (71.784833ms)
-- stdout --
	Stopped
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-266000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/FreshStart2Nodes (9.83s)
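The stderr trace above shows the full create/retry path: libmachine builds the disk with qemu-img, hands the qemu-system-aarch64 command line to socket_vmnet_client, gets "Connection refused", deletes the half-created host, waits five seconds, and tries exactly once more before exiting 80. A stripped-down sketch of that control flow (startHost is a stand-in for the createHost path in start.go, not a real API):

package main

import (
	"errors"
	"fmt"
	"time"
)

// startHost stands in for createHost; in this report it always fails the
// same way because nothing is listening on /var/run/socket_vmnet.
func startHost() error {
	return errors.New(`Failed to connect to "/var/run/socket_vmnet": Connection refused`)
}

func main() {
	err := startHost()
	if err == nil {
		return
	}
	fmt.Println("! StartHost failed, but will try again:", err)
	time.Sleep(5 * time.Second) // "Will try again in 5 seconds ..."
	if err := startHost(); err != nil {
		fmt.Println("X Exiting due to GUEST_PROVISION:", err) // exit status 80
	}
}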
TestMultiNode/serial/DeployApp2Nodes (107.42s)
=== RUN   TestMultiNode/serial/DeployApp2Nodes
multinode_test.go:493: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-266000 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml
multinode_test.go:493: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-266000 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml: exit status 1 (61.719292ms)
** stderr ** 
	error: cluster "multinode-266000" does not exist
** /stderr **
multinode_test.go:495: failed to create busybox deployment to multinode cluster
multinode_test.go:498: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-266000 -- rollout status deployment/busybox
multinode_test.go:498: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-266000 -- rollout status deployment/busybox: exit status 1 (59.109083ms)
** stderr ** 
	error: no server found for cluster "multinode-266000"
** /stderr **
multinode_test.go:500: failed to deploy busybox to multinode cluster
multinode_test.go:505: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-266000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:505: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-266000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (58.483583ms)
** stderr ** 
	error: no server found for cluster "multinode-266000"
** /stderr **
multinode_test.go:508: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:505: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-266000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:505: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-266000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (104.033ms)
** stderr ** 
	error: no server found for cluster "multinode-266000"
** /stderr **
multinode_test.go:508: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:505: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-266000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:505: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-266000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (107.864625ms)
** stderr ** 
	error: no server found for cluster "multinode-266000"
** /stderr **
multinode_test.go:508: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:505: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-266000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:505: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-266000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (106.389917ms)
** stderr ** 
	error: no server found for cluster "multinode-266000"
** /stderr **
multinode_test.go:508: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:505: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-266000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:505: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-266000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (105.837542ms)
** stderr ** 
	error: no server found for cluster "multinode-266000"
** /stderr **
multinode_test.go:508: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:505: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-266000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:505: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-266000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (105.474625ms)
** stderr ** 
	error: no server found for cluster "multinode-266000"
** /stderr **
multinode_test.go:508: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:505: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-266000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:505: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-266000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (104.330375ms)
** stderr ** 
	error: no server found for cluster "multinode-266000"
** /stderr **
multinode_test.go:508: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:505: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-266000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:505: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-266000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (106.061041ms)
** stderr ** 
	error: no server found for cluster "multinode-266000"
** /stderr **
multinode_test.go:508: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:505: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-266000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:505: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-266000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (105.176875ms)
** stderr ** 
	error: no server found for cluster "multinode-266000"
** /stderr **
multinode_test.go:508: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:505: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-266000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:505: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-266000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (104.701959ms)
** stderr ** 
	error: no server found for cluster "multinode-266000"
** /stderr **
multinode_test.go:508: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:505: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-266000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:505: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-266000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (108.247041ms)

** stderr ** 
	error: no server found for cluster "multinode-266000"

** /stderr **
multinode_test.go:508: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:524: failed to resolve pod IPs: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:528: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-266000 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:528: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-266000 -- get pods -o jsonpath='{.items[*].metadata.name}': exit status 1 (58.963833ms)

** stderr ** 
	error: no server found for cluster "multinode-266000"

** /stderr **
multinode_test.go:530: failed get Pod names
multinode_test.go:536: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-266000 -- exec  -- nslookup kubernetes.io
multinode_test.go:536: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-266000 -- exec  -- nslookup kubernetes.io: exit status 1 (58.837ms)

** stderr ** 
	error: no server found for cluster "multinode-266000"

** /stderr **
multinode_test.go:538: Pod  could not resolve 'kubernetes.io': exit status 1
multinode_test.go:546: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-266000 -- exec  -- nslookup kubernetes.default
multinode_test.go:546: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-266000 -- exec  -- nslookup kubernetes.default: exit status 1 (58.708625ms)

** stderr ** 
	error: no server found for cluster "multinode-266000"

** /stderr **
multinode_test.go:548: Pod  could not resolve 'kubernetes.default': exit status 1
multinode_test.go:554: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-266000 -- exec  -- nslookup kubernetes.default.svc.cluster.local
multinode_test.go:554: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-266000 -- exec  -- nslookup kubernetes.default.svc.cluster.local: exit status 1 (58.528708ms)

** stderr ** 
	error: no server found for cluster "multinode-266000"

** /stderr **
multinode_test.go:556: Pod  could not resolve local service (kubernetes.default.svc.cluster.local): exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-266000 -n multinode-266000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-266000 -n multinode-266000: exit status 7 (32.185ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-266000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/DeployApp2Nodes (107.42s)
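
All of the kubectl retries above fail with the same kubeconfig error ("no server found for cluster"), so this is a cluster bring-up failure, not a pod-level one. A minimal triage sketch, assuming the default kubeconfig location; these commands are illustrative and not part of the test:

    # Does the kubeconfig contain a cluster entry for this profile?
    kubectl config get-contexts
    kubectl config view -o jsonpath='{.clusters[?(@.name=="multinode-266000")].cluster.server}'

    # Against a running cluster, minikube can rewrite a stale or missing entry
    minikube -p multinode-266000 update-context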

TestMultiNode/serial/PingHostFrom2Pods (0.09s)

=== RUN   TestMultiNode/serial/PingHostFrom2Pods
multinode_test.go:564: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-266000 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:564: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-266000 -- get pods -o jsonpath='{.items[*].metadata.name}': exit status 1 (58.941584ms)

** stderr ** 
	error: no server found for cluster "multinode-266000"

** /stderr **
multinode_test.go:566: failed to get Pod names: exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-266000 -n multinode-266000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-266000 -n multinode-266000: exit status 7 (32.09825ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-266000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/PingHostFrom2Pods (0.09s)

TestMultiNode/serial/AddNode (0.08s)

=== RUN   TestMultiNode/serial/AddNode
multinode_test.go:121: (dbg) Run:  out/minikube-darwin-arm64 node add -p multinode-266000 -v 3 --alsologtostderr
multinode_test.go:121: (dbg) Non-zero exit: out/minikube-darwin-arm64 node add -p multinode-266000 -v 3 --alsologtostderr: exit status 83 (44.7865ms)

-- stdout --
	* The control-plane node multinode-266000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p multinode-266000"

-- /stdout --
** stderr ** 
	I0327 16:37:20.948359    8228 out.go:291] Setting OutFile to fd 1 ...
	I0327 16:37:20.948512    8228 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0327 16:37:20.948515    8228 out.go:304] Setting ErrFile to fd 2...
	I0327 16:37:20.948517    8228 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0327 16:37:20.948645    8228 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18485-6511/.minikube/bin
	I0327 16:37:20.948895    8228 mustload.go:65] Loading cluster: multinode-266000
	I0327 16:37:20.949091    8228 config.go:182] Loaded profile config "multinode-266000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.29.3
	I0327 16:37:20.953965    8228 out.go:177] * The control-plane node multinode-266000 host is not running: state=Stopped
	I0327 16:37:20.958735    8228 out.go:177]   To start a cluster, run: "minikube start -p multinode-266000"

** /stderr **
multinode_test.go:123: failed to add node to current cluster. args "out/minikube-darwin-arm64 node add -p multinode-266000 -v 3 --alsologtostderr" : exit status 83
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-266000 -n multinode-266000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-266000 -n multinode-266000: exit status 7 (32.038583ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-266000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/AddNode (0.08s)
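
Exit status 83 is minikube declining to operate on a stopped host: "node add" requires a running control plane. A sketch of the recovery order, assuming the VM could actually boot (it cannot in this run; see the socket_vmnet connection errors under RestartKeepsNodes below):

    minikube start -p multinode-266000       # bring the control plane up first
    minikube node add -p multinode-266000    # then add the worker node
    minikube node list -p multinode-266000   # confirm the node is recorded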

TestMultiNode/serial/MultiNodeLabels (0.06s)

=== RUN   TestMultiNode/serial/MultiNodeLabels
multinode_test.go:221: (dbg) Run:  kubectl --context multinode-266000 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
multinode_test.go:221: (dbg) Non-zero exit: kubectl --context multinode-266000 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]": exit status 1 (26.809417ms)

** stderr ** 
	Error in configuration: context was not found for specified context: multinode-266000

** /stderr **
multinode_test.go:223: failed to 'kubectl get nodes' with args "kubectl --context multinode-266000 get nodes -o \"jsonpath=[{range .items[*]}{.metadata.labels},{end}]\"": exit status 1
multinode_test.go:230: failed to decode json from label list: args "kubectl --context multinode-266000 get nodes -o \"jsonpath=[{range .items[*]}{.metadata.labels},{end}]\"": unexpected end of JSON input
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-266000 -n multinode-266000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-266000 -n multinode-266000: exit status 7 (31.853125ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-266000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/MultiNodeLabels (0.06s)
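
Unlike the earlier failures, which went through the minikube kubectl wrapper ("no server found"), this test calls kubectl directly with --context and gets "context was not found": the context was never written because start never completed. A quick check, as a sketch:

    # The context only appears here after a successful "minikube start"
    kubectl config get-contexts -o name | grep multinode-266000 || echo "context missing"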

TestMultiNode/serial/ProfileList (0.1s)

=== RUN   TestMultiNode/serial/ProfileList
multinode_test.go:143: (dbg) Run:  out/minikube-darwin-arm64 profile list --output json
multinode_test.go:166: expected profile "multinode-266000" in json of 'profile list' include 3 nodes but have 1 nodes. got *"{\"invalid\":[],\"valid\":[{\"Name\":\"multinode-266000\",\"Status\":\"Stopped\",\"Config\":{\"Name\":\"multinode-266000\",\"KeepContext\":false,\"EmbedCerts\":false,\"MinikubeISO\":\"https://storage.googleapis.com/minikube-builds/iso/18485/minikube-v1.33.0-1711559712-18485-arm64.iso\",\"KicBaseImage\":\"gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1711559786-18485@sha256:2dcab64da240d825290a528fa79ad3c32db45fe5f8be5150468234a7114eff82\",\"Memory\":2200,\"CPUs\":2,\"DiskSize\":20000,\"Driver\":\"qemu2\",\"HyperkitVpnKitSock\":\"\",\"HyperkitVSockPorts\":[],\"DockerEnv\":null,\"ContainerVolumeMounts\":null,\"InsecureRegistry\":null,\"RegistryMirror\":[],\"HostOnlyCIDR\":\"192.168.59.1/24\",\"HypervVirtualSwitch\":\"\",\"HypervUseExternalSwitch\":false,\"HypervExternalAdapter\":\"\",\"KVMNetwork\":\"default\",\"KVMQemuURI\":\"qemu:///system\",\"KVMGPU\":false,\"KVMHidden\":false,\"KVMNU
MACount\":1,\"APIServerPort\":8443,\"DockerOpt\":null,\"DisableDriverMounts\":false,\"NFSShare\":[],\"NFSSharesRoot\":\"/nfsshares\",\"UUID\":\"\",\"NoVTXCheck\":false,\"DNSProxy\":false,\"HostDNSResolver\":true,\"HostOnlyNicType\":\"virtio\",\"NatNicType\":\"virtio\",\"SSHIPAddress\":\"\",\"SSHUser\":\"root\",\"SSHKey\":\"\",\"SSHPort\":22,\"KubernetesConfig\":{\"KubernetesVersion\":\"v1.29.3\",\"ClusterName\":\"multinode-266000\",\"Namespace\":\"default\",\"APIServerHAVIP\":\"\",\"APIServerName\":\"minikubeCA\",\"APIServerNames\":null,\"APIServerIPs\":null,\"DNSDomain\":\"cluster.local\",\"ContainerRuntime\":\"docker\",\"CRISocket\":\"\",\"NetworkPlugin\":\"cni\",\"FeatureGates\":\"\",\"ServiceCIDR\":\"10.96.0.0/12\",\"ImageRepository\":\"\",\"LoadBalancerStartIP\":\"\",\"LoadBalancerEndIP\":\"\",\"CustomIngressCert\":\"\",\"RegistryAliases\":\"\",\"ExtraOptions\":null,\"ShouldLoadCachedImages\":true,\"EnableDefaultCNI\":false,\"CNI\":\"\"},\"Nodes\":[{\"Name\":\"\",\"IP\":\"\",\"Port\":8443,\"KubernetesVer
sion\":\"v1.29.3\",\"ContainerRuntime\":\"docker\",\"ControlPlane\":true,\"Worker\":true}],\"Addons\":null,\"CustomAddonImages\":null,\"CustomAddonRegistries\":null,\"VerifyComponents\":{\"apiserver\":true,\"apps_running\":true,\"default_sa\":true,\"extra\":true,\"kubelet\":true,\"node_ready\":true,\"system_pods\":true},\"StartHostTimeout\":360000000000,\"ScheduledStop\":null,\"ExposedPorts\":[],\"ListenAddress\":\"\",\"Network\":\"socket_vmnet\",\"Subnet\":\"\",\"MultiNodeRequested\":true,\"ExtraDisks\":0,\"CertExpiration\":94608000000000000,\"Mount\":false,\"MountString\":\"/Users:/minikube-host\",\"Mount9PVersion\":\"9p2000.L\",\"MountGID\":\"docker\",\"MountIP\":\"\",\"MountMSize\":262144,\"MountOptions\":[],\"MountPort\":0,\"MountType\":\"9p\",\"MountUID\":\"docker\",\"BinaryMirror\":\"\",\"DisableOptimizations\":false,\"DisableMetrics\":false,\"CustomQemuFirmwarePath\":\"\",\"SocketVMnetClientPath\":\"/opt/socket_vmnet/bin/socket_vmnet_client\",\"SocketVMnetPath\":\"/var/run/socket_vmnet\",\"StaticIP\":
\"\",\"SSHAuthSock\":\"\",\"SSHAgentPID\":0,\"GPUs\":\"\",\"AutoPauseInterval\":60000000000},\"Active\":false}]}"*. args: "out/minikube-darwin-arm64 profile list --output json"
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-266000 -n multinode-266000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-266000 -n multinode-266000: exit status 7 (31.560667ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-266000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/ProfileList (0.10s)
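
The profile JSON above records a single entry in Config.Nodes, while the test expects three (one control plane plus two workers). A one-liner that extracts the same count the test asserts on, assuming jq is available (jq is not part of the test harness):

    minikube profile list --output json \
      | jq '.valid[] | select(.Name == "multinode-266000") | .Config.Nodes | length'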

TestMultiNode/serial/CopyFile (0.06s)

=== RUN   TestMultiNode/serial/CopyFile
multinode_test.go:184: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-266000 status --output json --alsologtostderr
multinode_test.go:184: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-266000 status --output json --alsologtostderr: exit status 7 (32.055708ms)

-- stdout --
	{"Name":"multinode-266000","Host":"Stopped","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Stopped","Worker":false}

-- /stdout --
** stderr ** 
	I0327 16:37:21.188759    8241 out.go:291] Setting OutFile to fd 1 ...
	I0327 16:37:21.188889    8241 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0327 16:37:21.188892    8241 out.go:304] Setting ErrFile to fd 2...
	I0327 16:37:21.188894    8241 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0327 16:37:21.189000    8241 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18485-6511/.minikube/bin
	I0327 16:37:21.189118    8241 out.go:298] Setting JSON to true
	I0327 16:37:21.189129    8241 mustload.go:65] Loading cluster: multinode-266000
	I0327 16:37:21.189182    8241 notify.go:220] Checking for updates...
	I0327 16:37:21.189325    8241 config.go:182] Loaded profile config "multinode-266000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.29.3
	I0327 16:37:21.189330    8241 status.go:255] checking status of multinode-266000 ...
	I0327 16:37:21.189537    8241 status.go:330] multinode-266000 host status = "Stopped" (err=<nil>)
	I0327 16:37:21.189540    8241 status.go:343] host is not running, skipping remaining checks
	I0327 16:37:21.189543    8241 status.go:257] multinode-266000 status: &{Name:multinode-266000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
multinode_test.go:191: failed to decode json from status: args "out/minikube-darwin-arm64 -p multinode-266000 status --output json --alsologtostderr": json: cannot unmarshal object into Go value of type []cmd.Status
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-266000 -n multinode-266000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-266000 -n multinode-266000: exit status 7 (32.164167ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-266000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/CopyFile (0.06s)
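
The unmarshal error is a shape mismatch: the one-node profile prints a single JSON object (see the stdout above), while the test decodes a slice ([]cmd.Status). A sketch of how to inspect the shape directly, again assuming jq:

    minikube -p multinode-266000 status --output json | jq type   # "object" here, not "array"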

TestMultiNode/serial/StopNode (0.14s)

=== RUN   TestMultiNode/serial/StopNode
multinode_test.go:248: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-266000 node stop m03
multinode_test.go:248: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-266000 node stop m03: exit status 85 (48.056417ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	X Exiting due to GUEST_NODE_RETRIEVE: retrieving node: Could not find node m03
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                         │
	│    * If the above advice does not help, please let us know:                                                             │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
	│                                                                                                                         │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
	│    * Please also attach the following file to the GitHub issue:                                                         │
	│    * - /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/minikube_node_295f67d8757edd996fe5c1e7ccde72c355ccf4dc_0.log    │
	│                                                                                                                         │
	╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
multinode_test.go:250: node stop returned an error. args "out/minikube-darwin-arm64 -p multinode-266000 node stop m03": exit status 85
multinode_test.go:254: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-266000 status
multinode_test.go:254: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-266000 status: exit status 7 (32.255125ms)

-- stdout --
	multinode-266000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
multinode_test.go:261: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-266000 status --alsologtostderr
multinode_test.go:261: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-266000 status --alsologtostderr: exit status 7 (32.247708ms)

-- stdout --
	multinode-266000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
** stderr ** 
	I0327 16:37:21.334272    8249 out.go:291] Setting OutFile to fd 1 ...
	I0327 16:37:21.334387    8249 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0327 16:37:21.334390    8249 out.go:304] Setting ErrFile to fd 2...
	I0327 16:37:21.334392    8249 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0327 16:37:21.334520    8249 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18485-6511/.minikube/bin
	I0327 16:37:21.334647    8249 out.go:298] Setting JSON to false
	I0327 16:37:21.334658    8249 mustload.go:65] Loading cluster: multinode-266000
	I0327 16:37:21.334712    8249 notify.go:220] Checking for updates...
	I0327 16:37:21.334876    8249 config.go:182] Loaded profile config "multinode-266000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.29.3
	I0327 16:37:21.334882    8249 status.go:255] checking status of multinode-266000 ...
	I0327 16:37:21.335093    8249 status.go:330] multinode-266000 host status = "Stopped" (err=<nil>)
	I0327 16:37:21.335096    8249 status.go:343] host is not running, skipping remaining checks
	I0327 16:37:21.335098    8249 status.go:257] multinode-266000 status: &{Name:multinode-266000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
multinode_test.go:267: incorrect number of running kubelets: args "out/minikube-darwin-arm64 -p multinode-266000 status --alsologtostderr": multinode-266000
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped

helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-266000 -n multinode-266000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-266000 -n multinode-266000: exit status 7 (31.870125ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-266000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/StopNode (0.14s)
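
GUEST_NODE_RETRIEVE means the profile has no node named m03: the worker nodes were never created, so there is nothing to stop. A sketch for confirming what the profile actually contains before stopping a node:

    minikube node list -p multinode-266000      # a healthy run would list three nodes
    minikube -p multinode-266000 node stop m03  # only meaningful once m03 exists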

TestMultiNode/serial/StartAfterStop (44.47s)

=== RUN   TestMultiNode/serial/StartAfterStop
multinode_test.go:282: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-266000 node start m03 -v=7 --alsologtostderr
multinode_test.go:282: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-266000 node start m03 -v=7 --alsologtostderr: exit status 85 (51.175542ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	I0327 16:37:21.399081    8253 out.go:291] Setting OutFile to fd 1 ...
	I0327 16:37:21.399422    8253 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0327 16:37:21.399426    8253 out.go:304] Setting ErrFile to fd 2...
	I0327 16:37:21.399429    8253 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0327 16:37:21.399577    8253 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18485-6511/.minikube/bin
	I0327 16:37:21.399815    8253 mustload.go:65] Loading cluster: multinode-266000
	I0327 16:37:21.399998    8253 config.go:182] Loaded profile config "multinode-266000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.29.3
	I0327 16:37:21.404358    8253 out.go:177] 
	W0327 16:37:21.408225    8253 out.go:239] X Exiting due to GUEST_NODE_RETRIEVE: retrieving node: Could not find node m03
	X Exiting due to GUEST_NODE_RETRIEVE: retrieving node: Could not find node m03
	W0327 16:37:21.408235    8253 out.go:239] * 
	* 
	W0327 16:37:21.409982    8253 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                         │
	│    * If the above advice does not help, please let us know:                                                             │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
	│                                                                                                                         │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
	│    * Please also attach the following file to the GitHub issue:                                                         │
	│    * - /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/minikube_node_1c3a1297795327375b61f3ff5a4ef34c9b2fc69b_0.log    │
	│                                                                                                                         │
	╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                         │
	│    * If the above advice does not help, please let us know:                                                             │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
	│                                                                                                                         │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
	│    * Please also attach the following file to the GitHub issue:                                                         │
	│    * - /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/minikube_node_1c3a1297795327375b61f3ff5a4ef34c9b2fc69b_0.log    │
	│                                                                                                                         │
	╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
	I0327 16:37:21.414320    8253 out.go:177] 

** /stderr **
multinode_test.go:284: I0327 16:37:21.399081    8253 out.go:291] Setting OutFile to fd 1 ...
I0327 16:37:21.399422    8253 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0327 16:37:21.399426    8253 out.go:304] Setting ErrFile to fd 2...
I0327 16:37:21.399429    8253 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0327 16:37:21.399577    8253 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18485-6511/.minikube/bin
I0327 16:37:21.399815    8253 mustload.go:65] Loading cluster: multinode-266000
I0327 16:37:21.399998    8253 config.go:182] Loaded profile config "multinode-266000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.29.3
I0327 16:37:21.404358    8253 out.go:177] 
W0327 16:37:21.408225    8253 out.go:239] X Exiting due to GUEST_NODE_RETRIEVE: retrieving node: Could not find node m03
X Exiting due to GUEST_NODE_RETRIEVE: retrieving node: Could not find node m03
W0327 16:37:21.408235    8253 out.go:239] * 
* 
W0327 16:37:21.409982    8253 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
│                                                                                                                         │
│    * If the above advice does not help, please let us know:                                                             │
│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
│                                                                                                                         │
│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
│    * Please also attach the following file to the GitHub issue:                                                         │
│    * - /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/minikube_node_1c3a1297795327375b61f3ff5a4ef34c9b2fc69b_0.log    │
│                                                                                                                         │
╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
│                                                                                                                         │
│    * If the above advice does not help, please let us know:                                                             │
│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
│                                                                                                                         │
│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
│    * Please also attach the following file to the GitHub issue:                                                         │
│    * - /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/minikube_node_1c3a1297795327375b61f3ff5a4ef34c9b2fc69b_0.log    │
│                                                                                                                         │
╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
I0327 16:37:21.414320    8253 out.go:177] 
multinode_test.go:285: node start returned an error. args "out/minikube-darwin-arm64 -p multinode-266000 node start m03 -v=7 --alsologtostderr": exit status 85
multinode_test.go:290: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-266000 status -v=7 --alsologtostderr
multinode_test.go:290: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-266000 status -v=7 --alsologtostderr: exit status 7 (31.819625ms)

-- stdout --
	multinode-266000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
** stderr ** 
	I0327 16:37:21.449563    8255 out.go:291] Setting OutFile to fd 1 ...
	I0327 16:37:21.449683    8255 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0327 16:37:21.449686    8255 out.go:304] Setting ErrFile to fd 2...
	I0327 16:37:21.449688    8255 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0327 16:37:21.449810    8255 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18485-6511/.minikube/bin
	I0327 16:37:21.449936    8255 out.go:298] Setting JSON to false
	I0327 16:37:21.449948    8255 mustload.go:65] Loading cluster: multinode-266000
	I0327 16:37:21.449997    8255 notify.go:220] Checking for updates...
	I0327 16:37:21.450155    8255 config.go:182] Loaded profile config "multinode-266000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.29.3
	I0327 16:37:21.450161    8255 status.go:255] checking status of multinode-266000 ...
	I0327 16:37:21.450373    8255 status.go:330] multinode-266000 host status = "Stopped" (err=<nil>)
	I0327 16:37:21.450377    8255 status.go:343] host is not running, skipping remaining checks
	I0327 16:37:21.450380    8255 status.go:257] multinode-266000 status: &{Name:multinode-266000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
multinode_test.go:290: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-266000 status -v=7 --alsologtostderr
multinode_test.go:290: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-266000 status -v=7 --alsologtostderr: exit status 7 (77.444042ms)

-- stdout --
	multinode-266000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
** stderr ** 
	I0327 16:37:22.147311    8257 out.go:291] Setting OutFile to fd 1 ...
	I0327 16:37:22.147464    8257 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0327 16:37:22.147468    8257 out.go:304] Setting ErrFile to fd 2...
	I0327 16:37:22.147471    8257 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0327 16:37:22.147645    8257 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18485-6511/.minikube/bin
	I0327 16:37:22.147804    8257 out.go:298] Setting JSON to false
	I0327 16:37:22.147817    8257 mustload.go:65] Loading cluster: multinode-266000
	I0327 16:37:22.147854    8257 notify.go:220] Checking for updates...
	I0327 16:37:22.148038    8257 config.go:182] Loaded profile config "multinode-266000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.29.3
	I0327 16:37:22.148044    8257 status.go:255] checking status of multinode-266000 ...
	I0327 16:37:22.148290    8257 status.go:330] multinode-266000 host status = "Stopped" (err=<nil>)
	I0327 16:37:22.148294    8257 status.go:343] host is not running, skipping remaining checks
	I0327 16:37:22.148297    8257 status.go:257] multinode-266000 status: &{Name:multinode-266000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
multinode_test.go:290: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-266000 status -v=7 --alsologtostderr
multinode_test.go:290: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-266000 status -v=7 --alsologtostderr: exit status 7 (78.931084ms)

-- stdout --
	multinode-266000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
** stderr ** 
	I0327 16:37:24.314395    8259 out.go:291] Setting OutFile to fd 1 ...
	I0327 16:37:24.314563    8259 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0327 16:37:24.314567    8259 out.go:304] Setting ErrFile to fd 2...
	I0327 16:37:24.314570    8259 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0327 16:37:24.314728    8259 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18485-6511/.minikube/bin
	I0327 16:37:24.314891    8259 out.go:298] Setting JSON to false
	I0327 16:37:24.314909    8259 mustload.go:65] Loading cluster: multinode-266000
	I0327 16:37:24.314961    8259 notify.go:220] Checking for updates...
	I0327 16:37:24.315173    8259 config.go:182] Loaded profile config "multinode-266000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.29.3
	I0327 16:37:24.315179    8259 status.go:255] checking status of multinode-266000 ...
	I0327 16:37:24.315454    8259 status.go:330] multinode-266000 host status = "Stopped" (err=<nil>)
	I0327 16:37:24.315459    8259 status.go:343] host is not running, skipping remaining checks
	I0327 16:37:24.315462    8259 status.go:257] multinode-266000 status: &{Name:multinode-266000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
multinode_test.go:290: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-266000 status -v=7 --alsologtostderr
multinode_test.go:290: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-266000 status -v=7 --alsologtostderr: exit status 7 (76.1765ms)

-- stdout --
	multinode-266000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
** stderr ** 
	I0327 16:37:27.281194    8261 out.go:291] Setting OutFile to fd 1 ...
	I0327 16:37:27.281369    8261 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0327 16:37:27.281373    8261 out.go:304] Setting ErrFile to fd 2...
	I0327 16:37:27.281376    8261 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0327 16:37:27.281521    8261 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18485-6511/.minikube/bin
	I0327 16:37:27.281659    8261 out.go:298] Setting JSON to false
	I0327 16:37:27.281673    8261 mustload.go:65] Loading cluster: multinode-266000
	I0327 16:37:27.281713    8261 notify.go:220] Checking for updates...
	I0327 16:37:27.281895    8261 config.go:182] Loaded profile config "multinode-266000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.29.3
	I0327 16:37:27.281902    8261 status.go:255] checking status of multinode-266000 ...
	I0327 16:37:27.282152    8261 status.go:330] multinode-266000 host status = "Stopped" (err=<nil>)
	I0327 16:37:27.282157    8261 status.go:343] host is not running, skipping remaining checks
	I0327 16:37:27.282160    8261 status.go:257] multinode-266000 status: &{Name:multinode-266000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
multinode_test.go:290: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-266000 status -v=7 --alsologtostderr
multinode_test.go:290: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-266000 status -v=7 --alsologtostderr: exit status 7 (76.930541ms)

-- stdout --
	multinode-266000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
** stderr ** 
	I0327 16:37:30.976271    8263 out.go:291] Setting OutFile to fd 1 ...
	I0327 16:37:30.976480    8263 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0327 16:37:30.976485    8263 out.go:304] Setting ErrFile to fd 2...
	I0327 16:37:30.976488    8263 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0327 16:37:30.976686    8263 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18485-6511/.minikube/bin
	I0327 16:37:30.976871    8263 out.go:298] Setting JSON to false
	I0327 16:37:30.976887    8263 mustload.go:65] Loading cluster: multinode-266000
	I0327 16:37:30.976928    8263 notify.go:220] Checking for updates...
	I0327 16:37:30.977168    8263 config.go:182] Loaded profile config "multinode-266000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.29.3
	I0327 16:37:30.977176    8263 status.go:255] checking status of multinode-266000 ...
	I0327 16:37:30.977501    8263 status.go:330] multinode-266000 host status = "Stopped" (err=<nil>)
	I0327 16:37:30.977507    8263 status.go:343] host is not running, skipping remaining checks
	I0327 16:37:30.977510    8263 status.go:257] multinode-266000 status: &{Name:multinode-266000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
multinode_test.go:290: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-266000 status -v=7 --alsologtostderr
multinode_test.go:290: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-266000 status -v=7 --alsologtostderr: exit status 7 (75.838917ms)

-- stdout --
	multinode-266000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
** stderr ** 
	I0327 16:37:37.187065    8268 out.go:291] Setting OutFile to fd 1 ...
	I0327 16:37:37.187245    8268 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0327 16:37:37.187249    8268 out.go:304] Setting ErrFile to fd 2...
	I0327 16:37:37.187252    8268 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0327 16:37:37.187410    8268 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18485-6511/.minikube/bin
	I0327 16:37:37.187569    8268 out.go:298] Setting JSON to false
	I0327 16:37:37.187584    8268 mustload.go:65] Loading cluster: multinode-266000
	I0327 16:37:37.187623    8268 notify.go:220] Checking for updates...
	I0327 16:37:37.187862    8268 config.go:182] Loaded profile config "multinode-266000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.29.3
	I0327 16:37:37.187870    8268 status.go:255] checking status of multinode-266000 ...
	I0327 16:37:37.188155    8268 status.go:330] multinode-266000 host status = "Stopped" (err=<nil>)
	I0327 16:37:37.188160    8268 status.go:343] host is not running, skipping remaining checks
	I0327 16:37:37.188163    8268 status.go:257] multinode-266000 status: &{Name:multinode-266000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
multinode_test.go:290: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-266000 status -v=7 --alsologtostderr
multinode_test.go:290: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-266000 status -v=7 --alsologtostderr: exit status 7 (75.967ms)

-- stdout --
	multinode-266000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
** stderr ** 
	I0327 16:37:41.228306    8270 out.go:291] Setting OutFile to fd 1 ...
	I0327 16:37:41.228486    8270 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0327 16:37:41.228490    8270 out.go:304] Setting ErrFile to fd 2...
	I0327 16:37:41.228493    8270 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0327 16:37:41.228628    8270 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18485-6511/.minikube/bin
	I0327 16:37:41.228763    8270 out.go:298] Setting JSON to false
	I0327 16:37:41.228778    8270 mustload.go:65] Loading cluster: multinode-266000
	I0327 16:37:41.228808    8270 notify.go:220] Checking for updates...
	I0327 16:37:41.229005    8270 config.go:182] Loaded profile config "multinode-266000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.29.3
	I0327 16:37:41.229012    8270 status.go:255] checking status of multinode-266000 ...
	I0327 16:37:41.229262    8270 status.go:330] multinode-266000 host status = "Stopped" (err=<nil>)
	I0327 16:37:41.229266    8270 status.go:343] host is not running, skipping remaining checks
	I0327 16:37:41.229269    8270 status.go:257] multinode-266000 status: &{Name:multinode-266000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
multinode_test.go:290: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-266000 status -v=7 --alsologtostderr
multinode_test.go:290: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-266000 status -v=7 --alsologtostderr: exit status 7 (78.394292ms)

-- stdout --
	multinode-266000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
** stderr ** 
	I0327 16:37:47.802231    8272 out.go:291] Setting OutFile to fd 1 ...
	I0327 16:37:47.802417    8272 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0327 16:37:47.802422    8272 out.go:304] Setting ErrFile to fd 2...
	I0327 16:37:47.802425    8272 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0327 16:37:47.802592    8272 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18485-6511/.minikube/bin
	I0327 16:37:47.802759    8272 out.go:298] Setting JSON to false
	I0327 16:37:47.802775    8272 mustload.go:65] Loading cluster: multinode-266000
	I0327 16:37:47.802804    8272 notify.go:220] Checking for updates...
	I0327 16:37:47.803056    8272 config.go:182] Loaded profile config "multinode-266000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.29.3
	I0327 16:37:47.803064    8272 status.go:255] checking status of multinode-266000 ...
	I0327 16:37:47.803310    8272 status.go:330] multinode-266000 host status = "Stopped" (err=<nil>)
	I0327 16:37:47.803314    8272 status.go:343] host is not running, skipping remaining checks
	I0327 16:37:47.803317    8272 status.go:257] multinode-266000 status: &{Name:multinode-266000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
multinode_test.go:290: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-266000 status -v=7 --alsologtostderr
multinode_test.go:290: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-266000 status -v=7 --alsologtostderr: exit status 7 (77.751375ms)

-- stdout --
	multinode-266000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
** stderr ** 
	I0327 16:38:05.799471    8274 out.go:291] Setting OutFile to fd 1 ...
	I0327 16:38:05.799696    8274 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0327 16:38:05.799700    8274 out.go:304] Setting ErrFile to fd 2...
	I0327 16:38:05.799703    8274 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0327 16:38:05.799918    8274 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18485-6511/.minikube/bin
	I0327 16:38:05.800078    8274 out.go:298] Setting JSON to false
	I0327 16:38:05.800094    8274 mustload.go:65] Loading cluster: multinode-266000
	I0327 16:38:05.800125    8274 notify.go:220] Checking for updates...
	I0327 16:38:05.800368    8274 config.go:182] Loaded profile config "multinode-266000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.29.3
	I0327 16:38:05.800375    8274 status.go:255] checking status of multinode-266000 ...
	I0327 16:38:05.800648    8274 status.go:330] multinode-266000 host status = "Stopped" (err=<nil>)
	I0327 16:38:05.800653    8274 status.go:343] host is not running, skipping remaining checks
	I0327 16:38:05.800656    8274 status.go:257] multinode-266000 status: &{Name:multinode-266000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
multinode_test.go:294: failed to run minikube status. args "out/minikube-darwin-arm64 -p multinode-266000 status -v=7 --alsologtostderr" : exit status 7
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-266000 -n multinode-266000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-266000 -n multinode-266000: exit status 7 (34.314292ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-266000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/StartAfterStop (44.47s)
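
The 44-second wall time is the test polling status (multinode_test.go:290) and seeing state Stopped on every attempt. Roughly the same loop in shell, as an illustration only; the retry count and sleep below do not match the test's real backoff:

    for i in 1 2 3 4 5 6 7 8; do
      out/minikube-darwin-arm64 -p multinode-266000 status -v=7 --alsologtostderr && break
      sleep 5
    done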

TestMultiNode/serial/RestartKeepsNodes (7.33s)

=== RUN   TestMultiNode/serial/RestartKeepsNodes
multinode_test.go:314: (dbg) Run:  out/minikube-darwin-arm64 node list -p multinode-266000
multinode_test.go:321: (dbg) Run:  out/minikube-darwin-arm64 stop -p multinode-266000
multinode_test.go:321: (dbg) Done: out/minikube-darwin-arm64 stop -p multinode-266000: (1.968145417s)
multinode_test.go:326: (dbg) Run:  out/minikube-darwin-arm64 start -p multinode-266000 --wait=true -v=8 --alsologtostderr
multinode_test.go:326: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p multinode-266000 --wait=true -v=8 --alsologtostderr: exit status 80 (5.218930792s)

-- stdout --
	* [multinode-266000] minikube v1.33.0-beta.0 on Darwin 14.3.1 (arm64)
	  - MINIKUBE_LOCATION=18485
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18485-6511/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18485-6511/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting "multinode-266000" primary control-plane node in "multinode-266000" cluster
	* Restarting existing qemu2 VM for "multinode-266000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "multinode-266000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0327 16:38:07.903870    8292 out.go:291] Setting OutFile to fd 1 ...
	I0327 16:38:07.904036    8292 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0327 16:38:07.904040    8292 out.go:304] Setting ErrFile to fd 2...
	I0327 16:38:07.904043    8292 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0327 16:38:07.904207    8292 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18485-6511/.minikube/bin
	I0327 16:38:07.905290    8292 out.go:298] Setting JSON to false
	I0327 16:38:07.924077    8292 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":5858,"bootTime":1711576829,"procs":476,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.3.1","kernelVersion":"23.3.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0327 16:38:07.924149    8292 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0327 16:38:07.928256    8292 out.go:177] * [multinode-266000] minikube v1.33.0-beta.0 on Darwin 14.3.1 (arm64)
	I0327 16:38:07.935254    8292 out.go:177]   - MINIKUBE_LOCATION=18485
	I0327 16:38:07.939236    8292 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/18485-6511/kubeconfig
	I0327 16:38:07.935300    8292 notify.go:220] Checking for updates...
	I0327 16:38:07.942203    8292 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0327 16:38:07.945240    8292 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0327 16:38:07.948221    8292 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18485-6511/.minikube
	I0327 16:38:07.951238    8292 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0327 16:38:07.954606    8292 config.go:182] Loaded profile config "multinode-266000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.29.3
	I0327 16:38:07.954666    8292 driver.go:392] Setting default libvirt URI to qemu:///system
	I0327 16:38:07.958215    8292 out.go:177] * Using the qemu2 driver based on existing profile
	I0327 16:38:07.965230    8292 start.go:297] selected driver: qemu2
	I0327 16:38:07.965235    8292 start.go:901] validating driver "qemu2" against &{Name:multinode-266000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18485/minikube-v1.33.0-1711559712-18485-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1711559786-18485@sha256:2dcab64da240d825290a528fa79ad3c32db45fe5f8be5150468234a7114eff82 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.3 ClusterName:multinode-266000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0327 16:38:07.965287    8292 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0327 16:38:07.967507    8292 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0327 16:38:07.967549    8292 cni.go:84] Creating CNI manager for ""
	I0327 16:38:07.967555    8292 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I0327 16:38:07.967600    8292 start.go:340] cluster config:
	{Name:multinode-266000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18485/minikube-v1.33.0-1711559712-18485-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1711559786-18485@sha256:2dcab64da240d825290a528fa79ad3c32db45fe5f8be5150468234a7114eff82 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.3 ClusterName:multinode-266000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0327 16:38:07.972001    8292 iso.go:125] acquiring lock: {Name:mk26a7f41e004242919e6d5076ec0f9645f820ad Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0327 16:38:07.979241    8292 out.go:177] * Starting "multinode-266000" primary control-plane node in "multinode-266000" cluster
	I0327 16:38:07.983256    8292 preload.go:132] Checking if preload exists for k8s version v1.29.3 and runtime docker
	I0327 16:38:07.983272    8292 preload.go:147] Found local preload: /Users/jenkins/minikube-integration/18485-6511/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.29.3-docker-overlay2-arm64.tar.lz4
	I0327 16:38:07.983281    8292 cache.go:56] Caching tarball of preloaded images
	I0327 16:38:07.983372    8292 preload.go:173] Found /Users/jenkins/minikube-integration/18485-6511/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.29.3-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0327 16:38:07.983378    8292 cache.go:59] Finished verifying existence of preloaded tar for v1.29.3 on docker
	I0327 16:38:07.983471    8292 profile.go:142] Saving config to /Users/jenkins/minikube-integration/18485-6511/.minikube/profiles/multinode-266000/config.json ...
	I0327 16:38:07.983931    8292 start.go:360] acquireMachinesLock for multinode-266000: {Name:mk6b67ca98ff36f470ae98389325f11bd4950dc8 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0327 16:38:07.983965    8292 start.go:364] duration metric: took 27.208µs to acquireMachinesLock for "multinode-266000"
	I0327 16:38:07.983975    8292 start.go:96] Skipping create...Using existing machine configuration
	I0327 16:38:07.983982    8292 fix.go:54] fixHost starting: 
	I0327 16:38:07.984101    8292 fix.go:112] recreateIfNeeded on multinode-266000: state=Stopped err=<nil>
	W0327 16:38:07.984111    8292 fix.go:138] unexpected machine state, will restart: <nil>
	I0327 16:38:07.992280    8292 out.go:177] * Restarting existing qemu2 VM for "multinode-266000" ...
	I0327 16:38:07.996215    8292 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18485-6511/.minikube/machines/multinode-266000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18485-6511/.minikube/machines/multinode-266000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18485-6511/.minikube/machines/multinode-266000/qemu.pid -device virtio-net-pci,netdev=net0,mac=c6:be:4d:47:a9:5c -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18485-6511/.minikube/machines/multinode-266000/disk.qcow2
	I0327 16:38:07.998480    8292 main.go:141] libmachine: STDOUT: 
	I0327 16:38:07.998506    8292 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0327 16:38:07.998538    8292 fix.go:56] duration metric: took 14.555833ms for fixHost
	I0327 16:38:07.998543    8292 start.go:83] releasing machines lock for "multinode-266000", held for 14.574459ms
	W0327 16:38:07.998555    8292 start.go:713] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0327 16:38:07.998603    8292 out.go:239] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0327 16:38:07.998608    8292 start.go:728] Will try again in 5 seconds ...
	I0327 16:38:13.000648    8292 start.go:360] acquireMachinesLock for multinode-266000: {Name:mk6b67ca98ff36f470ae98389325f11bd4950dc8 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0327 16:38:13.000963    8292 start.go:364] duration metric: took 241.125µs to acquireMachinesLock for "multinode-266000"
	I0327 16:38:13.001091    8292 start.go:96] Skipping create...Using existing machine configuration
	I0327 16:38:13.001109    8292 fix.go:54] fixHost starting: 
	I0327 16:38:13.001784    8292 fix.go:112] recreateIfNeeded on multinode-266000: state=Stopped err=<nil>
	W0327 16:38:13.001818    8292 fix.go:138] unexpected machine state, will restart: <nil>
	I0327 16:38:13.007265    8292 out.go:177] * Restarting existing qemu2 VM for "multinode-266000" ...
	I0327 16:38:13.010324    8292 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18485-6511/.minikube/machines/multinode-266000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18485-6511/.minikube/machines/multinode-266000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18485-6511/.minikube/machines/multinode-266000/qemu.pid -device virtio-net-pci,netdev=net0,mac=c6:be:4d:47:a9:5c -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18485-6511/.minikube/machines/multinode-266000/disk.qcow2
	I0327 16:38:13.019198    8292 main.go:141] libmachine: STDOUT: 
	I0327 16:38:13.019270    8292 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0327 16:38:13.019338    8292 fix.go:56] duration metric: took 18.231292ms for fixHost
	I0327 16:38:13.019358    8292 start.go:83] releasing machines lock for "multinode-266000", held for 18.376875ms
	W0327 16:38:13.019532    8292 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p multinode-266000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p multinode-266000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0327 16:38:13.027164    8292 out.go:177] 
	W0327 16:38:13.031276    8292 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0327 16:38:13.031310    8292 out.go:239] * 
	* 
	W0327 16:38:13.033845    8292 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0327 16:38:13.041193    8292 out.go:177] 

** /stderr **
multinode_test.go:328: failed to run minikube start. args "out/minikube-darwin-arm64 node list -p multinode-266000" : exit status 80
multinode_test.go:331: (dbg) Run:  out/minikube-darwin-arm64 node list -p multinode-266000
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-266000 -n multinode-266000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-266000 -n multinode-266000: exit status 7 (34.036542ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-266000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/RestartKeepsNodes (7.33s)
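
Both restart attempts above die at the same point: socket_vmnet_client cannot reach the "/var/run/socket_vmnet" UNIX socket, so the qemu2 VM is never launched. Before suspecting minikube itself, it is worth checking whether the socket_vmnet daemon is running on the agent at all. A minimal triage sketch, assuming socket_vmnet is installed under /opt/socket_vmnet as the logged client path suggests (the Homebrew service name is an assumption):

	# Is the daemon alive, and does its socket exist?
	pgrep -fl socket_vmnet          # no output means the daemon is not running
	ls -l /var/run/socket_vmnet     # the UNIX socket that minikube dials via socket_vmnet_client

	# If socket_vmnet was installed with Homebrew, restarting the service usually recreates the socket:
	sudo brew services restart socket_vmnet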

TestMultiNode/serial/DeleteNode (0.11s)

=== RUN   TestMultiNode/serial/DeleteNode
multinode_test.go:416: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-266000 node delete m03
multinode_test.go:416: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-266000 node delete m03: exit status 83 (42.53925ms)

-- stdout --
	* The control-plane node multinode-266000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p multinode-266000"

-- /stdout --
multinode_test.go:418: node delete returned an error. args "out/minikube-darwin-arm64 -p multinode-266000 node delete m03": exit status 83
multinode_test.go:422: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-266000 status --alsologtostderr
multinode_test.go:422: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-266000 status --alsologtostderr: exit status 7 (32.149084ms)

-- stdout --
	multinode-266000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
** stderr ** 
	I0327 16:38:13.235747    8306 out.go:291] Setting OutFile to fd 1 ...
	I0327 16:38:13.235886    8306 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0327 16:38:13.235889    8306 out.go:304] Setting ErrFile to fd 2...
	I0327 16:38:13.235891    8306 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0327 16:38:13.236024    8306 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18485-6511/.minikube/bin
	I0327 16:38:13.236139    8306 out.go:298] Setting JSON to false
	I0327 16:38:13.236151    8306 mustload.go:65] Loading cluster: multinode-266000
	I0327 16:38:13.236207    8306 notify.go:220] Checking for updates...
	I0327 16:38:13.236333    8306 config.go:182] Loaded profile config "multinode-266000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.29.3
	I0327 16:38:13.236343    8306 status.go:255] checking status of multinode-266000 ...
	I0327 16:38:13.236551    8306 status.go:330] multinode-266000 host status = "Stopped" (err=<nil>)
	I0327 16:38:13.236555    8306 status.go:343] host is not running, skipping remaining checks
	I0327 16:38:13.236557    8306 status.go:257] multinode-266000 status: &{Name:multinode-266000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
multinode_test.go:424: failed to run minikube status. args "out/minikube-darwin-arm64 -p multinode-266000 status --alsologtostderr" : exit status 7
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-266000 -n multinode-266000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-266000 -n multinode-266000: exit status 7 (31.891875ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-266000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/DeleteNode (0.11s)

TestMultiNode/serial/StopMultiNode (3.37s)

=== RUN   TestMultiNode/serial/StopMultiNode
multinode_test.go:345: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-266000 stop
multinode_test.go:345: (dbg) Done: out/minikube-darwin-arm64 -p multinode-266000 stop: (3.238429208s)
multinode_test.go:351: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-266000 status
multinode_test.go:351: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-266000 status: exit status 7 (66.975333ms)

-- stdout --
	multinode-266000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
multinode_test.go:358: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-266000 status --alsologtostderr
multinode_test.go:358: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-266000 status --alsologtostderr: exit status 7 (33.892916ms)

-- stdout --
	multinode-266000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
** stderr ** 
	I0327 16:38:16.607476    8330 out.go:291] Setting OutFile to fd 1 ...
	I0327 16:38:16.607603    8330 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0327 16:38:16.607606    8330 out.go:304] Setting ErrFile to fd 2...
	I0327 16:38:16.607608    8330 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0327 16:38:16.607727    8330 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18485-6511/.minikube/bin
	I0327 16:38:16.607845    8330 out.go:298] Setting JSON to false
	I0327 16:38:16.607857    8330 mustload.go:65] Loading cluster: multinode-266000
	I0327 16:38:16.607923    8330 notify.go:220] Checking for updates...
	I0327 16:38:16.608082    8330 config.go:182] Loaded profile config "multinode-266000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.29.3
	I0327 16:38:16.608088    8330 status.go:255] checking status of multinode-266000 ...
	I0327 16:38:16.608291    8330 status.go:330] multinode-266000 host status = "Stopped" (err=<nil>)
	I0327 16:38:16.608295    8330 status.go:343] host is not running, skipping remaining checks
	I0327 16:38:16.608298    8330 status.go:257] multinode-266000 status: &{Name:multinode-266000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
multinode_test.go:364: incorrect number of stopped hosts: args "out/minikube-darwin-arm64 -p multinode-266000 status --alsologtostderr": multinode-266000
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped

multinode_test.go:368: incorrect number of stopped kubelets: args "out/minikube-darwin-arm64 -p multinode-266000 status --alsologtostderr": multinode-266000
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped

helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-266000 -n multinode-266000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-266000 -n multinode-266000: exit status 7 (32.233542ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-266000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/StopMultiNode (3.37s)
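
The repeated `exit status 7` from `minikube status` is consistent with the `Stopped` output rather than being a separate failure: per `minikube status --help`, the exit code encodes component health bitwise from right to left (1 for the host, 2 for the cluster, 4 for Kubernetes), so 7 means all three are down. A quick way to confirm on the agent:

	out/minikube-darwin-arm64 -p multinode-266000 status
	echo "status exit code: $?"   # expect 7 while host, kubelet and apiserver are all stopped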

TestMultiNode/serial/RestartMultiNode (5.25s)

=== RUN   TestMultiNode/serial/RestartMultiNode
multinode_test.go:376: (dbg) Run:  out/minikube-darwin-arm64 start -p multinode-266000 --wait=true -v=8 --alsologtostderr --driver=qemu2 
multinode_test.go:376: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p multinode-266000 --wait=true -v=8 --alsologtostderr --driver=qemu2 : exit status 80 (5.181094458s)

-- stdout --
	* [multinode-266000] minikube v1.33.0-beta.0 on Darwin 14.3.1 (arm64)
	  - MINIKUBE_LOCATION=18485
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18485-6511/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18485-6511/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting "multinode-266000" primary control-plane node in "multinode-266000" cluster
	* Restarting existing qemu2 VM for "multinode-266000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "multinode-266000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0327 16:38:16.671243    8334 out.go:291] Setting OutFile to fd 1 ...
	I0327 16:38:16.671356    8334 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0327 16:38:16.671358    8334 out.go:304] Setting ErrFile to fd 2...
	I0327 16:38:16.671361    8334 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0327 16:38:16.671478    8334 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18485-6511/.minikube/bin
	I0327 16:38:16.672452    8334 out.go:298] Setting JSON to false
	I0327 16:38:16.688479    8334 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":5867,"bootTime":1711576829,"procs":473,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.3.1","kernelVersion":"23.3.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0327 16:38:16.688540    8334 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0327 16:38:16.692272    8334 out.go:177] * [multinode-266000] minikube v1.33.0-beta.0 on Darwin 14.3.1 (arm64)
	I0327 16:38:16.699077    8334 out.go:177]   - MINIKUBE_LOCATION=18485
	I0327 16:38:16.703179    8334 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/18485-6511/kubeconfig
	I0327 16:38:16.699114    8334 notify.go:220] Checking for updates...
	I0327 16:38:16.707642    8334 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0327 16:38:16.711169    8334 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0327 16:38:16.714196    8334 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18485-6511/.minikube
	I0327 16:38:16.717234    8334 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0327 16:38:16.720419    8334 config.go:182] Loaded profile config "multinode-266000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.29.3
	I0327 16:38:16.720676    8334 driver.go:392] Setting default libvirt URI to qemu:///system
	I0327 16:38:16.725162    8334 out.go:177] * Using the qemu2 driver based on existing profile
	I0327 16:38:16.732152    8334 start.go:297] selected driver: qemu2
	I0327 16:38:16.732159    8334 start.go:901] validating driver "qemu2" against &{Name:multinode-266000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18485/minikube-v1.33.0-1711559712-18485-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1711559786-18485@sha256:2dcab64da240d825290a528fa79ad3c32db45fe5f8be5150468234a7114eff82 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.3 ClusterName:multinode-266000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0327 16:38:16.732221    8334 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0327 16:38:16.734363    8334 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0327 16:38:16.734407    8334 cni.go:84] Creating CNI manager for ""
	I0327 16:38:16.734411    8334 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I0327 16:38:16.734461    8334 start.go:340] cluster config:
	{Name:multinode-266000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18485/minikube-v1.33.0-1711559712-18485-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1711559786-18485@sha256:2dcab64da240d825290a528fa79ad3c32db45fe5f8be5150468234a7114eff82 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.3 ClusterName:multinode-266000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0327 16:38:16.738792    8334 iso.go:125] acquiring lock: {Name:mk26a7f41e004242919e6d5076ec0f9645f820ad Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0327 16:38:16.745137    8334 out.go:177] * Starting "multinode-266000" primary control-plane node in "multinode-266000" cluster
	I0327 16:38:16.749134    8334 preload.go:132] Checking if preload exists for k8s version v1.29.3 and runtime docker
	I0327 16:38:16.749148    8334 preload.go:147] Found local preload: /Users/jenkins/minikube-integration/18485-6511/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.29.3-docker-overlay2-arm64.tar.lz4
	I0327 16:38:16.749159    8334 cache.go:56] Caching tarball of preloaded images
	I0327 16:38:16.749215    8334 preload.go:173] Found /Users/jenkins/minikube-integration/18485-6511/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.29.3-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0327 16:38:16.749220    8334 cache.go:59] Finished verifying existence of preloaded tar for v1.29.3 on docker
	I0327 16:38:16.749295    8334 profile.go:142] Saving config to /Users/jenkins/minikube-integration/18485-6511/.minikube/profiles/multinode-266000/config.json ...
	I0327 16:38:16.749764    8334 start.go:360] acquireMachinesLock for multinode-266000: {Name:mk6b67ca98ff36f470ae98389325f11bd4950dc8 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0327 16:38:16.749789    8334 start.go:364] duration metric: took 20µs to acquireMachinesLock for "multinode-266000"
	I0327 16:38:16.749799    8334 start.go:96] Skipping create...Using existing machine configuration
	I0327 16:38:16.749804    8334 fix.go:54] fixHost starting: 
	I0327 16:38:16.749922    8334 fix.go:112] recreateIfNeeded on multinode-266000: state=Stopped err=<nil>
	W0327 16:38:16.749932    8334 fix.go:138] unexpected machine state, will restart: <nil>
	I0327 16:38:16.753112    8334 out.go:177] * Restarting existing qemu2 VM for "multinode-266000" ...
	I0327 16:38:16.761162    8334 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18485-6511/.minikube/machines/multinode-266000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18485-6511/.minikube/machines/multinode-266000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18485-6511/.minikube/machines/multinode-266000/qemu.pid -device virtio-net-pci,netdev=net0,mac=c6:be:4d:47:a9:5c -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18485-6511/.minikube/machines/multinode-266000/disk.qcow2
	I0327 16:38:16.763060    8334 main.go:141] libmachine: STDOUT: 
	I0327 16:38:16.763080    8334 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0327 16:38:16.763113    8334 fix.go:56] duration metric: took 13.310041ms for fixHost
	I0327 16:38:16.763118    8334 start.go:83] releasing machines lock for "multinode-266000", held for 13.324292ms
	W0327 16:38:16.763125    8334 start.go:713] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0327 16:38:16.763157    8334 out.go:239] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0327 16:38:16.763161    8334 start.go:728] Will try again in 5 seconds ...
	I0327 16:38:21.763774    8334 start.go:360] acquireMachinesLock for multinode-266000: {Name:mk6b67ca98ff36f470ae98389325f11bd4950dc8 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0327 16:38:21.764186    8334 start.go:364] duration metric: took 320.5µs to acquireMachinesLock for "multinode-266000"
	I0327 16:38:21.764308    8334 start.go:96] Skipping create...Using existing machine configuration
	I0327 16:38:21.764331    8334 fix.go:54] fixHost starting: 
	I0327 16:38:21.765121    8334 fix.go:112] recreateIfNeeded on multinode-266000: state=Stopped err=<nil>
	W0327 16:38:21.765149    8334 fix.go:138] unexpected machine state, will restart: <nil>
	I0327 16:38:21.770561    8334 out.go:177] * Restarting existing qemu2 VM for "multinode-266000" ...
	I0327 16:38:21.774729    8334 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18485-6511/.minikube/machines/multinode-266000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18485-6511/.minikube/machines/multinode-266000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18485-6511/.minikube/machines/multinode-266000/qemu.pid -device virtio-net-pci,netdev=net0,mac=c6:be:4d:47:a9:5c -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18485-6511/.minikube/machines/multinode-266000/disk.qcow2
	I0327 16:38:21.785365    8334 main.go:141] libmachine: STDOUT: 
	I0327 16:38:21.785445    8334 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0327 16:38:21.785540    8334 fix.go:56] duration metric: took 21.211541ms for fixHost
	I0327 16:38:21.785559    8334 start.go:83] releasing machines lock for "multinode-266000", held for 21.348958ms
	W0327 16:38:21.785783    8334 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p multinode-266000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p multinode-266000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0327 16:38:21.794551    8334 out.go:177] 
	W0327 16:38:21.797601    8334 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0327 16:38:21.797627    8334 out.go:239] * 
	* 
	W0327 16:38:21.800244    8334 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0327 16:38:21.809550    8334 out.go:177] 

** /stderr **
multinode_test.go:378: failed to start cluster. args "out/minikube-darwin-arm64 start -p multinode-266000 --wait=true -v=8 --alsologtostderr --driver=qemu2 " : exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-266000 -n multinode-266000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-266000 -n multinode-266000: exit status 7 (70.963792ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-266000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/RestartMultiNode (5.25s)
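
RestartMultiNode fails identically: both fixHost attempts hand the full qemu-system-aarch64 command line to socket_vmnet_client, and the client exits before qemu ever starts because its connection to "/var/run/socket_vmnet" is refused. The network layer can be probed in isolation, without qemu, by giving socket_vmnet_client a trivial command instead; a sketch reusing the paths from the log above (root is typically needed to reach the socket):

	sudo /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet /usr/bin/true
	echo "client exit code: $?"   # a "Connection refused" here reproduces the test failure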

TestMultiNode/serial/ValidateNameConflict (20.11s)

=== RUN   TestMultiNode/serial/ValidateNameConflict
multinode_test.go:455: (dbg) Run:  out/minikube-darwin-arm64 node list -p multinode-266000
multinode_test.go:464: (dbg) Run:  out/minikube-darwin-arm64 start -p multinode-266000-m01 --driver=qemu2 
multinode_test.go:464: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p multinode-266000-m01 --driver=qemu2 : exit status 80 (9.836768208s)

-- stdout --
	* [multinode-266000-m01] minikube v1.33.0-beta.0 on Darwin 14.3.1 (arm64)
	  - MINIKUBE_LOCATION=18485
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18485-6511/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18485-6511/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "multinode-266000-m01" primary control-plane node in "multinode-266000-m01" cluster
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "multinode-266000-m01" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p multinode-266000-m01" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
multinode_test.go:472: (dbg) Run:  out/minikube-darwin-arm64 start -p multinode-266000-m02 --driver=qemu2 
multinode_test.go:472: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p multinode-266000-m02 --driver=qemu2 : exit status 80 (10.013424833s)

-- stdout --
	* [multinode-266000-m02] minikube v1.33.0-beta.0 on Darwin 14.3.1 (arm64)
	  - MINIKUBE_LOCATION=18485
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18485-6511/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18485-6511/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "multinode-266000-m02" primary control-plane node in "multinode-266000-m02" cluster
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "multinode-266000-m02" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p multinode-266000-m02" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
multinode_test.go:474: failed to start profile. args "out/minikube-darwin-arm64 start -p multinode-266000-m02 --driver=qemu2 " : exit status 80
multinode_test.go:479: (dbg) Run:  out/minikube-darwin-arm64 node add -p multinode-266000
multinode_test.go:479: (dbg) Non-zero exit: out/minikube-darwin-arm64 node add -p multinode-266000: exit status 83 (83.360958ms)

-- stdout --
	* The control-plane node multinode-266000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p multinode-266000"

-- /stdout --
multinode_test.go:484: (dbg) Run:  out/minikube-darwin-arm64 delete -p multinode-266000-m02
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-266000 -n multinode-266000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-266000 -n multinode-266000: exit status 7 (32.7415ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-266000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/ValidateNameConflict (20.11s)
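
All five TestMultiNode failures in this group appear to share the socket_vmnet root cause, so once the daemon is reachable again the stale "Stopped" profiles should be deleted before re-running, as the minikube output above already advises. A possible cleanup sequence (profile names taken from this run; multinode-266000-m02 was already deleted by the test itself, and the --network flag is optional given the auto-selection shown in the log):

	out/minikube-darwin-arm64 delete -p multinode-266000
	out/minikube-darwin-arm64 delete -p multinode-266000-m01
	out/minikube-darwin-arm64 start -p multinode-266000 --driver=qemu2 --network=socket_vmnet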

TestPreload (10.13s)

=== RUN   TestPreload
preload_test.go:44: (dbg) Run:  out/minikube-darwin-arm64 start -p test-preload-329000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=qemu2  --kubernetes-version=v1.24.4
preload_test.go:44: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p test-preload-329000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=qemu2  --kubernetes-version=v1.24.4: exit status 80 (9.952602833s)

-- stdout --
	* [test-preload-329000] minikube v1.33.0-beta.0 on Darwin 14.3.1 (arm64)
	  - MINIKUBE_LOCATION=18485
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18485-6511/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18485-6511/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "test-preload-329000" primary control-plane node in "test-preload-329000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "test-preload-329000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0327 16:38:42.168729    8397 out.go:291] Setting OutFile to fd 1 ...
	I0327 16:38:42.168886    8397 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0327 16:38:42.168889    8397 out.go:304] Setting ErrFile to fd 2...
	I0327 16:38:42.168892    8397 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0327 16:38:42.169016    8397 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18485-6511/.minikube/bin
	I0327 16:38:42.170034    8397 out.go:298] Setting JSON to false
	I0327 16:38:42.186585    8397 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":5893,"bootTime":1711576829,"procs":477,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.3.1","kernelVersion":"23.3.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0327 16:38:42.186643    8397 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0327 16:38:42.192183    8397 out.go:177] * [test-preload-329000] minikube v1.33.0-beta.0 on Darwin 14.3.1 (arm64)
	I0327 16:38:42.198183    8397 out.go:177]   - MINIKUBE_LOCATION=18485
	I0327 16:38:42.198246    8397 notify.go:220] Checking for updates...
	I0327 16:38:42.206155    8397 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/18485-6511/kubeconfig
	I0327 16:38:42.209116    8397 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0327 16:38:42.212130    8397 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0327 16:38:42.215129    8397 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18485-6511/.minikube
	I0327 16:38:42.218189    8397 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0327 16:38:42.221402    8397 config.go:182] Loaded profile config "multinode-266000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.29.3
	I0327 16:38:42.221450    8397 driver.go:392] Setting default libvirt URI to qemu:///system
	I0327 16:38:42.226138    8397 out.go:177] * Using the qemu2 driver based on user configuration
	I0327 16:38:42.233092    8397 start.go:297] selected driver: qemu2
	I0327 16:38:42.233097    8397 start.go:901] validating driver "qemu2" against <nil>
	I0327 16:38:42.233102    8397 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0327 16:38:42.235285    8397 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0327 16:38:42.238152    8397 out.go:177] * Automatically selected the socket_vmnet network
	I0327 16:38:42.241173    8397 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0327 16:38:42.241208    8397 cni.go:84] Creating CNI manager for ""
	I0327 16:38:42.241215    8397 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0327 16:38:42.241222    8397 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0327 16:38:42.241268    8397 start.go:340] cluster config:
	{Name:test-preload-329000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1711559786-18485@sha256:2dcab64da240d825290a528fa79ad3c32db45fe5f8be5150468234a7114eff82 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.4 ClusterName:test-preload-329000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.24.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0327 16:38:42.246091    8397 iso.go:125] acquiring lock: {Name:mk26a7f41e004242919e6d5076ec0f9645f820ad Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0327 16:38:42.254100    8397 out.go:177] * Starting "test-preload-329000" primary control-plane node in "test-preload-329000" cluster
	I0327 16:38:42.258108    8397 preload.go:132] Checking if preload exists for k8s version v1.24.4 and runtime docker
	I0327 16:38:42.258209    8397 profile.go:142] Saving config to /Users/jenkins/minikube-integration/18485-6511/.minikube/profiles/test-preload-329000/config.json ...
	I0327 16:38:42.258230    8397 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/18485-6511/.minikube/profiles/test-preload-329000/config.json: {Name:mk678b442c473075af7304a213defaf858d32f7d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0327 16:38:42.258237    8397 cache.go:107] acquiring lock: {Name:mkd1d70464593d2de61c953478f73c530478a3b0 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0327 16:38:42.258243    8397 cache.go:107] acquiring lock: {Name:mk6a81e1e3dd88a2a0389ef0a64b9a2e49efa8b5 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0327 16:38:42.258249    8397 cache.go:107] acquiring lock: {Name:mke7c31d8fb34646e9876faf99d5c3370fe9e507 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0327 16:38:42.258274    8397 cache.go:107] acquiring lock: {Name:mkabc67791dc58b891eec2c957fd368e72d149d8 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0327 16:38:42.258388    8397 cache.go:107] acquiring lock: {Name:mkc564cfc89cd4573317d71eab75125d5ffc4b7e Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0327 16:38:42.258430    8397 cache.go:107] acquiring lock: {Name:mk47217ac0e0742e5d37c7dbbf9a8c4009ecbe90 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0327 16:38:42.258495    8397 cache.go:107] acquiring lock: {Name:mk9956f5292a4249494f22e92c99ab919b49fa7c Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0327 16:38:42.258623    8397 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.24.4
	I0327 16:38:42.258639    8397 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.24.4
	I0327 16:38:42.258656    8397 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0327 16:38:42.258673    8397 start.go:360] acquireMachinesLock for test-preload-329000: {Name:mk6b67ca98ff36f470ae98389325f11bd4950dc8 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0327 16:38:42.258498    8397 cache.go:107] acquiring lock: {Name:mk86979c4a8ad83a669b2cbff25c6c898ded53b0 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0327 16:38:42.258745    8397 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.24.4
	I0327 16:38:42.258746    8397 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.24.4
	I0327 16:38:42.258755    8397 start.go:364] duration metric: took 67.916µs to acquireMachinesLock for "test-preload-329000"
	I0327 16:38:42.258816    8397 image.go:134] retrieving image: registry.k8s.io/etcd:3.5.3-0
	I0327 16:38:42.258772    8397 start.go:93] Provisioning new machine with config: &{Name:test-preload-329000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18485/minikube-v1.33.0-1711559712-18485-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1711559786-18485@sha256:2dcab64da240d825290a528fa79ad3c32db45fe5f8be5150468234a7114eff82 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.4 ClusterName:test-preload-329000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.24.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.24.4 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0327 16:38:42.258824    8397 start.go:125] createHost starting for "" (driver="qemu2")
	I0327 16:38:42.262127    8397 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0327 16:38:42.258863    8397 image.go:134] retrieving image: registry.k8s.io/coredns/coredns:v1.8.6
	I0327 16:38:42.258945    8397 image.go:134] retrieving image: registry.k8s.io/pause:3.7
	I0327 16:38:42.269633    8397 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.24.4: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.24.4
	I0327 16:38:42.269731    8397 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.24.4: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.24.4
	I0327 16:38:42.269805    8397 image.go:177] daemon lookup for registry.k8s.io/etcd:3.5.3-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.3-0
	I0327 16:38:42.270316    8397 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.24.4: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.24.4
	I0327 16:38:42.274705    8397 image.go:177] daemon lookup for registry.k8s.io/pause:3.7: Error response from daemon: No such image: registry.k8s.io/pause:3.7
	I0327 16:38:42.274909    8397 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.24.4: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.24.4
	I0327 16:38:42.274982    8397 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0327 16:38:42.275091    8397 image.go:177] daemon lookup for registry.k8s.io/coredns/coredns:v1.8.6: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.8.6
	I0327 16:38:42.280517    8397 start.go:159] libmachine.API.Create for "test-preload-329000" (driver="qemu2")
	I0327 16:38:42.280537    8397 client.go:168] LocalClient.Create starting
	I0327 16:38:42.280602    8397 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18485-6511/.minikube/certs/ca.pem
	I0327 16:38:42.280636    8397 main.go:141] libmachine: Decoding PEM data...
	I0327 16:38:42.280646    8397 main.go:141] libmachine: Parsing certificate...
	I0327 16:38:42.280691    8397 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18485-6511/.minikube/certs/cert.pem
	I0327 16:38:42.280716    8397 main.go:141] libmachine: Decoding PEM data...
	I0327 16:38:42.280722    8397 main.go:141] libmachine: Parsing certificate...
	I0327 16:38:42.281258    8397 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/18485-6511/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/18485-6511/.minikube/cache/iso/arm64/minikube-v1.33.0-1711559712-18485-arm64.iso...
	I0327 16:38:42.424161    8397 main.go:141] libmachine: Creating SSH key...
	I0327 16:38:42.580250    8397 main.go:141] libmachine: Creating Disk image...
	I0327 16:38:42.580272    8397 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0327 16:38:42.580444    8397 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/18485-6511/.minikube/machines/test-preload-329000/disk.qcow2.raw /Users/jenkins/minikube-integration/18485-6511/.minikube/machines/test-preload-329000/disk.qcow2
	I0327 16:38:42.593574    8397 main.go:141] libmachine: STDOUT: 
	I0327 16:38:42.593598    8397 main.go:141] libmachine: STDERR: 
	I0327 16:38:42.593652    8397 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/18485-6511/.minikube/machines/test-preload-329000/disk.qcow2 +20000M
	I0327 16:38:42.605321    8397 main.go:141] libmachine: STDOUT: Image resized.
	
	I0327 16:38:42.605341    8397 main.go:141] libmachine: STDERR: 
	I0327 16:38:42.605360    8397 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/18485-6511/.minikube/machines/test-preload-329000/disk.qcow2.raw and /Users/jenkins/minikube-integration/18485-6511/.minikube/machines/test-preload-329000/disk.qcow2
	I0327 16:38:42.605364    8397 main.go:141] libmachine: Starting QEMU VM...
	I0327 16:38:42.605400    8397 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18485-6511/.minikube/machines/test-preload-329000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18485-6511/.minikube/machines/test-preload-329000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18485-6511/.minikube/machines/test-preload-329000/qemu.pid -device virtio-net-pci,netdev=net0,mac=4e:00:81:aa:25:3f -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18485-6511/.minikube/machines/test-preload-329000/disk.qcow2
	I0327 16:38:42.607425    8397 main.go:141] libmachine: STDOUT: 
	I0327 16:38:42.607442    8397 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0327 16:38:42.607459    8397 client.go:171] duration metric: took 326.927041ms to LocalClient.Create
	I0327 16:38:44.218034    8397 cache.go:162] opening:  /Users/jenkins/minikube-integration/18485-6511/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.3-0
	W0327 16:38:44.286885    8397 image.go:265] image registry.k8s.io/coredns/coredns:v1.8.6 arch mismatch: want arm64 got amd64. fixing
	I0327 16:38:44.286973    8397 cache.go:162] opening:  /Users/jenkins/minikube-integration/18485-6511/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6
	I0327 16:38:44.334938    8397 cache.go:162] opening:  /Users/jenkins/minikube-integration/18485-6511/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7
	I0327 16:38:44.365704    8397 cache.go:162] opening:  /Users/jenkins/minikube-integration/18485-6511/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.24.4
	I0327 16:38:44.372015    8397 cache.go:162] opening:  /Users/jenkins/minikube-integration/18485-6511/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.24.4
	I0327 16:38:44.378443    8397 cache.go:162] opening:  /Users/jenkins/minikube-integration/18485-6511/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.24.4
	I0327 16:38:44.382242    8397 cache.go:162] opening:  /Users/jenkins/minikube-integration/18485-6511/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.24.4
	I0327 16:38:44.478045    8397 cache.go:157] /Users/jenkins/minikube-integration/18485-6511/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7 exists
	I0327 16:38:44.478101    8397 cache.go:96] cache image "registry.k8s.io/pause:3.7" -> "/Users/jenkins/minikube-integration/18485-6511/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7" took 2.219721792s
	I0327 16:38:44.478145    8397 cache.go:80] save to tar file registry.k8s.io/pause:3.7 -> /Users/jenkins/minikube-integration/18485-6511/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7 succeeded
	I0327 16:38:44.607583    8397 start.go:128] duration metric: took 2.348801958s to createHost
	I0327 16:38:44.607629    8397 start.go:83] releasing machines lock for "test-preload-329000", held for 2.348934375s
	W0327 16:38:44.607703    8397 start.go:713] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0327 16:38:44.619600    8397 out.go:177] * Deleting "test-preload-329000" in qemu2 ...
	W0327 16:38:44.647332    8397 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0327 16:38:44.647373    8397 start.go:728] Will try again in 5 seconds ...
	W0327 16:38:44.981089    8397 image.go:265] image gcr.io/k8s-minikube/storage-provisioner:v5 arch mismatch: want arm64 got amd64. fixing
	I0327 16:38:44.981227    8397 cache.go:162] opening:  /Users/jenkins/minikube-integration/18485-6511/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5
	I0327 16:38:45.627939    8397 cache.go:157] /Users/jenkins/minikube-integration/18485-6511/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6 exists
	I0327 16:38:45.627997    8397 cache.go:96] cache image "registry.k8s.io/coredns/coredns:v1.8.6" -> "/Users/jenkins/minikube-integration/18485-6511/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6" took 3.369717292s
	I0327 16:38:45.628021    8397 cache.go:80] save to tar file registry.k8s.io/coredns/coredns:v1.8.6 -> /Users/jenkins/minikube-integration/18485-6511/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6 succeeded
	I0327 16:38:46.340868    8397 cache.go:157] /Users/jenkins/minikube-integration/18485-6511/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.24.4 exists
	I0327 16:38:46.340928    8397 cache.go:96] cache image "registry.k8s.io/kube-scheduler:v1.24.4" -> "/Users/jenkins/minikube-integration/18485-6511/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.24.4" took 4.082808375s
	I0327 16:38:46.340954    8397 cache.go:80] save to tar file registry.k8s.io/kube-scheduler:v1.24.4 -> /Users/jenkins/minikube-integration/18485-6511/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.24.4 succeeded
	I0327 16:38:46.747500    8397 cache.go:157] /Users/jenkins/minikube-integration/18485-6511/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.24.4 exists
	I0327 16:38:46.747548    8397 cache.go:96] cache image "registry.k8s.io/kube-apiserver:v1.24.4" -> "/Users/jenkins/minikube-integration/18485-6511/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.24.4" took 4.48944125s
	I0327 16:38:46.747572    8397 cache.go:80] save to tar file registry.k8s.io/kube-apiserver:v1.24.4 -> /Users/jenkins/minikube-integration/18485-6511/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.24.4 succeeded
	I0327 16:38:46.802529    8397 cache.go:157] /Users/jenkins/minikube-integration/18485-6511/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 exists
	I0327 16:38:46.802577    8397 cache.go:96] cache image "gcr.io/k8s-minikube/storage-provisioner:v5" -> "/Users/jenkins/minikube-integration/18485-6511/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5" took 4.54446675s
	I0327 16:38:46.802601    8397 cache.go:80] save to tar file gcr.io/k8s-minikube/storage-provisioner:v5 -> /Users/jenkins/minikube-integration/18485-6511/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 succeeded
	I0327 16:38:48.909763    8397 cache.go:157] /Users/jenkins/minikube-integration/18485-6511/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.24.4 exists
	I0327 16:38:48.909809    8397 cache.go:96] cache image "registry.k8s.io/kube-controller-manager:v1.24.4" -> "/Users/jenkins/minikube-integration/18485-6511/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.24.4" took 6.651606417s
	I0327 16:38:48.909835    8397 cache.go:80] save to tar file registry.k8s.io/kube-controller-manager:v1.24.4 -> /Users/jenkins/minikube-integration/18485-6511/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.24.4 succeeded
	I0327 16:38:49.187340    8397 cache.go:157] /Users/jenkins/minikube-integration/18485-6511/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.24.4 exists
	I0327 16:38:49.187390    8397 cache.go:96] cache image "registry.k8s.io/kube-proxy:v1.24.4" -> "/Users/jenkins/minikube-integration/18485-6511/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.24.4" took 6.929318166s
	I0327 16:38:49.187414    8397 cache.go:80] save to tar file registry.k8s.io/kube-proxy:v1.24.4 -> /Users/jenkins/minikube-integration/18485-6511/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.24.4 succeeded
	I0327 16:38:49.647699    8397 start.go:360] acquireMachinesLock for test-preload-329000: {Name:mk6b67ca98ff36f470ae98389325f11bd4950dc8 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0327 16:38:49.648115    8397 start.go:364] duration metric: took 316.625µs to acquireMachinesLock for "test-preload-329000"
	I0327 16:38:49.648264    8397 start.go:93] Provisioning new machine with config: &{Name:test-preload-329000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18485/minikube-v1.33.0-1711559712-18485-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1711559786-18485@sha256:2dcab64da240d825290a528fa79ad3c32db45fe5f8be5150468234a7114eff82 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.4 ClusterName:test-preload-329000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.24.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.24.4 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0327 16:38:49.648523    8397 start.go:125] createHost starting for "" (driver="qemu2")
	I0327 16:38:49.660383    8397 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0327 16:38:49.710481    8397 start.go:159] libmachine.API.Create for "test-preload-329000" (driver="qemu2")
	I0327 16:38:49.710540    8397 client.go:168] LocalClient.Create starting
	I0327 16:38:49.710655    8397 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18485-6511/.minikube/certs/ca.pem
	I0327 16:38:49.710715    8397 main.go:141] libmachine: Decoding PEM data...
	I0327 16:38:49.710740    8397 main.go:141] libmachine: Parsing certificate...
	I0327 16:38:49.710797    8397 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18485-6511/.minikube/certs/cert.pem
	I0327 16:38:49.710838    8397 main.go:141] libmachine: Decoding PEM data...
	I0327 16:38:49.710848    8397 main.go:141] libmachine: Parsing certificate...
	I0327 16:38:49.711391    8397 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/18485-6511/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/18485-6511/.minikube/cache/iso/arm64/minikube-v1.33.0-1711559712-18485-arm64.iso...
	I0327 16:38:49.861034    8397 main.go:141] libmachine: Creating SSH key...
	I0327 16:38:50.018333    8397 main.go:141] libmachine: Creating Disk image...
	I0327 16:38:50.018340    8397 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0327 16:38:50.018555    8397 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/18485-6511/.minikube/machines/test-preload-329000/disk.qcow2.raw /Users/jenkins/minikube-integration/18485-6511/.minikube/machines/test-preload-329000/disk.qcow2
	I0327 16:38:50.031326    8397 main.go:141] libmachine: STDOUT: 
	I0327 16:38:50.031354    8397 main.go:141] libmachine: STDERR: 
	I0327 16:38:50.031400    8397 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/18485-6511/.minikube/machines/test-preload-329000/disk.qcow2 +20000M
	I0327 16:38:50.042395    8397 main.go:141] libmachine: STDOUT: Image resized.
	
	I0327 16:38:50.042420    8397 main.go:141] libmachine: STDERR: 
	I0327 16:38:50.042435    8397 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/18485-6511/.minikube/machines/test-preload-329000/disk.qcow2.raw and /Users/jenkins/minikube-integration/18485-6511/.minikube/machines/test-preload-329000/disk.qcow2
	I0327 16:38:50.042439    8397 main.go:141] libmachine: Starting QEMU VM...
	I0327 16:38:50.042480    8397 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18485-6511/.minikube/machines/test-preload-329000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18485-6511/.minikube/machines/test-preload-329000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18485-6511/.minikube/machines/test-preload-329000/qemu.pid -device virtio-net-pci,netdev=net0,mac=0a:de:ba:5c:a8:b8 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18485-6511/.minikube/machines/test-preload-329000/disk.qcow2
	I0327 16:38:50.044416    8397 main.go:141] libmachine: STDOUT: 
	I0327 16:38:50.044439    8397 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0327 16:38:50.044451    8397 client.go:171] duration metric: took 333.915541ms to LocalClient.Create
	I0327 16:38:52.044622    8397 start.go:128] duration metric: took 2.396113375s to createHost
	I0327 16:38:52.044716    8397 start.go:83] releasing machines lock for "test-preload-329000", held for 2.396644166s
	W0327 16:38:52.045023    8397 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p test-preload-329000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p test-preload-329000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0327 16:38:52.059604    8397 out.go:177] 
	W0327 16:38:52.062587    8397 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0327 16:38:52.062615    8397 out.go:239] * 
	* 
	W0327 16:38:52.065490    8397 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0327 16:38:52.074539    8397 out.go:177] 

** /stderr **
preload_test.go:46: out/minikube-darwin-arm64 start -p test-preload-329000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=qemu2  --kubernetes-version=v1.24.4 failed: exit status 80
panic.go:626: *** TestPreload FAILED at 2024-03-27 16:38:52.093815 -0700 PDT m=+753.575887835
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p test-preload-329000 -n test-preload-329000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p test-preload-329000 -n test-preload-329000: exit status 7 (69.985417ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "test-preload-329000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:175: Cleaning up "test-preload-329000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p test-preload-329000
--- FAIL: TestPreload (10.13s)
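
Every VM-creation failure in the run above shares one root cause, visible in the stderr: Failed to connect to "/var/run/socket_vmnet": Connection refused. The socket_vmnet daemon was not listening on the host socket when minikube launched qemu-system-aarch64 through socket_vmnet_client. A minimal host-side check is sketched below; the binary and socket paths are the ones that appear in the logs, while the Homebrew service name and the gateway address are illustrative assumptions, not values from this report.

	# Does the unix socket exist, and is a daemon holding it open?
	ls -l /var/run/socket_vmnet
	# If socket_vmnet was installed via Homebrew, (re)start its root service:
	sudo brew services start socket_vmnet
	# Or run the daemon in the foreground to watch for errors
	# (the gateway address here is an assumed example):
	sudo /opt/socket_vmnet/bin/socket_vmnet --vmnet-gateway=192.168.105.1 /var/run/socket_vmnet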

TestScheduledStopUnix (9.93s)

=== RUN   TestScheduledStopUnix
scheduled_stop_test.go:128: (dbg) Run:  out/minikube-darwin-arm64 start -p scheduled-stop-930000 --memory=2048 --driver=qemu2 
scheduled_stop_test.go:128: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p scheduled-stop-930000 --memory=2048 --driver=qemu2 : exit status 80 (9.759777875s)

-- stdout --
	* [scheduled-stop-930000] minikube v1.33.0-beta.0 on Darwin 14.3.1 (arm64)
	  - MINIKUBE_LOCATION=18485
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18485-6511/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18485-6511/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "scheduled-stop-930000" primary control-plane node in "scheduled-stop-930000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "scheduled-stop-930000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p scheduled-stop-930000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
scheduled_stop_test.go:130: starting minikube: exit status 80

-- stdout --
	* [scheduled-stop-930000] minikube v1.33.0-beta.0 on Darwin 14.3.1 (arm64)
	  - MINIKUBE_LOCATION=18485
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18485-6511/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18485-6511/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "scheduled-stop-930000" primary control-plane node in "scheduled-stop-930000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "scheduled-stop-930000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p scheduled-stop-930000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
panic.go:626: *** TestScheduledStopUnix FAILED at 2024-03-27 16:39:02.030132 -0700 PDT m=+763.512501418
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p scheduled-stop-930000 -n scheduled-stop-930000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p scheduled-stop-930000 -n scheduled-stop-930000: exit status 7 (69.304125ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "scheduled-stop-930000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:175: Cleaning up "scheduled-stop-930000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p scheduled-stop-930000
--- FAIL: TestScheduledStopUnix (9.93s)

TestSkaffold (16.51s)

=== RUN   TestSkaffold
skaffold_test.go:59: (dbg) Run:  /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/skaffold.exe594294722 version
skaffold_test.go:59: (dbg) Done: /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/skaffold.exe594294722 version: (1.03565525s)
skaffold_test.go:63: skaffold version: v2.10.0
skaffold_test.go:66: (dbg) Run:  out/minikube-darwin-arm64 start -p skaffold-635000 --memory=2600 --driver=qemu2 
skaffold_test.go:66: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p skaffold-635000 --memory=2600 --driver=qemu2 : exit status 80 (9.771975209s)

-- stdout --
	* [skaffold-635000] minikube v1.33.0-beta.0 on Darwin 14.3.1 (arm64)
	  - MINIKUBE_LOCATION=18485
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18485-6511/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18485-6511/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "skaffold-635000" primary control-plane node in "skaffold-635000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2600MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "skaffold-635000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2600MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p skaffold-635000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
skaffold_test.go:68: starting minikube: exit status 80

-- stdout --
	* [skaffold-635000] minikube v1.33.0-beta.0 on Darwin 14.3.1 (arm64)
	  - MINIKUBE_LOCATION=18485
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18485-6511/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18485-6511/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "skaffold-635000" primary control-plane node in "skaffold-635000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2600MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "skaffold-635000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2600MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p skaffold-635000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
panic.go:626: *** TestSkaffold FAILED at 2024-03-27 16:39:18.537472 -0700 PDT m=+780.020334126
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p skaffold-635000 -n skaffold-635000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p skaffold-635000 -n skaffold-635000: exit status 7 (66.131958ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "skaffold-635000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:175: Cleaning up "skaffold-635000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p skaffold-635000
--- FAIL: TestSkaffold (16.51s)
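
The same refusal can be reproduced without minikube by running a trivial command through socket_vmnet_client, which connects to the unix socket and hands the resulting file descriptor to its child process ("true" below is a stand-in for the qemu-system-aarch64 invocation seen in the logs; the client path is the one reported above). A sketch, under those assumptions:

	# Fails with 'Failed to connect to "/var/run/socket_vmnet": Connection refused'
	# and a non-zero exit while the daemon is down; succeeds once it is up:
	/opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet true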

TestRunningBinaryUpgrade (620.2s)

=== RUN   TestRunningBinaryUpgrade
=== PAUSE TestRunningBinaryUpgrade

=== CONT  TestRunningBinaryUpgrade
version_upgrade_test.go:120: (dbg) Run:  /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/minikube-v1.26.0.644261284 start -p running-upgrade-400000 --memory=2200 --vm-driver=qemu2 
version_upgrade_test.go:120: (dbg) Done: /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/minikube-v1.26.0.644261284 start -p running-upgrade-400000 --memory=2200 --vm-driver=qemu2 : (1m18.745763083s)
version_upgrade_test.go:130: (dbg) Run:  out/minikube-darwin-arm64 start -p running-upgrade-400000 --memory=2200 --alsologtostderr -v=1 --driver=qemu2 
version_upgrade_test.go:130: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p running-upgrade-400000 --memory=2200 --alsologtostderr -v=1 --driver=qemu2 : exit status 80 (8m24.246333708s)

-- stdout --
	* [running-upgrade-400000] minikube v1.33.0-beta.0 on Darwin 14.3.1 (arm64)
	  - MINIKUBE_LOCATION=18485
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18485-6511/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18485-6511/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Kubernetes 1.29.3 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.29.3
	* Using the qemu2 driver based on existing profile
	* Starting "running-upgrade-400000" primary control-plane node in "running-upgrade-400000" cluster
	* Updating the running qemu2 "running-upgrade-400000" VM ...
	* Preparing Kubernetes v1.24.1 on Docker 20.10.16 ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	  - Configuring RBAC rules ...
	* Configuring bridge CNI (Container Networking Interface) ...
	* Verifying Kubernetes components...
	  - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	* Enabled addons: storage-provisioner
	
	

-- /stdout --
** stderr ** 
	I0327 16:41:22.875900    8800 out.go:291] Setting OutFile to fd 1 ...
	I0327 16:41:22.876023    8800 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0327 16:41:22.876027    8800 out.go:304] Setting ErrFile to fd 2...
	I0327 16:41:22.876029    8800 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0327 16:41:22.876162    8800 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18485-6511/.minikube/bin
	I0327 16:41:22.877073    8800 out.go:298] Setting JSON to false
	I0327 16:41:22.894562    8800 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":6053,"bootTime":1711576829,"procs":478,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.3.1","kernelVersion":"23.3.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0327 16:41:22.894627    8800 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0327 16:41:22.899783    8800 out.go:177] * [running-upgrade-400000] minikube v1.33.0-beta.0 on Darwin 14.3.1 (arm64)
	I0327 16:41:22.905739    8800 out.go:177]   - MINIKUBE_LOCATION=18485
	I0327 16:41:22.905828    8800 notify.go:220] Checking for updates...
	I0327 16:41:22.912670    8800 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/18485-6511/kubeconfig
	I0327 16:41:22.915717    8800 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0327 16:41:22.918775    8800 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0327 16:41:22.921707    8800 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18485-6511/.minikube
	I0327 16:41:22.924715    8800 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0327 16:41:22.927964    8800 config.go:182] Loaded profile config "running-upgrade-400000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0327 16:41:22.931735    8800 out.go:177] * Kubernetes 1.29.3 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.29.3
	I0327 16:41:22.934692    8800 driver.go:392] Setting default libvirt URI to qemu:///system
	I0327 16:41:22.937789    8800 out.go:177] * Using the qemu2 driver based on existing profile
	I0327 16:41:22.944708    8800 start.go:297] selected driver: qemu2
	I0327 16:41:22.944712    8800 start.go:901] validating driver "qemu2" against &{Name:running-upgrade-400000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.26.0-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:51212 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.1 ClusterName:running-upgrade-400000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:0s}
	I0327 16:41:22.944753    8800 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0327 16:41:22.947275    8800 cni.go:84] Creating CNI manager for ""
	I0327 16:41:22.947292    8800 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0327 16:41:22.947325    8800 start.go:340] cluster config:
	{Name:running-upgrade-400000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.26.0-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:51212 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.1 ClusterName:running-upgrade-400000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:0s}
	I0327 16:41:22.947374    8800 iso.go:125] acquiring lock: {Name:mk26a7f41e004242919e6d5076ec0f9645f820ad Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0327 16:41:22.954677    8800 out.go:177] * Starting "running-upgrade-400000" primary control-plane node in "running-upgrade-400000" cluster
	I0327 16:41:22.958704    8800 preload.go:132] Checking if preload exists for k8s version v1.24.1 and runtime docker
	I0327 16:41:22.958719    8800 preload.go:147] Found local preload: /Users/jenkins/minikube-integration/18485-6511/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.1-docker-overlay2-arm64.tar.lz4
	I0327 16:41:22.958725    8800 cache.go:56] Caching tarball of preloaded images
	I0327 16:41:22.958773    8800 preload.go:173] Found /Users/jenkins/minikube-integration/18485-6511/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0327 16:41:22.958778    8800 cache.go:59] Finished verifying existence of preloaded tar for v1.24.1 on docker
	I0327 16:41:22.958823    8800 profile.go:142] Saving config to /Users/jenkins/minikube-integration/18485-6511/.minikube/profiles/running-upgrade-400000/config.json ...
	I0327 16:41:22.959169    8800 start.go:360] acquireMachinesLock for running-upgrade-400000: {Name:mk6b67ca98ff36f470ae98389325f11bd4950dc8 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0327 16:41:22.959197    8800 start.go:364] duration metric: took 21.541µs to acquireMachinesLock for "running-upgrade-400000"
	I0327 16:41:22.959207    8800 start.go:96] Skipping create...Using existing machine configuration
	I0327 16:41:22.959210    8800 fix.go:54] fixHost starting: 
	I0327 16:41:22.959912    8800 fix.go:112] recreateIfNeeded on running-upgrade-400000: state=Running err=<nil>
	W0327 16:41:22.959919    8800 fix.go:138] unexpected machine state, will restart: <nil>
	I0327 16:41:22.967704    8800 out.go:177] * Updating the running qemu2 "running-upgrade-400000" VM ...
	I0327 16:41:22.971715    8800 machine.go:94] provisionDockerMachine start ...
	I0327 16:41:22.971746    8800 main.go:141] libmachine: Using SSH client type: native
	I0327 16:41:22.971854    8800 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1030f5bf0] 0x1030f8450 <nil>  [] 0s} localhost 51180 <nil> <nil>}
	I0327 16:41:22.971858    8800 main.go:141] libmachine: About to run SSH command:
	hostname
	I0327 16:41:23.032437    8800 main.go:141] libmachine: SSH cmd err, output: <nil>: running-upgrade-400000
	
	I0327 16:41:23.032451    8800 buildroot.go:166] provisioning hostname "running-upgrade-400000"
	I0327 16:41:23.032506    8800 main.go:141] libmachine: Using SSH client type: native
	I0327 16:41:23.032609    8800 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1030f5bf0] 0x1030f8450 <nil>  [] 0s} localhost 51180 <nil> <nil>}
	I0327 16:41:23.032617    8800 main.go:141] libmachine: About to run SSH command:
	sudo hostname running-upgrade-400000 && echo "running-upgrade-400000" | sudo tee /etc/hostname
	I0327 16:41:23.091591    8800 main.go:141] libmachine: SSH cmd err, output: <nil>: running-upgrade-400000
	
	I0327 16:41:23.091637    8800 main.go:141] libmachine: Using SSH client type: native
	I0327 16:41:23.091736    8800 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1030f5bf0] 0x1030f8450 <nil>  [] 0s} localhost 51180 <nil> <nil>}
	I0327 16:41:23.091744    8800 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\srunning-upgrade-400000' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 running-upgrade-400000/g' /etc/hosts;
				else 
					echo '127.0.1.1 running-upgrade-400000' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0327 16:41:23.147583    8800 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0327 16:41:23.147593    8800 buildroot.go:172] set auth options {CertDir:/Users/jenkins/minikube-integration/18485-6511/.minikube CaCertPath:/Users/jenkins/minikube-integration/18485-6511/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/18485-6511/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/18485-6511/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/18485-6511/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/18485-6511/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/18485-6511/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/18485-6511/.minikube}
	I0327 16:41:23.147603    8800 buildroot.go:174] setting up certificates
	I0327 16:41:23.147608    8800 provision.go:84] configureAuth start
	I0327 16:41:23.147614    8800 provision.go:143] copyHostCerts
	I0327 16:41:23.147685    8800 exec_runner.go:144] found /Users/jenkins/minikube-integration/18485-6511/.minikube/ca.pem, removing ...
	I0327 16:41:23.147690    8800 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/18485-6511/.minikube/ca.pem
	I0327 16:41:23.147808    8800 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/18485-6511/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/18485-6511/.minikube/ca.pem (1078 bytes)
	I0327 16:41:23.147996    8800 exec_runner.go:144] found /Users/jenkins/minikube-integration/18485-6511/.minikube/cert.pem, removing ...
	I0327 16:41:23.148000    8800 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/18485-6511/.minikube/cert.pem
	I0327 16:41:23.148061    8800 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/18485-6511/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/18485-6511/.minikube/cert.pem (1123 bytes)
	I0327 16:41:23.148178    8800 exec_runner.go:144] found /Users/jenkins/minikube-integration/18485-6511/.minikube/key.pem, removing ...
	I0327 16:41:23.148182    8800 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/18485-6511/.minikube/key.pem
	I0327 16:41:23.148220    8800 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/18485-6511/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/18485-6511/.minikube/key.pem (1675 bytes)
	I0327 16:41:23.148338    8800 provision.go:117] generating server cert: /Users/jenkins/minikube-integration/18485-6511/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/18485-6511/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/18485-6511/.minikube/certs/ca-key.pem org=jenkins.running-upgrade-400000 san=[127.0.0.1 localhost minikube running-upgrade-400000]
	I0327 16:41:23.226410    8800 provision.go:177] copyRemoteCerts
	I0327 16:41:23.226450    8800 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0327 16:41:23.226460    8800 sshutil.go:53] new ssh client: &{IP:localhost Port:51180 SSHKeyPath:/Users/jenkins/minikube-integration/18485-6511/.minikube/machines/running-upgrade-400000/id_rsa Username:docker}
	I0327 16:41:23.257603    8800 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18485-6511/.minikube/machines/server.pem --> /etc/docker/server.pem (1224 bytes)
	I0327 16:41:23.264384    8800 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18485-6511/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0327 16:41:23.271659    8800 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18485-6511/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0327 16:41:23.278773    8800 provision.go:87] duration metric: took 131.159833ms to configureAuth
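
configureAuth pushed the CA plus a freshly minted server cert/key to /etc/docker so dockerd can require mutual TLS on tcp/2376. With the matching client certs generated on the host, the endpoint can be exercised like this (a sketch; under the QEMU driver's user networking, the host-visible port for 2376 may differ from the in-guest one):

	# Sketch: query the TLS-secured Docker endpoint with minikube's client certs.
	CERTS=/Users/jenkins/minikube-integration/18485-6511/.minikube/certs
	docker --tlsverify \
	  --tlscacert "${CERTS}/ca.pem" \
	  --tlscert   "${CERTS}/cert.pem" \
	  --tlskey    "${CERTS}/key.pem" \
	  -H tcp://localhost:2376 version
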
	I0327 16:41:23.278781    8800 buildroot.go:189] setting minikube options for container-runtime
	I0327 16:41:23.278887    8800 config.go:182] Loaded profile config "running-upgrade-400000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0327 16:41:23.278917    8800 main.go:141] libmachine: Using SSH client type: native
	I0327 16:41:23.279002    8800 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1030f5bf0] 0x1030f8450 <nil>  [] 0s} localhost 51180 <nil> <nil>}
	I0327 16:41:23.279006    8800 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0327 16:41:23.336006    8800 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0327 16:41:23.336017    8800 buildroot.go:70] root file system type: tmpfs
	I0327 16:41:23.336066    8800 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0327 16:41:23.336115    8800 main.go:141] libmachine: Using SSH client type: native
	I0327 16:41:23.336225    8800 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1030f5bf0] 0x1030f8450 <nil>  [] 0s} localhost 51180 <nil> <nil>}
	I0327 16:41:23.336259    8800 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=qemu2 --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0327 16:41:23.393657    8800 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=qemu2 --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0327 16:41:23.393697    8800 main.go:141] libmachine: Using SSH client type: native
	I0327 16:41:23.393792    8800 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1030f5bf0] 0x1030f8450 <nil>  [] 0s} localhost 51180 <nil> <nil>}
	I0327 16:41:23.393800    8800 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0327 16:41:23.450351    8800 main.go:141] libmachine: SSH cmd err, output: <nil>: 
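
The one-liner above is a compact change-detection idiom: write the candidate unit to *.new, and only when it differs from the live file, swap it in and bounce the service. Generalized (a sketch):

	# Sketch: install a config file and restart its service only on change.
	update_unit() {
	  local new="$1" live="$2" svc="$3"
	  if sudo diff -u "$live" "$new" >/dev/null 2>&1; then
	    sudo rm -f "$new"          # identical: nothing to do
	  else
	    sudo mv "$new" "$live"
	    sudo systemctl daemon-reload
	    sudo systemctl enable "$svc"
	    sudo systemctl restart "$svc"
	  fi
	}
	update_unit /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service docker
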
	I0327 16:41:23.450365    8800 machine.go:97] duration metric: took 478.658542ms to provisionDockerMachine
	I0327 16:41:23.450370    8800 start.go:293] postStartSetup for "running-upgrade-400000" (driver="qemu2")
	I0327 16:41:23.450377    8800 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0327 16:41:23.450428    8800 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0327 16:41:23.450437    8800 sshutil.go:53] new ssh client: &{IP:localhost Port:51180 SSHKeyPath:/Users/jenkins/minikube-integration/18485-6511/.minikube/machines/running-upgrade-400000/id_rsa Username:docker}
	I0327 16:41:23.480890    8800 ssh_runner.go:195] Run: cat /etc/os-release
	I0327 16:41:23.482264    8800 info.go:137] Remote host: Buildroot 2021.02.12
	I0327 16:41:23.482271    8800 filesync.go:126] Scanning /Users/jenkins/minikube-integration/18485-6511/.minikube/addons for local assets ...
	I0327 16:41:23.482345    8800 filesync.go:126] Scanning /Users/jenkins/minikube-integration/18485-6511/.minikube/files for local assets ...
	I0327 16:41:23.482427    8800 filesync.go:149] local asset: /Users/jenkins/minikube-integration/18485-6511/.minikube/files/etc/ssl/certs/69262.pem -> 69262.pem in /etc/ssl/certs
	I0327 16:41:23.482512    8800 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0327 16:41:23.484941    8800 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18485-6511/.minikube/files/etc/ssl/certs/69262.pem --> /etc/ssl/certs/69262.pem (1708 bytes)
	I0327 16:41:23.491763    8800 start.go:296] duration metric: took 41.389167ms for postStartSetup
	I0327 16:41:23.491775    8800 fix.go:56] duration metric: took 532.581417ms for fixHost
	I0327 16:41:23.491810    8800 main.go:141] libmachine: Using SSH client type: native
	I0327 16:41:23.491901    8800 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1030f5bf0] 0x1030f8450 <nil>  [] 0s} localhost 51180 <nil> <nil>}
	I0327 16:41:23.491906    8800 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0327 16:41:23.546793    8800 main.go:141] libmachine: SSH cmd err, output: <nil>: 1711582883.529389848
	
	I0327 16:41:23.546800    8800 fix.go:216] guest clock: 1711582883.529389848
	I0327 16:41:23.546804    8800 fix.go:229] Guest: 2024-03-27 16:41:23.529389848 -0700 PDT Remote: 2024-03-27 16:41:23.491778 -0700 PDT m=+0.637938084 (delta=37.611848ms)
	I0327 16:41:23.546815    8800 fix.go:200] guest clock delta is within tolerance: 37.611848ms
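
fix.go reads the guest clock over SSH (`date +%s.%N`) and compares it with the host clock, proceeding only when the skew is inside tolerance (37.6ms here). Reproduced by hand it might look like this (a sketch; the 1s tolerance is illustrative, and python3 stands in for macOS `date`, which lacks %N):

	# Sketch: measure guest/host clock skew over SSH.
	KEY=/Users/jenkins/minikube-integration/18485-6511/.minikube/machines/running-upgrade-400000/id_rsa
	GUEST=$(ssh -i "$KEY" -p 51180 docker@localhost 'date +%s.%N')
	HOST=$(python3 -c 'import time; print(time.time())')
	python3 -c "d = abs($HOST - $GUEST); print(f'delta {d:.3f}s'); exit(d > 1.0)"
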
	I0327 16:41:23.546819    8800 start.go:83] releasing machines lock for "running-upgrade-400000", held for 587.634416ms
	I0327 16:41:23.546880    8800 ssh_runner.go:195] Run: cat /version.json
	I0327 16:41:23.546884    8800 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0327 16:41:23.546888    8800 sshutil.go:53] new ssh client: &{IP:localhost Port:51180 SSHKeyPath:/Users/jenkins/minikube-integration/18485-6511/.minikube/machines/running-upgrade-400000/id_rsa Username:docker}
	I0327 16:41:23.546897    8800 sshutil.go:53] new ssh client: &{IP:localhost Port:51180 SSHKeyPath:/Users/jenkins/minikube-integration/18485-6511/.minikube/machines/running-upgrade-400000/id_rsa Username:docker}
	W0327 16:41:23.547483    8800 sshutil.go:64] dial failure (will retry): dial tcp [::1]:51180: connect: connection refused
	I0327 16:41:23.547500    8800 retry.go:31] will retry after 253.506937ms: dial tcp [::1]:51180: connect: connection refused
	W0327 16:41:23.833286    8800 start.go:419] Unable to open version.json: cat /version.json: Process exited with status 1
	stdout:
	
	stderr:
	cat: /version.json: No such file or directory
	I0327 16:41:23.833358    8800 ssh_runner.go:195] Run: systemctl --version
	I0327 16:41:23.835041    8800 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0327 16:41:23.836784    8800 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0327 16:41:23.836807    8800 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *bridge* -not -name *podman* -not -name *.mk_disabled -printf "%p, " -exec sh -c "sudo sed -i -r -e '/"dst": ".*:.*"/d' -e 's|^(.*)"dst": (.*)[,*]$|\1"dst": \2|g' -e '/"subnet": ".*:.*"/d' -e 's|^(.*)"subnet": ".*"(.*)[,*]$|\1"subnet": "10.244.0.0/16"\2|g' {}" ;
	I0327 16:41:23.839660    8800 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *podman* -not -name *.mk_disabled -printf "%p, " -exec sh -c "sudo sed -i -r -e 's|^(.*)"subnet": ".*"(.*)$|\1"subnet": "10.244.0.0/16"\2|g' -e 's|^(.*)"gateway": ".*"(.*)$|\1"gateway": "10.244.0.1"\2|g' {}" ;
	I0327 16:41:23.843810    8800 cni.go:308] configured [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
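
The two find/sed pipelines above exist to force any bridge/podman CNI config onto minikube's pod CIDR. Stripped of the find wrapper, the effective rewrite on the file it matched is (a sketch):

	# Sketch: pin the CNI bridge subnet/gateway to the cluster pod CIDR.
	sudo sed -i -r \
	  -e 's|"subnet": "[^"]*"|"subnet": "10.244.0.0/16"|g' \
	  -e 's|"gateway": "[^"]*"|"gateway": "10.244.0.1"|g' \
	  /etc/cni/net.d/87-podman-bridge.conflist
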
	I0327 16:41:23.843817    8800 start.go:494] detecting cgroup driver to use...
	I0327 16:41:23.843926    8800 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0327 16:41:23.849445    8800 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.7"|' /etc/containerd/config.toml"
	I0327 16:41:23.852512    8800 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0327 16:41:23.855897    8800 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0327 16:41:23.855929    8800 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0327 16:41:23.859367    8800 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0327 16:41:23.862331    8800 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0327 16:41:23.865314    8800 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0327 16:41:23.868210    8800 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0327 16:41:23.871475    8800 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0327 16:41:23.875004    8800 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0327 16:41:23.878059    8800 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0327 16:41:23.880976    8800 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0327 16:41:23.883929    8800 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0327 16:41:23.887140    8800 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0327 16:41:23.978711    8800 ssh_runner.go:195] Run: sudo systemctl restart containerd
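
Rather than templating a fresh /etc/containerd/config.toml, the run patches the existing file with sed and restarts containerd. The settings those edits were meant to enforce can be spot-checked afterwards (a sketch; expected values follow directly from the commands above):

	# Sketch: confirm the patched containerd settings.
	grep -E 'sandbox_image|restrict_oom_score_adj|SystemdCgroup|conf_dir' /etc/containerd/config.toml
	# Expected:
	#   sandbox_image = "registry.k8s.io/pause:3.7"
	#   restrict_oom_score_adj = false
	#   SystemdCgroup = false
	#   conf_dir = "/etc/cni/net.d"
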
	I0327 16:41:23.986408    8800 start.go:494] detecting cgroup driver to use...
	I0327 16:41:23.986507    8800 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0327 16:41:23.991732    8800 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0327 16:41:23.996733    8800 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0327 16:41:24.005213    8800 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0327 16:41:24.009807    8800 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0327 16:41:24.014257    8800 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0327 16:41:24.020583    8800 ssh_runner.go:195] Run: which cri-dockerd
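
Rewriting /etc/crictl.yaml above switches crictl from the containerd socket to cri-dockerd, which is why the later `crictl version` call reports Docker as the runtime. The same endpoint can also be selected explicitly (a sketch):

	# Sketch: query the CRI endpoint that /etc/crictl.yaml now selects.
	sudo crictl --runtime-endpoint unix:///var/run/cri-dockerd.sock version
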
	I0327 16:41:24.021854    8800 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0327 16:41:24.024440    8800 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I0327 16:41:24.029743    8800 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0327 16:41:24.110138    8800 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0327 16:41:24.198021    8800 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0327 16:41:24.198079    8800 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0327 16:41:24.203250    8800 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0327 16:41:24.296897    8800 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0327 16:41:25.827484    8800 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1.530617791s)
	I0327 16:41:25.827552    8800 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I0327 16:41:25.831921    8800 ssh_runner.go:195] Run: sudo systemctl stop cri-docker.socket
	I0327 16:41:25.837879    8800 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0327 16:41:25.842634    8800 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0327 16:41:25.924762    8800 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0327 16:41:25.991806    8800 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0327 16:41:26.062897    8800 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0327 16:41:26.068603    8800 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0327 16:41:26.073227    8800 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0327 16:41:26.140166    8800 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I0327 16:41:26.180323    8800 start.go:541] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0327 16:41:26.180405    8800 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0327 16:41:26.183271    8800 start.go:562] Will wait 60s for crictl version
	I0327 16:41:26.183323    8800 ssh_runner.go:195] Run: which crictl
	I0327 16:41:26.184561    8800 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0327 16:41:26.200720    8800 start.go:578] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  20.10.16
	RuntimeApiVersion:  1.41.0
	I0327 16:41:26.200786    8800 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0327 16:41:26.213846    8800 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0327 16:41:26.234144    8800 out.go:204] * Preparing Kubernetes v1.24.1 on Docker 20.10.16 ...
	I0327 16:41:26.234209    8800 ssh_runner.go:195] Run: grep 10.0.2.2	host.minikube.internal$ /etc/hosts
	I0327 16:41:26.235569    8800 kubeadm.go:877] updating cluster {Name:running-upgrade-400000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18485/minikube-v1.33.0-1711559712-18485-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:51212 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.1 ClusterName:running-upgrade-400000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:0s} ...
	I0327 16:41:26.235610    8800 preload.go:132] Checking if preload exists for k8s version v1.24.1 and runtime docker
	I0327 16:41:26.235651    8800 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0327 16:41:26.246039    8800 docker.go:685] Got preloaded images: -- stdout --
	k8s.gcr.io/kube-apiserver:v1.24.1
	k8s.gcr.io/kube-proxy:v1.24.1
	k8s.gcr.io/kube-controller-manager:v1.24.1
	k8s.gcr.io/kube-scheduler:v1.24.1
	k8s.gcr.io/etcd:3.5.3-0
	k8s.gcr.io/pause:3.7
	k8s.gcr.io/coredns/coredns:v1.8.6
	k8s.gcr.io/pause:3.6
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0327 16:41:26.246047    8800 docker.go:691] registry.k8s.io/kube-apiserver:v1.24.1 wasn't preloaded
	I0327 16:41:26.246092    8800 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0327 16:41:26.249005    8800 ssh_runner.go:195] Run: which lz4
	I0327 16:41:26.250249    8800 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0327 16:41:26.251455    8800 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0327 16:41:26.251467    8800 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18485-6511/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.1-docker-overlay2-arm64.tar.lz4 --> /preloaded.tar.lz4 (359514331 bytes)
	I0327 16:41:27.004351    8800 docker.go:649] duration metric: took 754.158584ms to copy over tarball
	I0327 16:41:27.004440    8800 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0327 16:41:28.144595    8800 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (1.140175458s)
	I0327 16:41:28.144609    8800 ssh_runner.go:146] rm: /preloaded.tar.lz4
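
The preload is a ~360 MB lz4-compressed tarball of /var/lib/docker, built per Kubernetes version and runtime, that gets scp'd into the guest and unpacked over /var. Done by hand, the unpack step is (paths as in this run; a sketch):

	# Sketch: unpack a minikube preload tarball into the guest's /var.
	# --xattrs-include security.capability preserves file capabilities.
	sudo tar --xattrs --xattrs-include security.capability \
	  -I lz4 -C /var -xf /preloaded.tar.lz4
	sudo rm /preloaded.tar.lz4
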
	I0327 16:41:28.160571    8800 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0327 16:41:28.163975    8800 ssh_runner.go:362] scp memory --> /var/lib/docker/image/overlay2/repositories.json (2523 bytes)
	I0327 16:41:28.169117    8800 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0327 16:41:28.231714    8800 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0327 16:41:29.535830    8800 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1.304140333s)
	I0327 16:41:29.535913    8800 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0327 16:41:29.548877    8800 docker.go:685] Got preloaded images: -- stdout --
	k8s.gcr.io/kube-apiserver:v1.24.1
	k8s.gcr.io/kube-proxy:v1.24.1
	k8s.gcr.io/kube-controller-manager:v1.24.1
	k8s.gcr.io/kube-scheduler:v1.24.1
	k8s.gcr.io/etcd:3.5.3-0
	k8s.gcr.io/pause:3.7
	k8s.gcr.io/coredns/coredns:v1.8.6
	<none>:<none>
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0327 16:41:29.548888    8800 docker.go:691] registry.k8s.io/kube-apiserver:v1.24.1 wasn't preloaded
	I0327 16:41:29.548893    8800 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.24.1 registry.k8s.io/kube-controller-manager:v1.24.1 registry.k8s.io/kube-scheduler:v1.24.1 registry.k8s.io/kube-proxy:v1.24.1 registry.k8s.io/pause:3.7 registry.k8s.io/etcd:3.5.3-0 registry.k8s.io/coredns/coredns:v1.8.6 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0327 16:41:29.555342    8800 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.24.1
	I0327 16:41:29.555539    8800 image.go:134] retrieving image: registry.k8s.io/coredns/coredns:v1.8.6
	I0327 16:41:29.555615    8800 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.24.1
	I0327 16:41:29.555665    8800 image.go:134] retrieving image: registry.k8s.io/etcd:3.5.3-0
	I0327 16:41:29.555791    8800 image.go:134] retrieving image: registry.k8s.io/pause:3.7
	I0327 16:41:29.555899    8800 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.24.1
	I0327 16:41:29.556057    8800 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.24.1
	I0327 16:41:29.556131    8800 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0327 16:41:29.565284    8800 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.24.1: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.24.1
	I0327 16:41:29.565365    8800 image.go:177] daemon lookup for registry.k8s.io/pause:3.7: Error response from daemon: No such image: registry.k8s.io/pause:3.7
	I0327 16:41:29.565432    8800 image.go:177] daemon lookup for registry.k8s.io/coredns/coredns:v1.8.6: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.8.6
	I0327 16:41:29.565633    8800 image.go:177] daemon lookup for registry.k8s.io/etcd:3.5.3-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.3-0
	I0327 16:41:29.565681    8800 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.24.1: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.24.1
	I0327 16:41:29.566254    8800 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.24.1: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.24.1
	I0327 16:41:29.566267    8800 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0327 16:41:29.566279    8800 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.24.1: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.24.1
	I0327 16:41:31.540011    8800 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.24.1
	I0327 16:41:31.574567    8800 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.24.1" needs transfer: "registry.k8s.io/kube-apiserver:v1.24.1" does not exist at hash "7c5896a75862a8bc252122185a929cec1393db2c525f6440137d4fbf46bbf6f9" in container runtime
	I0327 16:41:31.574636    8800 docker.go:337] Removing image: registry.k8s.io/kube-apiserver:v1.24.1
	I0327 16:41:31.574732    8800 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-apiserver:v1.24.1
	I0327 16:41:31.596934    8800 cache_images.go:286] Loading image from: /Users/jenkins/minikube-integration/18485-6511/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.24.1
	W0327 16:41:31.604225    8800 image.go:265] image registry.k8s.io/coredns/coredns:v1.8.6 arch mismatch: want arm64 got amd64. fixing
	I0327 16:41:31.604392    8800 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/coredns/coredns:v1.8.6
	I0327 16:41:31.619565    8800 cache_images.go:116] "registry.k8s.io/coredns/coredns:v1.8.6" needs transfer: "registry.k8s.io/coredns/coredns:v1.8.6" does not exist at hash "6af7f860a8197bfa3fdb7dec2061aa33870253e87a1e91c492d55b8a4fd38d14" in container runtime
	I0327 16:41:31.619593    8800 docker.go:337] Removing image: registry.k8s.io/coredns/coredns:v1.8.6
	I0327 16:41:31.619652    8800 ssh_runner.go:195] Run: docker rmi registry.k8s.io/coredns/coredns:v1.8.6
	I0327 16:41:31.632403    8800 cache_images.go:286] Loading image from: /Users/jenkins/minikube-integration/18485-6511/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6
	I0327 16:41:31.632528    8800 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.8.6
	I0327 16:41:31.634293    8800 ssh_runner.go:352] existence check for /var/lib/minikube/images/coredns_v1.8.6: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.8.6: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/coredns_v1.8.6': No such file or directory
	I0327 16:41:31.634305    8800 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18485-6511/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6 --> /var/lib/minikube/images/coredns_v1.8.6 (12318720 bytes)
	I0327 16:41:31.644788    8800 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/etcd:3.5.3-0
	I0327 16:41:31.666536    8800 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.24.1
	I0327 16:41:31.676087    8800 docker.go:304] Loading image: /var/lib/minikube/images/coredns_v1.8.6
	I0327 16:41:31.676100    8800 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/coredns_v1.8.6 | docker load"
	I0327 16:41:31.676775    8800 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/pause:3.7
	I0327 16:41:31.678921    8800 cache_images.go:116] "registry.k8s.io/etcd:3.5.3-0" needs transfer: "registry.k8s.io/etcd:3.5.3-0" does not exist at hash "a9a710bb96df080e6b9c720eb85dc5b832ff84abf77263548d74fedec6466a5a" in container runtime
	I0327 16:41:31.678942    8800 docker.go:337] Removing image: registry.k8s.io/etcd:3.5.3-0
	I0327 16:41:31.678984    8800 ssh_runner.go:195] Run: docker rmi registry.k8s.io/etcd:3.5.3-0
	I0327 16:41:31.681170    8800 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.24.1
	I0327 16:41:31.687890    8800 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.24.1
	I0327 16:41:31.704932    8800 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.24.1" needs transfer: "registry.k8s.io/kube-controller-manager:v1.24.1" does not exist at hash "f61bbe9259d7caa580deb6c8e4bfd1780c7b5887efe3aa3adc7cc74f68a27c1b" in container runtime
	I0327 16:41:31.704953    8800 docker.go:337] Removing image: registry.k8s.io/kube-controller-manager:v1.24.1
	I0327 16:41:31.705005    8800 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-controller-manager:v1.24.1
	I0327 16:41:31.750160    8800 cache_images.go:315] Transferred and loaded /Users/jenkins/minikube-integration/18485-6511/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6 from cache
	I0327 16:41:31.750243    8800 cache_images.go:116] "registry.k8s.io/pause:3.7" needs transfer: "registry.k8s.io/pause:3.7" does not exist at hash "e5a475a0380575fb5df454b2e32bdec93e1ec0094d8a61e895b41567cb884550" in container runtime
	I0327 16:41:31.750260    8800 docker.go:337] Removing image: registry.k8s.io/pause:3.7
	I0327 16:41:31.750272    8800 cache_images.go:286] Loading image from: /Users/jenkins/minikube-integration/18485-6511/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.3-0
	I0327 16:41:31.750312    8800 ssh_runner.go:195] Run: docker rmi registry.k8s.io/pause:3.7
	I0327 16:41:31.750342    8800 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.24.1" needs transfer: "registry.k8s.io/kube-proxy:v1.24.1" does not exist at hash "fcbd620bbac080b658602c597709b377cb2b3fec134a097a27f94cba9b2ed2fa" in container runtime
	I0327 16:41:31.750354    8800 docker.go:337] Removing image: registry.k8s.io/kube-proxy:v1.24.1
	I0327 16:41:31.750376    8800 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-proxy:v1.24.1
	I0327 16:41:31.750398    8800 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.24.1" needs transfer: "registry.k8s.io/kube-scheduler:v1.24.1" does not exist at hash "000c19baf6bba51ff7ae5449f4c7a16d9190cef0263f58070dbf62cea9c4982f" in container runtime
	I0327 16:41:31.750408    8800 docker.go:337] Removing image: registry.k8s.io/kube-scheduler:v1.24.1
	I0327 16:41:31.750432    8800 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-scheduler:v1.24.1
	I0327 16:41:31.754361    8800 cache_images.go:286] Loading image from: /Users/jenkins/minikube-integration/18485-6511/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.24.1
	I0327 16:41:31.772132    8800 cache_images.go:286] Loading image from: /Users/jenkins/minikube-integration/18485-6511/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.24.1
	I0327 16:41:31.772160    8800 cache_images.go:286] Loading image from: /Users/jenkins/minikube-integration/18485-6511/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7
	I0327 16:41:31.772189    8800 cache_images.go:286] Loading image from: /Users/jenkins/minikube-integration/18485-6511/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.24.1
	I0327 16:41:31.772249    8800 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/pause_3.7
	I0327 16:41:31.773877    8800 ssh_runner.go:352] existence check for /var/lib/minikube/images/pause_3.7: stat -c "%s %y" /var/lib/minikube/images/pause_3.7: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/pause_3.7': No such file or directory
	I0327 16:41:31.773886    8800 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18485-6511/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7 --> /var/lib/minikube/images/pause_3.7 (268288 bytes)
	I0327 16:41:31.781458    8800 docker.go:304] Loading image: /var/lib/minikube/images/pause_3.7
	I0327 16:41:31.781466    8800 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/pause_3.7 | docker load"
	I0327 16:41:31.833615    8800 cache_images.go:315] Transferred and loaded /Users/jenkins/minikube-integration/18485-6511/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7 from cache
	W0327 16:41:32.324897    8800 image.go:265] image gcr.io/k8s-minikube/storage-provisioner:v5 arch mismatch: want arm64 got amd64. fixing
	I0327 16:41:32.325525    8800 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0327 16:41:32.366842    8800 cache_images.go:116] "gcr.io/k8s-minikube/storage-provisioner:v5" needs transfer: "gcr.io/k8s-minikube/storage-provisioner:v5" does not exist at hash "66749159455b3f08c8318fe0233122f54d0f5889f9c5fdfb73c3fd9d99895b51" in container runtime
	I0327 16:41:32.366910    8800 docker.go:337] Removing image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0327 16:41:32.367028    8800 ssh_runner.go:195] Run: docker rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I0327 16:41:33.646310    8800 ssh_runner.go:235] Completed: docker rmi gcr.io/k8s-minikube/storage-provisioner:v5: (1.279285625s)
	I0327 16:41:33.646344    8800 cache_images.go:286] Loading image from: /Users/jenkins/minikube-integration/18485-6511/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5
	I0327 16:41:33.646606    8800 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5
	I0327 16:41:33.651132    8800 ssh_runner.go:352] existence check for /var/lib/minikube/images/storage-provisioner_v5: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/storage-provisioner_v5': No such file or directory
	I0327 16:41:33.651162    8800 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18485-6511/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 --> /var/lib/minikube/images/storage-provisioner_v5 (8035840 bytes)
	I0327 16:41:33.704880    8800 docker.go:304] Loading image: /var/lib/minikube/images/storage-provisioner_v5
	I0327 16:41:33.704906    8800 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/storage-provisioner_v5 | docker load"
	I0327 16:41:33.935693    8800 cache_images.go:315] Transferred and loaded /Users/jenkins/minikube-integration/18485-6511/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 from cache
	I0327 16:41:33.935729    8800 cache_images.go:92] duration metric: took 4.386961458s to LoadCachedImages
	W0327 16:41:33.935767    8800 out.go:239] X Unable to load cached images: loading cached images: stat /Users/jenkins/minikube-integration/18485-6511/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.24.1: no such file or directory
	X Unable to load cached images: loading cached images: stat /Users/jenkins/minikube-integration/18485-6511/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.24.1: no such file or directory
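
Each cached image travels as a docker-save tarball streamed into the daemon; the warning fires because kube-apiserver has no tarball in the host cache, while the others loaded fine. The per-image load step used above is simply (a sketch):

	# Sketch: load one cached image tarball into the guest's Docker daemon.
	sudo cat /var/lib/minikube/images/pause_3.7 | docker load
	docker image inspect --format '{{.Id}}' registry.k8s.io/pause:3.7   # verify
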
	I0327 16:41:33.935775    8800 kubeadm.go:928] updating node { 10.0.2.15 8443 v1.24.1 docker true true} ...
	I0327 16:41:33.935827    8800 kubeadm.go:940] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.24.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/cri-dockerd.sock --hostname-override=running-upgrade-400000 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=10.0.2.15
	
	[Install]
	 config:
	{KubernetesVersion:v1.24.1 ClusterName:running-upgrade-400000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0327 16:41:33.935887    8800 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0327 16:41:33.952642    8800 cni.go:84] Creating CNI manager for ""
	I0327 16:41:33.952653    8800 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0327 16:41:33.952657    8800 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0327 16:41:33.952665    8800 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:10.0.2.15 APIServerPort:8443 KubernetesVersion:v1.24.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:running-upgrade-400000 NodeName:running-upgrade-400000 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "10.0.2.15"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:10.0.2.15 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0327 16:41:33.952734    8800 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 10.0.2.15
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "running-upgrade-400000"
	  kubeletExtraArgs:
	    node-ip: 10.0.2.15
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "10.0.2.15"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.24.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
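
This rendered config is written to /var/tmp/minikube/kubeadm.yaml.new (see below) and ultimately handed to the version-matched kubeadm binary. Consumed directly it would look like this (a sketch; minikube drives individual kubeadm phases and passes additional flags, so this only shows the shape of the call):

	# Sketch: hand the generated config to kubeadm inside the guest.
	sudo /var/lib/minikube/binaries/v1.24.1/kubeadm init \
	  --config /var/tmp/minikube/kubeadm.yaml \
	  --ignore-preflight-errors=all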
	
	I0327 16:41:33.952800    8800 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.24.1
	I0327 16:41:33.955495    8800 binaries.go:44] Found k8s binaries, skipping transfer
	I0327 16:41:33.955521    8800 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0327 16:41:33.958276    8800 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (380 bytes)
	I0327 16:41:33.963294    8800 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0327 16:41:33.968188    8800 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2096 bytes)
	I0327 16:41:33.973303    8800 ssh_runner.go:195] Run: grep 10.0.2.15	control-plane.minikube.internal$ /etc/hosts
	I0327 16:41:33.974557    8800 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0327 16:41:34.037819    8800 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0327 16:41:34.043064    8800 certs.go:68] Setting up /Users/jenkins/minikube-integration/18485-6511/.minikube/profiles/running-upgrade-400000 for IP: 10.0.2.15
	I0327 16:41:34.043071    8800 certs.go:194] generating shared ca certs ...
	I0327 16:41:34.043079    8800 certs.go:226] acquiring lock for ca certs: {Name:mkc9ab23ce08863badc46de64236358969dc1820 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0327 16:41:34.043288    8800 certs.go:235] skipping valid "minikubeCA" ca cert: /Users/jenkins/minikube-integration/18485-6511/.minikube/ca.key
	I0327 16:41:34.043327    8800 certs.go:235] skipping valid "proxyClientCA" ca cert: /Users/jenkins/minikube-integration/18485-6511/.minikube/proxy-client-ca.key
	I0327 16:41:34.043331    8800 certs.go:256] generating profile certs ...
	I0327 16:41:34.043389    8800 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /Users/jenkins/minikube-integration/18485-6511/.minikube/profiles/running-upgrade-400000/client.key
	I0327 16:41:34.043402    8800 certs.go:363] generating signed profile cert for "minikube": /Users/jenkins/minikube-integration/18485-6511/.minikube/profiles/running-upgrade-400000/apiserver.key.3fe3b409
	I0327 16:41:34.043412    8800 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/18485-6511/.minikube/profiles/running-upgrade-400000/apiserver.crt.3fe3b409 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 10.0.2.15]
	I0327 16:41:34.125522    8800 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/18485-6511/.minikube/profiles/running-upgrade-400000/apiserver.crt.3fe3b409 ...
	I0327 16:41:34.125527    8800 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/18485-6511/.minikube/profiles/running-upgrade-400000/apiserver.crt.3fe3b409: {Name:mk0cf1d60c5e0f79ea8841ecc04239a04fe2ef12 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0327 16:41:34.125749    8800 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/18485-6511/.minikube/profiles/running-upgrade-400000/apiserver.key.3fe3b409 ...
	I0327 16:41:34.125753    8800 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/18485-6511/.minikube/profiles/running-upgrade-400000/apiserver.key.3fe3b409: {Name:mk7e037c100016a95c36330306deca1ca64ee80d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0327 16:41:34.125884    8800 certs.go:381] copying /Users/jenkins/minikube-integration/18485-6511/.minikube/profiles/running-upgrade-400000/apiserver.crt.3fe3b409 -> /Users/jenkins/minikube-integration/18485-6511/.minikube/profiles/running-upgrade-400000/apiserver.crt
	I0327 16:41:34.126013    8800 certs.go:385] copying /Users/jenkins/minikube-integration/18485-6511/.minikube/profiles/running-upgrade-400000/apiserver.key.3fe3b409 -> /Users/jenkins/minikube-integration/18485-6511/.minikube/profiles/running-upgrade-400000/apiserver.key
	I0327 16:41:34.126127    8800 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /Users/jenkins/minikube-integration/18485-6511/.minikube/profiles/running-upgrade-400000/proxy-client.key
	I0327 16:41:34.126235    8800 certs.go:484] found cert: /Users/jenkins/minikube-integration/18485-6511/.minikube/certs/6926.pem (1338 bytes)
	W0327 16:41:34.126258    8800 certs.go:480] ignoring /Users/jenkins/minikube-integration/18485-6511/.minikube/certs/6926_empty.pem, impossibly tiny 0 bytes
	I0327 16:41:34.126262    8800 certs.go:484] found cert: /Users/jenkins/minikube-integration/18485-6511/.minikube/certs/ca-key.pem (1679 bytes)
	I0327 16:41:34.126279    8800 certs.go:484] found cert: /Users/jenkins/minikube-integration/18485-6511/.minikube/certs/ca.pem (1078 bytes)
	I0327 16:41:34.126295    8800 certs.go:484] found cert: /Users/jenkins/minikube-integration/18485-6511/.minikube/certs/cert.pem (1123 bytes)
	I0327 16:41:34.126310    8800 certs.go:484] found cert: /Users/jenkins/minikube-integration/18485-6511/.minikube/certs/key.pem (1675 bytes)
	I0327 16:41:34.126348    8800 certs.go:484] found cert: /Users/jenkins/minikube-integration/18485-6511/.minikube/files/etc/ssl/certs/69262.pem (1708 bytes)
	I0327 16:41:34.126667    8800 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18485-6511/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0327 16:41:34.133968    8800 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18485-6511/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0327 16:41:34.141217    8800 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18485-6511/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0327 16:41:34.148642    8800 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18485-6511/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0327 16:41:34.155859    8800 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18485-6511/.minikube/profiles/running-upgrade-400000/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I0327 16:41:34.162404    8800 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18485-6511/.minikube/profiles/running-upgrade-400000/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0327 16:41:34.169039    8800 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18485-6511/.minikube/profiles/running-upgrade-400000/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0327 16:41:34.176508    8800 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18485-6511/.minikube/profiles/running-upgrade-400000/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0327 16:41:34.183978    8800 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18485-6511/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0327 16:41:34.190611    8800 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18485-6511/.minikube/certs/6926.pem --> /usr/share/ca-certificates/6926.pem (1338 bytes)
	I0327 16:41:34.197008    8800 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18485-6511/.minikube/files/etc/ssl/certs/69262.pem --> /usr/share/ca-certificates/69262.pem (1708 bytes)
	I0327 16:41:34.204423    8800 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (752 bytes)
	I0327 16:41:34.209477    8800 ssh_runner.go:195] Run: openssl version
	I0327 16:41:34.211422    8800 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0327 16:41:34.214521    8800 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0327 16:41:34.215989    8800 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Mar 27 23:41 /usr/share/ca-certificates/minikubeCA.pem
	I0327 16:41:34.216012    8800 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0327 16:41:34.218971    8800 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
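
Each CA is linked into /etc/ssl/certs under its OpenSSL subject-hash name (b5213941.0 above) so the default verifier can locate it by hash. The hash-to-filename step, generalized (a sketch):

	# Sketch: install a CA under OpenSSL's hashed-directory layout.
	CA=/usr/share/ca-certificates/minikubeCA.pem
	HASH=$(openssl x509 -hash -noout -in "$CA")
	sudo ln -fs "$CA" "/etc/ssl/certs/${HASH}.0"
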
	I0327 16:41:34.221652    8800 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/6926.pem && ln -fs /usr/share/ca-certificates/6926.pem /etc/ssl/certs/6926.pem"
	I0327 16:41:34.224884    8800 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/6926.pem
	I0327 16:41:34.226510    8800 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Mar 27 23:28 /usr/share/ca-certificates/6926.pem
	I0327 16:41:34.226528    8800 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/6926.pem
	I0327 16:41:34.228439    8800 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/6926.pem /etc/ssl/certs/51391683.0"
	I0327 16:41:34.231591    8800 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/69262.pem && ln -fs /usr/share/ca-certificates/69262.pem /etc/ssl/certs/69262.pem"
	I0327 16:41:34.234437    8800 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/69262.pem
	I0327 16:41:34.235700    8800 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Mar 27 23:28 /usr/share/ca-certificates/69262.pem
	I0327 16:41:34.235716    8800 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/69262.pem
	I0327 16:41:34.237459    8800 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/69262.pem /etc/ssl/certs/3ec20f2e.0"
	I0327 16:41:34.240421    8800 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0327 16:41:34.241905    8800 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0327 16:41:34.243588    8800 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0327 16:41:34.245381    8800 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0327 16:41:34.247123    8800 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0327 16:41:34.249087    8800 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0327 16:41:34.250802    8800 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
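
`-checkend 86400` makes openssl exit non-zero when the certificate expires within the next 24 hours, so this run of checks doubles as an expiry gate before reusing the existing control plane. Standalone (a sketch):

	# Sketch: fail when a certificate expires within 24h.
	openssl x509 -noout -in /var/lib/minikube/certs/apiserver.crt -checkend 86400 \
	  && echo "apiserver.crt valid for >= 24h" \
	  || echo "apiserver.crt expires within 24h"
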
	I0327 16:41:34.252466    8800 kubeadm.go:391] StartCluster: {Name:running-upgrade-400000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18485/minikube-v1.33.0-1711559712-18485-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:51212 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.1 ClusterName:running-upgrade-400000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:0s}
	I0327 16:41:34.252539    8800 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0327 16:41:34.262666    8800 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	W0327 16:41:34.265990    8800 kubeadm.go:404] apiserver tunnel failed: apiserver port not set
	I0327 16:41:34.265996    8800 kubeadm.go:407] found existing configuration files, will attempt cluster restart
	I0327 16:41:34.265998    8800 kubeadm.go:587] restartPrimaryControlPlane start ...
	I0327 16:41:34.266022    8800 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0327 16:41:34.269308    8800 kubeadm.go:129] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0327 16:41:34.269346    8800 kubeconfig.go:47] verify endpoint returned: get endpoint: "running-upgrade-400000" does not appear in /Users/jenkins/minikube-integration/18485-6511/kubeconfig
	I0327 16:41:34.269366    8800 kubeconfig.go:62] /Users/jenkins/minikube-integration/18485-6511/kubeconfig needs updating (will repair): [kubeconfig missing "running-upgrade-400000" cluster setting kubeconfig missing "running-upgrade-400000" context setting]
	I0327 16:41:34.269547    8800 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/18485-6511/kubeconfig: {Name:mke46d0809919cfbe0118c5110926d6ce61bf373 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0327 16:41:34.270206    8800 kapi.go:59] client config for running-upgrade-400000: &rest.Config{Host:"https://10.0.2.15:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/18485-6511/.minikube/profiles/running-upgrade-400000/client.crt", KeyFile:"/Users/jenkins/minikube-integration/18485-6511/.minikube/profiles/running-upgrade-400000/client.key", CAFile:"/Users/jenkins/minikube-integration/18485-6511/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1043e6c70), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0327 16:41:34.270976    8800 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0327 16:41:34.273758    8800 kubeadm.go:634] detected kubeadm config drift (will reconfigure cluster from new /var/tmp/minikube/kubeadm.yaml):
	-- stdout --
	--- /var/tmp/minikube/kubeadm.yaml
	+++ /var/tmp/minikube/kubeadm.yaml.new
	@@ -11,7 +11,7 @@
	       - signing
	       - authentication
	 nodeRegistration:
	-  criSocket: /var/run/cri-dockerd.sock
	+  criSocket: unix:///var/run/cri-dockerd.sock
	   name: "running-upgrade-400000"
	   kubeletExtraArgs:
	     node-ip: 10.0.2.15
	@@ -49,7 +49,9 @@
	 authentication:
	   x509:
	     clientCAFile: /var/lib/minikube/certs/ca.crt
	-cgroupDriver: systemd
	+cgroupDriver: cgroupfs
	+hairpinMode: hairpin-veth
	+runtimeRequestTimeout: 15m
	 clusterDomain: "cluster.local"
	 # disable disk resource management by default
	 imageGCHighThresholdPercent: 100
	
	-- /stdout --
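
The unified diff above is how minikube decides to reconfigure: it renders a fresh kubeadm.yaml alongside the live copy, runs diff -u over the pair, and treats a non-empty diff (diff exit status 1) as drift. A minimal Go sketch of that exit-status pattern follows; it is illustrative only, not the actual kubeadm.go implementation, though the paths are taken from the log.

package main

import (
	"fmt"
	"os/exec"
)

// configDrifted reports whether the rendered config differs from the live one.
// diff exits 0 when the files match, 1 when they differ, and >1 on real errors.
func configDrifted(oldPath, newPath string) (bool, string, error) {
	out, err := exec.Command("sudo", "diff", "-u", oldPath, newPath).CombinedOutput()
	if err == nil {
		return false, "", nil // identical: nothing to reconfigure
	}
	if ee, ok := err.(*exec.ExitError); ok && ee.ExitCode() == 1 {
		return true, string(out), nil // files differ: the diff says how
	}
	return false, "", err // diff itself failed
}

func main() {
	drifted, diff, err := configDrifted(
		"/var/tmp/minikube/kubeadm.yaml", "/var/tmp/minikube/kubeadm.yaml.new")
	if err != nil {
		fmt.Println("diff failed:", err)
		return
	}
	if drifted {
		fmt.Println("detected kubeadm config drift:\n" + diff)
	}
}
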
	I0327 16:41:34.273768    8800 kubeadm.go:1154] stopping kube-system containers ...
	I0327 16:41:34.273807    8800 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0327 16:41:34.284706    8800 docker.go:483] Stopping containers: [8e4d5ff81962 5c5348743926 7c9a46b98166 cfcee5837028 c0c992e34b5d d5790023049d 1a8c10b8da56 126b9553cb40 5429db75cd75 b8be2b66526c 180a99820363 e666dad663ca]
	I0327 16:41:34.284770    8800 ssh_runner.go:195] Run: docker stop 8e4d5ff81962 5c5348743926 7c9a46b98166 cfcee5837028 c0c992e34b5d d5790023049d 1a8c10b8da56 126b9553cb40 5429db75cd75 b8be2b66526c 180a99820363 e666dad663ca
	I0327 16:41:34.295709    8800 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0327 16:41:34.384746    8800 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0327 16:41:34.388843    8800 kubeadm.go:156] found existing configuration files:
	-rw------- 1 root root 5643 Mar 27 23:41 /etc/kubernetes/admin.conf
	-rw------- 1 root root 5653 Mar 27 23:41 /etc/kubernetes/controller-manager.conf
	-rw------- 1 root root 2027 Mar 27 23:41 /etc/kubernetes/kubelet.conf
	-rw------- 1 root root 5601 Mar 27 23:41 /etc/kubernetes/scheduler.conf
	
	I0327 16:41:34.388876    8800 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:51212 /etc/kubernetes/admin.conf
	I0327 16:41:34.392188    8800 kubeadm.go:162] "https://control-plane.minikube.internal:51212" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:51212 /etc/kubernetes/admin.conf: Process exited with status 1
	stdout:
	
	stderr:
	I0327 16:41:34.392217    8800 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0327 16:41:34.395535    8800 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:51212 /etc/kubernetes/kubelet.conf
	I0327 16:41:34.398850    8800 kubeadm.go:162] "https://control-plane.minikube.internal:51212" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:51212 /etc/kubernetes/kubelet.conf: Process exited with status 1
	stdout:
	
	stderr:
	I0327 16:41:34.398881    8800 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0327 16:41:34.402077    8800 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:51212 /etc/kubernetes/controller-manager.conf
	I0327 16:41:34.404671    8800 kubeadm.go:162] "https://control-plane.minikube.internal:51212" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:51212 /etc/kubernetes/controller-manager.conf: Process exited with status 1
	stdout:
	
	stderr:
	I0327 16:41:34.404691    8800 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0327 16:41:34.407347    8800 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:51212 /etc/kubernetes/scheduler.conf
	I0327 16:41:34.410128    8800 kubeadm.go:162] "https://control-plane.minikube.internal:51212" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:51212 /etc/kubernetes/scheduler.conf: Process exited with status 1
	stdout:
	
	stderr:
	I0327 16:41:34.410149    8800 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
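
Each grep/rm pair above is the same stale-kubeconfig check applied to the four component kubeconfigs: look for the expected control-plane URL in the file, and if grep exits non-zero, delete the file so the following kubeadm init phase kubeconfig run regenerates it. A compact sketch of that loop, with the endpoint and paths copied from the log (the loop itself is hypothetical, not minikube's code):

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	endpoint := "https://control-plane.minikube.internal:51212"
	confs := []string{
		"/etc/kubernetes/admin.conf",
		"/etc/kubernetes/kubelet.conf",
		"/etc/kubernetes/controller-manager.conf",
		"/etc/kubernetes/scheduler.conf",
	}
	for _, conf := range confs {
		// grep exits non-zero when the endpoint is absent (or unreadable);
		// either way the file cannot be trusted, so remove it.
		if err := exec.Command("sudo", "grep", endpoint, conf).Run(); err != nil {
			fmt.Printf("%q may not be in %s - removing\n", endpoint, conf)
			_ = exec.Command("sudo", "rm", "-f", conf).Run()
		}
	}
}
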
	I0327 16:41:34.412685    8800 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0327 16:41:34.415348    8800 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0327 16:41:34.446822    8800 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0327 16:41:34.868505    8800 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0327 16:41:35.050430    8800 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0327 16:41:35.073399    8800 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0327 16:41:35.093049    8800 api_server.go:52] waiting for apiserver process to appear ...
	I0327 16:41:35.093126    8800 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0327 16:41:35.595426    8800 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0327 16:41:36.095192    8800 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0327 16:41:36.595161    8800 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0327 16:41:36.599322    8800 api_server.go:72] duration metric: took 1.506320541s to wait for apiserver process to appear ...
	I0327 16:41:36.599334    8800 api_server.go:88] waiting for apiserver healthz status ...
	I0327 16:41:36.599358    8800 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0327 16:41:41.601547    8800 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0327 16:41:41.601637    8800 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0327 16:41:46.602357    8800 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0327 16:41:46.602443    8800 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0327 16:41:51.603177    8800 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0327 16:41:51.603246    8800 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0327 16:41:56.603811    8800 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0327 16:41:56.603891    8800 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0327 16:42:01.605367    8800 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0327 16:42:01.605458    8800 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0327 16:42:06.607149    8800 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0327 16:42:06.607243    8800 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0327 16:42:11.609497    8800 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0327 16:42:11.609557    8800 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0327 16:42:16.610380    8800 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0327 16:42:16.610453    8800 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0327 16:42:21.612805    8800 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0327 16:42:21.612831    8800 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0327 16:42:26.615000    8800 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0327 16:42:26.615077    8800 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0327 16:42:31.617546    8800 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0327 16:42:31.617629    8800 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0327 16:42:36.620080    8800 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
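
The repeated stopped:/Checking pairs above are a plain poll-until-deadline loop: each GET to /healthz carries a roughly five-second client timeout (hence the recurring "Client.Timeout exceeded" errors), and on timeout the prober simply retries. A minimal sketch of such a wait, assuming only the address and per-request timeout read off the log; a real client would verify the apiserver against the cluster CA shown in the rest.Config earlier, where this sketch skips verification for brevity.

package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

func waitForHealthz(url string, deadline time.Duration) error {
	client := &http.Client{
		Timeout: 5 * time.Second, // matches the ~5s gaps between probes above
		Transport: &http.Transport{
			// Illustrative shortcut only; prefer pinning the cluster CA.
			TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
		},
	}
	stop := time.Now().Add(deadline)
	for time.Now().Before(stop) {
		resp, err := client.Get(url)
		if err == nil {
			body, _ := io.ReadAll(resp.Body)
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK && string(body) == "ok" {
				return nil // apiserver is healthy
			}
		}
		time.Sleep(500 * time.Millisecond) // brief backoff before retrying
	}
	return fmt.Errorf("apiserver never became healthy at %s", url)
}

func main() {
	if err := waitForHealthz("https://10.0.2.15:8443/healthz", 4*time.Minute); err != nil {
		fmt.Println(err)
	}
}
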
	I0327 16:42:36.620593    8800 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0327 16:42:36.664385    8800 logs.go:276] 2 containers: [0703d422a652 126b9553cb40]
	I0327 16:42:36.664517    8800 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0327 16:42:36.683310    8800 logs.go:276] 2 containers: [4f67bfb30b49 5429db75cd75]
	I0327 16:42:36.683420    8800 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0327 16:42:36.697615    8800 logs.go:276] 1 containers: [2ef14eacffaa]
	I0327 16:42:36.697693    8800 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0327 16:42:36.709783    8800 logs.go:276] 2 containers: [b68fec8d8767 c0c992e34b5d]
	I0327 16:42:36.709860    8800 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0327 16:42:36.720606    8800 logs.go:276] 1 containers: [2bfb39e97e1d]
	I0327 16:42:36.720675    8800 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0327 16:42:36.730921    8800 logs.go:276] 2 containers: [385805912c59 1a8c10b8da56]
	I0327 16:42:36.730982    8800 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0327 16:42:36.740777    8800 logs.go:276] 0 containers: []
	W0327 16:42:36.740788    8800 logs.go:278] No container was found matching "kindnet"
	I0327 16:42:36.740845    8800 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0327 16:42:36.751326    8800 logs.go:276] 2 containers: [b4f5d0ba182b 579fe35ef076]
	I0327 16:42:36.751354    8800 logs.go:123] Gathering logs for kube-apiserver [0703d422a652] ...
	I0327 16:42:36.751359    8800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0703d422a652"
	I0327 16:42:36.765314    8800 logs.go:123] Gathering logs for kube-scheduler [b68fec8d8767] ...
	I0327 16:42:36.765326    8800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b68fec8d8767"
	I0327 16:42:36.777024    8800 logs.go:123] Gathering logs for kubelet ...
	I0327 16:42:36.777036    8800 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0327 16:42:36.811866    8800 logs.go:123] Gathering logs for dmesg ...
	I0327 16:42:36.811874    8800 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0327 16:42:36.815936    8800 logs.go:123] Gathering logs for Docker ...
	I0327 16:42:36.815942    8800 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0327 16:42:36.841763    8800 logs.go:123] Gathering logs for kube-proxy [2bfb39e97e1d] ...
	I0327 16:42:36.841778    8800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2bfb39e97e1d"
	I0327 16:42:36.853826    8800 logs.go:123] Gathering logs for kube-controller-manager [385805912c59] ...
	I0327 16:42:36.853837    8800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 385805912c59"
	I0327 16:42:36.870614    8800 logs.go:123] Gathering logs for kube-scheduler [c0c992e34b5d] ...
	I0327 16:42:36.870624    8800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c0c992e34b5d"
	I0327 16:42:36.886313    8800 logs.go:123] Gathering logs for storage-provisioner [579fe35ef076] ...
	I0327 16:42:36.886322    8800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 579fe35ef076"
	I0327 16:42:36.897541    8800 logs.go:123] Gathering logs for container status ...
	I0327 16:42:36.897552    8800 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0327 16:42:36.911184    8800 logs.go:123] Gathering logs for describe nodes ...
	I0327 16:42:36.911196    8800 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0327 16:42:36.986295    8800 logs.go:123] Gathering logs for coredns [2ef14eacffaa] ...
	I0327 16:42:36.986308    8800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2ef14eacffaa"
	I0327 16:42:36.997791    8800 logs.go:123] Gathering logs for etcd [5429db75cd75] ...
	I0327 16:42:36.997805    8800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5429db75cd75"
	I0327 16:42:37.012742    8800 logs.go:123] Gathering logs for kube-controller-manager [1a8c10b8da56] ...
	I0327 16:42:37.012754    8800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1a8c10b8da56"
	I0327 16:42:37.028797    8800 logs.go:123] Gathering logs for storage-provisioner [b4f5d0ba182b] ...
	I0327 16:42:37.028809    8800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b4f5d0ba182b"
	I0327 16:42:37.040342    8800 logs.go:123] Gathering logs for kube-apiserver [126b9553cb40] ...
	I0327 16:42:37.040354    8800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 126b9553cb40"
	I0327 16:42:37.060506    8800 logs.go:123] Gathering logs for etcd [4f67bfb30b49] ...
	I0327 16:42:37.060516    8800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4f67bfb30b49"
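
Each diagnostics pass above follows the same shape: list containers matching the k8s_<component> name filter (including exited ones, hence docker ps -a), then tail the last 400 lines of each hit. A self-contained sketch of that gather step, reusing the component names and tail count from the log; the helper names are invented for illustration.

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// containerIDs lists all containers (running or exited) whose name
// matches the k8s_<component> prefix used by the kubelet's dockershim.
func containerIDs(component string) ([]string, error) {
	out, err := exec.Command("docker", "ps", "-a",
		"--filter", "name=k8s_"+component, "--format", "{{.ID}}").Output()
	if err != nil {
		return nil, err
	}
	return strings.Fields(string(out)), nil
}

func main() {
	components := []string{"kube-apiserver", "etcd", "coredns", "kube-scheduler",
		"kube-proxy", "kube-controller-manager", "storage-provisioner"}
	for _, c := range components {
		ids, err := containerIDs(c)
		if err != nil {
			fmt.Println("listing", c, "failed:", err)
			continue
		}
		for _, id := range ids {
			// docker logs writes to both streams; capture them together.
			logs, _ := exec.Command("docker", "logs", "--tail", "400", id).CombinedOutput()
			fmt.Printf("=== %s [%s] ===\n%s", c, id, logs)
		}
	}
}
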
	I0327 16:42:39.576166    8800 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0327 16:42:44.578881    8800 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0327 16:42:44.579323    8800 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0327 16:42:44.622215    8800 logs.go:276] 2 containers: [0703d422a652 126b9553cb40]
	I0327 16:42:44.622354    8800 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0327 16:42:44.643111    8800 logs.go:276] 2 containers: [4f67bfb30b49 5429db75cd75]
	I0327 16:42:44.643222    8800 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0327 16:42:44.658509    8800 logs.go:276] 1 containers: [2ef14eacffaa]
	I0327 16:42:44.658607    8800 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0327 16:42:44.670622    8800 logs.go:276] 2 containers: [b68fec8d8767 c0c992e34b5d]
	I0327 16:42:44.670693    8800 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0327 16:42:44.681345    8800 logs.go:276] 1 containers: [2bfb39e97e1d]
	I0327 16:42:44.681404    8800 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0327 16:42:44.691809    8800 logs.go:276] 2 containers: [385805912c59 1a8c10b8da56]
	I0327 16:42:44.691875    8800 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0327 16:42:44.701855    8800 logs.go:276] 0 containers: []
	W0327 16:42:44.701865    8800 logs.go:278] No container was found matching "kindnet"
	I0327 16:42:44.701913    8800 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0327 16:42:44.712258    8800 logs.go:276] 2 containers: [b4f5d0ba182b 579fe35ef076]
	I0327 16:42:44.712275    8800 logs.go:123] Gathering logs for kubelet ...
	I0327 16:42:44.712280    8800 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0327 16:42:44.746662    8800 logs.go:123] Gathering logs for etcd [5429db75cd75] ...
	I0327 16:42:44.746669    8800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5429db75cd75"
	I0327 16:42:44.761182    8800 logs.go:123] Gathering logs for kube-scheduler [b68fec8d8767] ...
	I0327 16:42:44.761192    8800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b68fec8d8767"
	I0327 16:42:44.772576    8800 logs.go:123] Gathering logs for kube-controller-manager [1a8c10b8da56] ...
	I0327 16:42:44.772585    8800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1a8c10b8da56"
	I0327 16:42:44.787273    8800 logs.go:123] Gathering logs for container status ...
	I0327 16:42:44.787282    8800 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0327 16:42:44.799335    8800 logs.go:123] Gathering logs for kube-apiserver [0703d422a652] ...
	I0327 16:42:44.799344    8800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0703d422a652"
	I0327 16:42:44.816661    8800 logs.go:123] Gathering logs for kube-apiserver [126b9553cb40] ...
	I0327 16:42:44.816672    8800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 126b9553cb40"
	I0327 16:42:44.835383    8800 logs.go:123] Gathering logs for etcd [4f67bfb30b49] ...
	I0327 16:42:44.835393    8800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4f67bfb30b49"
	I0327 16:42:44.848832    8800 logs.go:123] Gathering logs for coredns [2ef14eacffaa] ...
	I0327 16:42:44.848842    8800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2ef14eacffaa"
	I0327 16:42:44.860697    8800 logs.go:123] Gathering logs for dmesg ...
	I0327 16:42:44.860707    8800 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0327 16:42:44.865101    8800 logs.go:123] Gathering logs for describe nodes ...
	I0327 16:42:44.865110    8800 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0327 16:42:44.900273    8800 logs.go:123] Gathering logs for kube-scheduler [c0c992e34b5d] ...
	I0327 16:42:44.900282    8800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c0c992e34b5d"
	I0327 16:42:44.916089    8800 logs.go:123] Gathering logs for kube-proxy [2bfb39e97e1d] ...
	I0327 16:42:44.916101    8800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2bfb39e97e1d"
	I0327 16:42:44.927723    8800 logs.go:123] Gathering logs for kube-controller-manager [385805912c59] ...
	I0327 16:42:44.927736    8800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 385805912c59"
	I0327 16:42:44.944970    8800 logs.go:123] Gathering logs for storage-provisioner [b4f5d0ba182b] ...
	I0327 16:42:44.944980    8800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b4f5d0ba182b"
	I0327 16:42:44.956458    8800 logs.go:123] Gathering logs for storage-provisioner [579fe35ef076] ...
	I0327 16:42:44.956472    8800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 579fe35ef076"
	I0327 16:42:44.967542    8800 logs.go:123] Gathering logs for Docker ...
	I0327 16:42:44.967553    8800 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0327 16:42:47.495339    8800 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0327 16:42:52.496715    8800 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0327 16:42:52.497065    8800 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0327 16:42:52.533225    8800 logs.go:276] 2 containers: [0703d422a652 126b9553cb40]
	I0327 16:42:52.533354    8800 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0327 16:42:52.552783    8800 logs.go:276] 2 containers: [4f67bfb30b49 5429db75cd75]
	I0327 16:42:52.552877    8800 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0327 16:42:52.567537    8800 logs.go:276] 1 containers: [2ef14eacffaa]
	I0327 16:42:52.567624    8800 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0327 16:42:52.579020    8800 logs.go:276] 2 containers: [b68fec8d8767 c0c992e34b5d]
	I0327 16:42:52.579083    8800 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0327 16:42:52.589576    8800 logs.go:276] 1 containers: [2bfb39e97e1d]
	I0327 16:42:52.589646    8800 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0327 16:42:52.600355    8800 logs.go:276] 2 containers: [385805912c59 1a8c10b8da56]
	I0327 16:42:52.600423    8800 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0327 16:42:52.610783    8800 logs.go:276] 0 containers: []
	W0327 16:42:52.610795    8800 logs.go:278] No container was found matching "kindnet"
	I0327 16:42:52.610851    8800 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0327 16:42:52.621014    8800 logs.go:276] 2 containers: [b4f5d0ba182b 579fe35ef076]
	I0327 16:42:52.621039    8800 logs.go:123] Gathering logs for describe nodes ...
	I0327 16:42:52.621043    8800 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0327 16:42:52.656912    8800 logs.go:123] Gathering logs for kube-apiserver [0703d422a652] ...
	I0327 16:42:52.656927    8800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0703d422a652"
	I0327 16:42:52.671277    8800 logs.go:123] Gathering logs for coredns [2ef14eacffaa] ...
	I0327 16:42:52.671289    8800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2ef14eacffaa"
	I0327 16:42:52.682751    8800 logs.go:123] Gathering logs for kube-scheduler [c0c992e34b5d] ...
	I0327 16:42:52.682762    8800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c0c992e34b5d"
	I0327 16:42:52.698379    8800 logs.go:123] Gathering logs for dmesg ...
	I0327 16:42:52.698390    8800 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0327 16:42:52.702794    8800 logs.go:123] Gathering logs for kube-apiserver [126b9553cb40] ...
	I0327 16:42:52.702802    8800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 126b9553cb40"
	I0327 16:42:52.721166    8800 logs.go:123] Gathering logs for etcd [5429db75cd75] ...
	I0327 16:42:52.721177    8800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5429db75cd75"
	I0327 16:42:52.735648    8800 logs.go:123] Gathering logs for kube-controller-manager [1a8c10b8da56] ...
	I0327 16:42:52.735661    8800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1a8c10b8da56"
	I0327 16:42:52.749863    8800 logs.go:123] Gathering logs for storage-provisioner [b4f5d0ba182b] ...
	I0327 16:42:52.749872    8800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b4f5d0ba182b"
	I0327 16:42:52.761891    8800 logs.go:123] Gathering logs for etcd [4f67bfb30b49] ...
	I0327 16:42:52.761901    8800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4f67bfb30b49"
	I0327 16:42:52.779855    8800 logs.go:123] Gathering logs for kube-scheduler [b68fec8d8767] ...
	I0327 16:42:52.779865    8800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b68fec8d8767"
	I0327 16:42:52.791581    8800 logs.go:123] Gathering logs for kube-controller-manager [385805912c59] ...
	I0327 16:42:52.791593    8800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 385805912c59"
	I0327 16:42:52.808809    8800 logs.go:123] Gathering logs for kubelet ...
	I0327 16:42:52.808821    8800 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0327 16:42:52.845899    8800 logs.go:123] Gathering logs for kube-proxy [2bfb39e97e1d] ...
	I0327 16:42:52.845909    8800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2bfb39e97e1d"
	I0327 16:42:52.857503    8800 logs.go:123] Gathering logs for storage-provisioner [579fe35ef076] ...
	I0327 16:42:52.857517    8800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 579fe35ef076"
	I0327 16:42:52.868587    8800 logs.go:123] Gathering logs for Docker ...
	I0327 16:42:52.868597    8800 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0327 16:42:52.893251    8800 logs.go:123] Gathering logs for container status ...
	I0327 16:42:52.893258    8800 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0327 16:42:55.406635    8800 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0327 16:43:00.409305    8800 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0327 16:43:00.409735    8800 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0327 16:43:00.459560    8800 logs.go:276] 2 containers: [0703d422a652 126b9553cb40]
	I0327 16:43:00.459665    8800 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0327 16:43:00.483640    8800 logs.go:276] 2 containers: [4f67bfb30b49 5429db75cd75]
	I0327 16:43:00.483706    8800 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0327 16:43:00.503056    8800 logs.go:276] 1 containers: [2ef14eacffaa]
	I0327 16:43:00.503130    8800 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0327 16:43:00.519390    8800 logs.go:276] 2 containers: [b68fec8d8767 c0c992e34b5d]
	I0327 16:43:00.519470    8800 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0327 16:43:00.533415    8800 logs.go:276] 1 containers: [2bfb39e97e1d]
	I0327 16:43:00.533492    8800 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0327 16:43:00.544516    8800 logs.go:276] 2 containers: [385805912c59 1a8c10b8da56]
	I0327 16:43:00.544586    8800 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0327 16:43:00.555047    8800 logs.go:276] 0 containers: []
	W0327 16:43:00.555059    8800 logs.go:278] No container was found matching "kindnet"
	I0327 16:43:00.555120    8800 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0327 16:43:00.567303    8800 logs.go:276] 2 containers: [b4f5d0ba182b 579fe35ef076]
	I0327 16:43:00.567324    8800 logs.go:123] Gathering logs for describe nodes ...
	I0327 16:43:00.567329    8800 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0327 16:43:00.604150    8800 logs.go:123] Gathering logs for kube-scheduler [b68fec8d8767] ...
	I0327 16:43:00.604162    8800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b68fec8d8767"
	I0327 16:43:00.616906    8800 logs.go:123] Gathering logs for kube-proxy [2bfb39e97e1d] ...
	I0327 16:43:00.616918    8800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2bfb39e97e1d"
	I0327 16:43:00.628208    8800 logs.go:123] Gathering logs for kube-controller-manager [385805912c59] ...
	I0327 16:43:00.628218    8800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 385805912c59"
	I0327 16:43:00.645884    8800 logs.go:123] Gathering logs for storage-provisioner [579fe35ef076] ...
	I0327 16:43:00.645896    8800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 579fe35ef076"
	I0327 16:43:00.657768    8800 logs.go:123] Gathering logs for dmesg ...
	I0327 16:43:00.657778    8800 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0327 16:43:00.662447    8800 logs.go:123] Gathering logs for kube-apiserver [0703d422a652] ...
	I0327 16:43:00.662453    8800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0703d422a652"
	I0327 16:43:00.679979    8800 logs.go:123] Gathering logs for kube-apiserver [126b9553cb40] ...
	I0327 16:43:00.679990    8800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 126b9553cb40"
	I0327 16:43:00.700882    8800 logs.go:123] Gathering logs for kube-controller-manager [1a8c10b8da56] ...
	I0327 16:43:00.700891    8800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1a8c10b8da56"
	I0327 16:43:00.715664    8800 logs.go:123] Gathering logs for storage-provisioner [b4f5d0ba182b] ...
	I0327 16:43:00.715672    8800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b4f5d0ba182b"
	I0327 16:43:00.726709    8800 logs.go:123] Gathering logs for kubelet ...
	I0327 16:43:00.726719    8800 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0327 16:43:00.761398    8800 logs.go:123] Gathering logs for etcd [4f67bfb30b49] ...
	I0327 16:43:00.761405    8800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4f67bfb30b49"
	I0327 16:43:00.775073    8800 logs.go:123] Gathering logs for coredns [2ef14eacffaa] ...
	I0327 16:43:00.775082    8800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2ef14eacffaa"
	I0327 16:43:00.786622    8800 logs.go:123] Gathering logs for Docker ...
	I0327 16:43:00.786634    8800 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0327 16:43:00.810671    8800 logs.go:123] Gathering logs for etcd [5429db75cd75] ...
	I0327 16:43:00.810677    8800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5429db75cd75"
	I0327 16:43:00.824984    8800 logs.go:123] Gathering logs for kube-scheduler [c0c992e34b5d] ...
	I0327 16:43:00.824993    8800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c0c992e34b5d"
	I0327 16:43:00.840305    8800 logs.go:123] Gathering logs for container status ...
	I0327 16:43:00.840317    8800 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0327 16:43:03.354811    8800 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0327 16:43:08.357334    8800 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0327 16:43:08.357744    8800 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0327 16:43:08.389500    8800 logs.go:276] 2 containers: [0703d422a652 126b9553cb40]
	I0327 16:43:08.389619    8800 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0327 16:43:08.409623    8800 logs.go:276] 2 containers: [4f67bfb30b49 5429db75cd75]
	I0327 16:43:08.409724    8800 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0327 16:43:08.423275    8800 logs.go:276] 1 containers: [2ef14eacffaa]
	I0327 16:43:08.423346    8800 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0327 16:43:08.435289    8800 logs.go:276] 2 containers: [b68fec8d8767 c0c992e34b5d]
	I0327 16:43:08.435355    8800 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0327 16:43:08.445307    8800 logs.go:276] 1 containers: [2bfb39e97e1d]
	I0327 16:43:08.445378    8800 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0327 16:43:08.456027    8800 logs.go:276] 2 containers: [385805912c59 1a8c10b8da56]
	I0327 16:43:08.456089    8800 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0327 16:43:08.466187    8800 logs.go:276] 0 containers: []
	W0327 16:43:08.466196    8800 logs.go:278] No container was found matching "kindnet"
	I0327 16:43:08.466246    8800 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0327 16:43:08.476021    8800 logs.go:276] 2 containers: [b4f5d0ba182b 579fe35ef076]
	I0327 16:43:08.476039    8800 logs.go:123] Gathering logs for describe nodes ...
	I0327 16:43:08.476044    8800 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0327 16:43:08.512868    8800 logs.go:123] Gathering logs for kube-scheduler [c0c992e34b5d] ...
	I0327 16:43:08.512881    8800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c0c992e34b5d"
	I0327 16:43:08.528590    8800 logs.go:123] Gathering logs for storage-provisioner [b4f5d0ba182b] ...
	I0327 16:43:08.528601    8800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b4f5d0ba182b"
	I0327 16:43:08.540173    8800 logs.go:123] Gathering logs for kube-apiserver [126b9553cb40] ...
	I0327 16:43:08.540187    8800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 126b9553cb40"
	I0327 16:43:08.559130    8800 logs.go:123] Gathering logs for etcd [4f67bfb30b49] ...
	I0327 16:43:08.559142    8800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4f67bfb30b49"
	I0327 16:43:08.572311    8800 logs.go:123] Gathering logs for kube-scheduler [b68fec8d8767] ...
	I0327 16:43:08.572321    8800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b68fec8d8767"
	I0327 16:43:08.583806    8800 logs.go:123] Gathering logs for kube-controller-manager [1a8c10b8da56] ...
	I0327 16:43:08.583817    8800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1a8c10b8da56"
	I0327 16:43:08.598552    8800 logs.go:123] Gathering logs for storage-provisioner [579fe35ef076] ...
	I0327 16:43:08.598564    8800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 579fe35ef076"
	I0327 16:43:08.609644    8800 logs.go:123] Gathering logs for container status ...
	I0327 16:43:08.609656    8800 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0327 16:43:08.626661    8800 logs.go:123] Gathering logs for kubelet ...
	I0327 16:43:08.626672    8800 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0327 16:43:08.663043    8800 logs.go:123] Gathering logs for dmesg ...
	I0327 16:43:08.663054    8800 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0327 16:43:08.667263    8800 logs.go:123] Gathering logs for coredns [2ef14eacffaa] ...
	I0327 16:43:08.667269    8800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2ef14eacffaa"
	I0327 16:43:08.678020    8800 logs.go:123] Gathering logs for kube-proxy [2bfb39e97e1d] ...
	I0327 16:43:08.678031    8800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2bfb39e97e1d"
	I0327 16:43:08.689778    8800 logs.go:123] Gathering logs for kube-controller-manager [385805912c59] ...
	I0327 16:43:08.689789    8800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 385805912c59"
	I0327 16:43:08.706848    8800 logs.go:123] Gathering logs for kube-apiserver [0703d422a652] ...
	I0327 16:43:08.706859    8800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0703d422a652"
	I0327 16:43:08.720897    8800 logs.go:123] Gathering logs for etcd [5429db75cd75] ...
	I0327 16:43:08.720909    8800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5429db75cd75"
	I0327 16:43:08.744829    8800 logs.go:123] Gathering logs for Docker ...
	I0327 16:43:08.744841    8800 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0327 16:43:11.273073    8800 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0327 16:43:16.275700    8800 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0327 16:43:16.276189    8800 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0327 16:43:16.314458    8800 logs.go:276] 2 containers: [0703d422a652 126b9553cb40]
	I0327 16:43:16.314589    8800 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0327 16:43:16.335398    8800 logs.go:276] 2 containers: [4f67bfb30b49 5429db75cd75]
	I0327 16:43:16.335508    8800 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0327 16:43:16.351295    8800 logs.go:276] 1 containers: [2ef14eacffaa]
	I0327 16:43:16.351373    8800 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0327 16:43:16.364315    8800 logs.go:276] 2 containers: [b68fec8d8767 c0c992e34b5d]
	I0327 16:43:16.364384    8800 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0327 16:43:16.374752    8800 logs.go:276] 1 containers: [2bfb39e97e1d]
	I0327 16:43:16.374813    8800 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0327 16:43:16.384834    8800 logs.go:276] 2 containers: [385805912c59 1a8c10b8da56]
	I0327 16:43:16.384902    8800 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0327 16:43:16.395008    8800 logs.go:276] 0 containers: []
	W0327 16:43:16.395017    8800 logs.go:278] No container was found matching "kindnet"
	I0327 16:43:16.395069    8800 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0327 16:43:16.411721    8800 logs.go:276] 2 containers: [b4f5d0ba182b 579fe35ef076]
	I0327 16:43:16.411741    8800 logs.go:123] Gathering logs for describe nodes ...
	I0327 16:43:16.411747    8800 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0327 16:43:16.448722    8800 logs.go:123] Gathering logs for kube-apiserver [0703d422a652] ...
	I0327 16:43:16.448735    8800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0703d422a652"
	I0327 16:43:16.462581    8800 logs.go:123] Gathering logs for kube-scheduler [c0c992e34b5d] ...
	I0327 16:43:16.462596    8800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c0c992e34b5d"
	I0327 16:43:16.477680    8800 logs.go:123] Gathering logs for Docker ...
	I0327 16:43:16.477689    8800 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0327 16:43:16.503039    8800 logs.go:123] Gathering logs for kubelet ...
	I0327 16:43:16.503045    8800 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0327 16:43:16.538726    8800 logs.go:123] Gathering logs for etcd [4f67bfb30b49] ...
	I0327 16:43:16.538735    8800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4f67bfb30b49"
	I0327 16:43:16.553314    8800 logs.go:123] Gathering logs for etcd [5429db75cd75] ...
	I0327 16:43:16.553323    8800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5429db75cd75"
	I0327 16:43:16.568167    8800 logs.go:123] Gathering logs for coredns [2ef14eacffaa] ...
	I0327 16:43:16.568178    8800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2ef14eacffaa"
	I0327 16:43:16.579341    8800 logs.go:123] Gathering logs for container status ...
	I0327 16:43:16.579351    8800 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0327 16:43:16.591306    8800 logs.go:123] Gathering logs for dmesg ...
	I0327 16:43:16.591315    8800 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0327 16:43:16.595705    8800 logs.go:123] Gathering logs for kube-scheduler [b68fec8d8767] ...
	I0327 16:43:16.595713    8800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b68fec8d8767"
	I0327 16:43:16.607315    8800 logs.go:123] Gathering logs for kube-proxy [2bfb39e97e1d] ...
	I0327 16:43:16.607324    8800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2bfb39e97e1d"
	I0327 16:43:16.618876    8800 logs.go:123] Gathering logs for storage-provisioner [b4f5d0ba182b] ...
	I0327 16:43:16.618885    8800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b4f5d0ba182b"
	I0327 16:43:16.631000    8800 logs.go:123] Gathering logs for storage-provisioner [579fe35ef076] ...
	I0327 16:43:16.631013    8800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 579fe35ef076"
	I0327 16:43:16.643540    8800 logs.go:123] Gathering logs for kube-apiserver [126b9553cb40] ...
	I0327 16:43:16.643550    8800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 126b9553cb40"
	I0327 16:43:16.662878    8800 logs.go:123] Gathering logs for kube-controller-manager [385805912c59] ...
	I0327 16:43:16.662891    8800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 385805912c59"
	I0327 16:43:16.681949    8800 logs.go:123] Gathering logs for kube-controller-manager [1a8c10b8da56] ...
	I0327 16:43:16.681963    8800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1a8c10b8da56"
	I0327 16:43:19.201947    8800 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0327 16:43:24.204265    8800 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0327 16:43:24.204738    8800 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0327 16:43:24.245395    8800 logs.go:276] 2 containers: [0703d422a652 126b9553cb40]
	I0327 16:43:24.245594    8800 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0327 16:43:24.266868    8800 logs.go:276] 2 containers: [4f67bfb30b49 5429db75cd75]
	I0327 16:43:24.266965    8800 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0327 16:43:24.281409    8800 logs.go:276] 1 containers: [2ef14eacffaa]
	I0327 16:43:24.281483    8800 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0327 16:43:24.295650    8800 logs.go:276] 2 containers: [b68fec8d8767 c0c992e34b5d]
	I0327 16:43:24.295718    8800 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0327 16:43:24.305939    8800 logs.go:276] 1 containers: [2bfb39e97e1d]
	I0327 16:43:24.306008    8800 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0327 16:43:24.316709    8800 logs.go:276] 2 containers: [385805912c59 1a8c10b8da56]
	I0327 16:43:24.316769    8800 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0327 16:43:24.327253    8800 logs.go:276] 0 containers: []
	W0327 16:43:24.327262    8800 logs.go:278] No container was found matching "kindnet"
	I0327 16:43:24.327313    8800 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0327 16:43:24.338164    8800 logs.go:276] 2 containers: [b4f5d0ba182b 579fe35ef076]
	I0327 16:43:24.338182    8800 logs.go:123] Gathering logs for kube-apiserver [0703d422a652] ...
	I0327 16:43:24.338187    8800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0703d422a652"
	I0327 16:43:24.352601    8800 logs.go:123] Gathering logs for etcd [4f67bfb30b49] ...
	I0327 16:43:24.352612    8800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4f67bfb30b49"
	I0327 16:43:24.366657    8800 logs.go:123] Gathering logs for kube-scheduler [b68fec8d8767] ...
	I0327 16:43:24.366670    8800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b68fec8d8767"
	I0327 16:43:24.383008    8800 logs.go:123] Gathering logs for kube-controller-manager [385805912c59] ...
	I0327 16:43:24.383021    8800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 385805912c59"
	I0327 16:43:24.401550    8800 logs.go:123] Gathering logs for kube-apiserver [126b9553cb40] ...
	I0327 16:43:24.401563    8800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 126b9553cb40"
	I0327 16:43:24.420987    8800 logs.go:123] Gathering logs for etcd [5429db75cd75] ...
	I0327 16:43:24.421014    8800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5429db75cd75"
	I0327 16:43:24.435338    8800 logs.go:123] Gathering logs for kube-scheduler [c0c992e34b5d] ...
	I0327 16:43:24.435347    8800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c0c992e34b5d"
	I0327 16:43:24.450878    8800 logs.go:123] Gathering logs for Docker ...
	I0327 16:43:24.450890    8800 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0327 16:43:24.474382    8800 logs.go:123] Gathering logs for kubelet ...
	I0327 16:43:24.474388    8800 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0327 16:43:24.509456    8800 logs.go:123] Gathering logs for dmesg ...
	I0327 16:43:24.509464    8800 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0327 16:43:24.513524    8800 logs.go:123] Gathering logs for describe nodes ...
	I0327 16:43:24.513533    8800 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0327 16:43:24.548933    8800 logs.go:123] Gathering logs for storage-provisioner [b4f5d0ba182b] ...
	I0327 16:43:24.548948    8800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b4f5d0ba182b"
	I0327 16:43:24.560387    8800 logs.go:123] Gathering logs for coredns [2ef14eacffaa] ...
	I0327 16:43:24.560397    8800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2ef14eacffaa"
	I0327 16:43:24.571316    8800 logs.go:123] Gathering logs for kube-proxy [2bfb39e97e1d] ...
	I0327 16:43:24.571328    8800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2bfb39e97e1d"
	I0327 16:43:24.582684    8800 logs.go:123] Gathering logs for kube-controller-manager [1a8c10b8da56] ...
	I0327 16:43:24.582694    8800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1a8c10b8da56"
	I0327 16:43:24.597750    8800 logs.go:123] Gathering logs for storage-provisioner [579fe35ef076] ...
	I0327 16:43:24.597760    8800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 579fe35ef076"
	I0327 16:43:24.609509    8800 logs.go:123] Gathering logs for container status ...
	I0327 16:43:24.609523    8800 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0327 16:43:27.124068    8800 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0327 16:43:32.126493    8800 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0327 16:43:32.126641    8800 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0327 16:43:32.138142    8800 logs.go:276] 2 containers: [0703d422a652 126b9553cb40]
	I0327 16:43:32.138219    8800 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0327 16:43:32.148576    8800 logs.go:276] 2 containers: [4f67bfb30b49 5429db75cd75]
	I0327 16:43:32.148650    8800 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0327 16:43:32.160778    8800 logs.go:276] 1 containers: [2ef14eacffaa]
	I0327 16:43:32.160855    8800 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0327 16:43:32.180892    8800 logs.go:276] 2 containers: [b68fec8d8767 c0c992e34b5d]
	I0327 16:43:32.180957    8800 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0327 16:43:32.192067    8800 logs.go:276] 1 containers: [2bfb39e97e1d]
	I0327 16:43:32.192138    8800 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0327 16:43:32.203509    8800 logs.go:276] 2 containers: [385805912c59 1a8c10b8da56]
	I0327 16:43:32.203582    8800 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0327 16:43:32.213315    8800 logs.go:276] 0 containers: []
	W0327 16:43:32.213326    8800 logs.go:278] No container was found matching "kindnet"
	I0327 16:43:32.213385    8800 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0327 16:43:32.224244    8800 logs.go:276] 2 containers: [b4f5d0ba182b 579fe35ef076]
	I0327 16:43:32.224262    8800 logs.go:123] Gathering logs for etcd [5429db75cd75] ...
	I0327 16:43:32.224267    8800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5429db75cd75"
	I0327 16:43:32.239041    8800 logs.go:123] Gathering logs for kube-scheduler [c0c992e34b5d] ...
	I0327 16:43:32.239051    8800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c0c992e34b5d"
	I0327 16:43:32.254715    8800 logs.go:123] Gathering logs for Docker ...
	I0327 16:43:32.254726    8800 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0327 16:43:32.279864    8800 logs.go:123] Gathering logs for kubelet ...
	I0327 16:43:32.279870    8800 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0327 16:43:32.316492    8800 logs.go:123] Gathering logs for etcd [4f67bfb30b49] ...
	I0327 16:43:32.316500    8800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4f67bfb30b49"
	I0327 16:43:32.334330    8800 logs.go:123] Gathering logs for kube-scheduler [b68fec8d8767] ...
	I0327 16:43:32.334341    8800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b68fec8d8767"
	I0327 16:43:32.346677    8800 logs.go:123] Gathering logs for dmesg ...
	I0327 16:43:32.346687    8800 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0327 16:43:32.351022    8800 logs.go:123] Gathering logs for describe nodes ...
	I0327 16:43:32.351027    8800 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0327 16:43:32.387042    8800 logs.go:123] Gathering logs for coredns [2ef14eacffaa] ...
	I0327 16:43:32.387054    8800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2ef14eacffaa"
	I0327 16:43:32.399045    8800 logs.go:123] Gathering logs for kube-controller-manager [1a8c10b8da56] ...
	I0327 16:43:32.399057    8800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1a8c10b8da56"
	I0327 16:43:32.413882    8800 logs.go:123] Gathering logs for container status ...
	I0327 16:43:32.413893    8800 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0327 16:43:32.426449    8800 logs.go:123] Gathering logs for storage-provisioner [579fe35ef076] ...
	I0327 16:43:32.426459    8800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 579fe35ef076"
	I0327 16:43:32.438176    8800 logs.go:123] Gathering logs for kube-apiserver [0703d422a652] ...
	I0327 16:43:32.438188    8800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0703d422a652"
	I0327 16:43:32.451982    8800 logs.go:123] Gathering logs for kube-apiserver [126b9553cb40] ...
	I0327 16:43:32.451991    8800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 126b9553cb40"
	I0327 16:43:32.471371    8800 logs.go:123] Gathering logs for kube-proxy [2bfb39e97e1d] ...
	I0327 16:43:32.471381    8800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2bfb39e97e1d"
	I0327 16:43:32.483151    8800 logs.go:123] Gathering logs for kube-controller-manager [385805912c59] ...
	I0327 16:43:32.483161    8800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 385805912c59"
	I0327 16:43:32.501412    8800 logs.go:123] Gathering logs for storage-provisioner [b4f5d0ba182b] ...
	I0327 16:43:32.501421    8800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b4f5d0ba182b"
	I0327 16:43:35.015339    8800 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0327 16:43:40.017524    8800 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0327 16:43:40.018038    8800 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0327 16:43:40.056436    8800 logs.go:276] 2 containers: [0703d422a652 126b9553cb40]
	I0327 16:43:40.056580    8800 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0327 16:43:40.078502    8800 logs.go:276] 2 containers: [4f67bfb30b49 5429db75cd75]
	I0327 16:43:40.078620    8800 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0327 16:43:40.093913    8800 logs.go:276] 1 containers: [2ef14eacffaa]
	I0327 16:43:40.093992    8800 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0327 16:43:40.106344    8800 logs.go:276] 2 containers: [b68fec8d8767 c0c992e34b5d]
	I0327 16:43:40.106416    8800 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0327 16:43:40.117189    8800 logs.go:276] 1 containers: [2bfb39e97e1d]
	I0327 16:43:40.117246    8800 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0327 16:43:40.129449    8800 logs.go:276] 2 containers: [385805912c59 1a8c10b8da56]
	I0327 16:43:40.129510    8800 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0327 16:43:40.139810    8800 logs.go:276] 0 containers: []
	W0327 16:43:40.139823    8800 logs.go:278] No container was found matching "kindnet"
	I0327 16:43:40.139882    8800 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0327 16:43:40.150273    8800 logs.go:276] 2 containers: [b4f5d0ba182b 579fe35ef076]
	I0327 16:43:40.150289    8800 logs.go:123] Gathering logs for Docker ...
	I0327 16:43:40.150295    8800 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0327 16:43:40.173899    8800 logs.go:123] Gathering logs for kubelet ...
	I0327 16:43:40.173907    8800 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0327 16:43:40.208692    8800 logs.go:123] Gathering logs for dmesg ...
	I0327 16:43:40.208700    8800 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0327 16:43:40.213645    8800 logs.go:123] Gathering logs for etcd [5429db75cd75] ...
	I0327 16:43:40.213654    8800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5429db75cd75"
	I0327 16:43:40.228196    8800 logs.go:123] Gathering logs for kube-scheduler [b68fec8d8767] ...
	I0327 16:43:40.228209    8800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b68fec8d8767"
	I0327 16:43:40.241432    8800 logs.go:123] Gathering logs for kube-controller-manager [385805912c59] ...
	I0327 16:43:40.241444    8800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 385805912c59"
	I0327 16:43:40.259424    8800 logs.go:123] Gathering logs for kube-controller-manager [1a8c10b8da56] ...
	I0327 16:43:40.259434    8800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1a8c10b8da56"
	I0327 16:43:40.277240    8800 logs.go:123] Gathering logs for describe nodes ...
	I0327 16:43:40.277251    8800 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0327 16:43:40.333407    8800 logs.go:123] Gathering logs for kube-apiserver [0703d422a652] ...
	I0327 16:43:40.333418    8800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0703d422a652"
	I0327 16:43:40.347528    8800 logs.go:123] Gathering logs for kube-apiserver [126b9553cb40] ...
	I0327 16:43:40.347537    8800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 126b9553cb40"
	I0327 16:43:40.366914    8800 logs.go:123] Gathering logs for coredns [2ef14eacffaa] ...
	I0327 16:43:40.366925    8800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2ef14eacffaa"
	I0327 16:43:40.377874    8800 logs.go:123] Gathering logs for kube-scheduler [c0c992e34b5d] ...
	I0327 16:43:40.377884    8800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c0c992e34b5d"
	I0327 16:43:40.395591    8800 logs.go:123] Gathering logs for kube-proxy [2bfb39e97e1d] ...
	I0327 16:43:40.395604    8800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2bfb39e97e1d"
	I0327 16:43:40.406817    8800 logs.go:123] Gathering logs for etcd [4f67bfb30b49] ...
	I0327 16:43:40.406830    8800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4f67bfb30b49"
	I0327 16:43:40.420570    8800 logs.go:123] Gathering logs for storage-provisioner [b4f5d0ba182b] ...
	I0327 16:43:40.420580    8800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b4f5d0ba182b"
	I0327 16:43:40.431838    8800 logs.go:123] Gathering logs for storage-provisioner [579fe35ef076] ...
	I0327 16:43:40.431848    8800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 579fe35ef076"
	I0327 16:43:40.442932    8800 logs.go:123] Gathering logs for container status ...
	I0327 16:43:40.442941    8800 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
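Every retry cycle in this section runs the same diagnostic sweep that just finished above: discover container IDs per control-plane component with a k8s_<name> filter (two IDs where a container has restarted, zero for kindnet, which is absent on this cluster), then tail the last 400 log lines of each. A self-contained sketch of that sweep, with the component list copied from the Run: lines (hypothetical code, not minikube's logs.go):

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	components := []string{
		"kube-apiserver", "etcd", "coredns", "kube-scheduler",
		"kube-proxy", "kube-controller-manager", "kindnet", "storage-provisioner",
	}
	for _, c := range components {
		// matches: docker ps -a --filter=name=k8s_<component> --format={{.ID}}
		out, err := exec.Command("docker", "ps", "-a",
			"--filter=name=k8s_"+c, "--format={{.ID}}").Output()
		if err != nil {
			fmt.Printf("discovering %s containers failed: %v\n", c, err)
			continue
		}
		ids := strings.Fields(string(out)) // one container ID per line
		if len(ids) == 0 {
			fmt.Printf("No container was found matching %q\n", c)
			continue
		}
		for _, id := range ids {
			// matches: docker logs --tail 400 <id>
			logs, _ := exec.Command("docker", "logs", "--tail", "400", id).CombinedOutput()
			fmt.Printf("=== %s [%s] ===\n%s", c, id, logs)
		}
	}
}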
	I0327 16:43:42.954938    8800 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0327 16:43:47.956923    8800 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0327 16:43:47.957176    8800 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0327 16:43:47.979862    8800 logs.go:276] 2 containers: [0703d422a652 126b9553cb40]
	I0327 16:43:47.979985    8800 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0327 16:43:47.995646    8800 logs.go:276] 2 containers: [4f67bfb30b49 5429db75cd75]
	I0327 16:43:47.995734    8800 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0327 16:43:48.008302    8800 logs.go:276] 1 containers: [2ef14eacffaa]
	I0327 16:43:48.008377    8800 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0327 16:43:48.019684    8800 logs.go:276] 2 containers: [b68fec8d8767 c0c992e34b5d]
	I0327 16:43:48.019755    8800 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0327 16:43:48.029814    8800 logs.go:276] 1 containers: [2bfb39e97e1d]
	I0327 16:43:48.029885    8800 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0327 16:43:48.041300    8800 logs.go:276] 2 containers: [385805912c59 1a8c10b8da56]
	I0327 16:43:48.041369    8800 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0327 16:43:48.053255    8800 logs.go:276] 0 containers: []
	W0327 16:43:48.053267    8800 logs.go:278] No container was found matching "kindnet"
	I0327 16:43:48.053322    8800 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0327 16:43:48.063898    8800 logs.go:276] 2 containers: [b4f5d0ba182b 579fe35ef076]
	I0327 16:43:48.063917    8800 logs.go:123] Gathering logs for kube-apiserver [0703d422a652] ...
	I0327 16:43:48.063923    8800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0703d422a652"
	I0327 16:43:48.077909    8800 logs.go:123] Gathering logs for kube-apiserver [126b9553cb40] ...
	I0327 16:43:48.077918    8800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 126b9553cb40"
	I0327 16:43:48.097162    8800 logs.go:123] Gathering logs for kube-scheduler [b68fec8d8767] ...
	I0327 16:43:48.097177    8800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b68fec8d8767"
	I0327 16:43:48.108757    8800 logs.go:123] Gathering logs for kube-controller-manager [385805912c59] ...
	I0327 16:43:48.108771    8800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 385805912c59"
	I0327 16:43:48.131229    8800 logs.go:123] Gathering logs for kube-controller-manager [1a8c10b8da56] ...
	I0327 16:43:48.131239    8800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1a8c10b8da56"
	I0327 16:43:48.146685    8800 logs.go:123] Gathering logs for storage-provisioner [b4f5d0ba182b] ...
	I0327 16:43:48.146698    8800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b4f5d0ba182b"
	I0327 16:43:48.158703    8800 logs.go:123] Gathering logs for describe nodes ...
	I0327 16:43:48.158712    8800 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0327 16:43:48.195524    8800 logs.go:123] Gathering logs for etcd [4f67bfb30b49] ...
	I0327 16:43:48.195534    8800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4f67bfb30b49"
	I0327 16:43:48.209152    8800 logs.go:123] Gathering logs for coredns [2ef14eacffaa] ...
	I0327 16:43:48.209162    8800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2ef14eacffaa"
	I0327 16:43:48.220386    8800 logs.go:123] Gathering logs for storage-provisioner [579fe35ef076] ...
	I0327 16:43:48.220396    8800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 579fe35ef076"
	I0327 16:43:48.231757    8800 logs.go:123] Gathering logs for container status ...
	I0327 16:43:48.231768    8800 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0327 16:43:48.243994    8800 logs.go:123] Gathering logs for dmesg ...
	I0327 16:43:48.244004    8800 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0327 16:43:48.248453    8800 logs.go:123] Gathering logs for etcd [5429db75cd75] ...
	I0327 16:43:48.248461    8800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5429db75cd75"
	I0327 16:43:48.263590    8800 logs.go:123] Gathering logs for kube-scheduler [c0c992e34b5d] ...
	I0327 16:43:48.263605    8800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c0c992e34b5d"
	I0327 16:43:48.280019    8800 logs.go:123] Gathering logs for kube-proxy [2bfb39e97e1d] ...
	I0327 16:43:48.280031    8800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2bfb39e97e1d"
	I0327 16:43:48.296646    8800 logs.go:123] Gathering logs for kubelet ...
	I0327 16:43:48.296659    8800 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0327 16:43:48.335592    8800 logs.go:123] Gathering logs for Docker ...
	I0327 16:43:48.335607    8800 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0327 16:43:50.863895    8800 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0327 16:43:55.864830    8800 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0327 16:43:55.865033    8800 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0327 16:43:55.884501    8800 logs.go:276] 2 containers: [0703d422a652 126b9553cb40]
	I0327 16:43:55.884599    8800 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0327 16:43:55.899127    8800 logs.go:276] 2 containers: [4f67bfb30b49 5429db75cd75]
	I0327 16:43:55.899203    8800 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0327 16:43:55.910901    8800 logs.go:276] 1 containers: [2ef14eacffaa]
	I0327 16:43:55.910968    8800 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0327 16:43:55.922156    8800 logs.go:276] 2 containers: [b68fec8d8767 c0c992e34b5d]
	I0327 16:43:55.922231    8800 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0327 16:43:55.933059    8800 logs.go:276] 1 containers: [2bfb39e97e1d]
	I0327 16:43:55.933132    8800 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0327 16:43:55.943972    8800 logs.go:276] 2 containers: [385805912c59 1a8c10b8da56]
	I0327 16:43:55.944034    8800 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0327 16:43:55.958604    8800 logs.go:276] 0 containers: []
	W0327 16:43:55.958618    8800 logs.go:278] No container was found matching "kindnet"
	I0327 16:43:55.958677    8800 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0327 16:43:55.969233    8800 logs.go:276] 2 containers: [b4f5d0ba182b 579fe35ef076]
	I0327 16:43:55.969249    8800 logs.go:123] Gathering logs for storage-provisioner [579fe35ef076] ...
	I0327 16:43:55.969253    8800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 579fe35ef076"
	I0327 16:43:55.982916    8800 logs.go:123] Gathering logs for container status ...
	I0327 16:43:55.982926    8800 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0327 16:43:55.995084    8800 logs.go:123] Gathering logs for kube-scheduler [c0c992e34b5d] ...
	I0327 16:43:55.995098    8800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c0c992e34b5d"
	I0327 16:43:56.010177    8800 logs.go:123] Gathering logs for coredns [2ef14eacffaa] ...
	I0327 16:43:56.010189    8800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2ef14eacffaa"
	I0327 16:43:56.025549    8800 logs.go:123] Gathering logs for etcd [4f67bfb30b49] ...
	I0327 16:43:56.025561    8800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4f67bfb30b49"
	I0327 16:43:56.039472    8800 logs.go:123] Gathering logs for etcd [5429db75cd75] ...
	I0327 16:43:56.039485    8800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5429db75cd75"
	I0327 16:43:56.054262    8800 logs.go:123] Gathering logs for kube-controller-manager [385805912c59] ...
	I0327 16:43:56.054275    8800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 385805912c59"
	I0327 16:43:56.073982    8800 logs.go:123] Gathering logs for kube-controller-manager [1a8c10b8da56] ...
	I0327 16:43:56.073992    8800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1a8c10b8da56"
	I0327 16:43:56.089280    8800 logs.go:123] Gathering logs for kubelet ...
	I0327 16:43:56.089289    8800 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0327 16:43:56.125148    8800 logs.go:123] Gathering logs for describe nodes ...
	I0327 16:43:56.125158    8800 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0327 16:43:56.160057    8800 logs.go:123] Gathering logs for kube-apiserver [0703d422a652] ...
	I0327 16:43:56.160069    8800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0703d422a652"
	I0327 16:43:56.173574    8800 logs.go:123] Gathering logs for kube-apiserver [126b9553cb40] ...
	I0327 16:43:56.173583    8800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 126b9553cb40"
	I0327 16:43:56.198542    8800 logs.go:123] Gathering logs for kube-scheduler [b68fec8d8767] ...
	I0327 16:43:56.198557    8800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b68fec8d8767"
	I0327 16:43:56.210057    8800 logs.go:123] Gathering logs for kube-proxy [2bfb39e97e1d] ...
	I0327 16:43:56.210068    8800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2bfb39e97e1d"
	I0327 16:43:56.222236    8800 logs.go:123] Gathering logs for storage-provisioner [b4f5d0ba182b] ...
	I0327 16:43:56.222248    8800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b4f5d0ba182b"
	I0327 16:43:56.233944    8800 logs.go:123] Gathering logs for Docker ...
	I0327 16:43:56.233957    8800 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0327 16:43:56.257664    8800 logs.go:123] Gathering logs for dmesg ...
	I0327 16:43:56.257671    8800 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
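Besides per-container logs, each sweep also collects host-side diagnostics, wrapping every remote command in /bin/bash -c the way ssh_runner.go does: kubelet and Docker/cri-docker output via journalctl, warning-level-and-above kernel messages via dmesg, kubectl describe nodes using the guest's own kubectl binary and kubeconfig, and a container listing that prefers crictl when it is on PATH and falls back to docker ps -a otherwise. A sketch of that collection step, with the commands copied verbatim from the Run: lines (the Go wrapper itself is illustrative):

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	hostCmds := []struct{ name, cmd string }{
		{"kubelet", "sudo journalctl -u kubelet -n 400"},
		{"Docker", "sudo journalctl -u docker -u cri-docker -n 400"},
		{"dmesg", "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"},
		{"describe nodes", "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes" +
			" --kubeconfig=/var/lib/minikube/kubeconfig"},
		// `which crictl || echo crictl` substitutes crictl's path when installed;
		// if the resulting command fails, the trailing || falls back to docker ps -a.
		{"container status", "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"},
	}
	for _, h := range hostCmds {
		out, err := exec.Command("/bin/bash", "-c", h.cmd).CombinedOutput()
		if err != nil {
			fmt.Printf("gathering %s failed: %v\n", h.name, err)
			continue
		}
		fmt.Printf("=== %s ===\n%s", h.name, out)
	}
}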
	I0327 16:43:58.762609    8800 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0327 16:44:03.764689    8800 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0327 16:44:03.764861    8800 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0327 16:44:03.779788    8800 logs.go:276] 2 containers: [0703d422a652 126b9553cb40]
	I0327 16:44:03.779858    8800 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0327 16:44:03.792011    8800 logs.go:276] 2 containers: [4f67bfb30b49 5429db75cd75]
	I0327 16:44:03.792085    8800 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0327 16:44:03.803106    8800 logs.go:276] 1 containers: [2ef14eacffaa]
	I0327 16:44:03.803174    8800 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0327 16:44:03.818917    8800 logs.go:276] 2 containers: [b68fec8d8767 c0c992e34b5d]
	I0327 16:44:03.818981    8800 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0327 16:44:03.829696    8800 logs.go:276] 1 containers: [2bfb39e97e1d]
	I0327 16:44:03.829771    8800 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0327 16:44:03.840851    8800 logs.go:276] 2 containers: [385805912c59 1a8c10b8da56]
	I0327 16:44:03.840941    8800 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0327 16:44:03.851079    8800 logs.go:276] 0 containers: []
	W0327 16:44:03.851092    8800 logs.go:278] No container was found matching "kindnet"
	I0327 16:44:03.851155    8800 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0327 16:44:03.862341    8800 logs.go:276] 2 containers: [b4f5d0ba182b 579fe35ef076]
	I0327 16:44:03.862357    8800 logs.go:123] Gathering logs for coredns [2ef14eacffaa] ...
	I0327 16:44:03.862363    8800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2ef14eacffaa"
	I0327 16:44:03.877396    8800 logs.go:123] Gathering logs for etcd [5429db75cd75] ...
	I0327 16:44:03.877407    8800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5429db75cd75"
	I0327 16:44:03.891380    8800 logs.go:123] Gathering logs for kube-scheduler [b68fec8d8767] ...
	I0327 16:44:03.891389    8800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b68fec8d8767"
	I0327 16:44:03.903722    8800 logs.go:123] Gathering logs for kube-scheduler [c0c992e34b5d] ...
	I0327 16:44:03.903733    8800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c0c992e34b5d"
	I0327 16:44:03.919038    8800 logs.go:123] Gathering logs for storage-provisioner [579fe35ef076] ...
	I0327 16:44:03.919048    8800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 579fe35ef076"
	I0327 16:44:03.930868    8800 logs.go:123] Gathering logs for Docker ...
	I0327 16:44:03.930878    8800 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0327 16:44:03.955891    8800 logs.go:123] Gathering logs for kube-apiserver [126b9553cb40] ...
	I0327 16:44:03.955902    8800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 126b9553cb40"
	I0327 16:44:03.974403    8800 logs.go:123] Gathering logs for kube-apiserver [0703d422a652] ...
	I0327 16:44:03.974416    8800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0703d422a652"
	I0327 16:44:03.988244    8800 logs.go:123] Gathering logs for etcd [4f67bfb30b49] ...
	I0327 16:44:03.988255    8800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4f67bfb30b49"
	I0327 16:44:04.002318    8800 logs.go:123] Gathering logs for kube-proxy [2bfb39e97e1d] ...
	I0327 16:44:04.002330    8800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2bfb39e97e1d"
	I0327 16:44:04.014304    8800 logs.go:123] Gathering logs for kube-controller-manager [385805912c59] ...
	I0327 16:44:04.014314    8800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 385805912c59"
	I0327 16:44:04.031980    8800 logs.go:123] Gathering logs for container status ...
	I0327 16:44:04.031991    8800 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0327 16:44:04.043949    8800 logs.go:123] Gathering logs for describe nodes ...
	I0327 16:44:04.043960    8800 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0327 16:44:04.079445    8800 logs.go:123] Gathering logs for dmesg ...
	I0327 16:44:04.079456    8800 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0327 16:44:04.083779    8800 logs.go:123] Gathering logs for kube-controller-manager [1a8c10b8da56] ...
	I0327 16:44:04.083789    8800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1a8c10b8da56"
	I0327 16:44:04.099001    8800 logs.go:123] Gathering logs for storage-provisioner [b4f5d0ba182b] ...
	I0327 16:44:04.099012    8800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b4f5d0ba182b"
	I0327 16:44:04.110477    8800 logs.go:123] Gathering logs for kubelet ...
	I0327 16:44:04.110487    8800 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0327 16:44:06.647206    8800 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0327 16:44:11.648406    8800 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0327 16:44:11.648640    8800 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0327 16:44:11.680232    8800 logs.go:276] 2 containers: [0703d422a652 126b9553cb40]
	I0327 16:44:11.680332    8800 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0327 16:44:11.696196    8800 logs.go:276] 2 containers: [4f67bfb30b49 5429db75cd75]
	I0327 16:44:11.696274    8800 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0327 16:44:11.715509    8800 logs.go:276] 1 containers: [2ef14eacffaa]
	I0327 16:44:11.715584    8800 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0327 16:44:11.725674    8800 logs.go:276] 2 containers: [b68fec8d8767 c0c992e34b5d]
	I0327 16:44:11.725756    8800 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0327 16:44:11.736809    8800 logs.go:276] 1 containers: [2bfb39e97e1d]
	I0327 16:44:11.736880    8800 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0327 16:44:11.747342    8800 logs.go:276] 2 containers: [385805912c59 1a8c10b8da56]
	I0327 16:44:11.747408    8800 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0327 16:44:11.757413    8800 logs.go:276] 0 containers: []
	W0327 16:44:11.757427    8800 logs.go:278] No container was found matching "kindnet"
	I0327 16:44:11.757482    8800 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0327 16:44:11.767719    8800 logs.go:276] 2 containers: [b4f5d0ba182b 579fe35ef076]
	I0327 16:44:11.767737    8800 logs.go:123] Gathering logs for kube-proxy [2bfb39e97e1d] ...
	I0327 16:44:11.767743    8800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2bfb39e97e1d"
	I0327 16:44:11.779267    8800 logs.go:123] Gathering logs for kube-controller-manager [1a8c10b8da56] ...
	I0327 16:44:11.779280    8800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1a8c10b8da56"
	I0327 16:44:11.794385    8800 logs.go:123] Gathering logs for storage-provisioner [b4f5d0ba182b] ...
	I0327 16:44:11.794395    8800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b4f5d0ba182b"
	I0327 16:44:11.805765    8800 logs.go:123] Gathering logs for container status ...
	I0327 16:44:11.805777    8800 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0327 16:44:11.817661    8800 logs.go:123] Gathering logs for kube-apiserver [0703d422a652] ...
	I0327 16:44:11.817672    8800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0703d422a652"
	I0327 16:44:11.832034    8800 logs.go:123] Gathering logs for etcd [4f67bfb30b49] ...
	I0327 16:44:11.832044    8800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4f67bfb30b49"
	I0327 16:44:11.846551    8800 logs.go:123] Gathering logs for etcd [5429db75cd75] ...
	I0327 16:44:11.846562    8800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5429db75cd75"
	I0327 16:44:11.861207    8800 logs.go:123] Gathering logs for coredns [2ef14eacffaa] ...
	I0327 16:44:11.861217    8800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2ef14eacffaa"
	I0327 16:44:11.872563    8800 logs.go:123] Gathering logs for storage-provisioner [579fe35ef076] ...
	I0327 16:44:11.872574    8800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 579fe35ef076"
	I0327 16:44:11.883651    8800 logs.go:123] Gathering logs for kubelet ...
	I0327 16:44:11.883663    8800 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0327 16:44:11.919928    8800 logs.go:123] Gathering logs for dmesg ...
	I0327 16:44:11.919938    8800 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0327 16:44:11.924536    8800 logs.go:123] Gathering logs for describe nodes ...
	I0327 16:44:11.924545    8800 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0327 16:44:11.960006    8800 logs.go:123] Gathering logs for kube-controller-manager [385805912c59] ...
	I0327 16:44:11.960017    8800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 385805912c59"
	I0327 16:44:11.978030    8800 logs.go:123] Gathering logs for Docker ...
	I0327 16:44:11.978043    8800 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0327 16:44:12.002420    8800 logs.go:123] Gathering logs for kube-apiserver [126b9553cb40] ...
	I0327 16:44:12.002431    8800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 126b9553cb40"
	I0327 16:44:12.021951    8800 logs.go:123] Gathering logs for kube-scheduler [b68fec8d8767] ...
	I0327 16:44:12.021963    8800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b68fec8d8767"
	I0327 16:44:12.034175    8800 logs.go:123] Gathering logs for kube-scheduler [c0c992e34b5d] ...
	I0327 16:44:12.034185    8800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c0c992e34b5d"
	I0327 16:44:14.552125    8800 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0327 16:44:19.554916    8800 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0327 16:44:19.555485    8800 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0327 16:44:19.605203    8800 logs.go:276] 2 containers: [0703d422a652 126b9553cb40]
	I0327 16:44:19.605361    8800 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0327 16:44:19.623959    8800 logs.go:276] 2 containers: [4f67bfb30b49 5429db75cd75]
	I0327 16:44:19.624057    8800 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0327 16:44:19.638182    8800 logs.go:276] 1 containers: [2ef14eacffaa]
	I0327 16:44:19.638259    8800 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0327 16:44:19.650008    8800 logs.go:276] 2 containers: [b68fec8d8767 c0c992e34b5d]
	I0327 16:44:19.650077    8800 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0327 16:44:19.665112    8800 logs.go:276] 1 containers: [2bfb39e97e1d]
	I0327 16:44:19.665183    8800 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0327 16:44:19.676415    8800 logs.go:276] 2 containers: [385805912c59 1a8c10b8da56]
	I0327 16:44:19.676484    8800 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0327 16:44:19.687144    8800 logs.go:276] 0 containers: []
	W0327 16:44:19.687157    8800 logs.go:278] No container was found matching "kindnet"
	I0327 16:44:19.687218    8800 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0327 16:44:19.698038    8800 logs.go:276] 2 containers: [b4f5d0ba182b 579fe35ef076]
	I0327 16:44:19.698058    8800 logs.go:123] Gathering logs for coredns [2ef14eacffaa] ...
	I0327 16:44:19.698063    8800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2ef14eacffaa"
	I0327 16:44:19.709614    8800 logs.go:123] Gathering logs for kube-proxy [2bfb39e97e1d] ...
	I0327 16:44:19.709624    8800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2bfb39e97e1d"
	I0327 16:44:19.721213    8800 logs.go:123] Gathering logs for Docker ...
	I0327 16:44:19.721224    8800 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0327 16:44:19.746646    8800 logs.go:123] Gathering logs for container status ...
	I0327 16:44:19.746655    8800 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0327 16:44:19.760855    8800 logs.go:123] Gathering logs for kubelet ...
	I0327 16:44:19.760871    8800 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0327 16:44:19.797743    8800 logs.go:123] Gathering logs for kube-apiserver [0703d422a652] ...
	I0327 16:44:19.797752    8800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0703d422a652"
	I0327 16:44:19.815058    8800 logs.go:123] Gathering logs for kube-apiserver [126b9553cb40] ...
	I0327 16:44:19.815070    8800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 126b9553cb40"
	I0327 16:44:19.834438    8800 logs.go:123] Gathering logs for kube-controller-manager [1a8c10b8da56] ...
	I0327 16:44:19.834447    8800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1a8c10b8da56"
	I0327 16:44:19.849521    8800 logs.go:123] Gathering logs for etcd [5429db75cd75] ...
	I0327 16:44:19.849535    8800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5429db75cd75"
	I0327 16:44:19.864282    8800 logs.go:123] Gathering logs for kube-scheduler [b68fec8d8767] ...
	I0327 16:44:19.864292    8800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b68fec8d8767"
	I0327 16:44:19.875599    8800 logs.go:123] Gathering logs for kube-scheduler [c0c992e34b5d] ...
	I0327 16:44:19.875608    8800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c0c992e34b5d"
	I0327 16:44:19.890784    8800 logs.go:123] Gathering logs for kube-controller-manager [385805912c59] ...
	I0327 16:44:19.890794    8800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 385805912c59"
	I0327 16:44:19.908362    8800 logs.go:123] Gathering logs for storage-provisioner [b4f5d0ba182b] ...
	I0327 16:44:19.908375    8800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b4f5d0ba182b"
	I0327 16:44:19.920322    8800 logs.go:123] Gathering logs for storage-provisioner [579fe35ef076] ...
	I0327 16:44:19.920336    8800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 579fe35ef076"
	I0327 16:44:19.936285    8800 logs.go:123] Gathering logs for dmesg ...
	I0327 16:44:19.936297    8800 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0327 16:44:19.940954    8800 logs.go:123] Gathering logs for describe nodes ...
	I0327 16:44:19.940962    8800 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0327 16:44:19.984218    8800 logs.go:123] Gathering logs for etcd [4f67bfb30b49] ...
	I0327 16:44:19.984231    8800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4f67bfb30b49"
	I0327 16:44:22.499684    8800 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0327 16:44:27.502133    8800 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0327 16:44:27.502298    8800 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0327 16:44:27.513964    8800 logs.go:276] 2 containers: [0703d422a652 126b9553cb40]
	I0327 16:44:27.514035    8800 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0327 16:44:27.524620    8800 logs.go:276] 2 containers: [4f67bfb30b49 5429db75cd75]
	I0327 16:44:27.524690    8800 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0327 16:44:27.539755    8800 logs.go:276] 1 containers: [2ef14eacffaa]
	I0327 16:44:27.539820    8800 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0327 16:44:27.550883    8800 logs.go:276] 2 containers: [b68fec8d8767 c0c992e34b5d]
	I0327 16:44:27.550953    8800 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0327 16:44:27.561129    8800 logs.go:276] 1 containers: [2bfb39e97e1d]
	I0327 16:44:27.561199    8800 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0327 16:44:27.572050    8800 logs.go:276] 2 containers: [385805912c59 1a8c10b8da56]
	I0327 16:44:27.572120    8800 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0327 16:44:27.582595    8800 logs.go:276] 0 containers: []
	W0327 16:44:27.582607    8800 logs.go:278] No container was found matching "kindnet"
	I0327 16:44:27.582669    8800 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0327 16:44:27.597091    8800 logs.go:276] 2 containers: [b4f5d0ba182b 579fe35ef076]
	I0327 16:44:27.597113    8800 logs.go:123] Gathering logs for kubelet ...
	I0327 16:44:27.597118    8800 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0327 16:44:27.633064    8800 logs.go:123] Gathering logs for kube-apiserver [126b9553cb40] ...
	I0327 16:44:27.633075    8800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 126b9553cb40"
	I0327 16:44:27.652911    8800 logs.go:123] Gathering logs for kube-scheduler [c0c992e34b5d] ...
	I0327 16:44:27.652924    8800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c0c992e34b5d"
	I0327 16:44:27.668205    8800 logs.go:123] Gathering logs for storage-provisioner [579fe35ef076] ...
	I0327 16:44:27.668213    8800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 579fe35ef076"
	I0327 16:44:27.679896    8800 logs.go:123] Gathering logs for Docker ...
	I0327 16:44:27.679907    8800 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0327 16:44:27.703730    8800 logs.go:123] Gathering logs for dmesg ...
	I0327 16:44:27.703738    8800 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0327 16:44:27.708312    8800 logs.go:123] Gathering logs for kube-controller-manager [385805912c59] ...
	I0327 16:44:27.708321    8800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 385805912c59"
	I0327 16:44:27.733008    8800 logs.go:123] Gathering logs for kube-controller-manager [1a8c10b8da56] ...
	I0327 16:44:27.733018    8800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1a8c10b8da56"
	I0327 16:44:27.747792    8800 logs.go:123] Gathering logs for storage-provisioner [b4f5d0ba182b] ...
	I0327 16:44:27.747808    8800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b4f5d0ba182b"
	I0327 16:44:27.759785    8800 logs.go:123] Gathering logs for kube-apiserver [0703d422a652] ...
	I0327 16:44:27.759795    8800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0703d422a652"
	I0327 16:44:27.774203    8800 logs.go:123] Gathering logs for kube-scheduler [b68fec8d8767] ...
	I0327 16:44:27.774212    8800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b68fec8d8767"
	I0327 16:44:27.785932    8800 logs.go:123] Gathering logs for kube-proxy [2bfb39e97e1d] ...
	I0327 16:44:27.785942    8800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2bfb39e97e1d"
	I0327 16:44:27.798048    8800 logs.go:123] Gathering logs for describe nodes ...
	I0327 16:44:27.798058    8800 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0327 16:44:27.835371    8800 logs.go:123] Gathering logs for etcd [4f67bfb30b49] ...
	I0327 16:44:27.835382    8800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4f67bfb30b49"
	I0327 16:44:27.849634    8800 logs.go:123] Gathering logs for etcd [5429db75cd75] ...
	I0327 16:44:27.849644    8800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5429db75cd75"
	I0327 16:44:27.864627    8800 logs.go:123] Gathering logs for coredns [2ef14eacffaa] ...
	I0327 16:44:27.864637    8800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2ef14eacffaa"
	I0327 16:44:27.878772    8800 logs.go:123] Gathering logs for container status ...
	I0327 16:44:27.878782    8800 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0327 16:44:30.392174    8800 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0327 16:44:35.394360    8800 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0327 16:44:35.394914    8800 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0327 16:44:35.441550    8800 logs.go:276] 2 containers: [0703d422a652 126b9553cb40]
	I0327 16:44:35.441669    8800 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0327 16:44:35.462659    8800 logs.go:276] 2 containers: [4f67bfb30b49 5429db75cd75]
	I0327 16:44:35.462760    8800 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0327 16:44:35.480926    8800 logs.go:276] 1 containers: [2ef14eacffaa]
	I0327 16:44:35.481006    8800 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0327 16:44:35.494116    8800 logs.go:276] 2 containers: [b68fec8d8767 c0c992e34b5d]
	I0327 16:44:35.494195    8800 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0327 16:44:35.504856    8800 logs.go:276] 1 containers: [2bfb39e97e1d]
	I0327 16:44:35.504927    8800 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0327 16:44:35.515769    8800 logs.go:276] 2 containers: [385805912c59 1a8c10b8da56]
	I0327 16:44:35.515835    8800 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0327 16:44:35.526402    8800 logs.go:276] 0 containers: []
	W0327 16:44:35.526419    8800 logs.go:278] No container was found matching "kindnet"
	I0327 16:44:35.526471    8800 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0327 16:44:35.536720    8800 logs.go:276] 2 containers: [b4f5d0ba182b 579fe35ef076]
	I0327 16:44:35.536737    8800 logs.go:123] Gathering logs for kube-controller-manager [1a8c10b8da56] ...
	I0327 16:44:35.536743    8800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1a8c10b8da56"
	I0327 16:44:35.551748    8800 logs.go:123] Gathering logs for kube-scheduler [b68fec8d8767] ...
	I0327 16:44:35.551759    8800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b68fec8d8767"
	I0327 16:44:35.563901    8800 logs.go:123] Gathering logs for kube-proxy [2bfb39e97e1d] ...
	I0327 16:44:35.563913    8800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2bfb39e97e1d"
	I0327 16:44:35.575386    8800 logs.go:123] Gathering logs for kube-apiserver [126b9553cb40] ...
	I0327 16:44:35.575404    8800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 126b9553cb40"
	I0327 16:44:35.594069    8800 logs.go:123] Gathering logs for storage-provisioner [579fe35ef076] ...
	I0327 16:44:35.594078    8800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 579fe35ef076"
	I0327 16:44:35.605125    8800 logs.go:123] Gathering logs for kubelet ...
	I0327 16:44:35.605135    8800 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0327 16:44:35.641673    8800 logs.go:123] Gathering logs for dmesg ...
	I0327 16:44:35.641681    8800 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0327 16:44:35.645758    8800 logs.go:123] Gathering logs for kube-controller-manager [385805912c59] ...
	I0327 16:44:35.645765    8800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 385805912c59"
	I0327 16:44:35.662942    8800 logs.go:123] Gathering logs for Docker ...
	I0327 16:44:35.662953    8800 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0327 16:44:35.686707    8800 logs.go:123] Gathering logs for describe nodes ...
	I0327 16:44:35.686714    8800 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0327 16:44:35.721057    8800 logs.go:123] Gathering logs for etcd [4f67bfb30b49] ...
	I0327 16:44:35.721070    8800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4f67bfb30b49"
	I0327 16:44:35.735416    8800 logs.go:123] Gathering logs for coredns [2ef14eacffaa] ...
	I0327 16:44:35.735426    8800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2ef14eacffaa"
	I0327 16:44:35.746710    8800 logs.go:123] Gathering logs for kube-scheduler [c0c992e34b5d] ...
	I0327 16:44:35.746720    8800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c0c992e34b5d"
	I0327 16:44:35.761850    8800 logs.go:123] Gathering logs for storage-provisioner [b4f5d0ba182b] ...
	I0327 16:44:35.761860    8800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b4f5d0ba182b"
	I0327 16:44:35.773199    8800 logs.go:123] Gathering logs for container status ...
	I0327 16:44:35.773212    8800 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0327 16:44:35.785135    8800 logs.go:123] Gathering logs for kube-apiserver [0703d422a652] ...
	I0327 16:44:35.785145    8800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0703d422a652"
	I0327 16:44:35.799791    8800 logs.go:123] Gathering logs for etcd [5429db75cd75] ...
	I0327 16:44:35.799802    8800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5429db75cd75"
	I0327 16:44:38.321062    8800 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0327 16:44:43.322682    8800 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0327 16:44:43.322775    8800 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0327 16:44:43.335628    8800 logs.go:276] 2 containers: [0703d422a652 126b9553cb40]
	I0327 16:44:43.335700    8800 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0327 16:44:43.346964    8800 logs.go:276] 2 containers: [4f67bfb30b49 5429db75cd75]
	I0327 16:44:43.347036    8800 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0327 16:44:43.358655    8800 logs.go:276] 1 containers: [2ef14eacffaa]
	I0327 16:44:43.358735    8800 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0327 16:44:43.370649    8800 logs.go:276] 2 containers: [b68fec8d8767 c0c992e34b5d]
	I0327 16:44:43.370726    8800 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0327 16:44:43.383365    8800 logs.go:276] 1 containers: [2bfb39e97e1d]
	I0327 16:44:43.383439    8800 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0327 16:44:43.395743    8800 logs.go:276] 2 containers: [385805912c59 1a8c10b8da56]
	I0327 16:44:43.395820    8800 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0327 16:44:43.407868    8800 logs.go:276] 0 containers: []
	W0327 16:44:43.407883    8800 logs.go:278] No container was found matching "kindnet"
	I0327 16:44:43.407947    8800 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0327 16:44:43.419955    8800 logs.go:276] 2 containers: [b4f5d0ba182b 579fe35ef076]
	I0327 16:44:43.419974    8800 logs.go:123] Gathering logs for kube-apiserver [126b9553cb40] ...
	I0327 16:44:43.419980    8800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 126b9553cb40"
	I0327 16:44:43.444495    8800 logs.go:123] Gathering logs for kube-scheduler [c0c992e34b5d] ...
	I0327 16:44:43.444514    8800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c0c992e34b5d"
	I0327 16:44:43.461647    8800 logs.go:123] Gathering logs for storage-provisioner [b4f5d0ba182b] ...
	I0327 16:44:43.461661    8800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b4f5d0ba182b"
	I0327 16:44:43.474427    8800 logs.go:123] Gathering logs for kube-apiserver [0703d422a652] ...
	I0327 16:44:43.474441    8800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0703d422a652"
	I0327 16:44:43.489903    8800 logs.go:123] Gathering logs for kube-controller-manager [1a8c10b8da56] ...
	I0327 16:44:43.489916    8800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1a8c10b8da56"
	I0327 16:44:43.507084    8800 logs.go:123] Gathering logs for Docker ...
	I0327 16:44:43.507097    8800 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0327 16:44:43.535578    8800 logs.go:123] Gathering logs for storage-provisioner [579fe35ef076] ...
	I0327 16:44:43.535595    8800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 579fe35ef076"
	I0327 16:44:43.551480    8800 logs.go:123] Gathering logs for kubelet ...
	I0327 16:44:43.551492    8800 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0327 16:44:43.591445    8800 logs.go:123] Gathering logs for describe nodes ...
	I0327 16:44:43.591464    8800 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0327 16:44:43.636617    8800 logs.go:123] Gathering logs for etcd [5429db75cd75] ...
	I0327 16:44:43.636631    8800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5429db75cd75"
	I0327 16:44:43.652877    8800 logs.go:123] Gathering logs for coredns [2ef14eacffaa] ...
	I0327 16:44:43.652888    8800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2ef14eacffaa"
	I0327 16:44:43.671032    8800 logs.go:123] Gathering logs for kube-scheduler [b68fec8d8767] ...
	I0327 16:44:43.671044    8800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b68fec8d8767"
	I0327 16:44:43.683772    8800 logs.go:123] Gathering logs for kube-proxy [2bfb39e97e1d] ...
	I0327 16:44:43.683783    8800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2bfb39e97e1d"
	I0327 16:44:43.700460    8800 logs.go:123] Gathering logs for kube-controller-manager [385805912c59] ...
	I0327 16:44:43.700471    8800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 385805912c59"
	I0327 16:44:43.720347    8800 logs.go:123] Gathering logs for container status ...
	I0327 16:44:43.720361    8800 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0327 16:44:43.738508    8800 logs.go:123] Gathering logs for dmesg ...
	I0327 16:44:43.738523    8800 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0327 16:44:43.743336    8800 logs.go:123] Gathering logs for etcd [4f67bfb30b49] ...
	I0327 16:44:43.743348    8800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4f67bfb30b49"
	I0327 16:44:46.259231    8800 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0327 16:44:51.261208    8800 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0327 16:44:51.261358    8800 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0327 16:44:51.273836    8800 logs.go:276] 2 containers: [0703d422a652 126b9553cb40]
	I0327 16:44:51.273920    8800 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0327 16:44:51.286163    8800 logs.go:276] 2 containers: [4f67bfb30b49 5429db75cd75]
	I0327 16:44:51.286232    8800 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0327 16:44:51.298484    8800 logs.go:276] 1 containers: [2ef14eacffaa]
	I0327 16:44:51.298559    8800 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0327 16:44:51.309677    8800 logs.go:276] 2 containers: [b68fec8d8767 c0c992e34b5d]
	I0327 16:44:51.309778    8800 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0327 16:44:51.324122    8800 logs.go:276] 1 containers: [2bfb39e97e1d]
	I0327 16:44:51.324195    8800 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0327 16:44:51.335123    8800 logs.go:276] 2 containers: [385805912c59 1a8c10b8da56]
	I0327 16:44:51.335189    8800 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0327 16:44:51.346040    8800 logs.go:276] 0 containers: []
	W0327 16:44:51.346051    8800 logs.go:278] No container was found matching "kindnet"
	I0327 16:44:51.346107    8800 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0327 16:44:51.359785    8800 logs.go:276] 2 containers: [b4f5d0ba182b 579fe35ef076]
	I0327 16:44:51.359852    8800 logs.go:123] Gathering logs for container status ...
	I0327 16:44:51.359860    8800 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0327 16:44:51.372720    8800 logs.go:123] Gathering logs for dmesg ...
	I0327 16:44:51.372733    8800 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0327 16:44:51.377293    8800 logs.go:123] Gathering logs for kube-apiserver [126b9553cb40] ...
	I0327 16:44:51.377302    8800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 126b9553cb40"
	I0327 16:44:51.397271    8800 logs.go:123] Gathering logs for etcd [4f67bfb30b49] ...
	I0327 16:44:51.397282    8800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4f67bfb30b49"
	I0327 16:44:51.412193    8800 logs.go:123] Gathering logs for coredns [2ef14eacffaa] ...
	I0327 16:44:51.412204    8800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2ef14eacffaa"
	I0327 16:44:51.423674    8800 logs.go:123] Gathering logs for kube-proxy [2bfb39e97e1d] ...
	I0327 16:44:51.423686    8800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2bfb39e97e1d"
	I0327 16:44:51.435661    8800 logs.go:123] Gathering logs for kube-controller-manager [385805912c59] ...
	I0327 16:44:51.435674    8800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 385805912c59"
	I0327 16:44:51.455041    8800 logs.go:123] Gathering logs for kubelet ...
	I0327 16:44:51.455060    8800 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0327 16:44:51.493917    8800 logs.go:123] Gathering logs for kube-controller-manager [1a8c10b8da56] ...
	I0327 16:44:51.493945    8800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1a8c10b8da56"
	I0327 16:44:51.511516    8800 logs.go:123] Gathering logs for storage-provisioner [579fe35ef076] ...
	I0327 16:44:51.511528    8800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 579fe35ef076"
	I0327 16:44:51.523712    8800 logs.go:123] Gathering logs for Docker ...
	I0327 16:44:51.523723    8800 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0327 16:44:51.548365    8800 logs.go:123] Gathering logs for kube-apiserver [0703d422a652] ...
	I0327 16:44:51.548380    8800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0703d422a652"
	I0327 16:44:51.562482    8800 logs.go:123] Gathering logs for storage-provisioner [b4f5d0ba182b] ...
	I0327 16:44:51.562492    8800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b4f5d0ba182b"
	I0327 16:44:51.574169    8800 logs.go:123] Gathering logs for describe nodes ...
	I0327 16:44:51.574181    8800 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0327 16:44:51.610426    8800 logs.go:123] Gathering logs for etcd [5429db75cd75] ...
	I0327 16:44:51.610439    8800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5429db75cd75"
	I0327 16:44:51.625200    8800 logs.go:123] Gathering logs for kube-scheduler [b68fec8d8767] ...
	I0327 16:44:51.625216    8800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b68fec8d8767"
	I0327 16:44:51.638122    8800 logs.go:123] Gathering logs for kube-scheduler [c0c992e34b5d] ...
	I0327 16:44:51.638132    8800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c0c992e34b5d"
	I0327 16:44:54.155749    8800 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0327 16:44:59.157943    8800 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0327 16:44:59.158116    8800 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0327 16:44:59.170434    8800 logs.go:276] 2 containers: [0703d422a652 126b9553cb40]
	I0327 16:44:59.170506    8800 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0327 16:44:59.181396    8800 logs.go:276] 2 containers: [4f67bfb30b49 5429db75cd75]
	I0327 16:44:59.181460    8800 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0327 16:44:59.192251    8800 logs.go:276] 1 containers: [2ef14eacffaa]
	I0327 16:44:59.192316    8800 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0327 16:44:59.207027    8800 logs.go:276] 2 containers: [b68fec8d8767 c0c992e34b5d]
	I0327 16:44:59.207100    8800 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0327 16:44:59.217655    8800 logs.go:276] 1 containers: [2bfb39e97e1d]
	I0327 16:44:59.217721    8800 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0327 16:44:59.228492    8800 logs.go:276] 2 containers: [385805912c59 1a8c10b8da56]
	I0327 16:44:59.228561    8800 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0327 16:44:59.239307    8800 logs.go:276] 0 containers: []
	W0327 16:44:59.239316    8800 logs.go:278] No container was found matching "kindnet"
	I0327 16:44:59.239370    8800 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0327 16:44:59.250430    8800 logs.go:276] 2 containers: [b4f5d0ba182b 579fe35ef076]
	I0327 16:44:59.250450    8800 logs.go:123] Gathering logs for kube-proxy [2bfb39e97e1d] ...
	I0327 16:44:59.250455    8800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2bfb39e97e1d"
	I0327 16:44:59.265994    8800 logs.go:123] Gathering logs for kubelet ...
	I0327 16:44:59.266004    8800 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0327 16:44:59.303226    8800 logs.go:123] Gathering logs for dmesg ...
	I0327 16:44:59.303237    8800 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0327 16:44:59.308277    8800 logs.go:123] Gathering logs for coredns [2ef14eacffaa] ...
	I0327 16:44:59.308284    8800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2ef14eacffaa"
	I0327 16:44:59.319589    8800 logs.go:123] Gathering logs for kube-scheduler [b68fec8d8767] ...
	I0327 16:44:59.319599    8800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b68fec8d8767"
	I0327 16:44:59.336118    8800 logs.go:123] Gathering logs for kube-scheduler [c0c992e34b5d] ...
	I0327 16:44:59.336129    8800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c0c992e34b5d"
	I0327 16:44:59.351412    8800 logs.go:123] Gathering logs for describe nodes ...
	I0327 16:44:59.351423    8800 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0327 16:44:59.386088    8800 logs.go:123] Gathering logs for storage-provisioner [579fe35ef076] ...
	I0327 16:44:59.386102    8800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 579fe35ef076"
	I0327 16:44:59.398409    8800 logs.go:123] Gathering logs for Docker ...
	I0327 16:44:59.398421    8800 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0327 16:44:59.422038    8800 logs.go:123] Gathering logs for container status ...
	I0327 16:44:59.422048    8800 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0327 16:44:59.433697    8800 logs.go:123] Gathering logs for kube-apiserver [126b9553cb40] ...
	I0327 16:44:59.433708    8800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 126b9553cb40"
	I0327 16:44:59.454263    8800 logs.go:123] Gathering logs for etcd [4f67bfb30b49] ...
	I0327 16:44:59.454273    8800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4f67bfb30b49"
	I0327 16:44:59.468346    8800 logs.go:123] Gathering logs for kube-controller-manager [385805912c59] ...
	I0327 16:44:59.468357    8800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 385805912c59"
	I0327 16:44:59.486110    8800 logs.go:123] Gathering logs for kube-controller-manager [1a8c10b8da56] ...
	I0327 16:44:59.486121    8800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1a8c10b8da56"
	I0327 16:44:59.501896    8800 logs.go:123] Gathering logs for storage-provisioner [b4f5d0ba182b] ...
	I0327 16:44:59.501908    8800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b4f5d0ba182b"
	I0327 16:44:59.512959    8800 logs.go:123] Gathering logs for kube-apiserver [0703d422a652] ...
	I0327 16:44:59.512970    8800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0703d422a652"
	I0327 16:44:59.526624    8800 logs.go:123] Gathering logs for etcd [5429db75cd75] ...
	I0327 16:44:59.526635    8800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5429db75cd75"
	I0327 16:45:02.043106    8800 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0327 16:45:07.045245    8800 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
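
The five-second gap between each healthz check and its "stopped" line matches a client-side request timeout rather than a server answer. A minimal Go sketch of such a probe follows; the 5s timeout is inferred from the log timestamps, and skipping TLS verification is an illustrative shortcut (a real probe would trust the cluster CA):

    package main

    import (
    	"crypto/tls"
    	"fmt"
    	"net/http"
    	"time"
    )

    // probeHealthz issues a GET against the apiserver's /healthz endpoint and
    // reports whether it answered before the client timeout fired.
    func probeHealthz(endpoint string) error {
    	client := &http.Client{
    		Timeout: 5 * time.Second, // assumption: matches the 5s gap in the log
    		Transport: &http.Transport{
    			// Illustrative only; a real probe would verify against the cluster CA.
    			TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
    		},
    	}
    	resp, err := client.Get(endpoint + "/healthz")
    	if err != nil {
    		return err // e.g. "context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
    	}
    	defer resp.Body.Close()
    	fmt.Println("healthz status:", resp.Status)
    	return nil
    }

    func main() {
    	if err := probeHealthz("https://10.0.2.15:8443"); err != nil {
    		fmt.Println("stopped:", err)
    	}
    }
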
	I0327 16:45:07.045340    8800 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0327 16:45:07.057860    8800 logs.go:276] 2 containers: [0703d422a652 126b9553cb40]
	I0327 16:45:07.057939    8800 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0327 16:45:07.068250    8800 logs.go:276] 2 containers: [4f67bfb30b49 5429db75cd75]
	I0327 16:45:07.068324    8800 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0327 16:45:07.079162    8800 logs.go:276] 1 containers: [2ef14eacffaa]
	I0327 16:45:07.079227    8800 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0327 16:45:07.089934    8800 logs.go:276] 2 containers: [b68fec8d8767 c0c992e34b5d]
	I0327 16:45:07.090008    8800 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0327 16:45:07.100482    8800 logs.go:276] 1 containers: [2bfb39e97e1d]
	I0327 16:45:07.100551    8800 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0327 16:45:07.113708    8800 logs.go:276] 2 containers: [385805912c59 1a8c10b8da56]
	I0327 16:45:07.113777    8800 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0327 16:45:07.124607    8800 logs.go:276] 0 containers: []
	W0327 16:45:07.124622    8800 logs.go:278] No container was found matching "kindnet"
	I0327 16:45:07.124686    8800 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0327 16:45:07.135818    8800 logs.go:276] 2 containers: [b4f5d0ba182b 579fe35ef076]
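
Each failed probe triggers the same discovery sweep first: one docker ps per control-plane component, filtered by the k8s_<name> container-name prefix. A small Go sketch of that step, mirroring the commands in the log (the helper name is illustrative):

    package main

    import (
    	"fmt"
    	"os/exec"
    	"strings"
    )

    // containerIDs lists the IDs of containers whose names match k8s_<component>,
    // the same `docker ps -a --filter=name=... --format={{.ID}}` call the log shows.
    func containerIDs(component string) ([]string, error) {
    	out, err := exec.Command("docker", "ps", "-a",
    		"--filter", "name=k8s_"+component,
    		"--format", "{{.ID}}").Output()
    	if err != nil {
    		return nil, err
    	}
    	return strings.Fields(string(out)), nil
    }

    func main() {
    	for _, c := range []string{"kube-apiserver", "etcd", "coredns", "kube-scheduler",
    		"kube-proxy", "kube-controller-manager", "kindnet", "storage-provisioner"} {
    		ids, err := containerIDs(c)
    		if err != nil {
    			fmt.Println(c, "error:", err)
    			continue
    		}
    		fmt.Printf("%s: %d containers: %v\n", c, len(ids), ids)
    	}
    }
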
	I0327 16:45:07.135837    8800 logs.go:123] Gathering logs for dmesg ...
	I0327 16:45:07.135875    8800 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0327 16:45:07.140561    8800 logs.go:123] Gathering logs for coredns [2ef14eacffaa] ...
	I0327 16:45:07.140566    8800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2ef14eacffaa"
	I0327 16:45:07.151832    8800 logs.go:123] Gathering logs for kube-scheduler [b68fec8d8767] ...
	I0327 16:45:07.151843    8800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b68fec8d8767"
	I0327 16:45:07.164346    8800 logs.go:123] Gathering logs for kube-scheduler [c0c992e34b5d] ...
	I0327 16:45:07.164357    8800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c0c992e34b5d"
	I0327 16:45:07.179707    8800 logs.go:123] Gathering logs for kube-controller-manager [1a8c10b8da56] ...
	I0327 16:45:07.179721    8800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1a8c10b8da56"
	I0327 16:45:07.194815    8800 logs.go:123] Gathering logs for storage-provisioner [b4f5d0ba182b] ...
	I0327 16:45:07.194831    8800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b4f5d0ba182b"
	I0327 16:45:07.206765    8800 logs.go:123] Gathering logs for storage-provisioner [579fe35ef076] ...
	I0327 16:45:07.206775    8800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 579fe35ef076"
	I0327 16:45:07.218613    8800 logs.go:123] Gathering logs for kubelet ...
	I0327 16:45:07.218627    8800 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0327 16:45:07.254703    8800 logs.go:123] Gathering logs for container status ...
	I0327 16:45:07.254713    8800 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0327 16:45:07.267006    8800 logs.go:123] Gathering logs for kube-apiserver [0703d422a652] ...
	I0327 16:45:07.267016    8800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0703d422a652"
	I0327 16:45:07.281908    8800 logs.go:123] Gathering logs for etcd [4f67bfb30b49] ...
	I0327 16:45:07.281919    8800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4f67bfb30b49"
	I0327 16:45:07.295456    8800 logs.go:123] Gathering logs for Docker ...
	I0327 16:45:07.295466    8800 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0327 16:45:07.319150    8800 logs.go:123] Gathering logs for describe nodes ...
	I0327 16:45:07.319157    8800 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0327 16:45:07.353470    8800 logs.go:123] Gathering logs for etcd [5429db75cd75] ...
	I0327 16:45:07.353481    8800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5429db75cd75"
	I0327 16:45:07.367731    8800 logs.go:123] Gathering logs for kube-apiserver [126b9553cb40] ...
	I0327 16:45:07.367746    8800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 126b9553cb40"
	I0327 16:45:07.394447    8800 logs.go:123] Gathering logs for kube-controller-manager [385805912c59] ...
	I0327 16:45:07.394460    8800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 385805912c59"
	I0327 16:45:07.412717    8800 logs.go:123] Gathering logs for kube-proxy [2bfb39e97e1d] ...
	I0327 16:45:07.412727    8800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2bfb39e97e1d"
	I0327 16:45:09.926247    8800 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0327 16:45:14.928303    8800 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0327 16:45:14.928444    8800 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0327 16:45:14.943803    8800 logs.go:276] 2 containers: [0703d422a652 126b9553cb40]
	I0327 16:45:14.943888    8800 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0327 16:45:14.956333    8800 logs.go:276] 2 containers: [4f67bfb30b49 5429db75cd75]
	I0327 16:45:14.956414    8800 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0327 16:45:14.966599    8800 logs.go:276] 1 containers: [2ef14eacffaa]
	I0327 16:45:14.966661    8800 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0327 16:45:14.977174    8800 logs.go:276] 2 containers: [b68fec8d8767 c0c992e34b5d]
	I0327 16:45:14.977244    8800 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0327 16:45:14.987092    8800 logs.go:276] 1 containers: [2bfb39e97e1d]
	I0327 16:45:14.987157    8800 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0327 16:45:14.997128    8800 logs.go:276] 2 containers: [385805912c59 1a8c10b8da56]
	I0327 16:45:14.997197    8800 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0327 16:45:15.007246    8800 logs.go:276] 0 containers: []
	W0327 16:45:15.007261    8800 logs.go:278] No container was found matching "kindnet"
	I0327 16:45:15.007322    8800 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0327 16:45:15.018155    8800 logs.go:276] 2 containers: [b4f5d0ba182b 579fe35ef076]
	I0327 16:45:15.018176    8800 logs.go:123] Gathering logs for describe nodes ...
	I0327 16:45:15.018181    8800 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0327 16:45:15.055488    8800 logs.go:123] Gathering logs for etcd [5429db75cd75] ...
	I0327 16:45:15.055502    8800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5429db75cd75"
	I0327 16:45:15.070025    8800 logs.go:123] Gathering logs for kube-proxy [2bfb39e97e1d] ...
	I0327 16:45:15.070039    8800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2bfb39e97e1d"
	I0327 16:45:15.081741    8800 logs.go:123] Gathering logs for kube-controller-manager [385805912c59] ...
	I0327 16:45:15.081751    8800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 385805912c59"
	I0327 16:45:15.098684    8800 logs.go:123] Gathering logs for storage-provisioner [579fe35ef076] ...
	I0327 16:45:15.098698    8800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 579fe35ef076"
	I0327 16:45:15.116608    8800 logs.go:123] Gathering logs for Docker ...
	I0327 16:45:15.116618    8800 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0327 16:45:15.141954    8800 logs.go:123] Gathering logs for kube-apiserver [0703d422a652] ...
	I0327 16:45:15.141964    8800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0703d422a652"
	I0327 16:45:15.155535    8800 logs.go:123] Gathering logs for etcd [4f67bfb30b49] ...
	I0327 16:45:15.155545    8800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4f67bfb30b49"
	I0327 16:45:15.171195    8800 logs.go:123] Gathering logs for kube-controller-manager [1a8c10b8da56] ...
	I0327 16:45:15.171203    8800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1a8c10b8da56"
	I0327 16:45:15.187004    8800 logs.go:123] Gathering logs for container status ...
	I0327 16:45:15.187016    8800 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0327 16:45:15.198938    8800 logs.go:123] Gathering logs for kube-scheduler [b68fec8d8767] ...
	I0327 16:45:15.198949    8800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b68fec8d8767"
	I0327 16:45:15.210632    8800 logs.go:123] Gathering logs for kubelet ...
	I0327 16:45:15.210642    8800 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0327 16:45:15.247025    8800 logs.go:123] Gathering logs for dmesg ...
	I0327 16:45:15.247041    8800 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0327 16:45:15.251794    8800 logs.go:123] Gathering logs for kube-apiserver [126b9553cb40] ...
	I0327 16:45:15.251803    8800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 126b9553cb40"
	I0327 16:45:15.270966    8800 logs.go:123] Gathering logs for coredns [2ef14eacffaa] ...
	I0327 16:45:15.270979    8800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2ef14eacffaa"
	I0327 16:45:15.282476    8800 logs.go:123] Gathering logs for kube-scheduler [c0c992e34b5d] ...
	I0327 16:45:15.282487    8800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c0c992e34b5d"
	I0327 16:45:15.297295    8800 logs.go:123] Gathering logs for storage-provisioner [b4f5d0ba182b] ...
	I0327 16:45:15.297305    8800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b4f5d0ba182b"
	I0327 16:45:17.810610    8800 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0327 16:45:22.813095    8800 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0327 16:45:22.813492    8800 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0327 16:45:22.843830    8800 logs.go:276] 2 containers: [0703d422a652 126b9553cb40]
	I0327 16:45:22.843961    8800 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0327 16:45:22.863226    8800 logs.go:276] 2 containers: [4f67bfb30b49 5429db75cd75]
	I0327 16:45:22.863326    8800 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0327 16:45:22.877291    8800 logs.go:276] 1 containers: [2ef14eacffaa]
	I0327 16:45:22.877366    8800 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0327 16:45:22.889514    8800 logs.go:276] 2 containers: [b68fec8d8767 c0c992e34b5d]
	I0327 16:45:22.889580    8800 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0327 16:45:22.900865    8800 logs.go:276] 1 containers: [2bfb39e97e1d]
	I0327 16:45:22.900956    8800 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0327 16:45:22.917172    8800 logs.go:276] 2 containers: [385805912c59 1a8c10b8da56]
	I0327 16:45:22.917246    8800 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0327 16:45:22.927991    8800 logs.go:276] 0 containers: []
	W0327 16:45:22.928003    8800 logs.go:278] No container was found matching "kindnet"
	I0327 16:45:22.928059    8800 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0327 16:45:22.938382    8800 logs.go:276] 2 containers: [b4f5d0ba182b 579fe35ef076]
	I0327 16:45:22.938399    8800 logs.go:123] Gathering logs for container status ...
	I0327 16:45:22.938404    8800 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0327 16:45:22.955198    8800 logs.go:123] Gathering logs for kube-apiserver [0703d422a652] ...
	I0327 16:45:22.955208    8800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0703d422a652"
	I0327 16:45:22.969291    8800 logs.go:123] Gathering logs for etcd [4f67bfb30b49] ...
	I0327 16:45:22.969301    8800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4f67bfb30b49"
	I0327 16:45:22.982915    8800 logs.go:123] Gathering logs for etcd [5429db75cd75] ...
	I0327 16:45:22.982927    8800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5429db75cd75"
	I0327 16:45:22.997102    8800 logs.go:123] Gathering logs for kube-scheduler [b68fec8d8767] ...
	I0327 16:45:22.997113    8800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b68fec8d8767"
	I0327 16:45:23.008823    8800 logs.go:123] Gathering logs for kube-scheduler [c0c992e34b5d] ...
	I0327 16:45:23.008834    8800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c0c992e34b5d"
	I0327 16:45:23.024338    8800 logs.go:123] Gathering logs for kube-controller-manager [385805912c59] ...
	I0327 16:45:23.024351    8800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 385805912c59"
	I0327 16:45:23.042048    8800 logs.go:123] Gathering logs for storage-provisioner [b4f5d0ba182b] ...
	I0327 16:45:23.042059    8800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b4f5d0ba182b"
	I0327 16:45:23.054164    8800 logs.go:123] Gathering logs for dmesg ...
	I0327 16:45:23.054177    8800 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0327 16:45:23.059182    8800 logs.go:123] Gathering logs for describe nodes ...
	I0327 16:45:23.059188    8800 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0327 16:45:23.094349    8800 logs.go:123] Gathering logs for kube-apiserver [126b9553cb40] ...
	I0327 16:45:23.094359    8800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 126b9553cb40"
	I0327 16:45:23.115082    8800 logs.go:123] Gathering logs for coredns [2ef14eacffaa] ...
	I0327 16:45:23.115091    8800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2ef14eacffaa"
	I0327 16:45:23.130494    8800 logs.go:123] Gathering logs for kube-controller-manager [1a8c10b8da56] ...
	I0327 16:45:23.130503    8800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1a8c10b8da56"
	I0327 16:45:23.145266    8800 logs.go:123] Gathering logs for storage-provisioner [579fe35ef076] ...
	I0327 16:45:23.145274    8800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 579fe35ef076"
	I0327 16:45:23.156896    8800 logs.go:123] Gathering logs for Docker ...
	I0327 16:45:23.156909    8800 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0327 16:45:23.179326    8800 logs.go:123] Gathering logs for kubelet ...
	I0327 16:45:23.179333    8800 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0327 16:45:23.216009    8800 logs.go:123] Gathering logs for kube-proxy [2bfb39e97e1d] ...
	I0327 16:45:23.216019    8800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2bfb39e97e1d"
	I0327 16:45:25.730334    8800 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0327 16:45:30.730641    8800 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0327 16:45:30.730802    8800 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0327 16:45:30.745315    8800 logs.go:276] 2 containers: [0703d422a652 126b9553cb40]
	I0327 16:45:30.745389    8800 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0327 16:45:30.760521    8800 logs.go:276] 2 containers: [4f67bfb30b49 5429db75cd75]
	I0327 16:45:30.760593    8800 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0327 16:45:30.771151    8800 logs.go:276] 1 containers: [2ef14eacffaa]
	I0327 16:45:30.771219    8800 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0327 16:45:30.781349    8800 logs.go:276] 2 containers: [b68fec8d8767 c0c992e34b5d]
	I0327 16:45:30.781418    8800 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0327 16:45:30.791662    8800 logs.go:276] 1 containers: [2bfb39e97e1d]
	I0327 16:45:30.791732    8800 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0327 16:45:30.802422    8800 logs.go:276] 2 containers: [385805912c59 1a8c10b8da56]
	I0327 16:45:30.802497    8800 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0327 16:45:30.815784    8800 logs.go:276] 0 containers: []
	W0327 16:45:30.815796    8800 logs.go:278] No container was found matching "kindnet"
	I0327 16:45:30.815860    8800 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0327 16:45:30.826292    8800 logs.go:276] 2 containers: [b4f5d0ba182b 579fe35ef076]
	I0327 16:45:30.826311    8800 logs.go:123] Gathering logs for kube-scheduler [b68fec8d8767] ...
	I0327 16:45:30.826316    8800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b68fec8d8767"
	I0327 16:45:30.838561    8800 logs.go:123] Gathering logs for kube-scheduler [c0c992e34b5d] ...
	I0327 16:45:30.838574    8800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c0c992e34b5d"
	I0327 16:45:30.853744    8800 logs.go:123] Gathering logs for container status ...
	I0327 16:45:30.853754    8800 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0327 16:45:30.865754    8800 logs.go:123] Gathering logs for kubelet ...
	I0327 16:45:30.865765    8800 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0327 16:45:30.901285    8800 logs.go:123] Gathering logs for dmesg ...
	I0327 16:45:30.901297    8800 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0327 16:45:30.905825    8800 logs.go:123] Gathering logs for etcd [5429db75cd75] ...
	I0327 16:45:30.905833    8800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5429db75cd75"
	I0327 16:45:30.920460    8800 logs.go:123] Gathering logs for coredns [2ef14eacffaa] ...
	I0327 16:45:30.920471    8800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2ef14eacffaa"
	I0327 16:45:30.932235    8800 logs.go:123] Gathering logs for kube-proxy [2bfb39e97e1d] ...
	I0327 16:45:30.932247    8800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2bfb39e97e1d"
	I0327 16:45:30.943515    8800 logs.go:123] Gathering logs for storage-provisioner [b4f5d0ba182b] ...
	I0327 16:45:30.943525    8800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b4f5d0ba182b"
	I0327 16:45:30.954664    8800 logs.go:123] Gathering logs for describe nodes ...
	I0327 16:45:30.954674    8800 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0327 16:45:30.990168    8800 logs.go:123] Gathering logs for kube-apiserver [126b9553cb40] ...
	I0327 16:45:30.990179    8800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 126b9553cb40"
	I0327 16:45:31.009255    8800 logs.go:123] Gathering logs for storage-provisioner [579fe35ef076] ...
	I0327 16:45:31.009270    8800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 579fe35ef076"
	I0327 16:45:31.020735    8800 logs.go:123] Gathering logs for Docker ...
	I0327 16:45:31.020745    8800 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0327 16:45:31.043527    8800 logs.go:123] Gathering logs for kube-apiserver [0703d422a652] ...
	I0327 16:45:31.043536    8800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0703d422a652"
	I0327 16:45:31.059253    8800 logs.go:123] Gathering logs for etcd [4f67bfb30b49] ...
	I0327 16:45:31.059263    8800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4f67bfb30b49"
	I0327 16:45:31.074438    8800 logs.go:123] Gathering logs for kube-controller-manager [385805912c59] ...
	I0327 16:45:31.074446    8800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 385805912c59"
	I0327 16:45:31.092373    8800 logs.go:123] Gathering logs for kube-controller-manager [1a8c10b8da56] ...
	I0327 16:45:31.092382    8800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1a8c10b8da56"
	I0327 16:45:33.610103    8800 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0327 16:45:38.610598    8800 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0327 16:45:38.610690    8800 kubeadm.go:591] duration metric: took 4m4.356742292s to restartPrimaryControlPlane
	W0327 16:45:38.610773    8800 out.go:239] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0327 16:45:38.610807    8800 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm reset --cri-socket /var/run/cri-dockerd.sock --force"
	I0327 16:45:39.603634    8800 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0327 16:45:39.608463    8800 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0327 16:45:39.611295    8800 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0327 16:45:39.613968    8800 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0327 16:45:39.613974    8800 kubeadm.go:156] found existing configuration files:
	
	I0327 16:45:39.613997    8800 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:51212 /etc/kubernetes/admin.conf
	I0327 16:45:39.616552    8800 kubeadm.go:162] "https://control-plane.minikube.internal:51212" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:51212 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0327 16:45:39.616574    8800 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0327 16:45:39.618952    8800 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:51212 /etc/kubernetes/kubelet.conf
	I0327 16:45:39.621729    8800 kubeadm.go:162] "https://control-plane.minikube.internal:51212" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:51212 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0327 16:45:39.621749    8800 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0327 16:45:39.624674    8800 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:51212 /etc/kubernetes/controller-manager.conf
	I0327 16:45:39.627140    8800 kubeadm.go:162] "https://control-plane.minikube.internal:51212" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:51212 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0327 16:45:39.627160    8800 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0327 16:45:39.629830    8800 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:51212 /etc/kubernetes/scheduler.conf
	I0327 16:45:39.633382    8800 kubeadm.go:162] "https://control-plane.minikube.internal:51212" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:51212 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0327 16:45:39.633437    8800 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
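
The stale-config check above reduces to one rule per file: keep /etc/kubernetes/<name>.conf only if it already references the expected control-plane endpoint, otherwise delete it so kubeadm can regenerate it. A sketch of that logic in Go, with the endpoint and paths taken from the log (the helper name is illustrative, and the real code shells these commands out over SSH rather than touching the files directly):

    package main

    import (
    	"fmt"
    	"os"
    	"strings"
    )

    // pruneStaleConfigs removes kubeconfig files that do not mention the expected
    // control-plane endpoint, mirroring the grep-then-rm sequence in the log.
    func pruneStaleConfigs(endpoint string, files []string) {
    	for _, f := range files {
    		data, err := os.ReadFile(f)
    		if err != nil || !strings.Contains(string(data), endpoint) {
    			fmt.Printf("%q may not be in %s - will remove\n", endpoint, f)
    			os.Remove(f) // ignore the error: the file may not exist, as here
    		}
    	}
    }

    func main() {
    	pruneStaleConfigs("https://control-plane.minikube.internal:51212", []string{
    		"/etc/kubernetes/admin.conf",
    		"/etc/kubernetes/kubelet.conf",
    		"/etc/kubernetes/controller-manager.conf",
    		"/etc/kubernetes/scheduler.conf",
    	})
    }
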
	I0327 16:45:39.636958    8800 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0327 16:45:39.656269    8800 kubeadm.go:309] [init] Using Kubernetes version: v1.24.1
	I0327 16:45:39.656298    8800 kubeadm.go:309] [preflight] Running pre-flight checks
	I0327 16:45:39.716195    8800 kubeadm.go:309] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0327 16:45:39.716251    8800 kubeadm.go:309] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0327 16:45:39.716298    8800 kubeadm.go:309] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0327 16:45:39.765604    8800 kubeadm.go:309] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0327 16:45:39.773631    8800 out.go:204]   - Generating certificates and keys ...
	I0327 16:45:39.773665    8800 kubeadm.go:309] [certs] Using existing ca certificate authority
	I0327 16:45:39.773700    8800 kubeadm.go:309] [certs] Using existing apiserver certificate and key on disk
	I0327 16:45:39.773745    8800 kubeadm.go:309] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0327 16:45:39.773844    8800 kubeadm.go:309] [certs] Using existing front-proxy-ca certificate authority
	I0327 16:45:39.773931    8800 kubeadm.go:309] [certs] Using existing front-proxy-client certificate and key on disk
	I0327 16:45:39.773990    8800 kubeadm.go:309] [certs] Using existing etcd/ca certificate authority
	I0327 16:45:39.774161    8800 kubeadm.go:309] [certs] Using existing etcd/server certificate and key on disk
	I0327 16:45:39.774289    8800 kubeadm.go:309] [certs] Using existing etcd/peer certificate and key on disk
	I0327 16:45:39.774411    8800 kubeadm.go:309] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0327 16:45:39.774454    8800 kubeadm.go:309] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0327 16:45:39.774479    8800 kubeadm.go:309] [certs] Using the existing "sa" key
	I0327 16:45:39.774509    8800 kubeadm.go:309] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0327 16:45:39.864151    8800 kubeadm.go:309] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0327 16:45:40.053392    8800 kubeadm.go:309] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0327 16:45:40.133239    8800 kubeadm.go:309] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0327 16:45:40.216430    8800 kubeadm.go:309] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0327 16:45:40.247916    8800 kubeadm.go:309] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0327 16:45:40.248606    8800 kubeadm.go:309] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0327 16:45:40.248647    8800 kubeadm.go:309] [kubelet-start] Starting the kubelet
	I0327 16:45:40.320023    8800 kubeadm.go:309] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0327 16:45:40.324355    8800 out.go:204]   - Booting up control plane ...
	I0327 16:45:40.324403    8800 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0327 16:45:40.324451    8800 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0327 16:45:40.325574    8800 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0327 16:45:40.325618    8800 kubeadm.go:309] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0327 16:45:40.325730    8800 kubeadm.go:309] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0327 16:45:44.830820    8800 kubeadm.go:309] [apiclient] All control plane components are healthy after 4.508999 seconds
	I0327 16:45:44.830941    8800 kubeadm.go:309] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0327 16:45:44.836767    8800 kubeadm.go:309] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0327 16:45:45.351288    8800 kubeadm.go:309] [upload-certs] Skipping phase. Please see --upload-certs
	I0327 16:45:45.351677    8800 kubeadm.go:309] [mark-control-plane] Marking the node running-upgrade-400000 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0327 16:45:45.855964    8800 kubeadm.go:309] [bootstrap-token] Using token: 3t2mm1.7phrwooo7ncwiu6l
	I0327 16:45:45.859911    8800 out.go:204]   - Configuring RBAC rules ...
	I0327 16:45:45.859974    8800 kubeadm.go:309] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0327 16:45:45.860026    8800 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0327 16:45:45.865274    8800 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0327 16:45:45.866092    8800 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0327 16:45:45.867030    8800 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0327 16:45:45.867763    8800 kubeadm.go:309] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0327 16:45:45.875239    8800 kubeadm.go:309] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0327 16:45:46.060352    8800 kubeadm.go:309] [addons] Applied essential addon: CoreDNS
	I0327 16:45:46.260141    8800 kubeadm.go:309] [addons] Applied essential addon: kube-proxy
	I0327 16:45:46.260707    8800 kubeadm.go:309] 
	I0327 16:45:46.260740    8800 kubeadm.go:309] Your Kubernetes control-plane has initialized successfully!
	I0327 16:45:46.260743    8800 kubeadm.go:309] 
	I0327 16:45:46.260786    8800 kubeadm.go:309] To start using your cluster, you need to run the following as a regular user:
	I0327 16:45:46.260789    8800 kubeadm.go:309] 
	I0327 16:45:46.260802    8800 kubeadm.go:309]   mkdir -p $HOME/.kube
	I0327 16:45:46.260831    8800 kubeadm.go:309]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0327 16:45:46.260936    8800 kubeadm.go:309]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0327 16:45:46.260940    8800 kubeadm.go:309] 
	I0327 16:45:46.260969    8800 kubeadm.go:309] Alternatively, if you are the root user, you can run:
	I0327 16:45:46.260975    8800 kubeadm.go:309] 
	I0327 16:45:46.261003    8800 kubeadm.go:309]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0327 16:45:46.261006    8800 kubeadm.go:309] 
	I0327 16:45:46.261039    8800 kubeadm.go:309] You should now deploy a pod network to the cluster.
	I0327 16:45:46.261096    8800 kubeadm.go:309] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0327 16:45:46.261144    8800 kubeadm.go:309]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0327 16:45:46.261150    8800 kubeadm.go:309] 
	I0327 16:45:46.261209    8800 kubeadm.go:309] You can now join any number of control-plane nodes by copying certificate authorities
	I0327 16:45:46.261255    8800 kubeadm.go:309] and service account keys on each node and then running the following as root:
	I0327 16:45:46.261260    8800 kubeadm.go:309] 
	I0327 16:45:46.261303    8800 kubeadm.go:309]   kubeadm join control-plane.minikube.internal:8443 --token 3t2mm1.7phrwooo7ncwiu6l \
	I0327 16:45:46.261355    8800 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:8047b7e049f0384af96cc555849ef1f992fa8884768aff95c9a460200a82d884 \
	I0327 16:45:46.261370    8800 kubeadm.go:309] 	--control-plane 
	I0327 16:45:46.261372    8800 kubeadm.go:309] 
	I0327 16:45:46.261415    8800 kubeadm.go:309] Then you can join any number of worker nodes by running the following on each as root:
	I0327 16:45:46.261421    8800 kubeadm.go:309] 
	I0327 16:45:46.261458    8800 kubeadm.go:309] kubeadm join control-plane.minikube.internal:8443 --token 3t2mm1.7phrwooo7ncwiu6l \
	I0327 16:45:46.261507    8800 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:8047b7e049f0384af96cc555849ef1f992fa8884768aff95c9a460200a82d884 
	I0327 16:45:46.261559    8800 kubeadm.go:309] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
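
The --discovery-token-ca-cert-hash in both join commands is kubeadm's pin on the cluster CA: a SHA-256 digest over the DER-encoded Subject Public Key Info of the CA certificate. A sketch of recomputing it in Go, assuming the CA lives under the certificateDir shown earlier (/var/lib/minikube/certs):

    package main

    import (
    	"crypto/sha256"
    	"crypto/x509"
    	"encoding/pem"
    	"fmt"
    	"os"
    )

    // caCertHash recomputes the discovery-token-ca-cert-hash: SHA-256 over the
    // DER-encoded Subject Public Key Info of the cluster CA certificate.
    func caCertHash(pemPath string) (string, error) {
    	pemBytes, err := os.ReadFile(pemPath)
    	if err != nil {
    		return "", err
    	}
    	block, _ := pem.Decode(pemBytes)
    	if block == nil {
    		return "", fmt.Errorf("no PEM block in %s", pemPath)
    	}
    	cert, err := x509.ParseCertificate(block.Bytes)
    	if err != nil {
    		return "", err
    	}
    	sum := sha256.Sum256(cert.RawSubjectPublicKeyInfo)
    	return fmt.Sprintf("sha256:%x", sum[:]), nil
    }

    func main() {
    	h, err := caCertHash("/var/lib/minikube/certs/ca.crt") // assumed CA path
    	if err != nil {
    		fmt.Println("error:", err)
    		return
    	}
    	fmt.Println(h) // should match the hash in the join command when run on this node
    }
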
	I0327 16:45:46.261566    8800 cni.go:84] Creating CNI manager for ""
	I0327 16:45:46.261574    8800 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0327 16:45:46.265476    8800 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0327 16:45:46.271371    8800 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0327 16:45:46.274566    8800 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
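
The 457-byte conflist itself is not reproduced in the log. A bridge CNI configuration of the general shape minikube writes is sketched below; the subnet, version, and plugin list are placeholders for illustration, not values recovered from this run:

    package main

    import "os"

    // An illustrative bridge CNI conflist; field values are placeholders,
    // not the actual 457-byte payload from the log.
    const conflist = `{
      "cniVersion": "0.3.1",
      "name": "bridge",
      "plugins": [
        {
          "type": "bridge",
          "bridge": "bridge",
          "isDefaultGateway": true,
          "ipMasq": true,
          "hairpinMode": true,
          "ipam": {
            "type": "host-local",
            "subnet": "10.244.0.0/16"
          }
        },
        {
          "type": "portmap",
          "capabilities": {"portMappings": true}
        }
      ]
    }`

    func main() {
    	// Same destination path as the log; writing it requires root on the node.
    	if err := os.WriteFile("/etc/cni/net.d/1-k8s.conflist", []byte(conflist), 0644); err != nil {
    		panic(err)
    	}
    }
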
	I0327 16:45:46.279650    8800 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0327 16:45:46.279689    8800 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0327 16:45:46.279714    8800 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes running-upgrade-400000 minikube.k8s.io/updated_at=2024_03_27T16_45_46_0700 minikube.k8s.io/version=v1.33.0-beta.0 minikube.k8s.io/commit=1a940f980f77c9bfc8ef93678db8c1f49c9dd79d minikube.k8s.io/name=running-upgrade-400000 minikube.k8s.io/primary=true
	I0327 16:45:46.335060    8800 kubeadm.go:1107] duration metric: took 55.414375ms to wait for elevateKubeSystemPrivileges
	I0327 16:45:46.335079    8800 ops.go:34] apiserver oom_adj: -16
	W0327 16:45:46.335085    8800 kubeadm.go:286] apiserver tunnel failed: apiserver port not set
	I0327 16:45:46.335087    8800 kubeadm.go:393] duration metric: took 4m12.096700125s to StartCluster
	I0327 16:45:46.335098    8800 settings.go:142] acquiring lock: {Name:mk7a184fa834ec55a805b998fd083319e6561206 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0327 16:45:46.335261    8800 settings.go:150] Updating kubeconfig:  /Users/jenkins/minikube-integration/18485-6511/kubeconfig
	I0327 16:45:46.335694    8800 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/18485-6511/kubeconfig: {Name:mke46d0809919cfbe0118c5110926d6ce61bf373 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0327 16:45:46.335905    8800 start.go:234] Will wait 6m0s for node &{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0327 16:45:46.340385    8800 out.go:177] * Verifying Kubernetes components...
	I0327 16:45:46.335926    8800 addons.go:502] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false]
	I0327 16:45:46.336091    8800 config.go:182] Loaded profile config "running-upgrade-400000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0327 16:45:46.347430    8800 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0327 16:45:46.347434    8800 addons.go:69] Setting default-storageclass=true in profile "running-upgrade-400000"
	I0327 16:45:46.347434    8800 addons.go:69] Setting storage-provisioner=true in profile "running-upgrade-400000"
	I0327 16:45:46.347470    8800 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "running-upgrade-400000"
	I0327 16:45:46.347475    8800 addons.go:234] Setting addon storage-provisioner=true in "running-upgrade-400000"
	W0327 16:45:46.347481    8800 addons.go:243] addon storage-provisioner should already be in state true
	I0327 16:45:46.347500    8800 host.go:66] Checking if "running-upgrade-400000" exists ...
	I0327 16:45:46.348587    8800 kapi.go:59] client config for running-upgrade-400000: &rest.Config{Host:"https://10.0.2.15:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/18485-6511/.minikube/profiles/running-upgrade-400000/client.crt", KeyFile:"/Users/jenkins/minikube-integration/18485-6511/.minikube/profiles/running-upgrade-400000/client.key", CAFile:"/Users/jenkins/minikube-integration/18485-6511/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1043e6c70), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0327 16:45:46.349259    8800 addons.go:234] Setting addon default-storageclass=true in "running-upgrade-400000"
	W0327 16:45:46.349266    8800 addons.go:243] addon default-storageclass should already be in state true
	I0327 16:45:46.349273    8800 host.go:66] Checking if "running-upgrade-400000" exists ...
	I0327 16:45:46.354366    8800 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0327 16:45:46.360356    8800 addons.go:426] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0327 16:45:46.360365    8800 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0327 16:45:46.360374    8800 sshutil.go:53] new ssh client: &{IP:localhost Port:51180 SSHKeyPath:/Users/jenkins/minikube-integration/18485-6511/.minikube/machines/running-upgrade-400000/id_rsa Username:docker}
	I0327 16:45:46.361215    8800 addons.go:426] installing /etc/kubernetes/addons/storageclass.yaml
	I0327 16:45:46.361222    8800 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0327 16:45:46.361226    8800 sshutil.go:53] new ssh client: &{IP:localhost Port:51180 SSHKeyPath:/Users/jenkins/minikube-integration/18485-6511/.minikube/machines/running-upgrade-400000/id_rsa Username:docker}
	I0327 16:45:46.425279    8800 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0327 16:45:46.430397    8800 api_server.go:52] waiting for apiserver process to appear ...
	I0327 16:45:46.430448    8800 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0327 16:45:46.434470    8800 api_server.go:72] duration metric: took 98.573792ms to wait for apiserver process to appear ...
	I0327 16:45:46.434478    8800 api_server.go:88] waiting for apiserver healthz status ...
	I0327 16:45:46.434485    8800 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0327 16:45:46.439349    8800 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0327 16:45:46.443169    8800 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
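
Both addon manifests are applied with the node-local kubectl binary against the in-VM kubeconfig, which is why they can be issued even while the external healthz probe keeps failing. A sketch of that invocation pattern, with the paths copied from the log (helper name illustrative):

    package main

    import (
    	"fmt"
    	"os/exec"
    )

    // applyManifest mirrors the `sudo KUBECONFIG=... kubectl apply -f ...`
    // commands in the log, running the node's own kubectl binary.
    func applyManifest(path string) error {
    	cmd := exec.Command("sudo",
    		"KUBECONFIG=/var/lib/minikube/kubeconfig",
    		"/var/lib/minikube/binaries/v1.24.1/kubectl", "apply", "-f", path)
    	out, err := cmd.CombinedOutput()
    	fmt.Print(string(out))
    	return err
    }

    func main() {
    	for _, m := range []string{
    		"/etc/kubernetes/addons/storage-provisioner.yaml",
    		"/etc/kubernetes/addons/storageclass.yaml",
    	} {
    		if err := applyManifest(m); err != nil {
    			fmt.Println("apply failed:", err)
    		}
    	}
    }
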
	I0327 16:45:51.434444    8800 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0327 16:45:51.434464    8800 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0327 16:45:56.434953    8800 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0327 16:45:56.435000    8800 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0327 16:46:01.434679    8800 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0327 16:46:01.434700    8800 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0327 16:46:06.434549    8800 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0327 16:46:06.434603    8800 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0327 16:46:11.434627    8800 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0327 16:46:11.434674    8800 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0327 16:46:16.434934    8800 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0327 16:46:16.434955    8800 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	W0327 16:46:16.795457    8800 out.go:239] ! Enabling 'default-storageclass' returned an error: running callbacks: [Error making standard the default storage class: Error listing StorageClasses: Get "https://10.0.2.15:8443/apis/storage.k8s.io/v1/storageclasses": dial tcp 10.0.2.15:8443: i/o timeout]
	! Enabling 'default-storageclass' returned an error: running callbacks: [Error making standard the default storage class: Error listing StorageClasses: Get "https://10.0.2.15:8443/apis/storage.k8s.io/v1/storageclasses": dial tcp 10.0.2.15:8443: i/o timeout]
	I0327 16:46:16.799491    8800 out.go:177] * Enabled addons: storage-provisioner
	I0327 16:46:16.807265    8800 addons.go:505] duration metric: took 30.474683125s for enable addons: enabled=[storage-provisioner]
	I0327 16:46:21.435369    8800 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0327 16:46:21.435398    8800 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0327 16:46:26.435996    8800 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0327 16:46:26.436040    8800 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0327 16:46:31.436898    8800 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0327 16:46:31.436934    8800 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0327 16:46:36.437442    8800 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0327 16:46:36.437474    8800 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0327 16:46:41.438786    8800 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0327 16:46:41.438832    8800 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0327 16:46:46.440664    8800 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0327 16:46:46.440769    8800 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0327 16:46:46.469825    8800 logs.go:276] 1 containers: [a8b68fa373a2]
	I0327 16:46:46.469901    8800 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0327 16:46:46.480706    8800 logs.go:276] 1 containers: [b2b36dcbf471]
	I0327 16:46:46.480783    8800 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0327 16:46:46.491410    8800 logs.go:276] 2 containers: [a1738554adca b5329cb28332]
	I0327 16:46:46.491483    8800 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0327 16:46:46.502307    8800 logs.go:276] 1 containers: [5aa5f8fb90cc]
	I0327 16:46:46.502372    8800 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0327 16:46:46.513008    8800 logs.go:276] 1 containers: [4195d96c1f8a]
	I0327 16:46:46.513079    8800 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0327 16:46:46.524245    8800 logs.go:276] 1 containers: [faf151c9cff5]
	I0327 16:46:46.524305    8800 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0327 16:46:46.534292    8800 logs.go:276] 0 containers: []
	W0327 16:46:46.534301    8800 logs.go:278] No container was found matching "kindnet"
	I0327 16:46:46.534351    8800 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0327 16:46:46.544748    8800 logs.go:276] 1 containers: [2c4ccf3e69ae]
	I0327 16:46:46.544767    8800 logs.go:123] Gathering logs for dmesg ...
	I0327 16:46:46.544771    8800 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0327 16:46:46.549678    8800 logs.go:123] Gathering logs for describe nodes ...
	I0327 16:46:46.549687    8800 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0327 16:46:46.586858    8800 logs.go:123] Gathering logs for kube-apiserver [a8b68fa373a2] ...
	I0327 16:46:46.586877    8800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a8b68fa373a2"
	I0327 16:46:46.601846    8800 logs.go:123] Gathering logs for etcd [b2b36dcbf471] ...
	I0327 16:46:46.601861    8800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b2b36dcbf471"
	I0327 16:46:46.616587    8800 logs.go:123] Gathering logs for coredns [b5329cb28332] ...
	I0327 16:46:46.616597    8800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b5329cb28332"
	I0327 16:46:46.628426    8800 logs.go:123] Gathering logs for kube-scheduler [5aa5f8fb90cc] ...
	I0327 16:46:46.628435    8800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5aa5f8fb90cc"
	I0327 16:46:46.642676    8800 logs.go:123] Gathering logs for container status ...
	I0327 16:46:46.642689    8800 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0327 16:46:46.654217    8800 logs.go:123] Gathering logs for kubelet ...
	I0327 16:46:46.654228    8800 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0327 16:46:46.688536    8800 logs.go:123] Gathering logs for coredns [a1738554adca] ...
	I0327 16:46:46.688544    8800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a1738554adca"
	I0327 16:46:46.703398    8800 logs.go:123] Gathering logs for kube-proxy [4195d96c1f8a] ...
	I0327 16:46:46.703409    8800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4195d96c1f8a"
	I0327 16:46:46.715024    8800 logs.go:123] Gathering logs for kube-controller-manager [faf151c9cff5] ...
	I0327 16:46:46.715036    8800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 faf151c9cff5"
	I0327 16:46:46.732459    8800 logs.go:123] Gathering logs for storage-provisioner [2c4ccf3e69ae] ...
	I0327 16:46:46.732476    8800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2c4ccf3e69ae"
	I0327 16:46:46.747358    8800 logs.go:123] Gathering logs for Docker ...
	I0327 16:46:46.747368    8800 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0327 16:46:49.273734    8800 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0327 16:46:54.276159    8800 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0327 16:46:54.276334    8800 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0327 16:46:54.295758    8800 logs.go:276] 1 containers: [a8b68fa373a2]
	I0327 16:46:54.295849    8800 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0327 16:46:54.314251    8800 logs.go:276] 1 containers: [b2b36dcbf471]
	I0327 16:46:54.314330    8800 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0327 16:46:54.325861    8800 logs.go:276] 2 containers: [a1738554adca b5329cb28332]
	I0327 16:46:54.325930    8800 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0327 16:46:54.342445    8800 logs.go:276] 1 containers: [5aa5f8fb90cc]
	I0327 16:46:54.342510    8800 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0327 16:46:54.353408    8800 logs.go:276] 1 containers: [4195d96c1f8a]
	I0327 16:46:54.353496    8800 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0327 16:46:54.364202    8800 logs.go:276] 1 containers: [faf151c9cff5]
	I0327 16:46:54.364274    8800 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0327 16:46:54.374788    8800 logs.go:276] 0 containers: []
	W0327 16:46:54.374799    8800 logs.go:278] No container was found matching "kindnet"
	I0327 16:46:54.374859    8800 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0327 16:46:54.385190    8800 logs.go:276] 1 containers: [2c4ccf3e69ae]
	I0327 16:46:54.385203    8800 logs.go:123] Gathering logs for coredns [b5329cb28332] ...
	I0327 16:46:54.385209    8800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b5329cb28332"
	I0327 16:46:54.397135    8800 logs.go:123] Gathering logs for kube-controller-manager [faf151c9cff5] ...
	I0327 16:46:54.397148    8800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 faf151c9cff5"
	I0327 16:46:54.414700    8800 logs.go:123] Gathering logs for Docker ...
	I0327 16:46:54.414711    8800 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0327 16:46:54.439166    8800 logs.go:123] Gathering logs for container status ...
	I0327 16:46:54.439174    8800 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0327 16:46:54.451212    8800 logs.go:123] Gathering logs for kube-apiserver [a8b68fa373a2] ...
	I0327 16:46:54.451221    8800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a8b68fa373a2"
	I0327 16:46:54.465675    8800 logs.go:123] Gathering logs for etcd [b2b36dcbf471] ...
	I0327 16:46:54.465686    8800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b2b36dcbf471"
	I0327 16:46:54.479603    8800 logs.go:123] Gathering logs for coredns [a1738554adca] ...
	I0327 16:46:54.479615    8800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a1738554adca"
	I0327 16:46:54.491905    8800 logs.go:123] Gathering logs for kube-scheduler [5aa5f8fb90cc] ...
	I0327 16:46:54.491917    8800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5aa5f8fb90cc"
	I0327 16:46:54.506952    8800 logs.go:123] Gathering logs for kube-proxy [4195d96c1f8a] ...
	I0327 16:46:54.506967    8800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4195d96c1f8a"
	I0327 16:46:54.520621    8800 logs.go:123] Gathering logs for storage-provisioner [2c4ccf3e69ae] ...
	I0327 16:46:54.520631    8800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2c4ccf3e69ae"
	I0327 16:46:54.535120    8800 logs.go:123] Gathering logs for kubelet ...
	I0327 16:46:54.535131    8800 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0327 16:46:54.571106    8800 logs.go:123] Gathering logs for dmesg ...
	I0327 16:46:54.571119    8800 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0327 16:46:54.575665    8800 logs.go:123] Gathering logs for describe nodes ...
	I0327 16:46:54.575674    8800 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0327 16:46:57.114178    8800 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0327 16:47:02.115198    8800 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0327 16:47:02.115481    8800 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0327 16:47:02.145718    8800 logs.go:276] 1 containers: [a8b68fa373a2]
	I0327 16:47:02.145835    8800 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0327 16:47:02.161724    8800 logs.go:276] 1 containers: [b2b36dcbf471]
	I0327 16:47:02.161810    8800 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0327 16:47:02.174535    8800 logs.go:276] 2 containers: [a1738554adca b5329cb28332]
	I0327 16:47:02.174607    8800 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0327 16:47:02.186176    8800 logs.go:276] 1 containers: [5aa5f8fb90cc]
	I0327 16:47:02.186242    8800 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0327 16:47:02.196737    8800 logs.go:276] 1 containers: [4195d96c1f8a]
	I0327 16:47:02.196800    8800 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0327 16:47:02.206966    8800 logs.go:276] 1 containers: [faf151c9cff5]
	I0327 16:47:02.207036    8800 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0327 16:47:02.216967    8800 logs.go:276] 0 containers: []
	W0327 16:47:02.216981    8800 logs.go:278] No container was found matching "kindnet"
	I0327 16:47:02.217047    8800 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0327 16:47:02.228020    8800 logs.go:276] 1 containers: [2c4ccf3e69ae]
	I0327 16:47:02.228036    8800 logs.go:123] Gathering logs for coredns [b5329cb28332] ...
	I0327 16:47:02.228041    8800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b5329cb28332"
	I0327 16:47:02.239602    8800 logs.go:123] Gathering logs for Docker ...
	I0327 16:47:02.239617    8800 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0327 16:47:02.263540    8800 logs.go:123] Gathering logs for container status ...
	I0327 16:47:02.263547    8800 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0327 16:47:02.275010    8800 logs.go:123] Gathering logs for etcd [b2b36dcbf471] ...
	I0327 16:47:02.275022    8800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b2b36dcbf471"
	I0327 16:47:02.288728    8800 logs.go:123] Gathering logs for coredns [a1738554adca] ...
	I0327 16:47:02.288739    8800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a1738554adca"
	I0327 16:47:02.300422    8800 logs.go:123] Gathering logs for describe nodes ...
	I0327 16:47:02.300434    8800 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0327 16:47:02.337378    8800 logs.go:123] Gathering logs for kube-apiserver [a8b68fa373a2] ...
	I0327 16:47:02.337388    8800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a8b68fa373a2"
	I0327 16:47:02.351890    8800 logs.go:123] Gathering logs for kube-scheduler [5aa5f8fb90cc] ...
	I0327 16:47:02.351902    8800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5aa5f8fb90cc"
	I0327 16:47:02.366095    8800 logs.go:123] Gathering logs for kube-proxy [4195d96c1f8a] ...
	I0327 16:47:02.366104    8800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4195d96c1f8a"
	I0327 16:47:02.378108    8800 logs.go:123] Gathering logs for kube-controller-manager [faf151c9cff5] ...
	I0327 16:47:02.378121    8800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 faf151c9cff5"
	I0327 16:47:02.395177    8800 logs.go:123] Gathering logs for storage-provisioner [2c4ccf3e69ae] ...
	I0327 16:47:02.395187    8800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2c4ccf3e69ae"
	I0327 16:47:02.406870    8800 logs.go:123] Gathering logs for kubelet ...
	I0327 16:47:02.406883    8800 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0327 16:47:02.439913    8800 logs.go:123] Gathering logs for dmesg ...
	I0327 16:47:02.439926    8800 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
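The block above is one full iteration of the pattern that repeats for the rest of this run: a `/healthz` probe against the apiserver that times out after about five seconds, followed by a sweep of `docker ps -a --filter=name=k8s_*` lookups and per-container `docker logs --tail 400` collection. A minimal, self-contained Go sketch of that poll-and-collect loop (hypothetical names and timings inferred from the log; not minikube's actual implementation) looks like:

```go
// Poll an apiserver /healthz endpoint; on each failure, collect recent logs
// from every known control-plane container, then retry until a deadline.
package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
	"os/exec"
	"strings"
	"time"
)

// components mirrors the k8s_* container-name filters used in the log.
var components = []string{
	"kube-apiserver", "etcd", "coredns", "kube-scheduler",
	"kube-proxy", "kube-controller-manager", "kindnet", "storage-provisioner",
}

func healthz(url string) error {
	client := &http.Client{
		Timeout: 5 * time.Second, // matches the ~5s Client.Timeout seen above
		Transport: &http.Transport{
			// Self-signed cluster cert; skip verification for the probe only.
			TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
		},
	}
	resp, err := client.Get(url)
	if err != nil {
		return err
	}
	defer resp.Body.Close()
	if resp.StatusCode != http.StatusOK {
		return fmt.Errorf("healthz returned %s", resp.Status)
	}
	return nil
}

// containerIDs returns the IDs of all containers whose name matches k8s_<name>.
func containerIDs(name string) []string {
	out, err := exec.Command("docker", "ps", "-a",
		"--filter", "name=k8s_"+name, "--format", "{{.ID}}").Output()
	if err != nil || len(strings.TrimSpace(string(out))) == 0 {
		return nil
	}
	return strings.Fields(string(out))
}

func main() {
	url := "https://10.0.2.15:8443/healthz"
	for deadline := time.Now().Add(4 * time.Minute); time.Now().Before(deadline); {
		if err := healthz(url); err == nil {
			fmt.Println("apiserver healthy")
			return
		}
		// Best-effort collection: gather the last 400 log lines per container.
		for _, c := range components {
			for _, id := range containerIDs(c) {
				exec.Command("docker", "logs", "--tail", "400", id).Run()
			}
		}
		time.Sleep(2 * time.Second) // brief backoff before the next probe
	}
	fmt.Println("timed out waiting for apiserver /healthz")
}
```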
	I0327 16:47:04.946782    8800 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0327 16:47:09.947410    8800 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0327 16:47:09.947605    8800 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0327 16:47:09.970760    8800 logs.go:276] 1 containers: [a8b68fa373a2]
	I0327 16:47:09.970880    8800 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0327 16:47:09.986958    8800 logs.go:276] 1 containers: [b2b36dcbf471]
	I0327 16:47:09.987042    8800 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0327 16:47:10.000609    8800 logs.go:276] 2 containers: [a1738554adca b5329cb28332]
	I0327 16:47:10.000681    8800 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0327 16:47:10.014171    8800 logs.go:276] 1 containers: [5aa5f8fb90cc]
	I0327 16:47:10.014238    8800 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0327 16:47:10.024368    8800 logs.go:276] 1 containers: [4195d96c1f8a]
	I0327 16:47:10.024429    8800 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0327 16:47:10.035355    8800 logs.go:276] 1 containers: [faf151c9cff5]
	I0327 16:47:10.035421    8800 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0327 16:47:10.045543    8800 logs.go:276] 0 containers: []
	W0327 16:47:10.045553    8800 logs.go:278] No container was found matching "kindnet"
	I0327 16:47:10.045601    8800 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0327 16:47:10.056097    8800 logs.go:276] 1 containers: [2c4ccf3e69ae]
	I0327 16:47:10.056113    8800 logs.go:123] Gathering logs for storage-provisioner [2c4ccf3e69ae] ...
	I0327 16:47:10.056118    8800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2c4ccf3e69ae"
	I0327 16:47:10.067743    8800 logs.go:123] Gathering logs for container status ...
	I0327 16:47:10.067753    8800 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0327 16:47:10.079121    8800 logs.go:123] Gathering logs for kubelet ...
	I0327 16:47:10.079133    8800 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0327 16:47:10.112553    8800 logs.go:123] Gathering logs for describe nodes ...
	I0327 16:47:10.112561    8800 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0327 16:47:10.152231    8800 logs.go:123] Gathering logs for kube-apiserver [a8b68fa373a2] ...
	I0327 16:47:10.152243    8800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a8b68fa373a2"
	I0327 16:47:10.166817    8800 logs.go:123] Gathering logs for etcd [b2b36dcbf471] ...
	I0327 16:47:10.166829    8800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b2b36dcbf471"
	I0327 16:47:10.180394    8800 logs.go:123] Gathering logs for coredns [a1738554adca] ...
	I0327 16:47:10.180404    8800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a1738554adca"
	I0327 16:47:10.192268    8800 logs.go:123] Gathering logs for kube-controller-manager [faf151c9cff5] ...
	I0327 16:47:10.192278    8800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 faf151c9cff5"
	I0327 16:47:10.210194    8800 logs.go:123] Gathering logs for dmesg ...
	I0327 16:47:10.210205    8800 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0327 16:47:10.214797    8800 logs.go:123] Gathering logs for coredns [b5329cb28332] ...
	I0327 16:47:10.214803    8800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b5329cb28332"
	I0327 16:47:10.226211    8800 logs.go:123] Gathering logs for kube-scheduler [5aa5f8fb90cc] ...
	I0327 16:47:10.226222    8800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5aa5f8fb90cc"
	I0327 16:47:10.240529    8800 logs.go:123] Gathering logs for kube-proxy [4195d96c1f8a] ...
	I0327 16:47:10.240542    8800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4195d96c1f8a"
	I0327 16:47:10.251992    8800 logs.go:123] Gathering logs for Docker ...
	I0327 16:47:10.252002    8800 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0327 16:47:12.778068    8800 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0327 16:47:17.780190    8800 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0327 16:47:17.780356    8800 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0327 16:47:17.796369    8800 logs.go:276] 1 containers: [a8b68fa373a2]
	I0327 16:47:17.796462    8800 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0327 16:47:17.809195    8800 logs.go:276] 1 containers: [b2b36dcbf471]
	I0327 16:47:17.809267    8800 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0327 16:47:17.820185    8800 logs.go:276] 2 containers: [a1738554adca b5329cb28332]
	I0327 16:47:17.820261    8800 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0327 16:47:17.830876    8800 logs.go:276] 1 containers: [5aa5f8fb90cc]
	I0327 16:47:17.830941    8800 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0327 16:47:17.841281    8800 logs.go:276] 1 containers: [4195d96c1f8a]
	I0327 16:47:17.841351    8800 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0327 16:47:17.852254    8800 logs.go:276] 1 containers: [faf151c9cff5]
	I0327 16:47:17.852324    8800 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0327 16:47:17.864779    8800 logs.go:276] 0 containers: []
	W0327 16:47:17.864792    8800 logs.go:278] No container was found matching "kindnet"
	I0327 16:47:17.864852    8800 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0327 16:47:17.875637    8800 logs.go:276] 1 containers: [2c4ccf3e69ae]
	I0327 16:47:17.875652    8800 logs.go:123] Gathering logs for kube-scheduler [5aa5f8fb90cc] ...
	I0327 16:47:17.875657    8800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5aa5f8fb90cc"
	I0327 16:47:17.890006    8800 logs.go:123] Gathering logs for kube-proxy [4195d96c1f8a] ...
	I0327 16:47:17.890017    8800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4195d96c1f8a"
	I0327 16:47:17.901560    8800 logs.go:123] Gathering logs for storage-provisioner [2c4ccf3e69ae] ...
	I0327 16:47:17.901571    8800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2c4ccf3e69ae"
	I0327 16:47:17.913464    8800 logs.go:123] Gathering logs for Docker ...
	I0327 16:47:17.913475    8800 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0327 16:47:17.937644    8800 logs.go:123] Gathering logs for container status ...
	I0327 16:47:17.937656    8800 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0327 16:47:17.949387    8800 logs.go:123] Gathering logs for describe nodes ...
	I0327 16:47:17.949397    8800 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0327 16:47:17.983604    8800 logs.go:123] Gathering logs for kube-apiserver [a8b68fa373a2] ...
	I0327 16:47:17.983615    8800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a8b68fa373a2"
	I0327 16:47:17.997970    8800 logs.go:123] Gathering logs for etcd [b2b36dcbf471] ...
	I0327 16:47:17.997980    8800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b2b36dcbf471"
	I0327 16:47:18.013860    8800 logs.go:123] Gathering logs for coredns [b5329cb28332] ...
	I0327 16:47:18.013873    8800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b5329cb28332"
	I0327 16:47:18.024915    8800 logs.go:123] Gathering logs for kube-controller-manager [faf151c9cff5] ...
	I0327 16:47:18.024927    8800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 faf151c9cff5"
	I0327 16:47:18.047410    8800 logs.go:123] Gathering logs for kubelet ...
	I0327 16:47:18.047422    8800 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0327 16:47:18.080346    8800 logs.go:123] Gathering logs for dmesg ...
	I0327 16:47:18.080354    8800 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0327 16:47:18.084838    8800 logs.go:123] Gathering logs for coredns [a1738554adca] ...
	I0327 16:47:18.084844    8800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a1738554adca"
	I0327 16:47:20.598233    8800 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0327 16:47:25.600439    8800 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0327 16:47:25.600574    8800 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0327 16:47:25.613506    8800 logs.go:276] 1 containers: [a8b68fa373a2]
	I0327 16:47:25.613579    8800 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0327 16:47:25.623781    8800 logs.go:276] 1 containers: [b2b36dcbf471]
	I0327 16:47:25.623847    8800 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0327 16:47:25.634207    8800 logs.go:276] 2 containers: [a1738554adca b5329cb28332]
	I0327 16:47:25.634275    8800 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0327 16:47:25.644814    8800 logs.go:276] 1 containers: [5aa5f8fb90cc]
	I0327 16:47:25.644879    8800 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0327 16:47:25.655311    8800 logs.go:276] 1 containers: [4195d96c1f8a]
	I0327 16:47:25.655378    8800 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0327 16:47:25.668449    8800 logs.go:276] 1 containers: [faf151c9cff5]
	I0327 16:47:25.668515    8800 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0327 16:47:25.678659    8800 logs.go:276] 0 containers: []
	W0327 16:47:25.678672    8800 logs.go:278] No container was found matching "kindnet"
	I0327 16:47:25.678729    8800 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0327 16:47:25.688681    8800 logs.go:276] 1 containers: [2c4ccf3e69ae]
	I0327 16:47:25.688697    8800 logs.go:123] Gathering logs for kube-scheduler [5aa5f8fb90cc] ...
	I0327 16:47:25.688703    8800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5aa5f8fb90cc"
	I0327 16:47:25.703273    8800 logs.go:123] Gathering logs for storage-provisioner [2c4ccf3e69ae] ...
	I0327 16:47:25.703285    8800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2c4ccf3e69ae"
	I0327 16:47:25.715331    8800 logs.go:123] Gathering logs for coredns [b5329cb28332] ...
	I0327 16:47:25.715343    8800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b5329cb28332"
	I0327 16:47:25.730140    8800 logs.go:123] Gathering logs for kube-proxy [4195d96c1f8a] ...
	I0327 16:47:25.730152    8800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4195d96c1f8a"
	I0327 16:47:25.743150    8800 logs.go:123] Gathering logs for kubelet ...
	I0327 16:47:25.743165    8800 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0327 16:47:25.778188    8800 logs.go:123] Gathering logs for dmesg ...
	I0327 16:47:25.778196    8800 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0327 16:47:25.782547    8800 logs.go:123] Gathering logs for describe nodes ...
	I0327 16:47:25.782556    8800 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0327 16:47:25.853381    8800 logs.go:123] Gathering logs for kube-apiserver [a8b68fa373a2] ...
	I0327 16:47:25.853392    8800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a8b68fa373a2"
	I0327 16:47:25.867864    8800 logs.go:123] Gathering logs for etcd [b2b36dcbf471] ...
	I0327 16:47:25.867875    8800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b2b36dcbf471"
	I0327 16:47:25.882144    8800 logs.go:123] Gathering logs for coredns [a1738554adca] ...
	I0327 16:47:25.882154    8800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a1738554adca"
	I0327 16:47:25.893531    8800 logs.go:123] Gathering logs for kube-controller-manager [faf151c9cff5] ...
	I0327 16:47:25.893540    8800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 faf151c9cff5"
	I0327 16:47:25.911093    8800 logs.go:123] Gathering logs for Docker ...
	I0327 16:47:25.911102    8800 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0327 16:47:25.937597    8800 logs.go:123] Gathering logs for container status ...
	I0327 16:47:25.937606    8800 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0327 16:47:28.450853    8800 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0327 16:47:33.451193    8800 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0327 16:47:33.451569    8800 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0327 16:47:33.486125    8800 logs.go:276] 1 containers: [a8b68fa373a2]
	I0327 16:47:33.486267    8800 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0327 16:47:33.507698    8800 logs.go:276] 1 containers: [b2b36dcbf471]
	I0327 16:47:33.507814    8800 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0327 16:47:33.522756    8800 logs.go:276] 2 containers: [a1738554adca b5329cb28332]
	I0327 16:47:33.522852    8800 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0327 16:47:33.535131    8800 logs.go:276] 1 containers: [5aa5f8fb90cc]
	I0327 16:47:33.535203    8800 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0327 16:47:33.545641    8800 logs.go:276] 1 containers: [4195d96c1f8a]
	I0327 16:47:33.545709    8800 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0327 16:47:33.556093    8800 logs.go:276] 1 containers: [faf151c9cff5]
	I0327 16:47:33.556159    8800 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0327 16:47:33.566470    8800 logs.go:276] 0 containers: []
	W0327 16:47:33.566481    8800 logs.go:278] No container was found matching "kindnet"
	I0327 16:47:33.566537    8800 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0327 16:47:33.577045    8800 logs.go:276] 1 containers: [2c4ccf3e69ae]
	I0327 16:47:33.577061    8800 logs.go:123] Gathering logs for kube-scheduler [5aa5f8fb90cc] ...
	I0327 16:47:33.577066    8800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5aa5f8fb90cc"
	I0327 16:47:33.591423    8800 logs.go:123] Gathering logs for kube-proxy [4195d96c1f8a] ...
	I0327 16:47:33.591434    8800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4195d96c1f8a"
	I0327 16:47:33.602994    8800 logs.go:123] Gathering logs for kube-controller-manager [faf151c9cff5] ...
	I0327 16:47:33.603005    8800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 faf151c9cff5"
	I0327 16:47:33.620407    8800 logs.go:123] Gathering logs for storage-provisioner [2c4ccf3e69ae] ...
	I0327 16:47:33.620417    8800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2c4ccf3e69ae"
	I0327 16:47:33.636096    8800 logs.go:123] Gathering logs for container status ...
	I0327 16:47:33.636105    8800 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0327 16:47:33.648973    8800 logs.go:123] Gathering logs for dmesg ...
	I0327 16:47:33.648984    8800 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0327 16:47:33.653741    8800 logs.go:123] Gathering logs for describe nodes ...
	I0327 16:47:33.653751    8800 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0327 16:47:33.690725    8800 logs.go:123] Gathering logs for etcd [b2b36dcbf471] ...
	I0327 16:47:33.690735    8800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b2b36dcbf471"
	I0327 16:47:33.704434    8800 logs.go:123] Gathering logs for coredns [a1738554adca] ...
	I0327 16:47:33.704444    8800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a1738554adca"
	I0327 16:47:33.716061    8800 logs.go:123] Gathering logs for coredns [b5329cb28332] ...
	I0327 16:47:33.716072    8800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b5329cb28332"
	I0327 16:47:33.728166    8800 logs.go:123] Gathering logs for Docker ...
	I0327 16:47:33.728177    8800 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0327 16:47:33.751980    8800 logs.go:123] Gathering logs for kubelet ...
	I0327 16:47:33.751993    8800 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0327 16:47:33.785571    8800 logs.go:123] Gathering logs for kube-apiserver [a8b68fa373a2] ...
	I0327 16:47:33.785584    8800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a8b68fa373a2"
	I0327 16:47:36.301481    8800 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0327 16:47:41.303590    8800 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0327 16:47:41.303723    8800 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0327 16:47:41.315141    8800 logs.go:276] 1 containers: [a8b68fa373a2]
	I0327 16:47:41.315220    8800 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0327 16:47:41.326183    8800 logs.go:276] 1 containers: [b2b36dcbf471]
	I0327 16:47:41.326253    8800 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0327 16:47:41.337350    8800 logs.go:276] 2 containers: [a1738554adca b5329cb28332]
	I0327 16:47:41.337417    8800 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0327 16:47:41.347978    8800 logs.go:276] 1 containers: [5aa5f8fb90cc]
	I0327 16:47:41.348044    8800 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0327 16:47:41.358877    8800 logs.go:276] 1 containers: [4195d96c1f8a]
	I0327 16:47:41.358943    8800 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0327 16:47:41.369905    8800 logs.go:276] 1 containers: [faf151c9cff5]
	I0327 16:47:41.369971    8800 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0327 16:47:41.379714    8800 logs.go:276] 0 containers: []
	W0327 16:47:41.379727    8800 logs.go:278] No container was found matching "kindnet"
	I0327 16:47:41.379784    8800 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0327 16:47:41.390768    8800 logs.go:276] 1 containers: [2c4ccf3e69ae]
	I0327 16:47:41.390787    8800 logs.go:123] Gathering logs for describe nodes ...
	I0327 16:47:41.390792    8800 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0327 16:47:41.426296    8800 logs.go:123] Gathering logs for kube-apiserver [a8b68fa373a2] ...
	I0327 16:47:41.426307    8800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a8b68fa373a2"
	I0327 16:47:41.441414    8800 logs.go:123] Gathering logs for coredns [b5329cb28332] ...
	I0327 16:47:41.441427    8800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b5329cb28332"
	I0327 16:47:41.453151    8800 logs.go:123] Gathering logs for kube-scheduler [5aa5f8fb90cc] ...
	I0327 16:47:41.453161    8800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5aa5f8fb90cc"
	I0327 16:47:41.468106    8800 logs.go:123] Gathering logs for kube-proxy [4195d96c1f8a] ...
	I0327 16:47:41.468119    8800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4195d96c1f8a"
	I0327 16:47:41.479602    8800 logs.go:123] Gathering logs for Docker ...
	I0327 16:47:41.479613    8800 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0327 16:47:41.504790    8800 logs.go:123] Gathering logs for container status ...
	I0327 16:47:41.504799    8800 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0327 16:47:41.517846    8800 logs.go:123] Gathering logs for dmesg ...
	I0327 16:47:41.517856    8800 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0327 16:47:41.523061    8800 logs.go:123] Gathering logs for etcd [b2b36dcbf471] ...
	I0327 16:47:41.523067    8800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b2b36dcbf471"
	I0327 16:47:41.537191    8800 logs.go:123] Gathering logs for coredns [a1738554adca] ...
	I0327 16:47:41.537200    8800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a1738554adca"
	I0327 16:47:41.548768    8800 logs.go:123] Gathering logs for kube-controller-manager [faf151c9cff5] ...
	I0327 16:47:41.548777    8800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 faf151c9cff5"
	I0327 16:47:41.566822    8800 logs.go:123] Gathering logs for storage-provisioner [2c4ccf3e69ae] ...
	I0327 16:47:41.566835    8800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2c4ccf3e69ae"
	I0327 16:47:41.578404    8800 logs.go:123] Gathering logs for kubelet ...
	I0327 16:47:41.578414    8800 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0327 16:47:44.113381    8800 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0327 16:47:49.115488    8800 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0327 16:47:49.115673    8800 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0327 16:47:49.129793    8800 logs.go:276] 1 containers: [a8b68fa373a2]
	I0327 16:47:49.129868    8800 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0327 16:47:49.141310    8800 logs.go:276] 1 containers: [b2b36dcbf471]
	I0327 16:47:49.141381    8800 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0327 16:47:49.156625    8800 logs.go:276] 2 containers: [a1738554adca b5329cb28332]
	I0327 16:47:49.156692    8800 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0327 16:47:49.169701    8800 logs.go:276] 1 containers: [5aa5f8fb90cc]
	I0327 16:47:49.169777    8800 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0327 16:47:49.180042    8800 logs.go:276] 1 containers: [4195d96c1f8a]
	I0327 16:47:49.180118    8800 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0327 16:47:49.190573    8800 logs.go:276] 1 containers: [faf151c9cff5]
	I0327 16:47:49.190641    8800 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0327 16:47:49.200490    8800 logs.go:276] 0 containers: []
	W0327 16:47:49.200503    8800 logs.go:278] No container was found matching "kindnet"
	I0327 16:47:49.200566    8800 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0327 16:47:49.211524    8800 logs.go:276] 1 containers: [2c4ccf3e69ae]
	I0327 16:47:49.211544    8800 logs.go:123] Gathering logs for kube-proxy [4195d96c1f8a] ...
	I0327 16:47:49.211549    8800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4195d96c1f8a"
	I0327 16:47:49.223399    8800 logs.go:123] Gathering logs for kube-controller-manager [faf151c9cff5] ...
	I0327 16:47:49.223410    8800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 faf151c9cff5"
	I0327 16:47:49.240988    8800 logs.go:123] Gathering logs for Docker ...
	I0327 16:47:49.241001    8800 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0327 16:47:49.265122    8800 logs.go:123] Gathering logs for kubelet ...
	I0327 16:47:49.265128    8800 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0327 16:47:49.298148    8800 logs.go:123] Gathering logs for dmesg ...
	I0327 16:47:49.298155    8800 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0327 16:47:49.302431    8800 logs.go:123] Gathering logs for kube-apiserver [a8b68fa373a2] ...
	I0327 16:47:49.302436    8800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a8b68fa373a2"
	I0327 16:47:49.317133    8800 logs.go:123] Gathering logs for etcd [b2b36dcbf471] ...
	I0327 16:47:49.317148    8800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b2b36dcbf471"
	I0327 16:47:49.331103    8800 logs.go:123] Gathering logs for coredns [b5329cb28332] ...
	I0327 16:47:49.331113    8800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b5329cb28332"
	I0327 16:47:49.343022    8800 logs.go:123] Gathering logs for container status ...
	I0327 16:47:49.343033    8800 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0327 16:47:49.354698    8800 logs.go:123] Gathering logs for describe nodes ...
	I0327 16:47:49.354708    8800 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0327 16:47:49.391002    8800 logs.go:123] Gathering logs for coredns [a1738554adca] ...
	I0327 16:47:49.391013    8800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a1738554adca"
	I0327 16:47:49.402911    8800 logs.go:123] Gathering logs for kube-scheduler [5aa5f8fb90cc] ...
	I0327 16:47:49.402925    8800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5aa5f8fb90cc"
	I0327 16:47:49.423458    8800 logs.go:123] Gathering logs for storage-provisioner [2c4ccf3e69ae] ...
	I0327 16:47:49.423468    8800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2c4ccf3e69ae"
	I0327 16:47:51.936462    8800 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0327 16:47:56.938340    8800 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0327 16:47:56.938535    8800 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0327 16:47:56.960306    8800 logs.go:276] 1 containers: [a8b68fa373a2]
	I0327 16:47:56.960402    8800 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0327 16:47:56.980245    8800 logs.go:276] 1 containers: [b2b36dcbf471]
	I0327 16:47:56.980325    8800 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0327 16:47:56.992629    8800 logs.go:276] 2 containers: [a1738554adca b5329cb28332]
	I0327 16:47:56.992701    8800 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0327 16:47:57.005532    8800 logs.go:276] 1 containers: [5aa5f8fb90cc]
	I0327 16:47:57.005598    8800 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0327 16:47:57.016183    8800 logs.go:276] 1 containers: [4195d96c1f8a]
	I0327 16:47:57.016260    8800 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0327 16:47:57.026957    8800 logs.go:276] 1 containers: [faf151c9cff5]
	I0327 16:47:57.027023    8800 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0327 16:47:57.037172    8800 logs.go:276] 0 containers: []
	W0327 16:47:57.037181    8800 logs.go:278] No container was found matching "kindnet"
	I0327 16:47:57.037235    8800 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0327 16:47:57.047449    8800 logs.go:276] 1 containers: [2c4ccf3e69ae]
	I0327 16:47:57.047466    8800 logs.go:123] Gathering logs for Docker ...
	I0327 16:47:57.047471    8800 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0327 16:47:57.072607    8800 logs.go:123] Gathering logs for container status ...
	I0327 16:47:57.072622    8800 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0327 16:47:57.084921    8800 logs.go:123] Gathering logs for dmesg ...
	I0327 16:47:57.084936    8800 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0327 16:47:57.089424    8800 logs.go:123] Gathering logs for kube-apiserver [a8b68fa373a2] ...
	I0327 16:47:57.089431    8800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a8b68fa373a2"
	I0327 16:47:57.103335    8800 logs.go:123] Gathering logs for coredns [b5329cb28332] ...
	I0327 16:47:57.103350    8800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b5329cb28332"
	I0327 16:47:57.115374    8800 logs.go:123] Gathering logs for coredns [a1738554adca] ...
	I0327 16:47:57.115385    8800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a1738554adca"
	I0327 16:47:57.129312    8800 logs.go:123] Gathering logs for kube-scheduler [5aa5f8fb90cc] ...
	I0327 16:47:57.129322    8800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5aa5f8fb90cc"
	I0327 16:47:57.151272    8800 logs.go:123] Gathering logs for kube-proxy [4195d96c1f8a] ...
	I0327 16:47:57.151283    8800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4195d96c1f8a"
	I0327 16:47:57.163024    8800 logs.go:123] Gathering logs for kube-controller-manager [faf151c9cff5] ...
	I0327 16:47:57.163035    8800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 faf151c9cff5"
	I0327 16:47:57.180390    8800 logs.go:123] Gathering logs for storage-provisioner [2c4ccf3e69ae] ...
	I0327 16:47:57.180399    8800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2c4ccf3e69ae"
	I0327 16:47:57.191667    8800 logs.go:123] Gathering logs for kubelet ...
	I0327 16:47:57.191677    8800 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0327 16:47:57.224705    8800 logs.go:123] Gathering logs for describe nodes ...
	I0327 16:47:57.224714    8800 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0327 16:47:57.258955    8800 logs.go:123] Gathering logs for etcd [b2b36dcbf471] ...
	I0327 16:47:57.258968    8800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b2b36dcbf471"
	I0327 16:47:59.775087    8800 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0327 16:48:04.777534    8800 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0327 16:48:04.777927    8800 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0327 16:48:04.812541    8800 logs.go:276] 1 containers: [a8b68fa373a2]
	I0327 16:48:04.812687    8800 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0327 16:48:04.833589    8800 logs.go:276] 1 containers: [b2b36dcbf471]
	I0327 16:48:04.833684    8800 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0327 16:48:04.848870    8800 logs.go:276] 4 containers: [d8aeaa1c9b02 a88693a2d6c1 a1738554adca b5329cb28332]
	I0327 16:48:04.848951    8800 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0327 16:48:04.861446    8800 logs.go:276] 1 containers: [5aa5f8fb90cc]
	I0327 16:48:04.861522    8800 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0327 16:48:04.872326    8800 logs.go:276] 1 containers: [4195d96c1f8a]
	I0327 16:48:04.872395    8800 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0327 16:48:04.882783    8800 logs.go:276] 1 containers: [faf151c9cff5]
	I0327 16:48:04.882844    8800 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0327 16:48:04.892936    8800 logs.go:276] 0 containers: []
	W0327 16:48:04.892949    8800 logs.go:278] No container was found matching "kindnet"
	I0327 16:48:04.893010    8800 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0327 16:48:04.906402    8800 logs.go:276] 1 containers: [2c4ccf3e69ae]
	I0327 16:48:04.906420    8800 logs.go:123] Gathering logs for coredns [d8aeaa1c9b02] ...
	I0327 16:48:04.906426    8800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d8aeaa1c9b02"
	I0327 16:48:04.917699    8800 logs.go:123] Gathering logs for kube-controller-manager [faf151c9cff5] ...
	I0327 16:48:04.917711    8800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 faf151c9cff5"
	I0327 16:48:04.935588    8800 logs.go:123] Gathering logs for storage-provisioner [2c4ccf3e69ae] ...
	I0327 16:48:04.935599    8800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2c4ccf3e69ae"
	I0327 16:48:04.949030    8800 logs.go:123] Gathering logs for kubelet ...
	I0327 16:48:04.949040    8800 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0327 16:48:04.983432    8800 logs.go:123] Gathering logs for dmesg ...
	I0327 16:48:04.983439    8800 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0327 16:48:04.987782    8800 logs.go:123] Gathering logs for coredns [b5329cb28332] ...
	I0327 16:48:04.987790    8800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b5329cb28332"
	I0327 16:48:05.005084    8800 logs.go:123] Gathering logs for Docker ...
	I0327 16:48:05.005097    8800 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0327 16:48:05.030716    8800 logs.go:123] Gathering logs for describe nodes ...
	I0327 16:48:05.030726    8800 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0327 16:48:05.066620    8800 logs.go:123] Gathering logs for kube-apiserver [a8b68fa373a2] ...
	I0327 16:48:05.066632    8800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a8b68fa373a2"
	I0327 16:48:05.080666    8800 logs.go:123] Gathering logs for etcd [b2b36dcbf471] ...
	I0327 16:48:05.080676    8800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b2b36dcbf471"
	I0327 16:48:05.095700    8800 logs.go:123] Gathering logs for coredns [a88693a2d6c1] ...
	I0327 16:48:05.095710    8800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a88693a2d6c1"
	I0327 16:48:05.109495    8800 logs.go:123] Gathering logs for coredns [a1738554adca] ...
	I0327 16:48:05.109506    8800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a1738554adca"
	I0327 16:48:05.121498    8800 logs.go:123] Gathering logs for kube-scheduler [5aa5f8fb90cc] ...
	I0327 16:48:05.121508    8800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5aa5f8fb90cc"
	I0327 16:48:05.135477    8800 logs.go:123] Gathering logs for container status ...
	I0327 16:48:05.135488    8800 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0327 16:48:05.147299    8800 logs.go:123] Gathering logs for kube-proxy [4195d96c1f8a] ...
	I0327 16:48:05.147309    8800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4195d96c1f8a"
	I0327 16:48:07.660875    8800 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0327 16:48:12.663343    8800 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0327 16:48:12.663613    8800 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0327 16:48:12.689161    8800 logs.go:276] 1 containers: [a8b68fa373a2]
	I0327 16:48:12.689339    8800 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0327 16:48:12.706119    8800 logs.go:276] 1 containers: [b2b36dcbf471]
	I0327 16:48:12.706197    8800 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0327 16:48:12.720032    8800 logs.go:276] 4 containers: [d8aeaa1c9b02 a88693a2d6c1 a1738554adca b5329cb28332]
	I0327 16:48:12.720102    8800 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0327 16:48:12.731104    8800 logs.go:276] 1 containers: [5aa5f8fb90cc]
	I0327 16:48:12.731166    8800 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0327 16:48:12.741617    8800 logs.go:276] 1 containers: [4195d96c1f8a]
	I0327 16:48:12.741685    8800 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0327 16:48:12.756199    8800 logs.go:276] 1 containers: [faf151c9cff5]
	I0327 16:48:12.756265    8800 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0327 16:48:12.766325    8800 logs.go:276] 0 containers: []
	W0327 16:48:12.766335    8800 logs.go:278] No container was found matching "kindnet"
	I0327 16:48:12.766394    8800 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0327 16:48:12.776350    8800 logs.go:276] 1 containers: [2c4ccf3e69ae]
	I0327 16:48:12.776367    8800 logs.go:123] Gathering logs for etcd [b2b36dcbf471] ...
	I0327 16:48:12.776373    8800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b2b36dcbf471"
	I0327 16:48:12.790590    8800 logs.go:123] Gathering logs for coredns [b5329cb28332] ...
	I0327 16:48:12.790601    8800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b5329cb28332"
	I0327 16:48:12.801891    8800 logs.go:123] Gathering logs for storage-provisioner [2c4ccf3e69ae] ...
	I0327 16:48:12.801902    8800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2c4ccf3e69ae"
	I0327 16:48:12.813508    8800 logs.go:123] Gathering logs for kube-apiserver [a8b68fa373a2] ...
	I0327 16:48:12.813519    8800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a8b68fa373a2"
	I0327 16:48:12.828069    8800 logs.go:123] Gathering logs for kube-proxy [4195d96c1f8a] ...
	I0327 16:48:12.828079    8800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4195d96c1f8a"
	I0327 16:48:12.839407    8800 logs.go:123] Gathering logs for Docker ...
	I0327 16:48:12.839416    8800 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0327 16:48:12.863424    8800 logs.go:123] Gathering logs for container status ...
	I0327 16:48:12.863432    8800 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0327 16:48:12.875457    8800 logs.go:123] Gathering logs for kubelet ...
	I0327 16:48:12.875467    8800 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0327 16:48:12.908945    8800 logs.go:123] Gathering logs for dmesg ...
	I0327 16:48:12.908957    8800 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0327 16:48:12.913709    8800 logs.go:123] Gathering logs for describe nodes ...
	I0327 16:48:12.913716    8800 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0327 16:48:12.949973    8800 logs.go:123] Gathering logs for coredns [a1738554adca] ...
	I0327 16:48:12.949984    8800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a1738554adca"
	I0327 16:48:12.962046    8800 logs.go:123] Gathering logs for kube-controller-manager [faf151c9cff5] ...
	I0327 16:48:12.962058    8800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 faf151c9cff5"
	I0327 16:48:12.982989    8800 logs.go:123] Gathering logs for coredns [d8aeaa1c9b02] ...
	I0327 16:48:12.982999    8800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d8aeaa1c9b02"
	I0327 16:48:12.994791    8800 logs.go:123] Gathering logs for coredns [a88693a2d6c1] ...
	I0327 16:48:12.994805    8800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a88693a2d6c1"
	I0327 16:48:13.013853    8800 logs.go:123] Gathering logs for kube-scheduler [5aa5f8fb90cc] ...
	I0327 16:48:13.013865    8800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5aa5f8fb90cc"
	I0327 16:48:15.535733    8800 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0327 16:48:20.536454    8800 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0327 16:48:20.536654    8800 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0327 16:48:20.552192    8800 logs.go:276] 1 containers: [a8b68fa373a2]
	I0327 16:48:20.552277    8800 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0327 16:48:20.565001    8800 logs.go:276] 1 containers: [b2b36dcbf471]
	I0327 16:48:20.565064    8800 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0327 16:48:20.576705    8800 logs.go:276] 4 containers: [d8aeaa1c9b02 a88693a2d6c1 a1738554adca b5329cb28332]
	I0327 16:48:20.576781    8800 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0327 16:48:20.587114    8800 logs.go:276] 1 containers: [5aa5f8fb90cc]
	I0327 16:48:20.587189    8800 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0327 16:48:20.597982    8800 logs.go:276] 1 containers: [4195d96c1f8a]
	I0327 16:48:20.598056    8800 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0327 16:48:20.608667    8800 logs.go:276] 1 containers: [faf151c9cff5]
	I0327 16:48:20.608732    8800 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0327 16:48:20.619642    8800 logs.go:276] 0 containers: []
	W0327 16:48:20.619655    8800 logs.go:278] No container was found matching "kindnet"
	I0327 16:48:20.619708    8800 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0327 16:48:20.634066    8800 logs.go:276] 1 containers: [2c4ccf3e69ae]
	I0327 16:48:20.634082    8800 logs.go:123] Gathering logs for coredns [b5329cb28332] ...
	I0327 16:48:20.634086    8800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b5329cb28332"
	I0327 16:48:20.645823    8800 logs.go:123] Gathering logs for Docker ...
	I0327 16:48:20.645832    8800 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0327 16:48:20.670799    8800 logs.go:123] Gathering logs for container status ...
	I0327 16:48:20.670806    8800 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0327 16:48:20.682596    8800 logs.go:123] Gathering logs for dmesg ...
	I0327 16:48:20.682607    8800 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0327 16:48:20.686919    8800 logs.go:123] Gathering logs for coredns [a88693a2d6c1] ...
	I0327 16:48:20.686926    8800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a88693a2d6c1"
	I0327 16:48:20.698719    8800 logs.go:123] Gathering logs for kube-controller-manager [faf151c9cff5] ...
	I0327 16:48:20.698729    8800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 faf151c9cff5"
	I0327 16:48:20.715625    8800 logs.go:123] Gathering logs for kube-scheduler [5aa5f8fb90cc] ...
	I0327 16:48:20.715637    8800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5aa5f8fb90cc"
	I0327 16:48:20.730103    8800 logs.go:123] Gathering logs for kube-proxy [4195d96c1f8a] ...
	I0327 16:48:20.730115    8800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4195d96c1f8a"
	I0327 16:48:20.742036    8800 logs.go:123] Gathering logs for describe nodes ...
	I0327 16:48:20.742050    8800 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0327 16:48:20.778293    8800 logs.go:123] Gathering logs for kube-apiserver [a8b68fa373a2] ...
	I0327 16:48:20.778306    8800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a8b68fa373a2"
	I0327 16:48:20.792552    8800 logs.go:123] Gathering logs for coredns [d8aeaa1c9b02] ...
	I0327 16:48:20.792564    8800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d8aeaa1c9b02"
	I0327 16:48:20.804371    8800 logs.go:123] Gathering logs for coredns [a1738554adca] ...
	I0327 16:48:20.804387    8800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a1738554adca"
	I0327 16:48:20.817159    8800 logs.go:123] Gathering logs for storage-provisioner [2c4ccf3e69ae] ...
	I0327 16:48:20.817170    8800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2c4ccf3e69ae"
	I0327 16:48:20.829078    8800 logs.go:123] Gathering logs for kubelet ...
	I0327 16:48:20.829089    8800 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0327 16:48:20.863566    8800 logs.go:123] Gathering logs for etcd [b2b36dcbf471] ...
	I0327 16:48:20.863576    8800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b2b36dcbf471"
	I0327 16:48:23.379949    8800 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0327 16:48:28.382116    8800 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0327 16:48:28.382385    8800 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0327 16:48:28.403747    8800 logs.go:276] 1 containers: [a8b68fa373a2]
	I0327 16:48:28.403855    8800 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0327 16:48:28.419719    8800 logs.go:276] 1 containers: [b2b36dcbf471]
	I0327 16:48:28.419801    8800 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0327 16:48:28.433164    8800 logs.go:276] 4 containers: [d8aeaa1c9b02 a88693a2d6c1 a1738554adca b5329cb28332]
	I0327 16:48:28.433234    8800 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0327 16:48:28.444523    8800 logs.go:276] 1 containers: [5aa5f8fb90cc]
	I0327 16:48:28.444590    8800 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0327 16:48:28.454663    8800 logs.go:276] 1 containers: [4195d96c1f8a]
	I0327 16:48:28.454730    8800 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0327 16:48:28.465631    8800 logs.go:276] 1 containers: [faf151c9cff5]
	I0327 16:48:28.465694    8800 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0327 16:48:28.475496    8800 logs.go:276] 0 containers: []
	W0327 16:48:28.475507    8800 logs.go:278] No container was found matching "kindnet"
	I0327 16:48:28.475571    8800 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0327 16:48:28.486089    8800 logs.go:276] 1 containers: [2c4ccf3e69ae]
	I0327 16:48:28.486109    8800 logs.go:123] Gathering logs for dmesg ...
	I0327 16:48:28.486114    8800 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0327 16:48:28.490708    8800 logs.go:123] Gathering logs for etcd [b2b36dcbf471] ...
	I0327 16:48:28.490718    8800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b2b36dcbf471"
	I0327 16:48:28.504803    8800 logs.go:123] Gathering logs for kube-proxy [4195d96c1f8a] ...
	I0327 16:48:28.504813    8800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4195d96c1f8a"
	I0327 16:48:28.516691    8800 logs.go:123] Gathering logs for kube-apiserver [a8b68fa373a2] ...
	I0327 16:48:28.516703    8800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a8b68fa373a2"
	I0327 16:48:28.531078    8800 logs.go:123] Gathering logs for coredns [a1738554adca] ...
	I0327 16:48:28.531090    8800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a1738554adca"
	I0327 16:48:28.548158    8800 logs.go:123] Gathering logs for coredns [b5329cb28332] ...
	I0327 16:48:28.548170    8800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b5329cb28332"
	I0327 16:48:28.560614    8800 logs.go:123] Gathering logs for kube-controller-manager [faf151c9cff5] ...
	I0327 16:48:28.560624    8800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 faf151c9cff5"
	I0327 16:48:28.578254    8800 logs.go:123] Gathering logs for describe nodes ...
	I0327 16:48:28.578263    8800 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0327 16:48:28.616580    8800 logs.go:123] Gathering logs for coredns [d8aeaa1c9b02] ...
	I0327 16:48:28.616591    8800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d8aeaa1c9b02"
	I0327 16:48:28.628342    8800 logs.go:123] Gathering logs for storage-provisioner [2c4ccf3e69ae] ...
	I0327 16:48:28.628352    8800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2c4ccf3e69ae"
	I0327 16:48:28.639623    8800 logs.go:123] Gathering logs for Docker ...
	I0327 16:48:28.639633    8800 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0327 16:48:28.664037    8800 logs.go:123] Gathering logs for kubelet ...
	I0327 16:48:28.664044    8800 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0327 16:48:28.697462    8800 logs.go:123] Gathering logs for coredns [a88693a2d6c1] ...
	I0327 16:48:28.697471    8800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a88693a2d6c1"
	I0327 16:48:28.712205    8800 logs.go:123] Gathering logs for kube-scheduler [5aa5f8fb90cc] ...
	I0327 16:48:28.712217    8800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5aa5f8fb90cc"
	I0327 16:48:28.726971    8800 logs.go:123] Gathering logs for container status ...
	I0327 16:48:28.726982    8800 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0327 16:48:31.244164    8800 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0327 16:48:36.244365    8800 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0327 16:48:36.244528    8800 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0327 16:48:36.255512    8800 logs.go:276] 1 containers: [a8b68fa373a2]
	I0327 16:48:36.255586    8800 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0327 16:48:36.265695    8800 logs.go:276] 1 containers: [b2b36dcbf471]
	I0327 16:48:36.265758    8800 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0327 16:48:36.276173    8800 logs.go:276] 4 containers: [d8aeaa1c9b02 a88693a2d6c1 a1738554adca b5329cb28332]
	I0327 16:48:36.276247    8800 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0327 16:48:36.286852    8800 logs.go:276] 1 containers: [5aa5f8fb90cc]
	I0327 16:48:36.286925    8800 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0327 16:48:36.297266    8800 logs.go:276] 1 containers: [4195d96c1f8a]
	I0327 16:48:36.297330    8800 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0327 16:48:36.307767    8800 logs.go:276] 1 containers: [faf151c9cff5]
	I0327 16:48:36.307833    8800 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0327 16:48:36.317659    8800 logs.go:276] 0 containers: []
	W0327 16:48:36.317671    8800 logs.go:278] No container was found matching "kindnet"
	I0327 16:48:36.317729    8800 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0327 16:48:36.329049    8800 logs.go:276] 1 containers: [2c4ccf3e69ae]
	I0327 16:48:36.329065    8800 logs.go:123] Gathering logs for Docker ...
	I0327 16:48:36.329069    8800 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0327 16:48:36.354437    8800 logs.go:123] Gathering logs for dmesg ...
	I0327 16:48:36.354446    8800 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0327 16:48:36.358592    8800 logs.go:123] Gathering logs for coredns [a1738554adca] ...
	I0327 16:48:36.358598    8800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a1738554adca"
	I0327 16:48:36.373787    8800 logs.go:123] Gathering logs for kube-apiserver [a8b68fa373a2] ...
	I0327 16:48:36.373798    8800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a8b68fa373a2"
	I0327 16:48:36.388835    8800 logs.go:123] Gathering logs for coredns [d8aeaa1c9b02] ...
	I0327 16:48:36.388845    8800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d8aeaa1c9b02"
	I0327 16:48:36.404512    8800 logs.go:123] Gathering logs for kube-proxy [4195d96c1f8a] ...
	I0327 16:48:36.404523    8800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4195d96c1f8a"
	I0327 16:48:36.415938    8800 logs.go:123] Gathering logs for container status ...
	I0327 16:48:36.415947    8800 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0327 16:48:36.427390    8800 logs.go:123] Gathering logs for describe nodes ...
	I0327 16:48:36.427402    8800 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0327 16:48:36.463936    8800 logs.go:123] Gathering logs for etcd [b2b36dcbf471] ...
	I0327 16:48:36.463948    8800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b2b36dcbf471"
	I0327 16:48:36.478118    8800 logs.go:123] Gathering logs for coredns [a88693a2d6c1] ...
	I0327 16:48:36.478129    8800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a88693a2d6c1"
	I0327 16:48:36.490107    8800 logs.go:123] Gathering logs for coredns [b5329cb28332] ...
	I0327 16:48:36.490118    8800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b5329cb28332"
	I0327 16:48:36.503088    8800 logs.go:123] Gathering logs for kube-scheduler [5aa5f8fb90cc] ...
	I0327 16:48:36.503098    8800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5aa5f8fb90cc"
	I0327 16:48:36.516915    8800 logs.go:123] Gathering logs for kube-controller-manager [faf151c9cff5] ...
	I0327 16:48:36.516929    8800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 faf151c9cff5"
	I0327 16:48:36.534060    8800 logs.go:123] Gathering logs for storage-provisioner [2c4ccf3e69ae] ...
	I0327 16:48:36.534070    8800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2c4ccf3e69ae"
	I0327 16:48:36.549122    8800 logs.go:123] Gathering logs for kubelet ...
	I0327 16:48:36.549133    8800 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0327 16:48:39.085935    8800 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0327 16:48:44.088066    8800 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0327 16:48:44.088295    8800 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0327 16:48:44.111659    8800 logs.go:276] 1 containers: [a8b68fa373a2]
	I0327 16:48:44.111774    8800 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0327 16:48:44.127332    8800 logs.go:276] 1 containers: [b2b36dcbf471]
	I0327 16:48:44.127412    8800 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0327 16:48:44.144177    8800 logs.go:276] 4 containers: [d8aeaa1c9b02 a88693a2d6c1 a1738554adca b5329cb28332]
	I0327 16:48:44.144247    8800 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0327 16:48:44.154979    8800 logs.go:276] 1 containers: [5aa5f8fb90cc]
	I0327 16:48:44.155045    8800 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0327 16:48:44.165188    8800 logs.go:276] 1 containers: [4195d96c1f8a]
	I0327 16:48:44.165248    8800 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0327 16:48:44.175923    8800 logs.go:276] 1 containers: [faf151c9cff5]
	I0327 16:48:44.176000    8800 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0327 16:48:44.185854    8800 logs.go:276] 0 containers: []
	W0327 16:48:44.185868    8800 logs.go:278] No container was found matching "kindnet"
	I0327 16:48:44.185926    8800 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0327 16:48:44.195918    8800 logs.go:276] 1 containers: [2c4ccf3e69ae]
	I0327 16:48:44.195936    8800 logs.go:123] Gathering logs for etcd [b2b36dcbf471] ...
	I0327 16:48:44.195941    8800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b2b36dcbf471"
	I0327 16:48:44.209778    8800 logs.go:123] Gathering logs for coredns [d8aeaa1c9b02] ...
	I0327 16:48:44.209790    8800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d8aeaa1c9b02"
	I0327 16:48:44.234104    8800 logs.go:123] Gathering logs for coredns [a88693a2d6c1] ...
	I0327 16:48:44.234117    8800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a88693a2d6c1"
	I0327 16:48:44.265293    8800 logs.go:123] Gathering logs for kube-scheduler [5aa5f8fb90cc] ...
	I0327 16:48:44.265307    8800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5aa5f8fb90cc"
	I0327 16:48:44.279362    8800 logs.go:123] Gathering logs for kubelet ...
	I0327 16:48:44.279371    8800 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0327 16:48:44.311963    8800 logs.go:123] Gathering logs for storage-provisioner [2c4ccf3e69ae] ...
	I0327 16:48:44.311971    8800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2c4ccf3e69ae"
	I0327 16:48:44.324384    8800 logs.go:123] Gathering logs for Docker ...
	I0327 16:48:44.324394    8800 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0327 16:48:44.349681    8800 logs.go:123] Gathering logs for container status ...
	I0327 16:48:44.349688    8800 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0327 16:48:44.361292    8800 logs.go:123] Gathering logs for dmesg ...
	I0327 16:48:44.361302    8800 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0327 16:48:44.366055    8800 logs.go:123] Gathering logs for describe nodes ...
	I0327 16:48:44.366061    8800 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0327 16:48:44.401917    8800 logs.go:123] Gathering logs for coredns [a1738554adca] ...
	I0327 16:48:44.401928    8800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a1738554adca"
	I0327 16:48:44.414061    8800 logs.go:123] Gathering logs for kube-controller-manager [faf151c9cff5] ...
	I0327 16:48:44.414071    8800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 faf151c9cff5"
	I0327 16:48:44.431301    8800 logs.go:123] Gathering logs for kube-apiserver [a8b68fa373a2] ...
	I0327 16:48:44.431311    8800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a8b68fa373a2"
	I0327 16:48:44.445351    8800 logs.go:123] Gathering logs for coredns [b5329cb28332] ...
	I0327 16:48:44.445362    8800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b5329cb28332"
	I0327 16:48:44.461502    8800 logs.go:123] Gathering logs for kube-proxy [4195d96c1f8a] ...
	I0327 16:48:44.461513    8800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4195d96c1f8a"
	I0327 16:48:46.975416    8800 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0327 16:48:51.978009    8800 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0327 16:48:51.978416    8800 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0327 16:48:52.017373    8800 logs.go:276] 1 containers: [a8b68fa373a2]
	I0327 16:48:52.017510    8800 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0327 16:48:52.038741    8800 logs.go:276] 1 containers: [b2b36dcbf471]
	I0327 16:48:52.038841    8800 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0327 16:48:52.054039    8800 logs.go:276] 4 containers: [d8aeaa1c9b02 a88693a2d6c1 a1738554adca b5329cb28332]
	I0327 16:48:52.054128    8800 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0327 16:48:52.066402    8800 logs.go:276] 1 containers: [5aa5f8fb90cc]
	I0327 16:48:52.066469    8800 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0327 16:48:52.077349    8800 logs.go:276] 1 containers: [4195d96c1f8a]
	I0327 16:48:52.077423    8800 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0327 16:48:52.088238    8800 logs.go:276] 1 containers: [faf151c9cff5]
	I0327 16:48:52.088306    8800 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0327 16:48:52.099222    8800 logs.go:276] 0 containers: []
	W0327 16:48:52.099233    8800 logs.go:278] No container was found matching "kindnet"
	I0327 16:48:52.099289    8800 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0327 16:48:52.110214    8800 logs.go:276] 1 containers: [2c4ccf3e69ae]
	I0327 16:48:52.110231    8800 logs.go:123] Gathering logs for coredns [d8aeaa1c9b02] ...
	I0327 16:48:52.110236    8800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d8aeaa1c9b02"
	I0327 16:48:52.122134    8800 logs.go:123] Gathering logs for coredns [a88693a2d6c1] ...
	I0327 16:48:52.122147    8800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a88693a2d6c1"
	I0327 16:48:52.138050    8800 logs.go:123] Gathering logs for kube-proxy [4195d96c1f8a] ...
	I0327 16:48:52.138062    8800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4195d96c1f8a"
	I0327 16:48:52.150064    8800 logs.go:123] Gathering logs for Docker ...
	I0327 16:48:52.150075    8800 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0327 16:48:52.174328    8800 logs.go:123] Gathering logs for container status ...
	I0327 16:48:52.174337    8800 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0327 16:48:52.185631    8800 logs.go:123] Gathering logs for dmesg ...
	I0327 16:48:52.185641    8800 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0327 16:48:52.189995    8800 logs.go:123] Gathering logs for describe nodes ...
	I0327 16:48:52.190001    8800 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0327 16:48:52.227072    8800 logs.go:123] Gathering logs for kube-apiserver [a8b68fa373a2] ...
	I0327 16:48:52.227085    8800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a8b68fa373a2"
	I0327 16:48:52.241772    8800 logs.go:123] Gathering logs for coredns [a1738554adca] ...
	I0327 16:48:52.241785    8800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a1738554adca"
	I0327 16:48:52.254746    8800 logs.go:123] Gathering logs for storage-provisioner [2c4ccf3e69ae] ...
	I0327 16:48:52.254758    8800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2c4ccf3e69ae"
	I0327 16:48:52.271242    8800 logs.go:123] Gathering logs for coredns [b5329cb28332] ...
	I0327 16:48:52.271252    8800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b5329cb28332"
	I0327 16:48:52.283165    8800 logs.go:123] Gathering logs for kube-scheduler [5aa5f8fb90cc] ...
	I0327 16:48:52.283178    8800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5aa5f8fb90cc"
	I0327 16:48:52.305499    8800 logs.go:123] Gathering logs for kubelet ...
	I0327 16:48:52.305510    8800 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0327 16:48:52.339478    8800 logs.go:123] Gathering logs for etcd [b2b36dcbf471] ...
	I0327 16:48:52.339490    8800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b2b36dcbf471"
	I0327 16:48:52.354092    8800 logs.go:123] Gathering logs for kube-controller-manager [faf151c9cff5] ...
	I0327 16:48:52.354104    8800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 faf151c9cff5"
	I0327 16:48:54.874182    8800 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0327 16:48:59.876352    8800 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0327 16:48:59.876474    8800 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0327 16:48:59.887623    8800 logs.go:276] 1 containers: [a8b68fa373a2]
	I0327 16:48:59.887729    8800 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0327 16:48:59.899462    8800 logs.go:276] 1 containers: [b2b36dcbf471]
	I0327 16:48:59.899531    8800 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0327 16:48:59.911024    8800 logs.go:276] 4 containers: [d8aeaa1c9b02 a88693a2d6c1 a1738554adca b5329cb28332]
	I0327 16:48:59.911104    8800 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0327 16:48:59.922848    8800 logs.go:276] 1 containers: [5aa5f8fb90cc]
	I0327 16:48:59.922921    8800 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0327 16:48:59.935747    8800 logs.go:276] 1 containers: [4195d96c1f8a]
	I0327 16:48:59.935818    8800 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0327 16:48:59.947475    8800 logs.go:276] 1 containers: [faf151c9cff5]
	I0327 16:48:59.947549    8800 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0327 16:48:59.958186    8800 logs.go:276] 0 containers: []
	W0327 16:48:59.958197    8800 logs.go:278] No container was found matching "kindnet"
	I0327 16:48:59.958257    8800 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0327 16:48:59.969427    8800 logs.go:276] 1 containers: [2c4ccf3e69ae]
	I0327 16:48:59.969443    8800 logs.go:123] Gathering logs for describe nodes ...
	I0327 16:48:59.969448    8800 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0327 16:49:00.007690    8800 logs.go:123] Gathering logs for kube-apiserver [a8b68fa373a2] ...
	I0327 16:49:00.007703    8800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a8b68fa373a2"
	I0327 16:49:00.023118    8800 logs.go:123] Gathering logs for coredns [b5329cb28332] ...
	I0327 16:49:00.023131    8800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b5329cb28332"
	I0327 16:49:00.036582    8800 logs.go:123] Gathering logs for kube-scheduler [5aa5f8fb90cc] ...
	I0327 16:49:00.036592    8800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5aa5f8fb90cc"
	I0327 16:49:00.052259    8800 logs.go:123] Gathering logs for Docker ...
	I0327 16:49:00.052270    8800 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0327 16:49:00.079073    8800 logs.go:123] Gathering logs for coredns [d8aeaa1c9b02] ...
	I0327 16:49:00.079086    8800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d8aeaa1c9b02"
	I0327 16:49:00.091170    8800 logs.go:123] Gathering logs for container status ...
	I0327 16:49:00.091185    8800 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0327 16:49:00.105764    8800 logs.go:123] Gathering logs for dmesg ...
	I0327 16:49:00.105781    8800 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0327 16:49:00.110403    8800 logs.go:123] Gathering logs for etcd [b2b36dcbf471] ...
	I0327 16:49:00.110415    8800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b2b36dcbf471"
	I0327 16:49:00.124958    8800 logs.go:123] Gathering logs for coredns [a1738554adca] ...
	I0327 16:49:00.124971    8800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a1738554adca"
	I0327 16:49:00.138855    8800 logs.go:123] Gathering logs for kube-controller-manager [faf151c9cff5] ...
	I0327 16:49:00.138867    8800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 faf151c9cff5"
	I0327 16:49:00.157792    8800 logs.go:123] Gathering logs for storage-provisioner [2c4ccf3e69ae] ...
	I0327 16:49:00.157806    8800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2c4ccf3e69ae"
	I0327 16:49:00.171031    8800 logs.go:123] Gathering logs for kubelet ...
	I0327 16:49:00.171044    8800 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0327 16:49:00.208581    8800 logs.go:123] Gathering logs for coredns [a88693a2d6c1] ...
	I0327 16:49:00.208603    8800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a88693a2d6c1"
	I0327 16:49:00.221015    8800 logs.go:123] Gathering logs for kube-proxy [4195d96c1f8a] ...
	I0327 16:49:00.221028    8800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4195d96c1f8a"
	I0327 16:49:02.740420    8800 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0327 16:49:07.742527    8800 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0327 16:49:07.742716    8800 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0327 16:49:07.754864    8800 logs.go:276] 1 containers: [a8b68fa373a2]
	I0327 16:49:07.754945    8800 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0327 16:49:07.765513    8800 logs.go:276] 1 containers: [b2b36dcbf471]
	I0327 16:49:07.765584    8800 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0327 16:49:07.776297    8800 logs.go:276] 4 containers: [d8aeaa1c9b02 a88693a2d6c1 a1738554adca b5329cb28332]
	I0327 16:49:07.776363    8800 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0327 16:49:07.787576    8800 logs.go:276] 1 containers: [5aa5f8fb90cc]
	I0327 16:49:07.787649    8800 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0327 16:49:07.798360    8800 logs.go:276] 1 containers: [4195d96c1f8a]
	I0327 16:49:07.798430    8800 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0327 16:49:07.809543    8800 logs.go:276] 1 containers: [faf151c9cff5]
	I0327 16:49:07.809609    8800 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0327 16:49:07.820056    8800 logs.go:276] 0 containers: []
	W0327 16:49:07.820069    8800 logs.go:278] No container was found matching "kindnet"
	I0327 16:49:07.820125    8800 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0327 16:49:07.830796    8800 logs.go:276] 1 containers: [2c4ccf3e69ae]
	I0327 16:49:07.830814    8800 logs.go:123] Gathering logs for dmesg ...
	I0327 16:49:07.830829    8800 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0327 16:49:07.835502    8800 logs.go:123] Gathering logs for coredns [d8aeaa1c9b02] ...
	I0327 16:49:07.835511    8800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d8aeaa1c9b02"
	I0327 16:49:07.847967    8800 logs.go:123] Gathering logs for kube-scheduler [5aa5f8fb90cc] ...
	I0327 16:49:07.847978    8800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5aa5f8fb90cc"
	I0327 16:49:07.862640    8800 logs.go:123] Gathering logs for kube-proxy [4195d96c1f8a] ...
	I0327 16:49:07.862653    8800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4195d96c1f8a"
	I0327 16:49:07.874409    8800 logs.go:123] Gathering logs for storage-provisioner [2c4ccf3e69ae] ...
	I0327 16:49:07.874419    8800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2c4ccf3e69ae"
	I0327 16:49:07.885678    8800 logs.go:123] Gathering logs for Docker ...
	I0327 16:49:07.885688    8800 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0327 16:49:07.909204    8800 logs.go:123] Gathering logs for container status ...
	I0327 16:49:07.909214    8800 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0327 16:49:07.920924    8800 logs.go:123] Gathering logs for etcd [b2b36dcbf471] ...
	I0327 16:49:07.920934    8800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b2b36dcbf471"
	I0327 16:49:07.934702    8800 logs.go:123] Gathering logs for coredns [b5329cb28332] ...
	I0327 16:49:07.934712    8800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b5329cb28332"
	I0327 16:49:07.946585    8800 logs.go:123] Gathering logs for kube-controller-manager [faf151c9cff5] ...
	I0327 16:49:07.946597    8800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 faf151c9cff5"
	I0327 16:49:07.965090    8800 logs.go:123] Gathering logs for kubelet ...
	I0327 16:49:07.965103    8800 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0327 16:49:07.999971    8800 logs.go:123] Gathering logs for describe nodes ...
	I0327 16:49:07.999979    8800 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0327 16:49:08.041681    8800 logs.go:123] Gathering logs for kube-apiserver [a8b68fa373a2] ...
	I0327 16:49:08.041692    8800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a8b68fa373a2"
	I0327 16:49:08.056571    8800 logs.go:123] Gathering logs for coredns [a88693a2d6c1] ...
	I0327 16:49:08.056582    8800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a88693a2d6c1"
	I0327 16:49:08.068720    8800 logs.go:123] Gathering logs for coredns [a1738554adca] ...
	I0327 16:49:08.068731    8800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a1738554adca"
	I0327 16:49:10.583972    8800 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0327 16:49:15.585982    8800 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0327 16:49:15.586127    8800 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0327 16:49:15.596667    8800 logs.go:276] 1 containers: [a8b68fa373a2]
	I0327 16:49:15.596727    8800 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0327 16:49:15.608230    8800 logs.go:276] 1 containers: [b2b36dcbf471]
	I0327 16:49:15.608324    8800 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0327 16:49:15.619339    8800 logs.go:276] 4 containers: [d8aeaa1c9b02 a88693a2d6c1 a1738554adca b5329cb28332]
	I0327 16:49:15.619411    8800 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0327 16:49:15.630175    8800 logs.go:276] 1 containers: [5aa5f8fb90cc]
	I0327 16:49:15.630256    8800 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0327 16:49:15.640859    8800 logs.go:276] 1 containers: [4195d96c1f8a]
	I0327 16:49:15.640928    8800 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0327 16:49:15.651496    8800 logs.go:276] 1 containers: [faf151c9cff5]
	I0327 16:49:15.651563    8800 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0327 16:49:15.667053    8800 logs.go:276] 0 containers: []
	W0327 16:49:15.667073    8800 logs.go:278] No container was found matching "kindnet"
	I0327 16:49:15.667127    8800 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0327 16:49:15.677869    8800 logs.go:276] 1 containers: [2c4ccf3e69ae]
	I0327 16:49:15.677885    8800 logs.go:123] Gathering logs for kube-scheduler [5aa5f8fb90cc] ...
	I0327 16:49:15.677889    8800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5aa5f8fb90cc"
	I0327 16:49:15.692169    8800 logs.go:123] Gathering logs for kube-controller-manager [faf151c9cff5] ...
	I0327 16:49:15.692180    8800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 faf151c9cff5"
	I0327 16:49:15.709690    8800 logs.go:123] Gathering logs for storage-provisioner [2c4ccf3e69ae] ...
	I0327 16:49:15.709705    8800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2c4ccf3e69ae"
	I0327 16:49:15.721273    8800 logs.go:123] Gathering logs for container status ...
	I0327 16:49:15.721287    8800 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0327 16:49:15.732791    8800 logs.go:123] Gathering logs for dmesg ...
	I0327 16:49:15.732803    8800 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0327 16:49:15.737334    8800 logs.go:123] Gathering logs for describe nodes ...
	I0327 16:49:15.737343    8800 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0327 16:49:15.775921    8800 logs.go:123] Gathering logs for coredns [b5329cb28332] ...
	I0327 16:49:15.775930    8800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b5329cb28332"
	I0327 16:49:15.787267    8800 logs.go:123] Gathering logs for Docker ...
	I0327 16:49:15.787278    8800 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0327 16:49:15.811940    8800 logs.go:123] Gathering logs for kubelet ...
	I0327 16:49:15.811947    8800 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0327 16:49:15.847147    8800 logs.go:123] Gathering logs for coredns [a1738554adca] ...
	I0327 16:49:15.847160    8800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a1738554adca"
	I0327 16:49:15.859348    8800 logs.go:123] Gathering logs for kube-proxy [4195d96c1f8a] ...
	I0327 16:49:15.859360    8800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4195d96c1f8a"
	I0327 16:49:15.877601    8800 logs.go:123] Gathering logs for kube-apiserver [a8b68fa373a2] ...
	I0327 16:49:15.877612    8800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a8b68fa373a2"
	I0327 16:49:15.896309    8800 logs.go:123] Gathering logs for etcd [b2b36dcbf471] ...
	I0327 16:49:15.896319    8800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b2b36dcbf471"
	I0327 16:49:15.910327    8800 logs.go:123] Gathering logs for coredns [d8aeaa1c9b02] ...
	I0327 16:49:15.910336    8800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d8aeaa1c9b02"
	I0327 16:49:15.922035    8800 logs.go:123] Gathering logs for coredns [a88693a2d6c1] ...
	I0327 16:49:15.922044    8800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a88693a2d6c1"
	I0327 16:49:18.435012    8800 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0327 16:49:23.435242    8800 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0327 16:49:23.435438    8800 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0327 16:49:23.465132    8800 logs.go:276] 1 containers: [a8b68fa373a2]
	I0327 16:49:23.465215    8800 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0327 16:49:23.483210    8800 logs.go:276] 1 containers: [b2b36dcbf471]
	I0327 16:49:23.483314    8800 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0327 16:49:23.498849    8800 logs.go:276] 4 containers: [d8aeaa1c9b02 a88693a2d6c1 a1738554adca b5329cb28332]
	I0327 16:49:23.498924    8800 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0327 16:49:23.510354    8800 logs.go:276] 1 containers: [5aa5f8fb90cc]
	I0327 16:49:23.510429    8800 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0327 16:49:23.521069    8800 logs.go:276] 1 containers: [4195d96c1f8a]
	I0327 16:49:23.521139    8800 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0327 16:49:23.531400    8800 logs.go:276] 1 containers: [faf151c9cff5]
	I0327 16:49:23.531476    8800 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0327 16:49:23.541899    8800 logs.go:276] 0 containers: []
	W0327 16:49:23.541912    8800 logs.go:278] No container was found matching "kindnet"
	I0327 16:49:23.541974    8800 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0327 16:49:23.552475    8800 logs.go:276] 1 containers: [2c4ccf3e69ae]
	I0327 16:49:23.552494    8800 logs.go:123] Gathering logs for coredns [a1738554adca] ...
	I0327 16:49:23.552499    8800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a1738554adca"
	I0327 16:49:23.564529    8800 logs.go:123] Gathering logs for kube-scheduler [5aa5f8fb90cc] ...
	I0327 16:49:23.564539    8800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5aa5f8fb90cc"
	I0327 16:49:23.579119    8800 logs.go:123] Gathering logs for kube-proxy [4195d96c1f8a] ...
	I0327 16:49:23.579129    8800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4195d96c1f8a"
	I0327 16:49:23.591520    8800 logs.go:123] Gathering logs for storage-provisioner [2c4ccf3e69ae] ...
	I0327 16:49:23.591531    8800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2c4ccf3e69ae"
	I0327 16:49:23.603072    8800 logs.go:123] Gathering logs for container status ...
	I0327 16:49:23.603083    8800 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0327 16:49:23.614520    8800 logs.go:123] Gathering logs for describe nodes ...
	I0327 16:49:23.614533    8800 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0327 16:49:23.651130    8800 logs.go:123] Gathering logs for coredns [d8aeaa1c9b02] ...
	I0327 16:49:23.651144    8800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d8aeaa1c9b02"
	I0327 16:49:23.663915    8800 logs.go:123] Gathering logs for coredns [a88693a2d6c1] ...
	I0327 16:49:23.663926    8800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a88693a2d6c1"
	I0327 16:49:23.676067    8800 logs.go:123] Gathering logs for Docker ...
	I0327 16:49:23.676078    8800 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0327 16:49:23.700737    8800 logs.go:123] Gathering logs for coredns [b5329cb28332] ...
	I0327 16:49:23.700746    8800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b5329cb28332"
	I0327 16:49:23.712833    8800 logs.go:123] Gathering logs for kubelet ...
	I0327 16:49:23.712842    8800 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0327 16:49:23.745823    8800 logs.go:123] Gathering logs for dmesg ...
	I0327 16:49:23.745834    8800 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0327 16:49:23.749923    8800 logs.go:123] Gathering logs for kube-apiserver [a8b68fa373a2] ...
	I0327 16:49:23.749931    8800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a8b68fa373a2"
	I0327 16:49:23.765544    8800 logs.go:123] Gathering logs for etcd [b2b36dcbf471] ...
	I0327 16:49:23.765558    8800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b2b36dcbf471"
	I0327 16:49:23.779936    8800 logs.go:123] Gathering logs for kube-controller-manager [faf151c9cff5] ...
	I0327 16:49:23.779946    8800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 faf151c9cff5"
	I0327 16:49:26.300121    8800 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0327 16:49:31.302259    8800 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0327 16:49:31.302417    8800 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0327 16:49:31.316521    8800 logs.go:276] 1 containers: [a8b68fa373a2]
	I0327 16:49:31.316601    8800 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0327 16:49:31.327760    8800 logs.go:276] 1 containers: [b2b36dcbf471]
	I0327 16:49:31.327828    8800 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0327 16:49:31.338235    8800 logs.go:276] 4 containers: [d8aeaa1c9b02 a88693a2d6c1 a1738554adca b5329cb28332]
	I0327 16:49:31.338312    8800 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0327 16:49:31.348515    8800 logs.go:276] 1 containers: [5aa5f8fb90cc]
	I0327 16:49:31.348578    8800 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0327 16:49:31.358864    8800 logs.go:276] 1 containers: [4195d96c1f8a]
	I0327 16:49:31.358932    8800 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0327 16:49:31.369253    8800 logs.go:276] 1 containers: [faf151c9cff5]
	I0327 16:49:31.369315    8800 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0327 16:49:31.381215    8800 logs.go:276] 0 containers: []
	W0327 16:49:31.381225    8800 logs.go:278] No container was found matching "kindnet"
	I0327 16:49:31.381283    8800 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0327 16:49:31.391885    8800 logs.go:276] 1 containers: [2c4ccf3e69ae]
	I0327 16:49:31.391901    8800 logs.go:123] Gathering logs for Docker ...
	I0327 16:49:31.391905    8800 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0327 16:49:31.415482    8800 logs.go:123] Gathering logs for kubelet ...
	I0327 16:49:31.415490    8800 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0327 16:49:31.449781    8800 logs.go:123] Gathering logs for dmesg ...
	I0327 16:49:31.449794    8800 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0327 16:49:31.454187    8800 logs.go:123] Gathering logs for etcd [b2b36dcbf471] ...
	I0327 16:49:31.454196    8800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b2b36dcbf471"
	I0327 16:49:31.479781    8800 logs.go:123] Gathering logs for coredns [b5329cb28332] ...
	I0327 16:49:31.479792    8800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b5329cb28332"
	I0327 16:49:31.491799    8800 logs.go:123] Gathering logs for kube-scheduler [5aa5f8fb90cc] ...
	I0327 16:49:31.491813    8800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5aa5f8fb90cc"
	I0327 16:49:31.506689    8800 logs.go:123] Gathering logs for kube-controller-manager [faf151c9cff5] ...
	I0327 16:49:31.506700    8800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 faf151c9cff5"
	I0327 16:49:31.524169    8800 logs.go:123] Gathering logs for container status ...
	I0327 16:49:31.524178    8800 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0327 16:49:31.535907    8800 logs.go:123] Gathering logs for describe nodes ...
	I0327 16:49:31.535917    8800 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0327 16:49:31.571579    8800 logs.go:123] Gathering logs for coredns [d8aeaa1c9b02] ...
	I0327 16:49:31.571591    8800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d8aeaa1c9b02"
	I0327 16:49:31.583994    8800 logs.go:123] Gathering logs for coredns [a1738554adca] ...
	I0327 16:49:31.584007    8800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a1738554adca"
	I0327 16:49:31.595626    8800 logs.go:123] Gathering logs for kube-apiserver [a8b68fa373a2] ...
	I0327 16:49:31.595635    8800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a8b68fa373a2"
	I0327 16:49:31.613240    8800 logs.go:123] Gathering logs for coredns [a88693a2d6c1] ...
	I0327 16:49:31.613249    8800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a88693a2d6c1"
	I0327 16:49:31.624915    8800 logs.go:123] Gathering logs for kube-proxy [4195d96c1f8a] ...
	I0327 16:49:31.624931    8800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4195d96c1f8a"
	I0327 16:49:31.638423    8800 logs.go:123] Gathering logs for storage-provisioner [2c4ccf3e69ae] ...
	I0327 16:49:31.638432    8800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2c4ccf3e69ae"
	I0327 16:49:34.157925    8800 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0327 16:49:39.160102    8800 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0327 16:49:39.160290    8800 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0327 16:49:39.190686    8800 logs.go:276] 1 containers: [a8b68fa373a2]
	I0327 16:49:39.190777    8800 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0327 16:49:39.205652    8800 logs.go:276] 1 containers: [b2b36dcbf471]
	I0327 16:49:39.205735    8800 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0327 16:49:39.217556    8800 logs.go:276] 4 containers: [d8aeaa1c9b02 a88693a2d6c1 a1738554adca b5329cb28332]
	I0327 16:49:39.217629    8800 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0327 16:49:39.227953    8800 logs.go:276] 1 containers: [5aa5f8fb90cc]
	I0327 16:49:39.228019    8800 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0327 16:49:39.238680    8800 logs.go:276] 1 containers: [4195d96c1f8a]
	I0327 16:49:39.238748    8800 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0327 16:49:39.249651    8800 logs.go:276] 1 containers: [faf151c9cff5]
	I0327 16:49:39.249718    8800 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0327 16:49:39.263281    8800 logs.go:276] 0 containers: []
	W0327 16:49:39.263292    8800 logs.go:278] No container was found matching "kindnet"
	I0327 16:49:39.263348    8800 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0327 16:49:39.273654    8800 logs.go:276] 1 containers: [2c4ccf3e69ae]
	I0327 16:49:39.273671    8800 logs.go:123] Gathering logs for storage-provisioner [2c4ccf3e69ae] ...
	I0327 16:49:39.273675    8800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2c4ccf3e69ae"
	I0327 16:49:39.285000    8800 logs.go:123] Gathering logs for kube-apiserver [a8b68fa373a2] ...
	I0327 16:49:39.285010    8800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a8b68fa373a2"
	I0327 16:49:39.299826    8800 logs.go:123] Gathering logs for coredns [d8aeaa1c9b02] ...
	I0327 16:49:39.299835    8800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d8aeaa1c9b02"
	I0327 16:49:39.311284    8800 logs.go:123] Gathering logs for kube-proxy [4195d96c1f8a] ...
	I0327 16:49:39.311296    8800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4195d96c1f8a"
	I0327 16:49:39.323897    8800 logs.go:123] Gathering logs for kube-controller-manager [faf151c9cff5] ...
	I0327 16:49:39.323907    8800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 faf151c9cff5"
	I0327 16:49:39.341379    8800 logs.go:123] Gathering logs for dmesg ...
	I0327 16:49:39.341389    8800 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0327 16:49:39.346009    8800 logs.go:123] Gathering logs for coredns [b5329cb28332] ...
	I0327 16:49:39.346015    8800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b5329cb28332"
	I0327 16:49:39.357532    8800 logs.go:123] Gathering logs for Docker ...
	I0327 16:49:39.357543    8800 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0327 16:49:39.379898    8800 logs.go:123] Gathering logs for etcd [b2b36dcbf471] ...
	I0327 16:49:39.379905    8800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b2b36dcbf471"
	I0327 16:49:39.394216    8800 logs.go:123] Gathering logs for coredns [a1738554adca] ...
	I0327 16:49:39.394227    8800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a1738554adca"
	I0327 16:49:39.406666    8800 logs.go:123] Gathering logs for container status ...
	I0327 16:49:39.406680    8800 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0327 16:49:39.418866    8800 logs.go:123] Gathering logs for kubelet ...
	I0327 16:49:39.418876    8800 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0327 16:49:39.453825    8800 logs.go:123] Gathering logs for describe nodes ...
	I0327 16:49:39.453833    8800 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0327 16:49:39.487483    8800 logs.go:123] Gathering logs for coredns [a88693a2d6c1] ...
	I0327 16:49:39.487493    8800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a88693a2d6c1"
	I0327 16:49:39.508056    8800 logs.go:123] Gathering logs for kube-scheduler [5aa5f8fb90cc] ...
	I0327 16:49:39.508068    8800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5aa5f8fb90cc"
	I0327 16:49:42.031551    8800 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0327 16:49:47.031898    8800 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0327 16:49:47.036626    8800 out.go:177] 
	W0327 16:49:47.039662    8800 out.go:239] X Exiting due to GUEST_START: failed to start node: wait 6m0s for node: wait for healthy API server: apiserver healthz never reported healthy: context deadline exceeded
	W0327 16:49:47.039675    8800 out.go:239] * 
	W0327 16:49:47.040553    8800 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0327 16:49:47.052527    8800 out.go:177] 

** /stderr **
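The repeated api_server.go:253/269 pairs in the stderr log above show the pattern behind the GUEST_START error: minikube probes the guest apiserver's /healthz endpoint with a short per-request timeout, gathers component logs between probes, and retries until the overall 6m0s node wait expires. The following is a minimal Go sketch of that polling loop, not minikube's actual implementation; the waitForHealthz name, the 5s probe timeout, the 2s back-off, and the hard-coded URL are illustrative values taken from or inferred from the log.

package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

// waitForHealthz polls url until it returns 200 or the overall deadline
// expires. Each probe gets its own short timeout, matching the ~5s gap
// between each "Checking apiserver healthz" line and its "context
// deadline exceeded" follow-up in the log above.
func waitForHealthz(url string, probeTimeout, wait time.Duration) error {
	client := &http.Client{
		Timeout: probeTimeout,
		Transport: &http.Transport{
			// The guest apiserver serves a self-signed cert, so an
			// illustrative probe skips verification here.
			TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
		},
	}
	deadline := time.Now().Add(wait)
	for time.Now().Before(deadline) {
		resp, err := client.Get(url)
		if err == nil {
			body, _ := io.ReadAll(resp.Body)
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				fmt.Printf("healthz reported: %s\n", body)
				return nil
			}
		}
		time.Sleep(2 * time.Second) // back off before the next probe
	}
	return fmt.Errorf("apiserver healthz never reported healthy: context deadline exceeded")
}

func main() {
	// Values taken from the log: 6m0s node wait, ~5s per probe.
	err := waitForHealthz("https://10.0.2.15:8443/healthz", 5*time.Second, 6*time.Minute)
	if err != nil {
		fmt.Println("X Exiting due to GUEST_START:", err)
	}
}

Under QEMU on this host the guest at 10.0.2.15 never answers, so every probe times out and the loop ultimately surfaces the "apiserver healthz never reported healthy" error seen above.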
version_upgrade_test.go:132: upgrade from v1.26.0 to HEAD failed: out/minikube-darwin-arm64 start -p running-upgrade-400000 --memory=2200 --alsologtostderr -v=1 --driver=qemu2 : exit status 80
panic.go:626: *** TestRunningBinaryUpgrade FAILED at 2024-03-27 16:49:47.149199 -0700 PDT m=+1408.660740918
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p running-upgrade-400000 -n running-upgrade-400000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p running-upgrade-400000 -n running-upgrade-400000: exit status 2 (15.615117375s)

-- stdout --
	Running

-- /stdout --
helpers_test.go:239: status error: exit status 2 (may be ok)
helpers_test.go:244: <<< TestRunningBinaryUpgrade FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestRunningBinaryUpgrade]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-darwin-arm64 -p running-upgrade-400000 logs -n 25
helpers_test.go:252: TestRunningBinaryUpgrade logs: 
-- stdout --
	
	==> Audit <==
	|---------|---------------------------------------|---------------------------|---------|----------------|---------------------|---------------------|
	| Command |                 Args                  |          Profile          |  User   |    Version     |     Start Time      |      End Time       |
	|---------|---------------------------------------|---------------------------|---------|----------------|---------------------|---------------------|
	| start   | -p force-systemd-flag-460000          | force-systemd-flag-460000 | jenkins | v1.33.0-beta.0 | 27 Mar 24 16:39 PDT |                     |
	|         | --memory=2048 --force-systemd         |                           |         |                |                     |                     |
	|         | --alsologtostderr -v=5                |                           |         |                |                     |                     |
	|         | --driver=qemu2                        |                           |         |                |                     |                     |
	| ssh     | force-systemd-env-565000              | force-systemd-env-565000  | jenkins | v1.33.0-beta.0 | 27 Mar 24 16:39 PDT |                     |
	|         | ssh docker info --format              |                           |         |                |                     |                     |
	|         | {{.CgroupDriver}}                     |                           |         |                |                     |                     |
	| delete  | -p force-systemd-env-565000           | force-systemd-env-565000  | jenkins | v1.33.0-beta.0 | 27 Mar 24 16:39 PDT | 27 Mar 24 16:39 PDT |
	| start   | -p docker-flags-564000                | docker-flags-564000       | jenkins | v1.33.0-beta.0 | 27 Mar 24 16:39 PDT |                     |
	|         | --cache-images=false                  |                           |         |                |                     |                     |
	|         | --memory=2048                         |                           |         |                |                     |                     |
	|         | --install-addons=false                |                           |         |                |                     |                     |
	|         | --wait=false                          |                           |         |                |                     |                     |
	|         | --docker-env=FOO=BAR                  |                           |         |                |                     |                     |
	|         | --docker-env=BAZ=BAT                  |                           |         |                |                     |                     |
	|         | --docker-opt=debug                    |                           |         |                |                     |                     |
	|         | --docker-opt=icc=true                 |                           |         |                |                     |                     |
	|         | --alsologtostderr -v=5                |                           |         |                |                     |                     |
	|         | --driver=qemu2                        |                           |         |                |                     |                     |
	| ssh     | force-systemd-flag-460000             | force-systemd-flag-460000 | jenkins | v1.33.0-beta.0 | 27 Mar 24 16:39 PDT |                     |
	|         | ssh docker info --format              |                           |         |                |                     |                     |
	|         | {{.CgroupDriver}}                     |                           |         |                |                     |                     |
	| delete  | -p force-systemd-flag-460000          | force-systemd-flag-460000 | jenkins | v1.33.0-beta.0 | 27 Mar 24 16:39 PDT | 27 Mar 24 16:39 PDT |
	| start   | -p cert-expiration-052000             | cert-expiration-052000    | jenkins | v1.33.0-beta.0 | 27 Mar 24 16:39 PDT |                     |
	|         | --memory=2048                         |                           |         |                |                     |                     |
	|         | --cert-expiration=3m                  |                           |         |                |                     |                     |
	|         | --driver=qemu2                        |                           |         |                |                     |                     |
	| ssh     | docker-flags-564000 ssh               | docker-flags-564000       | jenkins | v1.33.0-beta.0 | 27 Mar 24 16:39 PDT |                     |
	|         | sudo systemctl show docker            |                           |         |                |                     |                     |
	|         | --property=Environment                |                           |         |                |                     |                     |
	|         | --no-pager                            |                           |         |                |                     |                     |
	| ssh     | docker-flags-564000 ssh               | docker-flags-564000       | jenkins | v1.33.0-beta.0 | 27 Mar 24 16:39 PDT |                     |
	|         | sudo systemctl show docker            |                           |         |                |                     |                     |
	|         | --property=ExecStart                  |                           |         |                |                     |                     |
	|         | --no-pager                            |                           |         |                |                     |                     |
	| delete  | -p docker-flags-564000                | docker-flags-564000       | jenkins | v1.33.0-beta.0 | 27 Mar 24 16:39 PDT | 27 Mar 24 16:39 PDT |
	| start   | -p cert-options-772000                | cert-options-772000       | jenkins | v1.33.0-beta.0 | 27 Mar 24 16:39 PDT |                     |
	|         | --memory=2048                         |                           |         |                |                     |                     |
	|         | --apiserver-ips=127.0.0.1             |                           |         |                |                     |                     |
	|         | --apiserver-ips=192.168.15.15         |                           |         |                |                     |                     |
	|         | --apiserver-names=localhost           |                           |         |                |                     |                     |
	|         | --apiserver-names=www.google.com      |                           |         |                |                     |                     |
	|         | --apiserver-port=8555                 |                           |         |                |                     |                     |
	|         | --driver=qemu2                        |                           |         |                |                     |                     |
	| ssh     | cert-options-772000 ssh               | cert-options-772000       | jenkins | v1.33.0-beta.0 | 27 Mar 24 16:39 PDT |                     |
	|         | openssl x509 -text -noout -in         |                           |         |                |                     |                     |
	|         | /var/lib/minikube/certs/apiserver.crt |                           |         |                |                     |                     |
	| ssh     | -p cert-options-772000 -- sudo        | cert-options-772000       | jenkins | v1.33.0-beta.0 | 27 Mar 24 16:39 PDT |                     |
	|         | cat /etc/kubernetes/admin.conf        |                           |         |                |                     |                     |
	| delete  | -p cert-options-772000                | cert-options-772000       | jenkins | v1.33.0-beta.0 | 27 Mar 24 16:39 PDT | 27 Mar 24 16:39 PDT |
	| start   | -p running-upgrade-400000             | minikube                  | jenkins | v1.26.0        | 27 Mar 24 16:40 PDT | 27 Mar 24 16:41 PDT |
	|         | --memory=2200                         |                           |         |                |                     |                     |
	|         | --vm-driver=qemu2                     |                           |         |                |                     |                     |
	| start   | -p running-upgrade-400000             | running-upgrade-400000    | jenkins | v1.33.0-beta.0 | 27 Mar 24 16:41 PDT |                     |
	|         | --memory=2200                         |                           |         |                |                     |                     |
	|         | --alsologtostderr -v=1                |                           |         |                |                     |                     |
	|         | --driver=qemu2                        |                           |         |                |                     |                     |
	| start   | -p cert-expiration-052000             | cert-expiration-052000    | jenkins | v1.33.0-beta.0 | 27 Mar 24 16:42 PDT |                     |
	|         | --memory=2048                         |                           |         |                |                     |                     |
	|         | --cert-expiration=8760h               |                           |         |                |                     |                     |
	|         | --driver=qemu2                        |                           |         |                |                     |                     |
	| delete  | -p cert-expiration-052000             | cert-expiration-052000    | jenkins | v1.33.0-beta.0 | 27 Mar 24 16:42 PDT | 27 Mar 24 16:42 PDT |
	| start   | -p kubernetes-upgrade-236000          | kubernetes-upgrade-236000 | jenkins | v1.33.0-beta.0 | 27 Mar 24 16:42 PDT |                     |
	|         | --memory=2200                         |                           |         |                |                     |                     |
	|         | --kubernetes-version=v1.20.0          |                           |         |                |                     |                     |
	|         | --alsologtostderr -v=1                |                           |         |                |                     |                     |
	|         | --driver=qemu2                        |                           |         |                |                     |                     |
	| stop    | -p kubernetes-upgrade-236000          | kubernetes-upgrade-236000 | jenkins | v1.33.0-beta.0 | 27 Mar 24 16:43 PDT | 27 Mar 24 16:43 PDT |
	| start   | -p kubernetes-upgrade-236000          | kubernetes-upgrade-236000 | jenkins | v1.33.0-beta.0 | 27 Mar 24 16:43 PDT |                     |
	|         | --memory=2200                         |                           |         |                |                     |                     |
	|         | --kubernetes-version=v1.30.0-beta.0   |                           |         |                |                     |                     |
	|         | --alsologtostderr -v=1                |                           |         |                |                     |                     |
	|         | --driver=qemu2                        |                           |         |                |                     |                     |
	| delete  | -p kubernetes-upgrade-236000          | kubernetes-upgrade-236000 | jenkins | v1.33.0-beta.0 | 27 Mar 24 16:43 PDT | 27 Mar 24 16:43 PDT |
	| start   | -p stopped-upgrade-017000             | minikube                  | jenkins | v1.26.0        | 27 Mar 24 16:43 PDT | 27 Mar 24 16:44 PDT |
	|         | --memory=2200                         |                           |         |                |                     |                     |
	|         | --vm-driver=qemu2                     |                           |         |                |                     |                     |
	| stop    | stopped-upgrade-017000 stop           | minikube                  | jenkins | v1.26.0        | 27 Mar 24 16:44 PDT | 27 Mar 24 16:44 PDT |
	| start   | -p stopped-upgrade-017000             | stopped-upgrade-017000    | jenkins | v1.33.0-beta.0 | 27 Mar 24 16:44 PDT |                     |
	|         | --memory=2200                         |                           |         |                |                     |                     |
	|         | --alsologtostderr -v=1                |                           |         |                |                     |                     |
	|         | --driver=qemu2                        |                           |         |                |                     |                     |
	|---------|---------------------------------------|---------------------------|---------|----------------|---------------------|---------------------|
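
Duration-valued flags in the table above, such as --cert-expiration=8760h on the cert-expiration-052000 start, are standard Go duration strings (8760h is 365 days). A minimal sketch of how such a value parses, using only the standard library:

    package main

    import (
    	"fmt"
    	"time"
    )

    func main() {
    	// 8760h, as passed to cert-expiration-052000 above, is one year.
    	d, err := time.ParseDuration("8760h")
    	if err != nil {
    		panic(err)
    	}
    	fmt.Println(d, "=", d.Hours()/24, "days") // prints: 8760h0m0s = 365 days
    }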
	
	
	==> Last Start <==
	Log file created at: 2024/03/27 16:44:18
	Running on machine: MacOS-M1-Agent-2
	Binary: Built with gc go1.22.1 for darwin/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0327 16:44:18.451832    8959 out.go:291] Setting OutFile to fd 1 ...
	I0327 16:44:18.452013    8959 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0327 16:44:18.452017    8959 out.go:304] Setting ErrFile to fd 2...
	I0327 16:44:18.452020    8959 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0327 16:44:18.452176    8959 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18485-6511/.minikube/bin
	I0327 16:44:18.453332    8959 out.go:298] Setting JSON to false
	I0327 16:44:18.471952    8959 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":6229,"bootTime":1711576829,"procs":481,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.3.1","kernelVersion":"23.3.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0327 16:44:18.472038    8959 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0327 16:44:18.477052    8959 out.go:177] * [stopped-upgrade-017000] minikube v1.33.0-beta.0 on Darwin 14.3.1 (arm64)
	I0327 16:44:18.485086    8959 out.go:177]   - MINIKUBE_LOCATION=18485
	I0327 16:44:18.489154    8959 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/18485-6511/kubeconfig
	I0327 16:44:18.485132    8959 notify.go:220] Checking for updates...
	I0327 16:44:18.494977    8959 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0327 16:44:18.498085    8959 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0327 16:44:18.499494    8959 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18485-6511/.minikube
	I0327 16:44:18.503010    8959 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0327 16:44:18.506321    8959 config.go:182] Loaded profile config "stopped-upgrade-017000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0327 16:44:18.510052    8959 out.go:177] * Kubernetes 1.29.3 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.29.3
	I0327 16:44:18.513013    8959 driver.go:392] Setting default libvirt URI to qemu:///system
	I0327 16:44:18.517063    8959 out.go:177] * Using the qemu2 driver based on existing profile
	I0327 16:44:18.523976    8959 start.go:297] selected driver: qemu2
	I0327 16:44:18.523983    8959 start.go:901] validating driver "qemu2" against &{Name:stopped-upgrade-017000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.26.0-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:51421 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.1 ClusterName:stopped-upgrade-017000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:0s}
	I0327 16:44:18.524033    8959 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0327 16:44:18.526734    8959 cni.go:84] Creating CNI manager for ""
	I0327 16:44:18.526756    8959 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0327 16:44:18.526788    8959 start.go:340] cluster config:
	{Name:stopped-upgrade-017000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.26.0-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:51421 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.1 ClusterName:stopped-upgrade-017000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:0s}
	I0327 16:44:18.526856    8959 iso.go:125] acquiring lock: {Name:mk26a7f41e004242919e6d5076ec0f9645f820ad Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0327 16:44:18.535016    8959 out.go:177] * Starting "stopped-upgrade-017000" primary control-plane node in "stopped-upgrade-017000" cluster
	I0327 16:44:18.539024    8959 preload.go:132] Checking if preload exists for k8s version v1.24.1 and runtime docker
	I0327 16:44:18.539040    8959 preload.go:147] Found local preload: /Users/jenkins/minikube-integration/18485-6511/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.1-docker-overlay2-arm64.tar.lz4
	I0327 16:44:18.539053    8959 cache.go:56] Caching tarball of preloaded images
	I0327 16:44:18.539107    8959 preload.go:173] Found /Users/jenkins/minikube-integration/18485-6511/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0327 16:44:18.539115    8959 cache.go:59] Finished verifying existence of preloaded tar for v1.24.1 on docker
	I0327 16:44:18.539175    8959 profile.go:142] Saving config to /Users/jenkins/minikube-integration/18485-6511/.minikube/profiles/stopped-upgrade-017000/config.json ...
	I0327 16:44:18.539744    8959 start.go:360] acquireMachinesLock for stopped-upgrade-017000: {Name:mk6b67ca98ff36f470ae98389325f11bd4950dc8 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0327 16:44:18.539775    8959 start.go:364] duration metric: took 23.584µs to acquireMachinesLock for "stopped-upgrade-017000"
	I0327 16:44:18.539787    8959 start.go:96] Skipping create...Using existing machine configuration
	I0327 16:44:18.539793    8959 fix.go:54] fixHost starting: 
	I0327 16:44:18.539913    8959 fix.go:112] recreateIfNeeded on stopped-upgrade-017000: state=Stopped err=<nil>
	W0327 16:44:18.539922    8959 fix.go:138] unexpected machine state, will restart: <nil>
	I0327 16:44:18.548044    8959 out.go:177] * Restarting existing qemu2 VM for "stopped-upgrade-017000" ...
	I0327 16:44:19.554916    8800 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0327 16:44:19.555485    8800 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0327 16:44:19.605203    8800 logs.go:276] 2 containers: [0703d422a652 126b9553cb40]
	I0327 16:44:19.605361    8800 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0327 16:44:19.623959    8800 logs.go:276] 2 containers: [4f67bfb30b49 5429db75cd75]
	I0327 16:44:19.624057    8800 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0327 16:44:19.638182    8800 logs.go:276] 1 containers: [2ef14eacffaa]
	I0327 16:44:19.638259    8800 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0327 16:44:19.650008    8800 logs.go:276] 2 containers: [b68fec8d8767 c0c992e34b5d]
	I0327 16:44:19.650077    8800 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0327 16:44:19.665112    8800 logs.go:276] 1 containers: [2bfb39e97e1d]
	I0327 16:44:19.665183    8800 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0327 16:44:19.676415    8800 logs.go:276] 2 containers: [385805912c59 1a8c10b8da56]
	I0327 16:44:19.676484    8800 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0327 16:44:19.687144    8800 logs.go:276] 0 containers: []
	W0327 16:44:19.687157    8800 logs.go:278] No container was found matching "kindnet"
	I0327 16:44:19.687218    8800 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0327 16:44:19.698038    8800 logs.go:276] 2 containers: [b4f5d0ba182b 579fe35ef076]
	I0327 16:44:19.698058    8800 logs.go:123] Gathering logs for coredns [2ef14eacffaa] ...
	I0327 16:44:19.698063    8800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2ef14eacffaa"
	I0327 16:44:19.709614    8800 logs.go:123] Gathering logs for kube-proxy [2bfb39e97e1d] ...
	I0327 16:44:19.709624    8800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2bfb39e97e1d"
	I0327 16:44:19.721213    8800 logs.go:123] Gathering logs for Docker ...
	I0327 16:44:19.721224    8800 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0327 16:44:19.746646    8800 logs.go:123] Gathering logs for container status ...
	I0327 16:44:19.746655    8800 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0327 16:44:19.760855    8800 logs.go:123] Gathering logs for kubelet ...
	I0327 16:44:19.760871    8800 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0327 16:44:19.797743    8800 logs.go:123] Gathering logs for kube-apiserver [0703d422a652] ...
	I0327 16:44:19.797752    8800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0703d422a652"
	I0327 16:44:19.815058    8800 logs.go:123] Gathering logs for kube-apiserver [126b9553cb40] ...
	I0327 16:44:19.815070    8800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 126b9553cb40"
	I0327 16:44:19.834438    8800 logs.go:123] Gathering logs for kube-controller-manager [1a8c10b8da56] ...
	I0327 16:44:19.834447    8800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1a8c10b8da56"
	I0327 16:44:19.849521    8800 logs.go:123] Gathering logs for etcd [5429db75cd75] ...
	I0327 16:44:19.849535    8800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5429db75cd75"
	I0327 16:44:19.864282    8800 logs.go:123] Gathering logs for kube-scheduler [b68fec8d8767] ...
	I0327 16:44:19.864292    8800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b68fec8d8767"
	I0327 16:44:19.875599    8800 logs.go:123] Gathering logs for kube-scheduler [c0c992e34b5d] ...
	I0327 16:44:19.875608    8800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c0c992e34b5d"
	I0327 16:44:19.890784    8800 logs.go:123] Gathering logs for kube-controller-manager [385805912c59] ...
	I0327 16:44:19.890794    8800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 385805912c59"
	I0327 16:44:19.908362    8800 logs.go:123] Gathering logs for storage-provisioner [b4f5d0ba182b] ...
	I0327 16:44:19.908375    8800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b4f5d0ba182b"
	I0327 16:44:19.920322    8800 logs.go:123] Gathering logs for storage-provisioner [579fe35ef076] ...
	I0327 16:44:19.920336    8800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 579fe35ef076"
	I0327 16:44:19.936285    8800 logs.go:123] Gathering logs for dmesg ...
	I0327 16:44:19.936297    8800 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0327 16:44:19.940954    8800 logs.go:123] Gathering logs for describe nodes ...
	I0327 16:44:19.940962    8800 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0327 16:44:19.984218    8800 logs.go:123] Gathering logs for etcd [4f67bfb30b49] ...
	I0327 16:44:19.984231    8800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4f67bfb30b49"
	I0327 16:44:22.499684    8800 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
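
The 8800 process above is cycling through a probe loop: each healthz GET against https://10.0.2.15:8443/healthz times out, the per-component container IDs are re-listed with docker ps filters, docker logs --tail 400 is gathered for each, and then the probe retries. A minimal sketch of one timeout-bounded probe, assuming a roughly five-second client timeout to match the gaps between "Checking" and "stopped" above (a real client would verify against the cluster CA instead of skipping TLS verification):

    package main

    import (
    	"crypto/tls"
    	"fmt"
    	"net/http"
    	"time"
    )

    func main() {
    	client := &http.Client{
    		Timeout: 5 * time.Second, // approximates the gap between check and "stopped" lines
    		Transport: &http.Transport{
    			// Illustration only; minikube trusts the cluster CA rather than skipping verification.
    			TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
    		},
    	}
    	resp, err := client.Get("https://10.0.2.15:8443/healthz")
    	if err != nil {
    		fmt.Println("stopped:", err) // e.g. Client.Timeout exceeded while awaiting headers
    		return
    	}
    	defer resp.Body.Close()
    	fmt.Println("healthz:", resp.Status)
    }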
	I0327 16:44:18.552079    8959 main.go:141] libmachine: executing: qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/Cellar/qemu/8.2.1/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18485-6511/.minikube/machines/stopped-upgrade-017000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18485-6511/.minikube/machines/stopped-upgrade-017000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18485-6511/.minikube/machines/stopped-upgrade-017000/qemu.pid -nic user,model=virtio,hostfwd=tcp::51386-:22,hostfwd=tcp::51387-:2376,hostname=stopped-upgrade-017000 -daemonize /Users/jenkins/minikube-integration/18485-6511/.minikube/machines/stopped-upgrade-017000/disk.qcow2
	I0327 16:44:18.600996    8959 main.go:141] libmachine: STDOUT: 
	I0327 16:44:18.601025    8959 main.go:141] libmachine: STDERR: 
	I0327 16:44:18.601032    8959 main.go:141] libmachine: Waiting for VM to start (ssh -p 51386 docker@127.0.0.1)...
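
The libmachine lines above restart the VM by shelling out to qemu-system-aarch64 with HVF acceleration and user-mode networking, forwarding host port 51386 to guest SSH (22) and 51387 to the Docker API (2376). A reduced Go sketch of that shell-out; the disk path is a placeholder standing in for the .minikube/machines/stopped-upgrade-017000 files named above:

    package main

    import (
    	"fmt"
    	"os/exec"
    )

    func main() {
    	args := []string{
    		"-M", "virt,highmem=off",
    		"-cpu", "host",
    		"-accel", "hvf", // Apple Hypervisor.framework, as on the M1 agent above
    		"-m", "2200", "-smp", "2",
    		"-nic", "user,model=virtio,hostfwd=tcp::51386-:22,hostfwd=tcp::51387-:2376",
    		"-daemonize",
    		"disk.qcow2", // placeholder for the machine's disk image path
    	}
    	out, err := exec.Command("qemu-system-aarch64", args...).CombinedOutput()
    	fmt.Printf("STDOUT/STDERR: %s err: %v\n", out, err)
    }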
	I0327 16:44:27.502133    8800 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0327 16:44:27.502298    8800 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0327 16:44:27.513964    8800 logs.go:276] 2 containers: [0703d422a652 126b9553cb40]
	I0327 16:44:27.514035    8800 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0327 16:44:27.524620    8800 logs.go:276] 2 containers: [4f67bfb30b49 5429db75cd75]
	I0327 16:44:27.524690    8800 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0327 16:44:27.539755    8800 logs.go:276] 1 containers: [2ef14eacffaa]
	I0327 16:44:27.539820    8800 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0327 16:44:27.550883    8800 logs.go:276] 2 containers: [b68fec8d8767 c0c992e34b5d]
	I0327 16:44:27.550953    8800 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0327 16:44:27.561129    8800 logs.go:276] 1 containers: [2bfb39e97e1d]
	I0327 16:44:27.561199    8800 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0327 16:44:27.572050    8800 logs.go:276] 2 containers: [385805912c59 1a8c10b8da56]
	I0327 16:44:27.572120    8800 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0327 16:44:27.582595    8800 logs.go:276] 0 containers: []
	W0327 16:44:27.582607    8800 logs.go:278] No container was found matching "kindnet"
	I0327 16:44:27.582669    8800 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0327 16:44:27.597091    8800 logs.go:276] 2 containers: [b4f5d0ba182b 579fe35ef076]
	I0327 16:44:27.597113    8800 logs.go:123] Gathering logs for kubelet ...
	I0327 16:44:27.597118    8800 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0327 16:44:27.633064    8800 logs.go:123] Gathering logs for kube-apiserver [126b9553cb40] ...
	I0327 16:44:27.633075    8800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 126b9553cb40"
	I0327 16:44:27.652911    8800 logs.go:123] Gathering logs for kube-scheduler [c0c992e34b5d] ...
	I0327 16:44:27.652924    8800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c0c992e34b5d"
	I0327 16:44:27.668205    8800 logs.go:123] Gathering logs for storage-provisioner [579fe35ef076] ...
	I0327 16:44:27.668213    8800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 579fe35ef076"
	I0327 16:44:27.679896    8800 logs.go:123] Gathering logs for Docker ...
	I0327 16:44:27.679907    8800 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0327 16:44:27.703730    8800 logs.go:123] Gathering logs for dmesg ...
	I0327 16:44:27.703738    8800 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0327 16:44:27.708312    8800 logs.go:123] Gathering logs for kube-controller-manager [385805912c59] ...
	I0327 16:44:27.708321    8800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 385805912c59"
	I0327 16:44:27.733008    8800 logs.go:123] Gathering logs for kube-controller-manager [1a8c10b8da56] ...
	I0327 16:44:27.733018    8800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1a8c10b8da56"
	I0327 16:44:27.747792    8800 logs.go:123] Gathering logs for storage-provisioner [b4f5d0ba182b] ...
	I0327 16:44:27.747808    8800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b4f5d0ba182b"
	I0327 16:44:27.759785    8800 logs.go:123] Gathering logs for kube-apiserver [0703d422a652] ...
	I0327 16:44:27.759795    8800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0703d422a652"
	I0327 16:44:27.774203    8800 logs.go:123] Gathering logs for kube-scheduler [b68fec8d8767] ...
	I0327 16:44:27.774212    8800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b68fec8d8767"
	I0327 16:44:27.785932    8800 logs.go:123] Gathering logs for kube-proxy [2bfb39e97e1d] ...
	I0327 16:44:27.785942    8800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2bfb39e97e1d"
	I0327 16:44:27.798048    8800 logs.go:123] Gathering logs for describe nodes ...
	I0327 16:44:27.798058    8800 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0327 16:44:27.835371    8800 logs.go:123] Gathering logs for etcd [4f67bfb30b49] ...
	I0327 16:44:27.835382    8800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4f67bfb30b49"
	I0327 16:44:27.849634    8800 logs.go:123] Gathering logs for etcd [5429db75cd75] ...
	I0327 16:44:27.849644    8800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5429db75cd75"
	I0327 16:44:27.864627    8800 logs.go:123] Gathering logs for coredns [2ef14eacffaa] ...
	I0327 16:44:27.864637    8800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2ef14eacffaa"
	I0327 16:44:27.878772    8800 logs.go:123] Gathering logs for container status ...
	I0327 16:44:27.878782    8800 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0327 16:44:30.392174    8800 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0327 16:44:35.394360    8800 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0327 16:44:35.394914    8800 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0327 16:44:35.441550    8800 logs.go:276] 2 containers: [0703d422a652 126b9553cb40]
	I0327 16:44:35.441669    8800 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0327 16:44:35.462659    8800 logs.go:276] 2 containers: [4f67bfb30b49 5429db75cd75]
	I0327 16:44:35.462760    8800 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0327 16:44:35.480926    8800 logs.go:276] 1 containers: [2ef14eacffaa]
	I0327 16:44:35.481006    8800 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0327 16:44:35.494116    8800 logs.go:276] 2 containers: [b68fec8d8767 c0c992e34b5d]
	I0327 16:44:35.494195    8800 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0327 16:44:35.504856    8800 logs.go:276] 1 containers: [2bfb39e97e1d]
	I0327 16:44:35.504927    8800 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0327 16:44:35.515769    8800 logs.go:276] 2 containers: [385805912c59 1a8c10b8da56]
	I0327 16:44:35.515835    8800 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0327 16:44:35.526402    8800 logs.go:276] 0 containers: []
	W0327 16:44:35.526419    8800 logs.go:278] No container was found matching "kindnet"
	I0327 16:44:35.526471    8800 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0327 16:44:35.536720    8800 logs.go:276] 2 containers: [b4f5d0ba182b 579fe35ef076]
	I0327 16:44:35.536737    8800 logs.go:123] Gathering logs for kube-controller-manager [1a8c10b8da56] ...
	I0327 16:44:35.536743    8800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1a8c10b8da56"
	I0327 16:44:35.551748    8800 logs.go:123] Gathering logs for kube-scheduler [b68fec8d8767] ...
	I0327 16:44:35.551759    8800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b68fec8d8767"
	I0327 16:44:35.563901    8800 logs.go:123] Gathering logs for kube-proxy [2bfb39e97e1d] ...
	I0327 16:44:35.563913    8800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2bfb39e97e1d"
	I0327 16:44:35.575386    8800 logs.go:123] Gathering logs for kube-apiserver [126b9553cb40] ...
	I0327 16:44:35.575404    8800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 126b9553cb40"
	I0327 16:44:35.594069    8800 logs.go:123] Gathering logs for storage-provisioner [579fe35ef076] ...
	I0327 16:44:35.594078    8800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 579fe35ef076"
	I0327 16:44:35.605125    8800 logs.go:123] Gathering logs for kubelet ...
	I0327 16:44:35.605135    8800 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0327 16:44:35.641673    8800 logs.go:123] Gathering logs for dmesg ...
	I0327 16:44:35.641681    8800 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0327 16:44:35.645758    8800 logs.go:123] Gathering logs for kube-controller-manager [385805912c59] ...
	I0327 16:44:35.645765    8800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 385805912c59"
	I0327 16:44:35.662942    8800 logs.go:123] Gathering logs for Docker ...
	I0327 16:44:35.662953    8800 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0327 16:44:35.686707    8800 logs.go:123] Gathering logs for describe nodes ...
	I0327 16:44:35.686714    8800 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0327 16:44:35.721057    8800 logs.go:123] Gathering logs for etcd [4f67bfb30b49] ...
	I0327 16:44:35.721070    8800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4f67bfb30b49"
	I0327 16:44:35.735416    8800 logs.go:123] Gathering logs for coredns [2ef14eacffaa] ...
	I0327 16:44:35.735426    8800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2ef14eacffaa"
	I0327 16:44:35.746710    8800 logs.go:123] Gathering logs for kube-scheduler [c0c992e34b5d] ...
	I0327 16:44:35.746720    8800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c0c992e34b5d"
	I0327 16:44:35.761850    8800 logs.go:123] Gathering logs for storage-provisioner [b4f5d0ba182b] ...
	I0327 16:44:35.761860    8800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b4f5d0ba182b"
	I0327 16:44:35.773199    8800 logs.go:123] Gathering logs for container status ...
	I0327 16:44:35.773212    8800 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0327 16:44:35.785135    8800 logs.go:123] Gathering logs for kube-apiserver [0703d422a652] ...
	I0327 16:44:35.785145    8800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0703d422a652"
	I0327 16:44:35.799791    8800 logs.go:123] Gathering logs for etcd [5429db75cd75] ...
	I0327 16:44:35.799802    8800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5429db75cd75"
	I0327 16:44:38.833952    8959 profile.go:142] Saving config to /Users/jenkins/minikube-integration/18485-6511/.minikube/profiles/stopped-upgrade-017000/config.json ...
	I0327 16:44:38.834872    8959 machine.go:94] provisionDockerMachine start ...
	I0327 16:44:38.835096    8959 main.go:141] libmachine: Using SSH client type: native
	I0327 16:44:38.835514    8959 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1028a5bf0] 0x1028a8450 <nil>  [] 0s} localhost 51386 <nil> <nil>}
	I0327 16:44:38.835528    8959 main.go:141] libmachine: About to run SSH command:
	hostname
	I0327 16:44:38.919638    8959 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0327 16:44:38.919684    8959 buildroot.go:166] provisioning hostname "stopped-upgrade-017000"
	I0327 16:44:38.919818    8959 main.go:141] libmachine: Using SSH client type: native
	I0327 16:44:38.920051    8959 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1028a5bf0] 0x1028a8450 <nil>  [] 0s} localhost 51386 <nil> <nil>}
	I0327 16:44:38.920062    8959 main.go:141] libmachine: About to run SSH command:
	sudo hostname stopped-upgrade-017000 && echo "stopped-upgrade-017000" | sudo tee /etc/hostname
	I0327 16:44:38.993932    8959 main.go:141] libmachine: SSH cmd err, output: <nil>: stopped-upgrade-017000
	
	I0327 16:44:38.994000    8959 main.go:141] libmachine: Using SSH client type: native
	I0327 16:44:38.994145    8959 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1028a5bf0] 0x1028a8450 <nil>  [] 0s} localhost 51386 <nil> <nil>}
	I0327 16:44:38.994159    8959 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sstopped-upgrade-017000' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 stopped-upgrade-017000/g' /etc/hosts;
				else 
					echo '127.0.1.1 stopped-upgrade-017000' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0327 16:44:39.062354    8959 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0327 16:44:39.062366    8959 buildroot.go:172] set auth options {CertDir:/Users/jenkins/minikube-integration/18485-6511/.minikube CaCertPath:/Users/jenkins/minikube-integration/18485-6511/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/18485-6511/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/18485-6511/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/18485-6511/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/18485-6511/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/18485-6511/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/18485-6511/.minikube}
	I0327 16:44:39.062393    8959 buildroot.go:174] setting up certificates
	I0327 16:44:39.062401    8959 provision.go:84] configureAuth start
	I0327 16:44:39.062410    8959 provision.go:143] copyHostCerts
	I0327 16:44:39.062491    8959 exec_runner.go:144] found /Users/jenkins/minikube-integration/18485-6511/.minikube/cert.pem, removing ...
	I0327 16:44:39.062499    8959 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/18485-6511/.minikube/cert.pem
	I0327 16:44:39.062632    8959 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/18485-6511/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/18485-6511/.minikube/cert.pem (1123 bytes)
	I0327 16:44:39.062871    8959 exec_runner.go:144] found /Users/jenkins/minikube-integration/18485-6511/.minikube/key.pem, removing ...
	I0327 16:44:39.062880    8959 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/18485-6511/.minikube/key.pem
	I0327 16:44:39.062983    8959 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/18485-6511/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/18485-6511/.minikube/key.pem (1675 bytes)
	I0327 16:44:39.063141    8959 exec_runner.go:144] found /Users/jenkins/minikube-integration/18485-6511/.minikube/ca.pem, removing ...
	I0327 16:44:39.063147    8959 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/18485-6511/.minikube/ca.pem
	I0327 16:44:39.063222    8959 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/18485-6511/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/18485-6511/.minikube/ca.pem (1078 bytes)
	I0327 16:44:39.063343    8959 provision.go:117] generating server cert: /Users/jenkins/minikube-integration/18485-6511/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/18485-6511/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/18485-6511/.minikube/certs/ca-key.pem org=jenkins.stopped-upgrade-017000 san=[127.0.0.1 localhost minikube stopped-upgrade-017000]
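
provision.go above issues a server certificate signed by the minikube CA, with SANs 127.0.0.1, localhost, minikube, and stopped-upgrade-017000. A self-contained sketch of CA-signed issuance with crypto/x509; the real flow loads ca.pem and ca-key.pem from the .minikube/certs directory rather than generating the CA inline, and errors are ignored here only for brevity:

    package main

    import (
    	"crypto/rand"
    	"crypto/rsa"
    	"crypto/x509"
    	"crypto/x509/pkix"
    	"encoding/pem"
    	"math/big"
    	"net"
    	"os"
    	"time"
    )

    func main() {
    	// CA key and self-signed CA certificate (stand-ins for ca.pem / ca-key.pem).
    	caKey, _ := rsa.GenerateKey(rand.Reader, 2048)
    	caTmpl := &x509.Certificate{
    		SerialNumber:          big.NewInt(1),
    		Subject:               pkix.Name{CommonName: "minikubeCA"},
    		NotBefore:             time.Now(),
    		NotAfter:              time.Now().Add(26280 * time.Hour), // matches CertExpiration above
    		IsCA:                  true,
    		KeyUsage:              x509.KeyUsageCertSign,
    		BasicConstraintsValid: true,
    	}
    	caDER, _ := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
    	caCert, _ := x509.ParseCertificate(caDER)

    	// Server certificate with the org and SANs from the provision.go line above.
    	srvKey, _ := rsa.GenerateKey(rand.Reader, 2048)
    	srvTmpl := &x509.Certificate{
    		SerialNumber: big.NewInt(2),
    		Subject:      pkix.Name{Organization: []string{"jenkins.stopped-upgrade-017000"}},
    		NotBefore:    time.Now(),
    		NotAfter:     time.Now().Add(26280 * time.Hour),
    		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
    		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
    		IPAddresses:  []net.IP{net.ParseIP("127.0.0.1")},
    		DNSNames:     []string{"localhost", "minikube", "stopped-upgrade-017000"},
    	}
    	srvDER, _ := x509.CreateCertificate(rand.Reader, srvTmpl, caCert, &srvKey.PublicKey, caKey)
    	pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: srvDER})
    }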
	I0327 16:44:39.333840    8959 provision.go:177] copyRemoteCerts
	I0327 16:44:39.333893    8959 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0327 16:44:39.333901    8959 sshutil.go:53] new ssh client: &{IP:localhost Port:51386 SSHKeyPath:/Users/jenkins/minikube-integration/18485-6511/.minikube/machines/stopped-upgrade-017000/id_rsa Username:docker}
	I0327 16:44:39.368601    8959 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18485-6511/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0327 16:44:39.375203    8959 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18485-6511/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0327 16:44:39.381979    8959 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18485-6511/.minikube/machines/server.pem --> /etc/docker/server.pem (1224 bytes)
	I0327 16:44:39.389185    8959 provision.go:87] duration metric: took 326.785292ms to configureAuth
	I0327 16:44:39.389195    8959 buildroot.go:189] setting minikube options for container-runtime
	I0327 16:44:39.389290    8959 config.go:182] Loaded profile config "stopped-upgrade-017000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0327 16:44:39.389324    8959 main.go:141] libmachine: Using SSH client type: native
	I0327 16:44:39.389412    8959 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1028a5bf0] 0x1028a8450 <nil>  [] 0s} localhost 51386 <nil> <nil>}
	I0327 16:44:39.389417    8959 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0327 16:44:39.447062    8959 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0327 16:44:39.447069    8959 buildroot.go:70] root file system type: tmpfs
	I0327 16:44:39.447132    8959 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0327 16:44:39.447172    8959 main.go:141] libmachine: Using SSH client type: native
	I0327 16:44:39.447267    8959 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1028a5bf0] 0x1028a8450 <nil>  [] 0s} localhost 51386 <nil> <nil>}
	I0327 16:44:39.447302    8959 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %!s(MISSING) "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=qemu2 --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0327 16:44:39.510334    8959 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=qemu2 --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0327 16:44:39.510380    8959 main.go:141] libmachine: Using SSH client type: native
	I0327 16:44:39.510478    8959 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1028a5bf0] 0x1028a8450 <nil>  [] 0s} localhost 51386 <nil> <nil>}
	I0327 16:44:39.510486    8959 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0327 16:44:39.862254    8959 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
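
The SSH command above renders the unit to docker.service.new, diffs it against the installed file, and only swaps the new file into place (followed by daemon-reload, enable, and restart) when they differ; here diff fails because no unit existed yet, so the file is installed fresh. The same compare-then-swap idea in a local Go sketch with hypothetical paths (the real flow runs over SSH and lets systemctl do the follow-up):

    package main

    import (
    	"bytes"
    	"fmt"
    	"os"
    )

    // installIfChanged writes content to path only when it differs from what is
    // already installed, mirroring the diff-then-mv pattern above.
    func installIfChanged(path string, content []byte) (bool, error) {
    	old, err := os.ReadFile(path)
    	if err == nil && bytes.Equal(old, content) {
    		return false, nil // identical: no swap, no service restart needed
    	}
    	tmp := path + ".new"
    	if err := os.WriteFile(tmp, content, 0o644); err != nil {
    		return false, err
    	}
    	// rename is atomic on one filesystem, so readers never see a half-written unit
    	return true, os.Rename(tmp, path)
    }

    func main() {
    	changed, err := installIfChanged("/tmp/docker.service", []byte("[Unit]\n"))
    	fmt.Println(changed, err) // a caller would daemon-reload and restart docker when changed
    }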
	
	I0327 16:44:39.862269    8959 machine.go:97] duration metric: took 1.027412291s to provisionDockerMachine
	I0327 16:44:39.862276    8959 start.go:293] postStartSetup for "stopped-upgrade-017000" (driver="qemu2")
	I0327 16:44:39.862283    8959 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0327 16:44:39.862343    8959 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0327 16:44:39.862353    8959 sshutil.go:53] new ssh client: &{IP:localhost Port:51386 SSHKeyPath:/Users/jenkins/minikube-integration/18485-6511/.minikube/machines/stopped-upgrade-017000/id_rsa Username:docker}
	I0327 16:44:39.894697    8959 ssh_runner.go:195] Run: cat /etc/os-release
	I0327 16:44:39.896555    8959 info.go:137] Remote host: Buildroot 2021.02.12
	I0327 16:44:39.896563    8959 filesync.go:126] Scanning /Users/jenkins/minikube-integration/18485-6511/.minikube/addons for local assets ...
	I0327 16:44:39.896637    8959 filesync.go:126] Scanning /Users/jenkins/minikube-integration/18485-6511/.minikube/files for local assets ...
	I0327 16:44:39.896749    8959 filesync.go:149] local asset: /Users/jenkins/minikube-integration/18485-6511/.minikube/files/etc/ssl/certs/69262.pem -> 69262.pem in /etc/ssl/certs
	I0327 16:44:39.896873    8959 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0327 16:44:39.899601    8959 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18485-6511/.minikube/files/etc/ssl/certs/69262.pem --> /etc/ssl/certs/69262.pem (1708 bytes)
	I0327 16:44:39.907524    8959 start.go:296] duration metric: took 45.240625ms for postStartSetup
	I0327 16:44:39.907544    8959 fix.go:56] duration metric: took 21.368390167s for fixHost
	I0327 16:44:39.907613    8959 main.go:141] libmachine: Using SSH client type: native
	I0327 16:44:39.907719    8959 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1028a5bf0] 0x1028a8450 <nil>  [] 0s} localhost 51386 <nil> <nil>}
	I0327 16:44:39.907725    8959 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0327 16:44:39.965053    8959 main.go:141] libmachine: SSH cmd err, output: <nil>: 1711583080.295743379
	
	I0327 16:44:39.965064    8959 fix.go:216] guest clock: 1711583080.295743379
	I0327 16:44:39.965068    8959 fix.go:229] Guest: 2024-03-27 16:44:40.295743379 -0700 PDT Remote: 2024-03-27 16:44:39.907546 -0700 PDT m=+21.490555709 (delta=388.197379ms)
	I0327 16:44:39.965081    8959 fix.go:200] guest clock delta is within tolerance: 388.197379ms
	I0327 16:44:39.965083    8959 start.go:83] releasing machines lock for "stopped-upgrade-017000", held for 21.425942916s
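
fix.go above reads date +%s.%N from the guest, compares it against the host clock, and accepts the 388ms skew as within tolerance. A sketch reproducing the logged delta; the tolerance constant is an assumption, since the log only shows that 388ms passes:

    package main

    import (
    	"fmt"
    	"time"
    )

    func main() {
    	// Values copied from the fix.go lines above.
    	guest := time.Unix(1711583080, 295743379) // guest's date +%s.%N
    	host := time.Date(2024, 3, 27, 16, 44, 39, 907546000, time.FixedZone("PDT", -7*3600))

    	delta := guest.Sub(host)
    	if delta < 0 {
    		delta = -delta
    	}
    	const tolerance = 2 * time.Second // assumed threshold, not shown in the log
    	fmt.Printf("delta=%v withinTolerance=%v\n", delta, delta < tolerance) // delta is ~388ms
    }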
	I0327 16:44:39.965148    8959 ssh_runner.go:195] Run: cat /version.json
	I0327 16:44:39.965158    8959 sshutil.go:53] new ssh client: &{IP:localhost Port:51386 SSHKeyPath:/Users/jenkins/minikube-integration/18485-6511/.minikube/machines/stopped-upgrade-017000/id_rsa Username:docker}
	I0327 16:44:39.965148    8959 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0327 16:44:39.965192    8959 sshutil.go:53] new ssh client: &{IP:localhost Port:51386 SSHKeyPath:/Users/jenkins/minikube-integration/18485-6511/.minikube/machines/stopped-upgrade-017000/id_rsa Username:docker}
	W0327 16:44:39.965746    8959 sshutil.go:64] dial failure (will retry): dial tcp [::1]:51386: connect: connection refused
	I0327 16:44:39.965768    8959 retry.go:31] will retry after 180.184309ms: dial tcp [::1]:51386: connect: connection refused
	W0327 16:44:40.186285    8959 start.go:419] Unable to open version.json: cat /version.json: Process exited with status 1
	stdout:
	
	stderr:
	cat: /version.json: No such file or directory
	I0327 16:44:40.186384    8959 ssh_runner.go:195] Run: systemctl --version
	I0327 16:44:40.189720    8959 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0327 16:44:40.192394    8959 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0327 16:44:40.192450    8959 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *bridge* -not -name *podman* -not -name *.mk_disabled -printf "%!p(MISSING), " -exec sh -c "sudo sed -i -r -e '/"dst": ".*:.*"/d' -e 's|^(.*)"dst": (.*)[,*]$|\1"dst": \2|g' -e '/"subnet": ".*:.*"/d' -e 's|^(.*)"subnet": ".*"(.*)[,*]$|\1"subnet": "10.244.0.0/16"\2|g' {}" ;
	I0327 16:44:40.197009    8959 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *podman* -not -name *.mk_disabled -printf "%!p(MISSING), " -exec sh -c "sudo sed -i -r -e 's|^(.*)"subnet": ".*"(.*)$|\1"subnet": "10.244.0.0/16"\2|g' -e 's|^(.*)"gateway": ".*"(.*)$|\1"gateway": "10.244.0.1"\2|g' {}" ;
	I0327 16:44:40.203606    8959 cni.go:308] configured [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0327 16:44:40.203623    8959 start.go:494] detecting cgroup driver to use...
	I0327 16:44:40.203713    8959 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0327 16:44:40.212835    8959 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.7"|' /etc/containerd/config.toml"
	I0327 16:44:40.216677    8959 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0327 16:44:40.220053    8959 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0327 16:44:40.220081    8959 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0327 16:44:40.223092    8959 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0327 16:44:40.225931    8959 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0327 16:44:40.229062    8959 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0327 16:44:40.232241    8959 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0327 16:44:40.235128    8959 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0327 16:44:40.238023    8959 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0327 16:44:40.241464    8959 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0327 16:44:40.244986    8959 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0327 16:44:40.247719    8959 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0327 16:44:40.250222    8959 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0327 16:44:40.332312    8959 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0327 16:44:40.339420    8959 start.go:494] detecting cgroup driver to use...
	I0327 16:44:40.339498    8959 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0327 16:44:40.344426    8959 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0327 16:44:40.349467    8959 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0327 16:44:40.360574    8959 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0327 16:44:40.365142    8959 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0327 16:44:40.369896    8959 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0327 16:44:40.427776    8959 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0327 16:44:40.432517    8959 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0327 16:44:40.438052    8959 ssh_runner.go:195] Run: which cri-dockerd
	I0327 16:44:40.439356    8959 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0327 16:44:40.442084    8959 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I0327 16:44:40.447408    8959 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0327 16:44:40.517399    8959 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0327 16:44:40.583008    8959 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0327 16:44:40.583069    8959 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
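
docker.go above pushes a 130-byte daemon.json to point the Docker daemon at the cgroupfs cgroup driver. The exact bytes are not shown in this log; a sketch of rendering such a file, using native.cgroupdriver, a standard dockerd exec-opt:

    package main

    import (
    	"encoding/json"
    	"fmt"
    )

    func main() {
    	// Assumed shape: the log only reports the file's size and purpose.
    	cfg := map[string]any{
    		"exec-opts": []string{"native.cgroupdriver=cgroupfs"},
    	}
    	out, _ := json.MarshalIndent(cfg, "", "  ")
    	fmt.Println(string(out)) // candidate /etc/docker/daemon.json contents
    }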
	I0327 16:44:40.588469    8959 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0327 16:44:40.654887    8959 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0327 16:44:41.795098    8959 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1.140220167s)
	I0327 16:44:41.795161    8959 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I0327 16:44:41.799597    8959 ssh_runner.go:195] Run: sudo systemctl stop cri-docker.socket
	I0327 16:44:41.806031    8959 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0327 16:44:41.811023    8959 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0327 16:44:41.880272    8959 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0327 16:44:41.954137    8959 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0327 16:44:42.035292    8959 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0327 16:44:42.040829    8959 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0327 16:44:42.045200    8959 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0327 16:44:42.123882    8959 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I0327 16:44:42.162871    8959 start.go:541] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0327 16:44:42.162957    8959 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0327 16:44:42.166584    8959 start.go:562] Will wait 60s for crictl version
	I0327 16:44:42.166638    8959 ssh_runner.go:195] Run: which crictl
	I0327 16:44:42.167879    8959 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0327 16:44:42.182888    8959 start.go:578] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  20.10.16
	RuntimeApiVersion:  1.41.0
	I0327 16:44:42.182969    8959 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0327 16:44:42.199767    8959 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
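
The start.go lines above wait up to 60s for /var/run/cri-dockerd.sock to appear and then up to 60s more for crictl to answer before querying the Docker server version. A minimal sketch of that bounded wait; the poll interval and error text are assumptions:

    package main

    import (
    	"errors"
    	"fmt"
    	"os"
    	"time"
    )

    // waitForPath polls until path exists or the deadline passes, mirroring the
    // "Will wait 60s for socket path" step above.
    func waitForPath(path string, timeout time.Duration) error {
    	deadline := time.Now().Add(timeout)
    	for time.Now().Before(deadline) {
    		if _, err := os.Stat(path); err == nil {
    			return nil
    		}
    		time.Sleep(500 * time.Millisecond) // assumed interval
    	}
    	return errors.New("timed out waiting for " + path)
    }

    func main() {
    	if err := waitForPath("/var/run/cri-dockerd.sock", 60*time.Second); err != nil {
    		fmt.Println(err)
    	}
    }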
	I0327 16:44:38.321062    8800 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0327 16:44:42.220078    8959 out.go:204] * Preparing Kubernetes v1.24.1 on Docker 20.10.16 ...
	I0327 16:44:42.220190    8959 ssh_runner.go:195] Run: grep 10.0.2.2	host.minikube.internal$ /etc/hosts
	I0327 16:44:42.221432    8959 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "10.0.2.2	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0327 16:44:42.225415    8959 kubeadm.go:877] updating cluster {Name:stopped-upgrade-017000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18485/minikube-v1.33.0-1711559712-18485-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:51421 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.1 ClusterName:stopped-upgrade-017000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:0s} ...
	I0327 16:44:42.225468    8959 preload.go:132] Checking if preload exists for k8s version v1.24.1 and runtime docker
	I0327 16:44:42.225510    8959 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0327 16:44:42.237521    8959 docker.go:685] Got preloaded images: -- stdout --
	k8s.gcr.io/kube-apiserver:v1.24.1
	k8s.gcr.io/kube-proxy:v1.24.1
	k8s.gcr.io/kube-controller-manager:v1.24.1
	k8s.gcr.io/kube-scheduler:v1.24.1
	k8s.gcr.io/etcd:3.5.3-0
	k8s.gcr.io/pause:3.7
	k8s.gcr.io/coredns/coredns:v1.8.6
	k8s.gcr.io/pause:3.6
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0327 16:44:42.237538    8959 docker.go:691] registry.k8s.io/kube-apiserver:v1.24.1 wasn't preloaded
	I0327 16:44:42.237594    8959 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0327 16:44:42.241353    8959 ssh_runner.go:195] Run: which lz4
	I0327 16:44:42.242704    8959 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0327 16:44:42.243947    8959 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0327 16:44:42.243957    8959 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18485-6511/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.1-docker-overlay2-arm64.tar.lz4 --> /preloaded.tar.lz4 (359514331 bytes)
	I0327 16:44:42.976490    8959 docker.go:649] duration metric: took 733.839208ms to copy over tarball
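The sequence above is the transfer idiom used throughout this run: stat the remote path first, and only scp the ~359 MB preload tarball when the stat exits non-zero. A sketch of the decision logic, assuming runCmd and scp callbacks that stand in for the real ssh_runner (both names are hypothetical):

    package main

    import "fmt"

    // ensureFile copies src to dst only when a remote stat probe fails,
    // matching the log's existence check followed by scp.
    func ensureFile(runCmd func(cmd string) error, scp func(src, dst string) error, src, dst string) error {
    	// %% escapes the verbs so the remote command sees a literal "%s %y".
    	if err := runCmd(fmt.Sprintf("stat -c \"%%s %%y\" %s", dst)); err == nil {
    		return nil // already on the guest; skip the transfer
    	}
    	return scp(src, dst)
    }

    func main() {
    	runCmd := func(cmd string) error { fmt.Println("ssh:", cmd); return fmt.Errorf("status 1") }
    	scp := func(src, dst string) error { fmt.Println("scp:", src, "->", dst); return nil }
    	_ = ensureFile(runCmd, scp, "preloaded-images-k8s-v18-v1.24.1-docker-overlay2-arm64.tar.lz4", "/preloaded.tar.lz4")
    }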
	I0327 16:44:42.976550    8959 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0327 16:44:43.322682    8800 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0327 16:44:43.322775    8800 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0327 16:44:43.335628    8800 logs.go:276] 2 containers: [0703d422a652 126b9553cb40]
	I0327 16:44:43.335700    8800 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0327 16:44:43.346964    8800 logs.go:276] 2 containers: [4f67bfb30b49 5429db75cd75]
	I0327 16:44:43.347036    8800 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0327 16:44:43.358655    8800 logs.go:276] 1 containers: [2ef14eacffaa]
	I0327 16:44:43.358735    8800 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0327 16:44:43.370649    8800 logs.go:276] 2 containers: [b68fec8d8767 c0c992e34b5d]
	I0327 16:44:43.370726    8800 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0327 16:44:43.383365    8800 logs.go:276] 1 containers: [2bfb39e97e1d]
	I0327 16:44:43.383439    8800 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0327 16:44:43.395743    8800 logs.go:276] 2 containers: [385805912c59 1a8c10b8da56]
	I0327 16:44:43.395820    8800 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0327 16:44:43.407868    8800 logs.go:276] 0 containers: []
	W0327 16:44:43.407883    8800 logs.go:278] No container was found matching "kindnet"
	I0327 16:44:43.407947    8800 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0327 16:44:43.419955    8800 logs.go:276] 2 containers: [b4f5d0ba182b 579fe35ef076]
	I0327 16:44:43.419974    8800 logs.go:123] Gathering logs for kube-apiserver [126b9553cb40] ...
	I0327 16:44:43.419980    8800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 126b9553cb40"
	I0327 16:44:43.444495    8800 logs.go:123] Gathering logs for kube-scheduler [c0c992e34b5d] ...
	I0327 16:44:43.444514    8800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c0c992e34b5d"
	I0327 16:44:43.461647    8800 logs.go:123] Gathering logs for storage-provisioner [b4f5d0ba182b] ...
	I0327 16:44:43.461661    8800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b4f5d0ba182b"
	I0327 16:44:43.474427    8800 logs.go:123] Gathering logs for kube-apiserver [0703d422a652] ...
	I0327 16:44:43.474441    8800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0703d422a652"
	I0327 16:44:43.489903    8800 logs.go:123] Gathering logs for kube-controller-manager [1a8c10b8da56] ...
	I0327 16:44:43.489916    8800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1a8c10b8da56"
	I0327 16:44:43.507084    8800 logs.go:123] Gathering logs for Docker ...
	I0327 16:44:43.507097    8800 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0327 16:44:43.535578    8800 logs.go:123] Gathering logs for storage-provisioner [579fe35ef076] ...
	I0327 16:44:43.535595    8800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 579fe35ef076"
	I0327 16:44:43.551480    8800 logs.go:123] Gathering logs for kubelet ...
	I0327 16:44:43.551492    8800 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0327 16:44:43.591445    8800 logs.go:123] Gathering logs for describe nodes ...
	I0327 16:44:43.591464    8800 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0327 16:44:43.636617    8800 logs.go:123] Gathering logs for etcd [5429db75cd75] ...
	I0327 16:44:43.636631    8800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5429db75cd75"
	I0327 16:44:43.652877    8800 logs.go:123] Gathering logs for coredns [2ef14eacffaa] ...
	I0327 16:44:43.652888    8800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2ef14eacffaa"
	I0327 16:44:43.671032    8800 logs.go:123] Gathering logs for kube-scheduler [b68fec8d8767] ...
	I0327 16:44:43.671044    8800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b68fec8d8767"
	I0327 16:44:43.683772    8800 logs.go:123] Gathering logs for kube-proxy [2bfb39e97e1d] ...
	I0327 16:44:43.683783    8800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2bfb39e97e1d"
	I0327 16:44:43.700460    8800 logs.go:123] Gathering logs for kube-controller-manager [385805912c59] ...
	I0327 16:44:43.700471    8800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 385805912c59"
	I0327 16:44:43.720347    8800 logs.go:123] Gathering logs for container status ...
	I0327 16:44:43.720361    8800 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0327 16:44:43.738508    8800 logs.go:123] Gathering logs for dmesg ...
	I0327 16:44:43.738523    8800 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0327 16:44:43.743336    8800 logs.go:123] Gathering logs for etcd [4f67bfb30b49] ...
	I0327 16:44:43.743348    8800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4f67bfb30b49"
	I0327 16:44:46.259231    8800 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0327 16:44:44.171809    8959 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (1.195280458s)
	I0327 16:44:44.171822    8959 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0327 16:44:44.187305    8959 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0327 16:44:44.190077    8959 ssh_runner.go:362] scp memory --> /var/lib/docker/image/overlay2/repositories.json (2523 bytes)
	I0327 16:44:44.195267    8959 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0327 16:44:44.271281    8959 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0327 16:44:45.855638    8959 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1.584384583s)
	I0327 16:44:45.855730    8959 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0327 16:44:45.867305    8959 docker.go:685] Got preloaded images: -- stdout --
	k8s.gcr.io/kube-apiserver:v1.24.1
	k8s.gcr.io/kube-proxy:v1.24.1
	k8s.gcr.io/kube-controller-manager:v1.24.1
	k8s.gcr.io/kube-scheduler:v1.24.1
	k8s.gcr.io/etcd:3.5.3-0
	k8s.gcr.io/pause:3.7
	k8s.gcr.io/coredns/coredns:v1.8.6
	<none>:<none>
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0327 16:44:45.867315    8959 docker.go:691] registry.k8s.io/kube-apiserver:v1.24.1 wasn't preloaded
	I0327 16:44:45.867321    8959 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.24.1 registry.k8s.io/kube-controller-manager:v1.24.1 registry.k8s.io/kube-scheduler:v1.24.1 registry.k8s.io/kube-proxy:v1.24.1 registry.k8s.io/pause:3.7 registry.k8s.io/etcd:3.5.3-0 registry.k8s.io/coredns/coredns:v1.8.6 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0327 16:44:45.877073    8959 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.24.1
	I0327 16:44:45.877211    8959 image.go:134] retrieving image: registry.k8s.io/pause:3.7
	I0327 16:44:45.877337    8959 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.24.1
	I0327 16:44:45.877403    8959 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.24.1
	I0327 16:44:45.877458    8959 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.24.1
	I0327 16:44:45.877626    8959 image.go:134] retrieving image: registry.k8s.io/coredns/coredns:v1.8.6
	I0327 16:44:45.877634    8959 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0327 16:44:45.877927    8959 image.go:134] retrieving image: registry.k8s.io/etcd:3.5.3-0
	I0327 16:44:45.887642    8959 image.go:177] daemon lookup for registry.k8s.io/etcd:3.5.3-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.3-0
	I0327 16:44:45.887718    8959 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.24.1: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.24.1
	I0327 16:44:45.887777    8959 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0327 16:44:45.887839    8959 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.24.1: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.24.1
	I0327 16:44:45.887942    8959 image.go:177] daemon lookup for registry.k8s.io/coredns/coredns:v1.8.6: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.8.6
	I0327 16:44:45.888012    8959 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.24.1: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.24.1
	I0327 16:44:45.888173    8959 image.go:177] daemon lookup for registry.k8s.io/pause:3.7: Error response from daemon: No such image: registry.k8s.io/pause:3.7
	I0327 16:44:45.888354    8959 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.24.1: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.24.1
	I0327 16:44:47.888858    8959 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.24.1
	I0327 16:44:47.926996    8959 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.24.1" needs transfer: "registry.k8s.io/kube-scheduler:v1.24.1" does not exist at hash "000c19baf6bba51ff7ae5449f4c7a16d9190cef0263f58070dbf62cea9c4982f" in container runtime
	I0327 16:44:47.927051    8959 docker.go:337] Removing image: registry.k8s.io/kube-scheduler:v1.24.1
	I0327 16:44:47.927146    8959 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-scheduler:v1.24.1
	I0327 16:44:47.940997    8959 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/etcd:3.5.3-0
	I0327 16:44:47.949909    8959 cache_images.go:286] Loading image from: /Users/jenkins/minikube-integration/18485-6511/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.24.1
	I0327 16:44:47.961764    8959 cache_images.go:116] "registry.k8s.io/etcd:3.5.3-0" needs transfer: "registry.k8s.io/etcd:3.5.3-0" does not exist at hash "a9a710bb96df080e6b9c720eb85dc5b832ff84abf77263548d74fedec6466a5a" in container runtime
	I0327 16:44:47.961801    8959 docker.go:337] Removing image: registry.k8s.io/etcd:3.5.3-0
	I0327 16:44:47.961865    8959 ssh_runner.go:195] Run: docker rmi registry.k8s.io/etcd:3.5.3-0
	I0327 16:44:47.974282    8959 cache_images.go:286] Loading image from: /Users/jenkins/minikube-integration/18485-6511/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.3-0
	W0327 16:44:47.992103    8959 image.go:265] image registry.k8s.io/coredns/coredns:v1.8.6 arch mismatch: want arm64 got amd64. fixing
	I0327 16:44:47.992221    8959 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.24.1
	I0327 16:44:47.992245    8959 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/coredns/coredns:v1.8.6
	I0327 16:44:48.004558    8959 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/pause:3.7
	I0327 16:44:48.006260    8959 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.24.1
	I0327 16:44:48.006464    8959 cache_images.go:116] "registry.k8s.io/coredns/coredns:v1.8.6" needs transfer: "registry.k8s.io/coredns/coredns:v1.8.6" does not exist at hash "6af7f860a8197bfa3fdb7dec2061aa33870253e87a1e91c492d55b8a4fd38d14" in container runtime
	I0327 16:44:48.006482    8959 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.24.1" needs transfer: "registry.k8s.io/kube-controller-manager:v1.24.1" does not exist at hash "f61bbe9259d7caa580deb6c8e4bfd1780c7b5887efe3aa3adc7cc74f68a27c1b" in container runtime
	I0327 16:44:48.006495    8959 docker.go:337] Removing image: registry.k8s.io/kube-controller-manager:v1.24.1
	I0327 16:44:48.006519    8959 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-controller-manager:v1.24.1
	I0327 16:44:48.006483    8959 docker.go:337] Removing image: registry.k8s.io/coredns/coredns:v1.8.6
	I0327 16:44:48.006604    8959 ssh_runner.go:195] Run: docker rmi registry.k8s.io/coredns/coredns:v1.8.6
	I0327 16:44:48.014206    8959 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.24.1
	I0327 16:44:48.017145    8959 cache_images.go:116] "registry.k8s.io/pause:3.7" needs transfer: "registry.k8s.io/pause:3.7" does not exist at hash "e5a475a0380575fb5df454b2e32bdec93e1ec0094d8a61e895b41567cb884550" in container runtime
	I0327 16:44:48.017165    8959 docker.go:337] Removing image: registry.k8s.io/pause:3.7
	I0327 16:44:48.017206    8959 ssh_runner.go:195] Run: docker rmi registry.k8s.io/pause:3.7
	I0327 16:44:48.029505    8959 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.24.1" needs transfer: "registry.k8s.io/kube-apiserver:v1.24.1" does not exist at hash "7c5896a75862a8bc252122185a929cec1393db2c525f6440137d4fbf46bbf6f9" in container runtime
	I0327 16:44:48.029527    8959 docker.go:337] Removing image: registry.k8s.io/kube-apiserver:v1.24.1
	I0327 16:44:48.029592    8959 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-apiserver:v1.24.1
	I0327 16:44:48.037182    8959 cache_images.go:286] Loading image from: /Users/jenkins/minikube-integration/18485-6511/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.24.1
	I0327 16:44:48.039748    8959 cache_images.go:286] Loading image from: /Users/jenkins/minikube-integration/18485-6511/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6
	I0327 16:44:48.039847    8959 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.8.6
	I0327 16:44:48.045309    8959 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.24.1" needs transfer: "registry.k8s.io/kube-proxy:v1.24.1" does not exist at hash "fcbd620bbac080b658602c597709b377cb2b3fec134a097a27f94cba9b2ed2fa" in container runtime
	I0327 16:44:48.045332    8959 docker.go:337] Removing image: registry.k8s.io/kube-proxy:v1.24.1
	I0327 16:44:48.045391    8959 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-proxy:v1.24.1
	I0327 16:44:48.048405    8959 cache_images.go:286] Loading image from: /Users/jenkins/minikube-integration/18485-6511/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7
	I0327 16:44:48.048504    8959 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/pause_3.7
	I0327 16:44:48.055411    8959 cache_images.go:286] Loading image from: /Users/jenkins/minikube-integration/18485-6511/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.24.1
	I0327 16:44:48.055449    8959 ssh_runner.go:352] existence check for /var/lib/minikube/images/coredns_v1.8.6: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.8.6: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/coredns_v1.8.6': No such file or directory
	I0327 16:44:48.055462    8959 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18485-6511/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6 --> /var/lib/minikube/images/coredns_v1.8.6 (12318720 bytes)
	I0327 16:44:48.070704    8959 cache_images.go:286] Loading image from: /Users/jenkins/minikube-integration/18485-6511/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.24.1
	I0327 16:44:48.070758    8959 ssh_runner.go:352] existence check for /var/lib/minikube/images/pause_3.7: stat -c "%s %y" /var/lib/minikube/images/pause_3.7: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/pause_3.7': No such file or directory
	I0327 16:44:48.070773    8959 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18485-6511/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7 --> /var/lib/minikube/images/pause_3.7 (268288 bytes)
	I0327 16:44:48.089676    8959 docker.go:304] Loading image: /var/lib/minikube/images/pause_3.7
	I0327 16:44:48.089690    8959 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/pause_3.7 | docker load"
	I0327 16:44:48.125799    8959 cache_images.go:315] Transferred and loaded /Users/jenkins/minikube-integration/18485-6511/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7 from cache
	I0327 16:44:48.125821    8959 docker.go:304] Loading image: /var/lib/minikube/images/coredns_v1.8.6
	I0327 16:44:48.125834    8959 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/coredns_v1.8.6 | docker load"
	I0327 16:44:48.162138    8959 cache_images.go:315] Transferred and loaded /Users/jenkins/minikube-integration/18485-6511/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6 from cache
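Each "Loading image" step pipes a cached tarball into the daemon over ssh (`sudo cat ... | docker load`). Reduced to a local docker CLI (the ssh and sudo wrapping is omitted here), the operation is just streaming the file into the load command's stdin:

    package main

    import (
    	"os"
    	"os/exec"
    )

    // dockerLoad streams a saved image tarball into the daemon, mirroring
    // the log's `sudo cat /var/lib/minikube/images/... | docker load` step.
    func dockerLoad(path string) error {
    	f, err := os.Open(path)
    	if err != nil {
    		return err
    	}
    	defer f.Close()
    	cmd := exec.Command("docker", "load")
    	cmd.Stdin = f // equivalent of piping the `cat` output
    	cmd.Stdout = os.Stdout
    	cmd.Stderr = os.Stderr
    	return cmd.Run()
    }

    func main() {
    	if err := dockerLoad("/var/lib/minikube/images/pause_3.7"); err != nil {
    		os.Exit(1)
    	}
    }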
	I0327 16:44:51.261208    8800 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0327 16:44:51.261358    8800 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0327 16:44:51.273836    8800 logs.go:276] 2 containers: [0703d422a652 126b9553cb40]
	I0327 16:44:51.273920    8800 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0327 16:44:51.286163    8800 logs.go:276] 2 containers: [4f67bfb30b49 5429db75cd75]
	I0327 16:44:51.286232    8800 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0327 16:44:51.298484    8800 logs.go:276] 1 containers: [2ef14eacffaa]
	I0327 16:44:51.298559    8800 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0327 16:44:51.309677    8800 logs.go:276] 2 containers: [b68fec8d8767 c0c992e34b5d]
	I0327 16:44:51.309778    8800 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0327 16:44:51.324122    8800 logs.go:276] 1 containers: [2bfb39e97e1d]
	I0327 16:44:51.324195    8800 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0327 16:44:51.335123    8800 logs.go:276] 2 containers: [385805912c59 1a8c10b8da56]
	I0327 16:44:51.335189    8800 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0327 16:44:51.346040    8800 logs.go:276] 0 containers: []
	W0327 16:44:51.346051    8800 logs.go:278] No container was found matching "kindnet"
	I0327 16:44:51.346107    8800 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0327 16:44:51.359785    8800 logs.go:276] 2 containers: [b4f5d0ba182b 579fe35ef076]
	I0327 16:44:51.359852    8800 logs.go:123] Gathering logs for container status ...
	I0327 16:44:51.359860    8800 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0327 16:44:51.372720    8800 logs.go:123] Gathering logs for dmesg ...
	I0327 16:44:51.372733    8800 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0327 16:44:51.377293    8800 logs.go:123] Gathering logs for kube-apiserver [126b9553cb40] ...
	I0327 16:44:51.377302    8800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 126b9553cb40"
	I0327 16:44:51.397271    8800 logs.go:123] Gathering logs for etcd [4f67bfb30b49] ...
	I0327 16:44:51.397282    8800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4f67bfb30b49"
	I0327 16:44:51.412193    8800 logs.go:123] Gathering logs for coredns [2ef14eacffaa] ...
	I0327 16:44:51.412204    8800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2ef14eacffaa"
	I0327 16:44:51.423674    8800 logs.go:123] Gathering logs for kube-proxy [2bfb39e97e1d] ...
	I0327 16:44:51.423686    8800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2bfb39e97e1d"
	I0327 16:44:51.435661    8800 logs.go:123] Gathering logs for kube-controller-manager [385805912c59] ...
	I0327 16:44:51.435674    8800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 385805912c59"
	I0327 16:44:51.455041    8800 logs.go:123] Gathering logs for kubelet ...
	I0327 16:44:51.455060    8800 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0327 16:44:51.493917    8800 logs.go:123] Gathering logs for kube-controller-manager [1a8c10b8da56] ...
	I0327 16:44:51.493945    8800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1a8c10b8da56"
	I0327 16:44:51.511516    8800 logs.go:123] Gathering logs for storage-provisioner [579fe35ef076] ...
	I0327 16:44:51.511528    8800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 579fe35ef076"
	I0327 16:44:51.523712    8800 logs.go:123] Gathering logs for Docker ...
	I0327 16:44:51.523723    8800 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0327 16:44:51.548365    8800 logs.go:123] Gathering logs for kube-apiserver [0703d422a652] ...
	I0327 16:44:51.548380    8800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0703d422a652"
	I0327 16:44:51.562482    8800 logs.go:123] Gathering logs for storage-provisioner [b4f5d0ba182b] ...
	I0327 16:44:51.562492    8800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b4f5d0ba182b"
	I0327 16:44:51.574169    8800 logs.go:123] Gathering logs for describe nodes ...
	I0327 16:44:51.574181    8800 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0327 16:44:51.610426    8800 logs.go:123] Gathering logs for etcd [5429db75cd75] ...
	I0327 16:44:51.610439    8800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5429db75cd75"
	I0327 16:44:51.625200    8800 logs.go:123] Gathering logs for kube-scheduler [b68fec8d8767] ...
	I0327 16:44:51.625216    8800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b68fec8d8767"
	I0327 16:44:51.638122    8800 logs.go:123] Gathering logs for kube-scheduler [c0c992e34b5d] ...
	I0327 16:44:51.638132    8800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c0c992e34b5d"
	W0327 16:44:48.460401    8959 image.go:265] image gcr.io/k8s-minikube/storage-provisioner:v5 arch mismatch: want arm64 got amd64. fixing
	I0327 16:44:48.460598    8959 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0327 16:44:48.475936    8959 cache_images.go:116] "gcr.io/k8s-minikube/storage-provisioner:v5" needs transfer: "gcr.io/k8s-minikube/storage-provisioner:v5" does not exist at hash "66749159455b3f08c8318fe0233122f54d0f5889f9c5fdfb73c3fd9d99895b51" in container runtime
	I0327 16:44:48.475968    8959 docker.go:337] Removing image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0327 16:44:48.476029    8959 ssh_runner.go:195] Run: docker rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I0327 16:44:48.491957    8959 cache_images.go:286] Loading image from: /Users/jenkins/minikube-integration/18485-6511/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5
	I0327 16:44:48.492069    8959 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5
	I0327 16:44:48.493491    8959 ssh_runner.go:352] existence check for /var/lib/minikube/images/storage-provisioner_v5: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/storage-provisioner_v5': No such file or directory
	I0327 16:44:48.493503    8959 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18485-6511/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 --> /var/lib/minikube/images/storage-provisioner_v5 (8035840 bytes)
	I0327 16:44:48.516557    8959 docker.go:304] Loading image: /var/lib/minikube/images/storage-provisioner_v5
	I0327 16:44:48.516570    8959 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/storage-provisioner_v5 | docker load"
	I0327 16:44:48.757885    8959 cache_images.go:315] Transferred and loaded /Users/jenkins/minikube-integration/18485-6511/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 from cache
	I0327 16:44:48.757924    8959 cache_images.go:92] duration metric: took 2.890682292s to LoadCachedImages
	W0327 16:44:48.757964    8959 out.go:239] X Unable to load cached images: loading cached images: stat /Users/jenkins/minikube-integration/18485-6511/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.24.1: no such file or directory
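The "needs transfer" decisions above follow one rule: inspect the image ID the runtime actually holds and compare it with the ID recorded for the cached image; a mismatch (or a missing image) means remove and reload from the cache. A hedged sketch of that comparison, using the pause:3.7 ID from this log as the expected value:

    package main

    import (
    	"fmt"
    	"os/exec"
    	"strings"
    )

    // needsTransfer reports whether the runtime's ID for ref differs from
    // wantID; an inspect failure is treated as "image missing, transfer it".
    func needsTransfer(ref, wantID string) bool {
    	out, err := exec.Command("docker", "image", "inspect", "--format", "{{.Id}}", ref).Output()
    	if err != nil {
    		return true
    	}
    	return strings.TrimSpace(string(out)) != "sha256:"+wantID
    }

    func main() {
    	want := "e5a475a0380575fb5df454b2e32bdec93e1ec0094d8a61e895b41567cb884550"
    	fmt.Println("needs transfer:", needsTransfer("registry.k8s.io/pause:3.7", want))
    }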
	I0327 16:44:48.757972    8959 kubeadm.go:928] updating node { 10.0.2.15 8443 v1.24.1 docker true true} ...
	I0327 16:44:48.758017    8959 kubeadm.go:940] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.24.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/cri-dockerd.sock --hostname-override=stopped-upgrade-017000 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=10.0.2.15
	
	[Install]
	 config:
	{KubernetesVersion:v1.24.1 ClusterName:stopped-upgrade-017000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0327 16:44:48.758076    8959 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0327 16:44:48.775574    8959 cni.go:84] Creating CNI manager for ""
	I0327 16:44:48.775586    8959 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0327 16:44:48.775591    8959 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0327 16:44:48.775599    8959 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:10.0.2.15 APIServerPort:8443 KubernetesVersion:v1.24.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:stopped-upgrade-017000 NodeName:stopped-upgrade-017000 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "10.0.2.15"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:10.0.2.15 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/
etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0327 16:44:48.775664    8959 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 10.0.2.15
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "stopped-upgrade-017000"
	  kubeletExtraArgs:
	    node-ip: 10.0.2.15
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "10.0.2.15"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.24.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0327 16:44:48.775721    8959 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.24.1
	I0327 16:44:48.778546    8959 binaries.go:44] Found k8s binaries, skipping transfer
	I0327 16:44:48.778577    8959 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0327 16:44:48.781443    8959 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (380 bytes)
	I0327 16:44:48.786738    8959 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0327 16:44:48.791488    8959 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2096 bytes)
	I0327 16:44:48.796779    8959 ssh_runner.go:195] Run: grep 10.0.2.15	control-plane.minikube.internal$ /etc/hosts
	I0327 16:44:48.798114    8959 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "10.0.2.15	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
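Both /etc/hosts edits in this run are idempotent upserts: filter out any existing line ending in the host name, then append the fresh "ip<TAB>name" mapping, so repeated starts never accumulate duplicates. The same filter-and-append in Go (the real step writes through a temp file and sudo cp, which this sketch skips):

    package main

    import (
    	"fmt"
    	"strings"
    )

    // upsertHost drops any existing entry for name and appends ip<TAB>name,
    // mirroring the log's { grep -v ...; echo ...; } > /tmp/h.$$ sequence.
    func upsertHost(hosts, ip, name string) string {
    	var kept []string
    	for _, line := range strings.Split(strings.TrimRight(hosts, "\n"), "\n") {
    		if !strings.HasSuffix(line, "\t"+name) {
    			kept = append(kept, line)
    		}
    	}
    	kept = append(kept, ip+"\t"+name)
    	return strings.Join(kept, "\n") + "\n"
    }

    func main() {
    	fmt.Print(upsertHost("127.0.0.1\tlocalhost\n", "10.0.2.15", "control-plane.minikube.internal"))
    }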
	I0327 16:44:48.801671    8959 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0327 16:44:48.883997    8959 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0327 16:44:48.889091    8959 certs.go:68] Setting up /Users/jenkins/minikube-integration/18485-6511/.minikube/profiles/stopped-upgrade-017000 for IP: 10.0.2.15
	I0327 16:44:48.889099    8959 certs.go:194] generating shared ca certs ...
	I0327 16:44:48.889109    8959 certs.go:226] acquiring lock for ca certs: {Name:mkc9ab23ce08863badc46de64236358969dc1820 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0327 16:44:48.889265    8959 certs.go:235] skipping valid "minikubeCA" ca cert: /Users/jenkins/minikube-integration/18485-6511/.minikube/ca.key
	I0327 16:44:48.889985    8959 certs.go:235] skipping valid "proxyClientCA" ca cert: /Users/jenkins/minikube-integration/18485-6511/.minikube/proxy-client-ca.key
	I0327 16:44:48.889998    8959 certs.go:256] generating profile certs ...
	I0327 16:44:48.890212    8959 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /Users/jenkins/minikube-integration/18485-6511/.minikube/profiles/stopped-upgrade-017000/client.key
	I0327 16:44:48.890232    8959 certs.go:363] generating signed profile cert for "minikube": /Users/jenkins/minikube-integration/18485-6511/.minikube/profiles/stopped-upgrade-017000/apiserver.key.8f6b5052
	I0327 16:44:48.890242    8959 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/18485-6511/.minikube/profiles/stopped-upgrade-017000/apiserver.crt.8f6b5052 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 10.0.2.15]
	I0327 16:44:49.052840    8959 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/18485-6511/.minikube/profiles/stopped-upgrade-017000/apiserver.crt.8f6b5052 ...
	I0327 16:44:49.052854    8959 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/18485-6511/.minikube/profiles/stopped-upgrade-017000/apiserver.crt.8f6b5052: {Name:mk8d7707cb630a39abbe89752f9a5ea56e816c47 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0327 16:44:49.053162    8959 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/18485-6511/.minikube/profiles/stopped-upgrade-017000/apiserver.key.8f6b5052 ...
	I0327 16:44:49.053175    8959 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/18485-6511/.minikube/profiles/stopped-upgrade-017000/apiserver.key.8f6b5052: {Name:mk3561b92d4c8a3b5e6623cdb8994719c866fa1d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0327 16:44:49.053322    8959 certs.go:381] copying /Users/jenkins/minikube-integration/18485-6511/.minikube/profiles/stopped-upgrade-017000/apiserver.crt.8f6b5052 -> /Users/jenkins/minikube-integration/18485-6511/.minikube/profiles/stopped-upgrade-017000/apiserver.crt
	I0327 16:44:49.053904    8959 certs.go:385] copying /Users/jenkins/minikube-integration/18485-6511/.minikube/profiles/stopped-upgrade-017000/apiserver.key.8f6b5052 -> /Users/jenkins/minikube-integration/18485-6511/.minikube/profiles/stopped-upgrade-017000/apiserver.key
	I0327 16:44:49.054263    8959 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /Users/jenkins/minikube-integration/18485-6511/.minikube/profiles/stopped-upgrade-017000/proxy-client.key
	I0327 16:44:49.054445    8959 certs.go:484] found cert: /Users/jenkins/minikube-integration/18485-6511/.minikube/certs/6926.pem (1338 bytes)
	W0327 16:44:49.054664    8959 certs.go:480] ignoring /Users/jenkins/minikube-integration/18485-6511/.minikube/certs/6926_empty.pem, impossibly tiny 0 bytes
	I0327 16:44:49.054673    8959 certs.go:484] found cert: /Users/jenkins/minikube-integration/18485-6511/.minikube/certs/ca-key.pem (1679 bytes)
	I0327 16:44:49.054700    8959 certs.go:484] found cert: /Users/jenkins/minikube-integration/18485-6511/.minikube/certs/ca.pem (1078 bytes)
	I0327 16:44:49.054720    8959 certs.go:484] found cert: /Users/jenkins/minikube-integration/18485-6511/.minikube/certs/cert.pem (1123 bytes)
	I0327 16:44:49.054738    8959 certs.go:484] found cert: /Users/jenkins/minikube-integration/18485-6511/.minikube/certs/key.pem (1675 bytes)
	I0327 16:44:49.054777    8959 certs.go:484] found cert: /Users/jenkins/minikube-integration/18485-6511/.minikube/files/etc/ssl/certs/69262.pem (1708 bytes)
	I0327 16:44:49.055130    8959 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18485-6511/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0327 16:44:49.062039    8959 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18485-6511/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0327 16:44:49.069437    8959 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18485-6511/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0327 16:44:49.077677    8959 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18485-6511/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0327 16:44:49.085256    8959 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18485-6511/.minikube/profiles/stopped-upgrade-017000/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I0327 16:44:49.092808    8959 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18485-6511/.minikube/profiles/stopped-upgrade-017000/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0327 16:44:49.099591    8959 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18485-6511/.minikube/profiles/stopped-upgrade-017000/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0327 16:44:49.106548    8959 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18485-6511/.minikube/profiles/stopped-upgrade-017000/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0327 16:44:49.113625    8959 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18485-6511/.minikube/files/etc/ssl/certs/69262.pem --> /usr/share/ca-certificates/69262.pem (1708 bytes)
	I0327 16:44:49.120217    8959 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18485-6511/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0327 16:44:49.127147    8959 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18485-6511/.minikube/certs/6926.pem --> /usr/share/ca-certificates/6926.pem (1338 bytes)
	I0327 16:44:49.133683    8959 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (752 bytes)
	I0327 16:44:49.138975    8959 ssh_runner.go:195] Run: openssl version
	I0327 16:44:49.140729    8959 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0327 16:44:49.143676    8959 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0327 16:44:49.145147    8959 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Mar 27 23:41 /usr/share/ca-certificates/minikubeCA.pem
	I0327 16:44:49.145171    8959 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0327 16:44:49.147044    8959 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0327 16:44:49.150049    8959 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/6926.pem && ln -fs /usr/share/ca-certificates/6926.pem /etc/ssl/certs/6926.pem"
	I0327 16:44:49.153484    8959 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/6926.pem
	I0327 16:44:49.155013    8959 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Mar 27 23:28 /usr/share/ca-certificates/6926.pem
	I0327 16:44:49.155035    8959 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/6926.pem
	I0327 16:44:49.156799    8959 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/6926.pem /etc/ssl/certs/51391683.0"
	I0327 16:44:49.160018    8959 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/69262.pem && ln -fs /usr/share/ca-certificates/69262.pem /etc/ssl/certs/69262.pem"
	I0327 16:44:49.162818    8959 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/69262.pem
	I0327 16:44:49.164125    8959 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Mar 27 23:28 /usr/share/ca-certificates/69262.pem
	I0327 16:44:49.164142    8959 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/69262.pem
	I0327 16:44:49.165933    8959 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/69262.pem /etc/ssl/certs/3ec20f2e.0"
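The ln -fs commands build OpenSSL's hashed-lookup directory: each CA is symlinked as <subject-hash>.0 under /etc/ssl/certs (b5213941.0 above is the hash of minikubeCA.pem) so TLS verification can locate it by subject. Deriving that link name looks roughly like this, shelling out to openssl as the log does:

    package main

    import (
    	"fmt"
    	"os"
    	"os/exec"
    	"strings"
    )

    // hashLink computes the OpenSSL subject-hash name for a CA cert and
    // symlinks it into dir, like the log's `openssl x509 -hash` + `ln -fs`.
    func hashLink(cert, dir string) error {
    	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", cert).Output()
    	if err != nil {
    		return err
    	}
    	link := fmt.Sprintf("%s/%s.0", dir, strings.TrimSpace(string(out)))
    	os.Remove(link) // -f semantics: replace any stale link
    	return os.Symlink(cert, link)
    }

    func main() {
    	_ = hashLink("/usr/share/ca-certificates/minikubeCA.pem", "/etc/ssl/certs")
    }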
	I0327 16:44:49.169192    8959 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0327 16:44:49.170996    8959 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0327 16:44:49.172914    8959 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0327 16:44:49.175001    8959 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0327 16:44:49.176872    8959 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0327 16:44:49.178871    8959 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0327 16:44:49.180572    8959 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
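-checkend 86400 asks openssl whether a certificate will still be valid 24 hours from now; any cert failing the check is regenerated before kubeadm runs. The equivalent test in pure Go, substituted here for the openssl call only to show what is being computed:

    package main

    import (
    	"crypto/x509"
    	"encoding/pem"
    	"fmt"
    	"os"
    	"time"
    )

    // expiresWithin reports whether the PEM cert at path stops being valid
    // inside the window, the check `openssl x509 -checkend` performs.
    func expiresWithin(path string, window time.Duration) (bool, error) {
    	data, err := os.ReadFile(path)
    	if err != nil {
    		return false, err
    	}
    	block, _ := pem.Decode(data)
    	if block == nil {
    		return false, fmt.Errorf("no PEM block in %s", path)
    	}
    	cert, err := x509.ParseCertificate(block.Bytes)
    	if err != nil {
    		return false, err
    	}
    	return time.Now().Add(window).After(cert.NotAfter), nil
    }

    func main() {
    	soon, err := expiresWithin("/var/lib/minikube/certs/apiserver.crt", 24*time.Hour)
    	if err == nil {
    		fmt.Println("expires within 24h:", soon)
    	}
    }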
	I0327 16:44:49.182415    8959 kubeadm.go:391] StartCluster: {Name:stopped-upgrade-017000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18485/minikube-v1.33.0-1711559712-18485-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:51421 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.1 ClusterName:st
opped-upgrade-017000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: Disab
leOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:0s}
	I0327 16:44:49.182484    8959 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0327 16:44:49.192603    8959 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	W0327 16:44:49.195939    8959 kubeadm.go:404] apiserver tunnel failed: apiserver port not set
	I0327 16:44:49.195944    8959 kubeadm.go:407] found existing configuration files, will attempt cluster restart
	I0327 16:44:49.195947    8959 kubeadm.go:587] restartPrimaryControlPlane start ...
	I0327 16:44:49.195968    8959 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0327 16:44:49.198779    8959 kubeadm.go:129] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0327 16:44:49.199070    8959 kubeconfig.go:47] verify endpoint returned: get endpoint: "stopped-upgrade-017000" does not appear in /Users/jenkins/minikube-integration/18485-6511/kubeconfig
	I0327 16:44:49.199165    8959 kubeconfig.go:62] /Users/jenkins/minikube-integration/18485-6511/kubeconfig needs updating (will repair): [kubeconfig missing "stopped-upgrade-017000" cluster setting kubeconfig missing "stopped-upgrade-017000" context setting]
	I0327 16:44:49.199376    8959 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/18485-6511/kubeconfig: {Name:mke46d0809919cfbe0118c5110926d6ce61bf373 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0327 16:44:49.199807    8959 kapi.go:59] client config for stopped-upgrade-017000: &rest.Config{Host:"https://10.0.2.15:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/18485-6511/.minikube/profiles/stopped-upgrade-017000/client.crt", KeyFile:"/Users/jenkins/minikube-integration/18485-6511/.minikube/profiles/stopped-upgrade-017000/client.key", CAFile:"/Users/jenkins/minikube-integration/18485-6511/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]ui
nt8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x103b96c70), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0327 16:44:49.200224    8959 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0327 16:44:49.202921    8959 kubeadm.go:634] detected kubeadm config drift (will reconfigure cluster from new /var/tmp/minikube/kubeadm.yaml):
	-- stdout --
	--- /var/tmp/minikube/kubeadm.yaml
	+++ /var/tmp/minikube/kubeadm.yaml.new
	@@ -11,7 +11,7 @@
	       - signing
	       - authentication
	 nodeRegistration:
	-  criSocket: /var/run/cri-dockerd.sock
	+  criSocket: unix:///var/run/cri-dockerd.sock
	   name: "stopped-upgrade-017000"
	   kubeletExtraArgs:
	     node-ip: 10.0.2.15
	@@ -49,7 +49,9 @@
	 authentication:
	   x509:
	     clientCAFile: /var/lib/minikube/certs/ca.crt
	-cgroupDriver: systemd
	+cgroupDriver: cgroupfs
	+hairpinMode: hairpin-veth
	+runtimeRequestTimeout: 15m
	 clusterDomain: "cluster.local"
	 # disable disk resource management by default
	 imageGCHighThresholdPercent: 100
	
	-- /stdout --
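Drift detection is a plain `diff -u` between the kubeadm.yaml already on disk and the freshly rendered .new file: exit status 0 means no drift, 1 means reconfigure (here the CRI socket gained its unix:// scheme and the cgroup driver changed from systemd to cgroupfs). A sketch that distinguishes those exit codes, with statuses above 1 reported as real errors:

    package main

    import (
    	"errors"
    	"fmt"
    	"os/exec"
    )

    // configDrift runs `diff -u oldPath newPath`; exit 0 means identical,
    // exit 1 means the files differ and out holds the unified diff.
    func configDrift(oldPath, newPath string) (bool, []byte, error) {
    	out, err := exec.Command("diff", "-u", oldPath, newPath).Output()
    	if err == nil {
    		return false, nil, nil
    	}
    	var ee *exec.ExitError
    	if errors.As(err, &ee) && ee.ExitCode() == 1 {
    		return true, out, nil
    	}
    	return false, nil, err
    }

    func main() {
    	drift, diff, err := configDrift("/var/tmp/minikube/kubeadm.yaml", "/var/tmp/minikube/kubeadm.yaml.new")
    	if err == nil && drift {
    		fmt.Printf("will reconfigure:\n%s", diff)
    	}
    }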
	I0327 16:44:49.202927    8959 kubeadm.go:1154] stopping kube-system containers ...
	I0327 16:44:49.202972    8959 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0327 16:44:49.214145    8959 docker.go:483] Stopping containers: [f76badbaa6c8 c581a3f09ed3 56ea780761c8 c482501fc6ea e20a2e974eba 259c6c590ab2 32d18ef2c823 9262298e88bb]
	I0327 16:44:49.214211    8959 ssh_runner.go:195] Run: docker stop f76badbaa6c8 c581a3f09ed3 56ea780761c8 c482501fc6ea e20a2e974eba 259c6c590ab2 32d18ef2c823 9262298e88bb
	I0327 16:44:49.225042    8959 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0327 16:44:49.230321    8959 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0327 16:44:49.233363    8959 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0327 16:44:49.233375    8959 kubeadm.go:156] found existing configuration files:
	
	I0327 16:44:49.233399    8959 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:51421 /etc/kubernetes/admin.conf
	I0327 16:44:49.236419    8959 kubeadm.go:162] "https://control-plane.minikube.internal:51421" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:51421 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0327 16:44:49.236443    8959 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0327 16:44:49.238959    8959 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:51421 /etc/kubernetes/kubelet.conf
	I0327 16:44:49.241445    8959 kubeadm.go:162] "https://control-plane.minikube.internal:51421" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:51421 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0327 16:44:49.241465    8959 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0327 16:44:49.244386    8959 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:51421 /etc/kubernetes/controller-manager.conf
	I0327 16:44:49.246798    8959 kubeadm.go:162] "https://control-plane.minikube.internal:51421" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:51421 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0327 16:44:49.246821    8959 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0327 16:44:49.249560    8959 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:51421 /etc/kubernetes/scheduler.conf
	I0327 16:44:49.252519    8959 kubeadm.go:162] "https://control-plane.minikube.internal:51421" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:51421 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0327 16:44:49.252539    8959 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
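The four grep/rm pairs above implement stale-config cleanup: every kubeconfig under /etc/kubernetes must mention the current control-plane endpoint, and any file that does not (including one that is simply absent, as here) is removed so the kubeadm phases below recreate it. The loop, with a runCmd callback standing in for the ssh_runner (an assumed name):

    package main

    import "fmt"

    // cleanStaleConfigs removes any kubeconfig that does not reference the
    // expected endpoint, mirroring the log's grep-then-rm sequence.
    func cleanStaleConfigs(runCmd func(cmd string) error, endpoint string, files []string) {
    	for _, f := range files {
    		if err := runCmd(fmt.Sprintf("sudo grep %s %s", endpoint, f)); err != nil {
    			_ = runCmd("sudo rm -f " + f) // stale or missing: let kubeadm recreate it
    		}
    	}
    }

    func main() {
    	runCmd := func(cmd string) error { fmt.Println(cmd); return nil }
    	cleanStaleConfigs(runCmd, "https://control-plane.minikube.internal:51421",
    		[]string{"/etc/kubernetes/admin.conf", "/etc/kubernetes/kubelet.conf"})
    }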
	I0327 16:44:49.255273    8959 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0327 16:44:49.257845    8959 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0327 16:44:49.281301    8959 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0327 16:44:50.067492    8959 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0327 16:44:50.200876    8959 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0327 16:44:50.223766    8959 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
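On restart the full `kubeadm init` is skipped; the phases are replayed individually against the same config file, in the order the log shows (certs, kubeconfig, kubelet-start, control-plane, etcd). A sketch of that sequencing, with the sudo/env PATH wrapping elided:

    package main

    import (
    	"os"
    	"os/exec"
    	"strings"
    )

    // runInitPhases replays selected kubeadm init phases one by one,
    // stopping at the first failure, as the restart path in the log does.
    func runInitPhases(kubeadm, config string, phases []string) error {
    	for _, phase := range phases {
    		args := append([]string{"init", "phase"}, strings.Fields(phase)...)
    		args = append(args, "--config", config)
    		cmd := exec.Command(kubeadm, args...)
    		cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
    		if err := cmd.Run(); err != nil {
    			return err
    		}
    	}
    	return nil
    }

    func main() {
    	_ = runInitPhases("/var/lib/minikube/binaries/v1.24.1/kubeadm",
    		"/var/tmp/minikube/kubeadm.yaml",
    		[]string{"certs all", "kubeconfig all", "kubelet-start", "control-plane all", "etcd local"})
    }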
	I0327 16:44:50.249021    8959 api_server.go:52] waiting for apiserver process to appear ...
	I0327 16:44:50.249268    8959 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0327 16:44:50.751187    8959 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0327 16:44:51.251149    8959 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0327 16:44:51.255304    8959 api_server.go:72] duration metric: took 1.006313958s to wait for apiserver process to appear ...
	I0327 16:44:51.255315    8959 api_server.go:88] waiting for apiserver healthz status ...
	I0327 16:44:51.255328    8959 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0327 16:44:54.155749    8800 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0327 16:44:56.257358    8959 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0327 16:44:56.257395    8959 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0327 16:44:59.157943    8800 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0327 16:44:59.158116    8800 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0327 16:44:59.170434    8800 logs.go:276] 2 containers: [0703d422a652 126b9553cb40]
	I0327 16:44:59.170506    8800 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0327 16:44:59.181396    8800 logs.go:276] 2 containers: [4f67bfb30b49 5429db75cd75]
	I0327 16:44:59.181460    8800 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0327 16:44:59.192251    8800 logs.go:276] 1 containers: [2ef14eacffaa]
	I0327 16:44:59.192316    8800 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0327 16:44:59.207027    8800 logs.go:276] 2 containers: [b68fec8d8767 c0c992e34b5d]
	I0327 16:44:59.207100    8800 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0327 16:44:59.217655    8800 logs.go:276] 1 containers: [2bfb39e97e1d]
	I0327 16:44:59.217721    8800 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0327 16:44:59.228492    8800 logs.go:276] 2 containers: [385805912c59 1a8c10b8da56]
	I0327 16:44:59.228561    8800 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0327 16:44:59.239307    8800 logs.go:276] 0 containers: []
	W0327 16:44:59.239316    8800 logs.go:278] No container was found matching "kindnet"
	I0327 16:44:59.239370    8800 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0327 16:44:59.250430    8800 logs.go:276] 2 containers: [b4f5d0ba182b 579fe35ef076]
	I0327 16:44:59.250450    8800 logs.go:123] Gathering logs for kube-proxy [2bfb39e97e1d] ...
	I0327 16:44:59.250455    8800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2bfb39e97e1d"
	I0327 16:44:59.265994    8800 logs.go:123] Gathering logs for kubelet ...
	I0327 16:44:59.266004    8800 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0327 16:44:59.303226    8800 logs.go:123] Gathering logs for dmesg ...
	I0327 16:44:59.303237    8800 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0327 16:44:59.308277    8800 logs.go:123] Gathering logs for coredns [2ef14eacffaa] ...
	I0327 16:44:59.308284    8800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2ef14eacffaa"
	I0327 16:44:59.319589    8800 logs.go:123] Gathering logs for kube-scheduler [b68fec8d8767] ...
	I0327 16:44:59.319599    8800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b68fec8d8767"
	I0327 16:44:59.336118    8800 logs.go:123] Gathering logs for kube-scheduler [c0c992e34b5d] ...
	I0327 16:44:59.336129    8800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c0c992e34b5d"
	I0327 16:44:59.351412    8800 logs.go:123] Gathering logs for describe nodes ...
	I0327 16:44:59.351423    8800 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0327 16:44:59.386088    8800 logs.go:123] Gathering logs for storage-provisioner [579fe35ef076] ...
	I0327 16:44:59.386102    8800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 579fe35ef076"
	I0327 16:44:59.398409    8800 logs.go:123] Gathering logs for Docker ...
	I0327 16:44:59.398421    8800 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0327 16:44:59.422038    8800 logs.go:123] Gathering logs for container status ...
	I0327 16:44:59.422048    8800 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
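The "container status" probe is a shell fallback chain: use crictl when `which crictl` finds it, otherwise let the bare name fail so the `|| sudo docker ps -a` branch runs. A sketch issuing the same compound command (the command string is verbatim from the log; the Go wrapper is illustrative):

package main

import (
	"fmt"
	"os/exec"
)

// containerStatus runs the compound command seen in the log: crictl if
// installed, with a docker ps fallback when crictl is missing or fails.
func containerStatus() (string, error) {
	cmd := exec.Command("/bin/bash", "-c",
		"sudo `which crictl || echo crictl` ps -a || sudo docker ps -a")
	out, err := cmd.CombinedOutput()
	return string(out), err
}

func main() {
	out, err := containerStatus()
	if err != nil {
		fmt.Println("container status failed:", err)
	}
	fmt.Print(out)
}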
	I0327 16:44:59.433697    8800 logs.go:123] Gathering logs for kube-apiserver [126b9553cb40] ...
	I0327 16:44:59.433708    8800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 126b9553cb40"
	I0327 16:44:59.454263    8800 logs.go:123] Gathering logs for etcd [4f67bfb30b49] ...
	I0327 16:44:59.454273    8800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4f67bfb30b49"
	I0327 16:44:59.468346    8800 logs.go:123] Gathering logs for kube-controller-manager [385805912c59] ...
	I0327 16:44:59.468357    8800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 385805912c59"
	I0327 16:44:59.486110    8800 logs.go:123] Gathering logs for kube-controller-manager [1a8c10b8da56] ...
	I0327 16:44:59.486121    8800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1a8c10b8da56"
	I0327 16:44:59.501896    8800 logs.go:123] Gathering logs for storage-provisioner [b4f5d0ba182b] ...
	I0327 16:44:59.501908    8800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b4f5d0ba182b"
	I0327 16:44:59.512959    8800 logs.go:123] Gathering logs for kube-apiserver [0703d422a652] ...
	I0327 16:44:59.512970    8800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0703d422a652"
	I0327 16:44:59.526624    8800 logs.go:123] Gathering logs for etcd [5429db75cd75] ...
	I0327 16:44:59.526635    8800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5429db75cd75"
	I0327 16:45:02.043106    8800 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0327 16:45:01.257489    8959 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0327 16:45:01.257524    8959 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0327 16:45:07.045245    8800 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0327 16:45:07.045340    8800 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0327 16:45:07.057860    8800 logs.go:276] 2 containers: [0703d422a652 126b9553cb40]
	I0327 16:45:07.057939    8800 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0327 16:45:07.068250    8800 logs.go:276] 2 containers: [4f67bfb30b49 5429db75cd75]
	I0327 16:45:07.068324    8800 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0327 16:45:07.079162    8800 logs.go:276] 1 containers: [2ef14eacffaa]
	I0327 16:45:07.079227    8800 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0327 16:45:07.089934    8800 logs.go:276] 2 containers: [b68fec8d8767 c0c992e34b5d]
	I0327 16:45:07.090008    8800 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0327 16:45:07.100482    8800 logs.go:276] 1 containers: [2bfb39e97e1d]
	I0327 16:45:07.100551    8800 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0327 16:45:07.113708    8800 logs.go:276] 2 containers: [385805912c59 1a8c10b8da56]
	I0327 16:45:07.113777    8800 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0327 16:45:07.124607    8800 logs.go:276] 0 containers: []
	W0327 16:45:07.124622    8800 logs.go:278] No container was found matching "kindnet"
	I0327 16:45:07.124686    8800 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0327 16:45:07.135818    8800 logs.go:276] 2 containers: [b4f5d0ba182b 579fe35ef076]
	I0327 16:45:07.135837    8800 logs.go:123] Gathering logs for dmesg ...
	I0327 16:45:07.135875    8800 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0327 16:45:07.140561    8800 logs.go:123] Gathering logs for coredns [2ef14eacffaa] ...
	I0327 16:45:07.140566    8800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2ef14eacffaa"
	I0327 16:45:07.151832    8800 logs.go:123] Gathering logs for kube-scheduler [b68fec8d8767] ...
	I0327 16:45:07.151843    8800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b68fec8d8767"
	I0327 16:45:07.164346    8800 logs.go:123] Gathering logs for kube-scheduler [c0c992e34b5d] ...
	I0327 16:45:07.164357    8800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c0c992e34b5d"
	I0327 16:45:07.179707    8800 logs.go:123] Gathering logs for kube-controller-manager [1a8c10b8da56] ...
	I0327 16:45:07.179721    8800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1a8c10b8da56"
	I0327 16:45:07.194815    8800 logs.go:123] Gathering logs for storage-provisioner [b4f5d0ba182b] ...
	I0327 16:45:07.194831    8800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b4f5d0ba182b"
	I0327 16:45:07.206765    8800 logs.go:123] Gathering logs for storage-provisioner [579fe35ef076] ...
	I0327 16:45:07.206775    8800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 579fe35ef076"
	I0327 16:45:07.218613    8800 logs.go:123] Gathering logs for kubelet ...
	I0327 16:45:07.218627    8800 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0327 16:45:07.254703    8800 logs.go:123] Gathering logs for container status ...
	I0327 16:45:07.254713    8800 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0327 16:45:07.267006    8800 logs.go:123] Gathering logs for kube-apiserver [0703d422a652] ...
	I0327 16:45:07.267016    8800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0703d422a652"
	I0327 16:45:07.281908    8800 logs.go:123] Gathering logs for etcd [4f67bfb30b49] ...
	I0327 16:45:07.281919    8800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4f67bfb30b49"
	I0327 16:45:07.295456    8800 logs.go:123] Gathering logs for Docker ...
	I0327 16:45:07.295466    8800 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0327 16:45:07.319150    8800 logs.go:123] Gathering logs for describe nodes ...
	I0327 16:45:07.319157    8800 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0327 16:45:07.353470    8800 logs.go:123] Gathering logs for etcd [5429db75cd75] ...
	I0327 16:45:07.353481    8800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5429db75cd75"
	I0327 16:45:07.367731    8800 logs.go:123] Gathering logs for kube-apiserver [126b9553cb40] ...
	I0327 16:45:07.367746    8800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 126b9553cb40"
	I0327 16:45:07.394447    8800 logs.go:123] Gathering logs for kube-controller-manager [385805912c59] ...
	I0327 16:45:07.394460    8800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 385805912c59"
	I0327 16:45:07.412717    8800 logs.go:123] Gathering logs for kube-proxy [2bfb39e97e1d] ...
	I0327 16:45:07.412727    8800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2bfb39e97e1d"
	I0327 16:45:06.257680    8959 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0327 16:45:06.257734    8959 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0327 16:45:09.926247    8800 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0327 16:45:11.258063    8959 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0327 16:45:11.258118    8959 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0327 16:45:14.928303    8800 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0327 16:45:14.928444    8800 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0327 16:45:14.943803    8800 logs.go:276] 2 containers: [0703d422a652 126b9553cb40]
	I0327 16:45:14.943888    8800 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0327 16:45:14.956333    8800 logs.go:276] 2 containers: [4f67bfb30b49 5429db75cd75]
	I0327 16:45:14.956414    8800 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0327 16:45:14.966599    8800 logs.go:276] 1 containers: [2ef14eacffaa]
	I0327 16:45:14.966661    8800 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0327 16:45:14.977174    8800 logs.go:276] 2 containers: [b68fec8d8767 c0c992e34b5d]
	I0327 16:45:14.977244    8800 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0327 16:45:14.987092    8800 logs.go:276] 1 containers: [2bfb39e97e1d]
	I0327 16:45:14.987157    8800 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0327 16:45:14.997128    8800 logs.go:276] 2 containers: [385805912c59 1a8c10b8da56]
	I0327 16:45:14.997197    8800 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0327 16:45:15.007246    8800 logs.go:276] 0 containers: []
	W0327 16:45:15.007261    8800 logs.go:278] No container was found matching "kindnet"
	I0327 16:45:15.007322    8800 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0327 16:45:15.018155    8800 logs.go:276] 2 containers: [b4f5d0ba182b 579fe35ef076]
	I0327 16:45:15.018176    8800 logs.go:123] Gathering logs for describe nodes ...
	I0327 16:45:15.018181    8800 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0327 16:45:15.055488    8800 logs.go:123] Gathering logs for etcd [5429db75cd75] ...
	I0327 16:45:15.055502    8800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5429db75cd75"
	I0327 16:45:15.070025    8800 logs.go:123] Gathering logs for kube-proxy [2bfb39e97e1d] ...
	I0327 16:45:15.070039    8800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2bfb39e97e1d"
	I0327 16:45:15.081741    8800 logs.go:123] Gathering logs for kube-controller-manager [385805912c59] ...
	I0327 16:45:15.081751    8800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 385805912c59"
	I0327 16:45:15.098684    8800 logs.go:123] Gathering logs for storage-provisioner [579fe35ef076] ...
	I0327 16:45:15.098698    8800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 579fe35ef076"
	I0327 16:45:15.116608    8800 logs.go:123] Gathering logs for Docker ...
	I0327 16:45:15.116618    8800 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0327 16:45:15.141954    8800 logs.go:123] Gathering logs for kube-apiserver [0703d422a652] ...
	I0327 16:45:15.141964    8800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0703d422a652"
	I0327 16:45:15.155535    8800 logs.go:123] Gathering logs for etcd [4f67bfb30b49] ...
	I0327 16:45:15.155545    8800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4f67bfb30b49"
	I0327 16:45:15.171195    8800 logs.go:123] Gathering logs for kube-controller-manager [1a8c10b8da56] ...
	I0327 16:45:15.171203    8800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1a8c10b8da56"
	I0327 16:45:15.187004    8800 logs.go:123] Gathering logs for container status ...
	I0327 16:45:15.187016    8800 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0327 16:45:15.198938    8800 logs.go:123] Gathering logs for kube-scheduler [b68fec8d8767] ...
	I0327 16:45:15.198949    8800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b68fec8d8767"
	I0327 16:45:15.210632    8800 logs.go:123] Gathering logs for kubelet ...
	I0327 16:45:15.210642    8800 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0327 16:45:15.247025    8800 logs.go:123] Gathering logs for dmesg ...
	I0327 16:45:15.247041    8800 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0327 16:45:15.251794    8800 logs.go:123] Gathering logs for kube-apiserver [126b9553cb40] ...
	I0327 16:45:15.251803    8800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 126b9553cb40"
	I0327 16:45:15.270966    8800 logs.go:123] Gathering logs for coredns [2ef14eacffaa] ...
	I0327 16:45:15.270979    8800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2ef14eacffaa"
	I0327 16:45:15.282476    8800 logs.go:123] Gathering logs for kube-scheduler [c0c992e34b5d] ...
	I0327 16:45:15.282487    8800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c0c992e34b5d"
	I0327 16:45:15.297295    8800 logs.go:123] Gathering logs for storage-provisioner [b4f5d0ba182b] ...
	I0327 16:45:15.297305    8800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b4f5d0ba182b"
	I0327 16:45:17.810610    8800 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0327 16:45:16.258546    8959 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0327 16:45:16.258609    8959 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0327 16:45:22.813095    8800 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0327 16:45:22.813492    8800 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0327 16:45:22.843830    8800 logs.go:276] 2 containers: [0703d422a652 126b9553cb40]
	I0327 16:45:22.843961    8800 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0327 16:45:22.863226    8800 logs.go:276] 2 containers: [4f67bfb30b49 5429db75cd75]
	I0327 16:45:22.863326    8800 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0327 16:45:21.259368    8959 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0327 16:45:21.259478    8959 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0327 16:45:22.877291    8800 logs.go:276] 1 containers: [2ef14eacffaa]
	I0327 16:45:22.877366    8800 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0327 16:45:22.889514    8800 logs.go:276] 2 containers: [b68fec8d8767 c0c992e34b5d]
	I0327 16:45:22.889580    8800 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0327 16:45:22.900865    8800 logs.go:276] 1 containers: [2bfb39e97e1d]
	I0327 16:45:22.900956    8800 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0327 16:45:22.917172    8800 logs.go:276] 2 containers: [385805912c59 1a8c10b8da56]
	I0327 16:45:22.917246    8800 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0327 16:45:22.927991    8800 logs.go:276] 0 containers: []
	W0327 16:45:22.928003    8800 logs.go:278] No container was found matching "kindnet"
	I0327 16:45:22.928059    8800 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0327 16:45:22.938382    8800 logs.go:276] 2 containers: [b4f5d0ba182b 579fe35ef076]
	I0327 16:45:22.938399    8800 logs.go:123] Gathering logs for container status ...
	I0327 16:45:22.938404    8800 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0327 16:45:22.955198    8800 logs.go:123] Gathering logs for kube-apiserver [0703d422a652] ...
	I0327 16:45:22.955208    8800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0703d422a652"
	I0327 16:45:22.969291    8800 logs.go:123] Gathering logs for etcd [4f67bfb30b49] ...
	I0327 16:45:22.969301    8800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4f67bfb30b49"
	I0327 16:45:22.982915    8800 logs.go:123] Gathering logs for etcd [5429db75cd75] ...
	I0327 16:45:22.982927    8800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5429db75cd75"
	I0327 16:45:22.997102    8800 logs.go:123] Gathering logs for kube-scheduler [b68fec8d8767] ...
	I0327 16:45:22.997113    8800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b68fec8d8767"
	I0327 16:45:23.008823    8800 logs.go:123] Gathering logs for kube-scheduler [c0c992e34b5d] ...
	I0327 16:45:23.008834    8800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c0c992e34b5d"
	I0327 16:45:23.024338    8800 logs.go:123] Gathering logs for kube-controller-manager [385805912c59] ...
	I0327 16:45:23.024351    8800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 385805912c59"
	I0327 16:45:23.042048    8800 logs.go:123] Gathering logs for storage-provisioner [b4f5d0ba182b] ...
	I0327 16:45:23.042059    8800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b4f5d0ba182b"
	I0327 16:45:23.054164    8800 logs.go:123] Gathering logs for dmesg ...
	I0327 16:45:23.054177    8800 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0327 16:45:23.059182    8800 logs.go:123] Gathering logs for describe nodes ...
	I0327 16:45:23.059188    8800 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0327 16:45:23.094349    8800 logs.go:123] Gathering logs for kube-apiserver [126b9553cb40] ...
	I0327 16:45:23.094359    8800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 126b9553cb40"
	I0327 16:45:23.115082    8800 logs.go:123] Gathering logs for coredns [2ef14eacffaa] ...
	I0327 16:45:23.115091    8800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2ef14eacffaa"
	I0327 16:45:23.130494    8800 logs.go:123] Gathering logs for kube-controller-manager [1a8c10b8da56] ...
	I0327 16:45:23.130503    8800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1a8c10b8da56"
	I0327 16:45:23.145266    8800 logs.go:123] Gathering logs for storage-provisioner [579fe35ef076] ...
	I0327 16:45:23.145274    8800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 579fe35ef076"
	I0327 16:45:23.156896    8800 logs.go:123] Gathering logs for Docker ...
	I0327 16:45:23.156909    8800 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0327 16:45:23.179326    8800 logs.go:123] Gathering logs for kubelet ...
	I0327 16:45:23.179333    8800 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0327 16:45:23.216009    8800 logs.go:123] Gathering logs for kube-proxy [2bfb39e97e1d] ...
	I0327 16:45:23.216019    8800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2bfb39e97e1d"
	I0327 16:45:25.730334    8800 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0327 16:45:26.260652    8959 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0327 16:45:26.260700    8959 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0327 16:45:30.730641    8800 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0327 16:45:30.730802    8800 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0327 16:45:30.745315    8800 logs.go:276] 2 containers: [0703d422a652 126b9553cb40]
	I0327 16:45:30.745389    8800 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0327 16:45:30.760521    8800 logs.go:276] 2 containers: [4f67bfb30b49 5429db75cd75]
	I0327 16:45:30.760593    8800 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0327 16:45:30.771151    8800 logs.go:276] 1 containers: [2ef14eacffaa]
	I0327 16:45:30.771219    8800 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0327 16:45:30.781349    8800 logs.go:276] 2 containers: [b68fec8d8767 c0c992e34b5d]
	I0327 16:45:30.781418    8800 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0327 16:45:30.791662    8800 logs.go:276] 1 containers: [2bfb39e97e1d]
	I0327 16:45:30.791732    8800 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0327 16:45:30.802422    8800 logs.go:276] 2 containers: [385805912c59 1a8c10b8da56]
	I0327 16:45:30.802497    8800 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0327 16:45:30.815784    8800 logs.go:276] 0 containers: []
	W0327 16:45:30.815796    8800 logs.go:278] No container was found matching "kindnet"
	I0327 16:45:30.815860    8800 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0327 16:45:30.826292    8800 logs.go:276] 2 containers: [b4f5d0ba182b 579fe35ef076]
	I0327 16:45:30.826311    8800 logs.go:123] Gathering logs for kube-scheduler [b68fec8d8767] ...
	I0327 16:45:30.826316    8800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b68fec8d8767"
	I0327 16:45:30.838561    8800 logs.go:123] Gathering logs for kube-scheduler [c0c992e34b5d] ...
	I0327 16:45:30.838574    8800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c0c992e34b5d"
	I0327 16:45:30.853744    8800 logs.go:123] Gathering logs for container status ...
	I0327 16:45:30.853754    8800 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0327 16:45:30.865754    8800 logs.go:123] Gathering logs for kubelet ...
	I0327 16:45:30.865765    8800 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0327 16:45:30.901285    8800 logs.go:123] Gathering logs for dmesg ...
	I0327 16:45:30.901297    8800 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0327 16:45:30.905825    8800 logs.go:123] Gathering logs for etcd [5429db75cd75] ...
	I0327 16:45:30.905833    8800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5429db75cd75"
	I0327 16:45:30.920460    8800 logs.go:123] Gathering logs for coredns [2ef14eacffaa] ...
	I0327 16:45:30.920471    8800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2ef14eacffaa"
	I0327 16:45:30.932235    8800 logs.go:123] Gathering logs for kube-proxy [2bfb39e97e1d] ...
	I0327 16:45:30.932247    8800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2bfb39e97e1d"
	I0327 16:45:30.943515    8800 logs.go:123] Gathering logs for storage-provisioner [b4f5d0ba182b] ...
	I0327 16:45:30.943525    8800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b4f5d0ba182b"
	I0327 16:45:30.954664    8800 logs.go:123] Gathering logs for describe nodes ...
	I0327 16:45:30.954674    8800 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0327 16:45:30.990168    8800 logs.go:123] Gathering logs for kube-apiserver [126b9553cb40] ...
	I0327 16:45:30.990179    8800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 126b9553cb40"
	I0327 16:45:31.009255    8800 logs.go:123] Gathering logs for storage-provisioner [579fe35ef076] ...
	I0327 16:45:31.009270    8800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 579fe35ef076"
	I0327 16:45:31.020735    8800 logs.go:123] Gathering logs for Docker ...
	I0327 16:45:31.020745    8800 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0327 16:45:31.043527    8800 logs.go:123] Gathering logs for kube-apiserver [0703d422a652] ...
	I0327 16:45:31.043536    8800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0703d422a652"
	I0327 16:45:31.059253    8800 logs.go:123] Gathering logs for etcd [4f67bfb30b49] ...
	I0327 16:45:31.059263    8800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4f67bfb30b49"
	I0327 16:45:31.074438    8800 logs.go:123] Gathering logs for kube-controller-manager [385805912c59] ...
	I0327 16:45:31.074446    8800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 385805912c59"
	I0327 16:45:31.092373    8800 logs.go:123] Gathering logs for kube-controller-manager [1a8c10b8da56] ...
	I0327 16:45:31.092382    8800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1a8c10b8da56"
	I0327 16:45:31.259941    8959 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0327 16:45:31.259988    8959 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0327 16:45:33.610103    8800 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0327 16:45:36.259579    8959 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0327 16:45:36.259629    8959 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0327 16:45:38.610598    8800 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0327 16:45:38.610690    8800 kubeadm.go:591] duration metric: took 4m4.356742292s to restartPrimaryControlPlane
	W0327 16:45:38.610773    8800 out.go:239] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0327 16:45:38.610807    8800 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm reset --cri-socket /var/run/cri-dockerd.sock --force"
	I0327 16:45:39.603634    8800 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0327 16:45:39.608463    8800 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0327 16:45:39.611295    8800 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0327 16:45:39.613968    8800 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0327 16:45:39.613974    8800 kubeadm.go:156] found existing configuration files:
	
	I0327 16:45:39.613997    8800 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:51212 /etc/kubernetes/admin.conf
	I0327 16:45:39.616552    8800 kubeadm.go:162] "https://control-plane.minikube.internal:51212" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:51212 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0327 16:45:39.616574    8800 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0327 16:45:39.618952    8800 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:51212 /etc/kubernetes/kubelet.conf
	I0327 16:45:39.621729    8800 kubeadm.go:162] "https://control-plane.minikube.internal:51212" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:51212 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0327 16:45:39.621749    8800 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0327 16:45:39.624674    8800 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:51212 /etc/kubernetes/controller-manager.conf
	I0327 16:45:39.627140    8800 kubeadm.go:162] "https://control-plane.minikube.internal:51212" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:51212 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0327 16:45:39.627160    8800 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0327 16:45:39.629830    8800 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:51212 /etc/kubernetes/scheduler.conf
	I0327 16:45:39.633382    8800 kubeadm.go:162] "https://control-plane.minikube.internal:51212" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:51212 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0327 16:45:39.633437    8800 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
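The grep/rm pairs above are minikube's stale-config check: any kubeconfig under /etc/kubernetes that does not mention the expected control-plane endpoint is removed so that kubeadm init can regenerate it. Here each grep exits with status 2 because the files are already gone after kubeadm reset (status 1 would instead mean the file exists but lacks the endpoint). A local sketch of the same idea, under the assumption that this captures the intent (minikube runs it over SSH; the helper name is illustrative):

package main

import (
	"fmt"
	"os"
	"strings"
)

// removeIfStale deletes conf unless it references endpoint. A missing file
// (the case in the log, after "kubeadm reset") is treated the same way as
// a stale one: the "rm -f" equivalent is a no-op for it.
func removeIfStale(conf, endpoint string) error {
	data, err := os.ReadFile(conf)
	if err == nil && strings.Contains(string(data), endpoint) {
		return nil // endpoint found: config is current, keep it
	}
	fmt.Printf("%q may not be in %s - will remove\n", endpoint, conf)
	return os.RemoveAll(conf) // like "sudo rm -f": no error if already absent
}

func main() {
	for _, conf := range []string{
		"/etc/kubernetes/admin.conf",
		"/etc/kubernetes/kubelet.conf",
		"/etc/kubernetes/controller-manager.conf",
		"/etc/kubernetes/scheduler.conf",
	} {
		if err := removeIfStale(conf, "https://control-plane.minikube.internal:51212"); err != nil {
			fmt.Println(err)
		}
	}
}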
	I0327 16:45:39.636958    8800 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0327 16:45:39.656269    8800 kubeadm.go:309] [init] Using Kubernetes version: v1.24.1
	I0327 16:45:39.656298    8800 kubeadm.go:309] [preflight] Running pre-flight checks
	I0327 16:45:39.716195    8800 kubeadm.go:309] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0327 16:45:39.716251    8800 kubeadm.go:309] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0327 16:45:39.716298    8800 kubeadm.go:309] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0327 16:45:39.765604    8800 kubeadm.go:309] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0327 16:45:39.773631    8800 out.go:204]   - Generating certificates and keys ...
	I0327 16:45:39.773665    8800 kubeadm.go:309] [certs] Using existing ca certificate authority
	I0327 16:45:39.773700    8800 kubeadm.go:309] [certs] Using existing apiserver certificate and key on disk
	I0327 16:45:39.773745    8800 kubeadm.go:309] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0327 16:45:39.773844    8800 kubeadm.go:309] [certs] Using existing front-proxy-ca certificate authority
	I0327 16:45:39.773931    8800 kubeadm.go:309] [certs] Using existing front-proxy-client certificate and key on disk
	I0327 16:45:39.773990    8800 kubeadm.go:309] [certs] Using existing etcd/ca certificate authority
	I0327 16:45:39.774161    8800 kubeadm.go:309] [certs] Using existing etcd/server certificate and key on disk
	I0327 16:45:39.774289    8800 kubeadm.go:309] [certs] Using existing etcd/peer certificate and key on disk
	I0327 16:45:39.774411    8800 kubeadm.go:309] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0327 16:45:39.774454    8800 kubeadm.go:309] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0327 16:45:39.774479    8800 kubeadm.go:309] [certs] Using the existing "sa" key
	I0327 16:45:39.774509    8800 kubeadm.go:309] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0327 16:45:39.864151    8800 kubeadm.go:309] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0327 16:45:40.053392    8800 kubeadm.go:309] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0327 16:45:40.133239    8800 kubeadm.go:309] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0327 16:45:40.216430    8800 kubeadm.go:309] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0327 16:45:40.247916    8800 kubeadm.go:309] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0327 16:45:40.248606    8800 kubeadm.go:309] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0327 16:45:40.248647    8800 kubeadm.go:309] [kubelet-start] Starting the kubelet
	I0327 16:45:40.320023    8800 kubeadm.go:309] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0327 16:45:40.324355    8800 out.go:204]   - Booting up control plane ...
	I0327 16:45:40.324403    8800 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0327 16:45:40.324451    8800 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0327 16:45:40.325574    8800 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0327 16:45:40.325618    8800 kubeadm.go:309] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0327 16:45:40.325730    8800 kubeadm.go:309] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0327 16:45:41.258741    8959 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0327 16:45:41.258768    8959 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0327 16:45:44.830820    8800 kubeadm.go:309] [apiclient] All control plane components are healthy after 4.508999 seconds
	I0327 16:45:44.830941    8800 kubeadm.go:309] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0327 16:45:44.836767    8800 kubeadm.go:309] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0327 16:45:45.351288    8800 kubeadm.go:309] [upload-certs] Skipping phase. Please see --upload-certs
	I0327 16:45:45.351677    8800 kubeadm.go:309] [mark-control-plane] Marking the node running-upgrade-400000 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0327 16:45:45.855964    8800 kubeadm.go:309] [bootstrap-token] Using token: 3t2mm1.7phrwooo7ncwiu6l
	I0327 16:45:45.859911    8800 out.go:204]   - Configuring RBAC rules ...
	I0327 16:45:45.859974    8800 kubeadm.go:309] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0327 16:45:45.860026    8800 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0327 16:45:45.865274    8800 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0327 16:45:45.866092    8800 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0327 16:45:45.867030    8800 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0327 16:45:45.867763    8800 kubeadm.go:309] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0327 16:45:45.875239    8800 kubeadm.go:309] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0327 16:45:46.060352    8800 kubeadm.go:309] [addons] Applied essential addon: CoreDNS
	I0327 16:45:46.260141    8800 kubeadm.go:309] [addons] Applied essential addon: kube-proxy
	I0327 16:45:46.260707    8800 kubeadm.go:309] 
	I0327 16:45:46.260740    8800 kubeadm.go:309] Your Kubernetes control-plane has initialized successfully!
	I0327 16:45:46.260743    8800 kubeadm.go:309] 
	I0327 16:45:46.260786    8800 kubeadm.go:309] To start using your cluster, you need to run the following as a regular user:
	I0327 16:45:46.260789    8800 kubeadm.go:309] 
	I0327 16:45:46.260802    8800 kubeadm.go:309]   mkdir -p $HOME/.kube
	I0327 16:45:46.260831    8800 kubeadm.go:309]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0327 16:45:46.260936    8800 kubeadm.go:309]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0327 16:45:46.260940    8800 kubeadm.go:309] 
	I0327 16:45:46.260969    8800 kubeadm.go:309] Alternatively, if you are the root user, you can run:
	I0327 16:45:46.260975    8800 kubeadm.go:309] 
	I0327 16:45:46.261003    8800 kubeadm.go:309]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0327 16:45:46.261006    8800 kubeadm.go:309] 
	I0327 16:45:46.261039    8800 kubeadm.go:309] You should now deploy a pod network to the cluster.
	I0327 16:45:46.261096    8800 kubeadm.go:309] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0327 16:45:46.261144    8800 kubeadm.go:309]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0327 16:45:46.261150    8800 kubeadm.go:309] 
	I0327 16:45:46.261209    8800 kubeadm.go:309] You can now join any number of control-plane nodes by copying certificate authorities
	I0327 16:45:46.261255    8800 kubeadm.go:309] and service account keys on each node and then running the following as root:
	I0327 16:45:46.261260    8800 kubeadm.go:309] 
	I0327 16:45:46.261303    8800 kubeadm.go:309]   kubeadm join control-plane.minikube.internal:8443 --token 3t2mm1.7phrwooo7ncwiu6l \
	I0327 16:45:46.261355    8800 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:8047b7e049f0384af96cc555849ef1f992fa8884768aff95c9a460200a82d884 \
	I0327 16:45:46.261370    8800 kubeadm.go:309] 	--control-plane 
	I0327 16:45:46.261372    8800 kubeadm.go:309] 
	I0327 16:45:46.261415    8800 kubeadm.go:309] Then you can join any number of worker nodes by running the following on each as root:
	I0327 16:45:46.261421    8800 kubeadm.go:309] 
	I0327 16:45:46.261458    8800 kubeadm.go:309] kubeadm join control-plane.minikube.internal:8443 --token 3t2mm1.7phrwooo7ncwiu6l \
	I0327 16:45:46.261507    8800 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:8047b7e049f0384af96cc555849ef1f992fa8884768aff95c9a460200a82d884 
	I0327 16:45:46.261559    8800 kubeadm.go:309] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0327 16:45:46.261566    8800 cni.go:84] Creating CNI manager for ""
	I0327 16:45:46.261574    8800 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0327 16:45:46.265476    8800 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0327 16:45:46.271371    8800 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0327 16:45:46.274566    8800 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
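The 457-byte payload written to /etc/cni/net.d/1-k8s.conflist is not reproduced in the log. For orientation, a bridge conflist of the kind minikube generates looks roughly like the constant below; every field value here is an example, not the exact bytes that were copied:

package main

import "fmt"

// Illustrative bridge CNI conflist of the kind written to
// /etc/cni/net.d/1-k8s.conflist. The exact contents minikube copied are
// not shown in the log, so treat every value as an assumption.
const bridgeConflist = `{
  "cniVersion": "0.3.1",
  "name": "bridge",
  "plugins": [
    {
      "type": "bridge",
      "bridge": "bridge",
      "isDefaultGateway": true,
      "ipMasq": true,
      "hairpinMode": true,
      "ipam": { "type": "host-local", "subnet": "10.244.0.0/16" }
    },
    { "type": "portmap", "capabilities": { "portMappings": true } }
  ]
}`

func main() { fmt.Println(bridgeConflist) }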
	I0327 16:45:46.279650    8800 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0327 16:45:46.279689    8800 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0327 16:45:46.279714    8800 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes running-upgrade-400000 minikube.k8s.io/updated_at=2024_03_27T16_45_46_0700 minikube.k8s.io/version=v1.33.0-beta.0 minikube.k8s.io/commit=1a940f980f77c9bfc8ef93678db8c1f49c9dd79d minikube.k8s.io/name=running-upgrade-400000 minikube.k8s.io/primary=true
	I0327 16:45:46.335060    8800 kubeadm.go:1107] duration metric: took 55.414375ms to wait for elevateKubeSystemPrivileges
	I0327 16:45:46.335079    8800 ops.go:34] apiserver oom_adj: -16
	W0327 16:45:46.335085    8800 kubeadm.go:286] apiserver tunnel failed: apiserver port not set
	I0327 16:45:46.335087    8800 kubeadm.go:393] duration metric: took 4m12.096700125s to StartCluster
	I0327 16:45:46.335098    8800 settings.go:142] acquiring lock: {Name:mk7a184fa834ec55a805b998fd083319e6561206 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0327 16:45:46.335261    8800 settings.go:150] Updating kubeconfig:  /Users/jenkins/minikube-integration/18485-6511/kubeconfig
	I0327 16:45:46.335694    8800 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/18485-6511/kubeconfig: {Name:mke46d0809919cfbe0118c5110926d6ce61bf373 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0327 16:45:46.335905    8800 start.go:234] Will wait 6m0s for node &{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0327 16:45:46.340385    8800 out.go:177] * Verifying Kubernetes components...
	I0327 16:45:46.335926    8800 addons.go:502] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false]
	I0327 16:45:46.336091    8800 config.go:182] Loaded profile config "running-upgrade-400000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0327 16:45:46.347430    8800 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0327 16:45:46.347434    8800 addons.go:69] Setting default-storageclass=true in profile "running-upgrade-400000"
	I0327 16:45:46.347434    8800 addons.go:69] Setting storage-provisioner=true in profile "running-upgrade-400000"
	I0327 16:45:46.347470    8800 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "running-upgrade-400000"
	I0327 16:45:46.347475    8800 addons.go:234] Setting addon storage-provisioner=true in "running-upgrade-400000"
	W0327 16:45:46.347481    8800 addons.go:243] addon storage-provisioner should already be in state true
	I0327 16:45:46.347500    8800 host.go:66] Checking if "running-upgrade-400000" exists ...
	I0327 16:45:46.348587    8800 kapi.go:59] client config for running-upgrade-400000: &rest.Config{Host:"https://10.0.2.15:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/18485-6511/.minikube/profiles/running-upgrade-400000/client.crt", KeyFile:"/Users/jenkins/minikube-integration/18485-6511/.minikube/profiles/running-upgrade-400000/client.key", CAFile:"/Users/jenkins/minikube-integration/18485-6511/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1043e6c70), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0327 16:45:46.349259    8800 addons.go:234] Setting addon default-storageclass=true in "running-upgrade-400000"
	W0327 16:45:46.349266    8800 addons.go:243] addon default-storageclass should already be in state true
	I0327 16:45:46.349273    8800 host.go:66] Checking if "running-upgrade-400000" exists ...
	I0327 16:45:46.354366    8800 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0327 16:45:46.360356    8800 addons.go:426] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0327 16:45:46.360365    8800 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0327 16:45:46.360374    8800 sshutil.go:53] new ssh client: &{IP:localhost Port:51180 SSHKeyPath:/Users/jenkins/minikube-integration/18485-6511/.minikube/machines/running-upgrade-400000/id_rsa Username:docker}
	I0327 16:45:46.361215    8800 addons.go:426] installing /etc/kubernetes/addons/storageclass.yaml
	I0327 16:45:46.361222    8800 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0327 16:45:46.361226    8800 sshutil.go:53] new ssh client: &{IP:localhost Port:51180 SSHKeyPath:/Users/jenkins/minikube-integration/18485-6511/.minikube/machines/running-upgrade-400000/id_rsa Username:docker}
	I0327 16:45:46.425279    8800 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0327 16:45:46.430397    8800 api_server.go:52] waiting for apiserver process to appear ...
	I0327 16:45:46.430448    8800 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0327 16:45:46.434470    8800 api_server.go:72] duration metric: took 98.573792ms to wait for apiserver process to appear ...
	I0327 16:45:46.434478    8800 api_server.go:88] waiting for apiserver healthz status ...
	I0327 16:45:46.434485    8800 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0327 16:45:46.439349    8800 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0327 16:45:46.443169    8800 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0327 16:45:46.259730    8959 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0327 16:45:46.259751    8959 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0327 16:45:51.434444    8800 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0327 16:45:51.434464    8800 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0327 16:45:51.261006    8959 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0327 16:45:51.261142    8959 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0327 16:45:51.272237    8959 logs.go:276] 2 containers: [aa3afc402172 e20a2e974eba]
	I0327 16:45:51.272324    8959 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0327 16:45:51.282809    8959 logs.go:276] 2 containers: [d74fd36a706c 56ea780761c8]
	I0327 16:45:51.282883    8959 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0327 16:45:51.293656    8959 logs.go:276] 1 containers: [25c0222d62b6]
	I0327 16:45:51.293717    8959 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0327 16:45:51.303876    8959 logs.go:276] 2 containers: [62111993cdda c581a3f09ed3]
	I0327 16:45:51.303936    8959 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0327 16:45:51.314276    8959 logs.go:276] 1 containers: [0dc2c64bef5d]
	I0327 16:45:51.314343    8959 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0327 16:45:51.325719    8959 logs.go:276] 2 containers: [fa00948ec909 f76badbaa6c8]
	I0327 16:45:51.325781    8959 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0327 16:45:51.337154    8959 logs.go:276] 0 containers: []
	W0327 16:45:51.337166    8959 logs.go:278] No container was found matching "kindnet"
	I0327 16:45:51.337222    8959 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0327 16:45:51.356195    8959 logs.go:276] 2 containers: [396dc8c2621b ffd8ac20b899]
	I0327 16:45:51.356212    8959 logs.go:123] Gathering logs for container status ...
	I0327 16:45:51.356218    8959 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0327 16:45:51.369067    8959 logs.go:123] Gathering logs for etcd [d74fd36a706c] ...
	I0327 16:45:51.369078    8959 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d74fd36a706c"
	I0327 16:45:51.383007    8959 logs.go:123] Gathering logs for kube-controller-manager [fa00948ec909] ...
	I0327 16:45:51.383021    8959 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fa00948ec909"
	I0327 16:45:51.400901    8959 logs.go:123] Gathering logs for storage-provisioner [396dc8c2621b] ...
	I0327 16:45:51.400912    8959 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 396dc8c2621b"
	I0327 16:45:51.413380    8959 logs.go:123] Gathering logs for describe nodes ...
	I0327 16:45:51.413391    8959 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0327 16:45:51.528133    8959 logs.go:123] Gathering logs for kube-scheduler [62111993cdda] ...
	I0327 16:45:51.528147    8959 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 62111993cdda"
	I0327 16:45:51.540984    8959 logs.go:123] Gathering logs for kube-controller-manager [f76badbaa6c8] ...
	I0327 16:45:51.540994    8959 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f76badbaa6c8"
	I0327 16:45:51.555174    8959 logs.go:123] Gathering logs for storage-provisioner [ffd8ac20b899] ...
	I0327 16:45:51.555189    8959 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ffd8ac20b899"
	I0327 16:45:51.566491    8959 logs.go:123] Gathering logs for dmesg ...
	I0327 16:45:51.566502    8959 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0327 16:45:51.571180    8959 logs.go:123] Gathering logs for kube-apiserver [aa3afc402172] ...
	I0327 16:45:51.571188    8959 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 aa3afc402172"
	I0327 16:45:51.585028    8959 logs.go:123] Gathering logs for kube-scheduler [c581a3f09ed3] ...
	I0327 16:45:51.585039    8959 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c581a3f09ed3"
	I0327 16:45:51.600186    8959 logs.go:123] Gathering logs for coredns [25c0222d62b6] ...
	I0327 16:45:51.600196    8959 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 25c0222d62b6"
	I0327 16:45:51.611616    8959 logs.go:123] Gathering logs for kube-proxy [0dc2c64bef5d] ...
	I0327 16:45:51.611626    8959 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0dc2c64bef5d"
	I0327 16:45:51.623528    8959 logs.go:123] Gathering logs for Docker ...
	I0327 16:45:51.623537    8959 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0327 16:45:51.649714    8959 logs.go:123] Gathering logs for kubelet ...
	I0327 16:45:51.649737    8959 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0327 16:45:51.687016    8959 logs.go:138] Found kubelet problem: Mar 27 23:45:09 stopped-upgrade-017000 kubelet[1676]: W0327 23:45:09.218698    1676 reflector.go:324] object-"kube-system"/"kube-proxy": failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:stopped-upgrade-017000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'stopped-upgrade-017000' and this object
	W0327 16:45:51.687109    8959 logs.go:138] Found kubelet problem: Mar 27 23:45:09 stopped-upgrade-017000 kubelet[1676]: E0327 23:45:09.218752    1676 reflector.go:138] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:stopped-upgrade-017000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'stopped-upgrade-017000' and this object
	I0327 16:45:51.688114    8959 logs.go:123] Gathering logs for kube-apiserver [e20a2e974eba] ...
	I0327 16:45:51.688121    8959 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e20a2e974eba"
	I0327 16:45:51.730301    8959 logs.go:123] Gathering logs for etcd [56ea780761c8] ...
	I0327 16:45:51.730311    8959 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 56ea780761c8"
	I0327 16:45:51.745852    8959 out.go:304] Setting ErrFile to fd 2...
	I0327 16:45:51.745861    8959 out.go:338] TERM=,COLORTERM=, which probably does not support color
	W0327 16:45:51.745894    8959 out.go:239] X Problems detected in kubelet:
	W0327 16:45:51.745901    8959 out.go:239]   Mar 27 23:45:09 stopped-upgrade-017000 kubelet[1676]: W0327 23:45:09.218698    1676 reflector.go:324] object-"kube-system"/"kube-proxy": failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:stopped-upgrade-017000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'stopped-upgrade-017000' and this object
	W0327 16:45:51.745905    8959 out.go:239]   Mar 27 23:45:09 stopped-upgrade-017000 kubelet[1676]: E0327 23:45:09.218752    1676 reflector.go:138] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:stopped-upgrade-017000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'stopped-upgrade-017000' and this object
	I0327 16:45:51.745911    8959 out.go:304] Setting ErrFile to fd 2...
	I0327 16:45:51.745913    8959 out.go:338] TERM=,COLORTERM=, which probably does not support color
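
Both minikube processes in this section (pids 8800 and 8959) spend most of their time in the same loop: issue a GET against https://10.0.2.15:8443/healthz with a per-attempt client timeout, log the "stopped" error, and try again. As a rough illustration only, here is a minimal standalone Go sketch of that pattern; the function name pollHealthz, the intervals, and the InsecureSkipVerify transport are assumptions for the sketch, not minikube's actual api_server.go code.

package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
	"time"
)

// pollHealthz retries a GET against the apiserver's /healthz, giving each
// attempt a hard client timeout, until it answers 200 OK or the overall
// deadline passes -- the same check/"stopped" rhythm the log shows.
func pollHealthz(url string, attempt, overall time.Duration) error {
	client := &http.Client{
		Timeout: attempt, // yields "Client.Timeout exceeded while awaiting headers" on a hung endpoint
		Transport: &http.Transport{
			// assumption: the guest apiserver uses a cert the host does not trust
			TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
		},
	}
	deadline := time.Now().Add(overall)
	for time.Now().Before(deadline) {
		fmt.Printf("Checking apiserver healthz at %s ...\n", url)
		resp, err := client.Get(url)
		if err != nil {
			fmt.Printf("stopped: %s: %v\n", url, err)
			time.Sleep(time.Second) // brief pause; a hung endpoint already burned the attempt timeout
			continue
		}
		resp.Body.Close()
		if resp.StatusCode == http.StatusOK {
			return nil
		}
		time.Sleep(attempt)
	}
	return fmt.Errorf("apiserver not healthy after %s", overall)
}

func main() {
	if err := pollHealthz("https://10.0.2.15:8443/healthz", 5*time.Second, 2*time.Minute); err != nil {
		fmt.Println(err)
	}
}
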
	I0327 16:45:56.434953    8800 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0327 16:45:56.435000    8800 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0327 16:46:01.434679    8800 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0327 16:46:01.434700    8800 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0327 16:46:01.747015    8959 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0327 16:46:06.434549    8800 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0327 16:46:06.434603    8800 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0327 16:46:06.746949    8959 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0327 16:46:06.747095    8959 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0327 16:46:06.761512    8959 logs.go:276] 2 containers: [aa3afc402172 e20a2e974eba]
	I0327 16:46:06.761595    8959 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0327 16:46:06.781524    8959 logs.go:276] 2 containers: [d74fd36a706c 56ea780761c8]
	I0327 16:46:06.781598    8959 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0327 16:46:06.792224    8959 logs.go:276] 1 containers: [25c0222d62b6]
	I0327 16:46:06.792293    8959 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0327 16:46:06.805526    8959 logs.go:276] 2 containers: [62111993cdda c581a3f09ed3]
	I0327 16:46:06.805598    8959 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0327 16:46:06.816166    8959 logs.go:276] 1 containers: [0dc2c64bef5d]
	I0327 16:46:06.816235    8959 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0327 16:46:06.828311    8959 logs.go:276] 2 containers: [fa00948ec909 f76badbaa6c8]
	I0327 16:46:06.828378    8959 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0327 16:46:06.838597    8959 logs.go:276] 0 containers: []
	W0327 16:46:06.838610    8959 logs.go:278] No container was found matching "kindnet"
	I0327 16:46:06.838670    8959 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0327 16:46:06.849722    8959 logs.go:276] 2 containers: [396dc8c2621b ffd8ac20b899]
	I0327 16:46:06.849739    8959 logs.go:123] Gathering logs for dmesg ...
	I0327 16:46:06.849744    8959 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0327 16:46:06.854438    8959 logs.go:123] Gathering logs for describe nodes ...
	I0327 16:46:06.854445    8959 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0327 16:46:06.889652    8959 logs.go:123] Gathering logs for coredns [25c0222d62b6] ...
	I0327 16:46:06.889663    8959 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 25c0222d62b6"
	I0327 16:46:06.901312    8959 logs.go:123] Gathering logs for kube-scheduler [62111993cdda] ...
	I0327 16:46:06.901323    8959 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 62111993cdda"
	I0327 16:46:06.914598    8959 logs.go:123] Gathering logs for kube-scheduler [c581a3f09ed3] ...
	I0327 16:46:06.914609    8959 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c581a3f09ed3"
	I0327 16:46:06.931247    8959 logs.go:123] Gathering logs for etcd [56ea780761c8] ...
	I0327 16:46:06.931258    8959 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 56ea780761c8"
	I0327 16:46:06.945644    8959 logs.go:123] Gathering logs for kube-controller-manager [f76badbaa6c8] ...
	I0327 16:46:06.945654    8959 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f76badbaa6c8"
	I0327 16:46:06.960242    8959 logs.go:123] Gathering logs for storage-provisioner [396dc8c2621b] ...
	I0327 16:46:06.960252    8959 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 396dc8c2621b"
	I0327 16:46:06.972084    8959 logs.go:123] Gathering logs for Docker ...
	I0327 16:46:06.972094    8959 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0327 16:46:06.997671    8959 logs.go:123] Gathering logs for kube-controller-manager [fa00948ec909] ...
	I0327 16:46:06.997680    8959 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fa00948ec909"
	I0327 16:46:07.015134    8959 logs.go:123] Gathering logs for kubelet ...
	I0327 16:46:07.015144    8959 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0327 16:46:07.051258    8959 logs.go:138] Found kubelet problem: Mar 27 23:45:09 stopped-upgrade-017000 kubelet[1676]: W0327 23:45:09.218698    1676 reflector.go:324] object-"kube-system"/"kube-proxy": failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:stopped-upgrade-017000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'stopped-upgrade-017000' and this object
	W0327 16:46:07.051352    8959 logs.go:138] Found kubelet problem: Mar 27 23:45:09 stopped-upgrade-017000 kubelet[1676]: E0327 23:45:09.218752    1676 reflector.go:138] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:stopped-upgrade-017000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'stopped-upgrade-017000' and this object
	I0327 16:46:07.052345    8959 logs.go:123] Gathering logs for kube-apiserver [aa3afc402172] ...
	I0327 16:46:07.052351    8959 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 aa3afc402172"
	I0327 16:46:07.066878    8959 logs.go:123] Gathering logs for kube-apiserver [e20a2e974eba] ...
	I0327 16:46:07.066889    8959 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e20a2e974eba"
	I0327 16:46:07.105763    8959 logs.go:123] Gathering logs for etcd [d74fd36a706c] ...
	I0327 16:46:07.105774    8959 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d74fd36a706c"
	I0327 16:46:07.119531    8959 logs.go:123] Gathering logs for kube-proxy [0dc2c64bef5d] ...
	I0327 16:46:07.119545    8959 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0dc2c64bef5d"
	I0327 16:46:07.132083    8959 logs.go:123] Gathering logs for storage-provisioner [ffd8ac20b899] ...
	I0327 16:46:07.132094    8959 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ffd8ac20b899"
	I0327 16:46:07.143524    8959 logs.go:123] Gathering logs for container status ...
	I0327 16:46:07.143536    8959 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0327 16:46:07.157421    8959 out.go:304] Setting ErrFile to fd 2...
	I0327 16:46:07.157434    8959 out.go:338] TERM=,COLORTERM=, which probably does not support color
	W0327 16:46:07.157460    8959 out.go:239] X Problems detected in kubelet:
	W0327 16:46:07.157464    8959 out.go:239]   Mar 27 23:45:09 stopped-upgrade-017000 kubelet[1676]: W0327 23:45:09.218698    1676 reflector.go:324] object-"kube-system"/"kube-proxy": failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:stopped-upgrade-017000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'stopped-upgrade-017000' and this object
	W0327 16:46:07.157468    8959 out.go:239]   Mar 27 23:45:09 stopped-upgrade-017000 kubelet[1676]: E0327 23:45:09.218752    1676 reflector.go:138] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:stopped-upgrade-017000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'stopped-upgrade-017000' and this object
	I0327 16:46:07.157473    8959 out.go:304] Setting ErrFile to fd 2...
	I0327 16:46:07.157475    8959 out.go:338] TERM=,COLORTERM=, which probably does not support color
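
Before each gathering pass, the log enumerates containers per component with docker ps -a --filter=name=k8s_<component> --format={{.ID}}, which is why restarted components report two IDs. Below is a minimal sketch of that enumeration run against a local docker daemon rather than over the SSH runner the test uses; listK8sContainers is a hypothetical helper name, not minikube's API.

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// listK8sContainers queries docker for all containers (running or not)
// whose name carries the k8s_<component> prefix and returns their IDs.
func listK8sContainers(component string) ([]string, error) {
	out, err := exec.Command("docker", "ps", "-a",
		"--filter", "name=k8s_"+component,
		"--format", "{{.ID}}").Output()
	if err != nil {
		return nil, err
	}
	return strings.Fields(string(out)), nil
}

func main() {
	components := []string{"kube-apiserver", "etcd", "coredns", "kube-scheduler",
		"kube-proxy", "kube-controller-manager", "kindnet", "storage-provisioner"}
	for _, c := range components {
		ids, err := listK8sContainers(c)
		if err != nil {
			fmt.Println(c, "error:", err)
			continue
		}
		fmt.Printf("%d containers: %v\n", len(ids), ids) // same shape as the log's logs.go:276 lines
	}
}
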
	I0327 16:46:11.434627    8800 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0327 16:46:11.434674    8800 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0327 16:46:16.434934    8800 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0327 16:46:16.434955    8800 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	W0327 16:46:16.795457    8800 out.go:239] ! Enabling 'default-storageclass' returned an error: running callbacks: [Error making standard the default storage class: Error listing StorageClasses: Get "https://10.0.2.15:8443/apis/storage.k8s.io/v1/storageclasses": dial tcp 10.0.2.15:8443: i/o timeout]
	I0327 16:46:16.799491    8800 out.go:177] * Enabled addons: storage-provisioner
	I0327 16:46:16.807265    8800 addons.go:505] duration metric: took 30.474683125s for enable addons: enabled=[storage-provisioner]
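
Just above, enabling default-storageclass fails because listing StorageClasses times out against the unreachable apiserver, so only storage-provisioner is reported enabled, and the step's elapsed time is printed as a "duration metric". In that spirit, a tiny hedged helper that wraps any step and logs how long it took in the same format; timed is an invented name, not minikube's addons.go API.

package main

import (
	"fmt"
	"time"
)

// timed runs fn and prints its elapsed time in the log's
// "duration metric: took <d> for <name>" shape.
func timed(name string, fn func() error) error {
	start := time.Now()
	err := fn()
	fmt.Printf("duration metric: took %s for %s\n", time.Since(start), name)
	return err
}

func main() {
	_ = timed("enable addons", func() error {
		time.Sleep(50 * time.Millisecond) // stand-in for the real enable work
		return nil
	})
}
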
	I0327 16:46:17.159098    8959 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0327 16:46:21.435369    8800 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0327 16:46:21.435398    8800 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0327 16:46:22.159617    8959 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0327 16:46:22.159828    8959 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0327 16:46:22.181812    8959 logs.go:276] 2 containers: [aa3afc402172 e20a2e974eba]
	I0327 16:46:22.181906    8959 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0327 16:46:22.196094    8959 logs.go:276] 2 containers: [d74fd36a706c 56ea780761c8]
	I0327 16:46:22.196168    8959 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0327 16:46:22.208391    8959 logs.go:276] 1 containers: [25c0222d62b6]
	I0327 16:46:22.208456    8959 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0327 16:46:22.219379    8959 logs.go:276] 2 containers: [62111993cdda c581a3f09ed3]
	I0327 16:46:22.219456    8959 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0327 16:46:22.230153    8959 logs.go:276] 1 containers: [0dc2c64bef5d]
	I0327 16:46:22.230220    8959 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0327 16:46:22.245464    8959 logs.go:276] 2 containers: [fa00948ec909 f76badbaa6c8]
	I0327 16:46:22.245532    8959 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0327 16:46:22.255680    8959 logs.go:276] 0 containers: []
	W0327 16:46:22.255691    8959 logs.go:278] No container was found matching "kindnet"
	I0327 16:46:22.255751    8959 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0327 16:46:22.266028    8959 logs.go:276] 2 containers: [396dc8c2621b ffd8ac20b899]
	I0327 16:46:22.266047    8959 logs.go:123] Gathering logs for kubelet ...
	I0327 16:46:22.266052    8959 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0327 16:46:22.305454    8959 logs.go:138] Found kubelet problem: Mar 27 23:45:09 stopped-upgrade-017000 kubelet[1676]: W0327 23:45:09.218698    1676 reflector.go:324] object-"kube-system"/"kube-proxy": failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:stopped-upgrade-017000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'stopped-upgrade-017000' and this object
	W0327 16:46:22.305565    8959 logs.go:138] Found kubelet problem: Mar 27 23:45:09 stopped-upgrade-017000 kubelet[1676]: E0327 23:45:09.218752    1676 reflector.go:138] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:stopped-upgrade-017000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'stopped-upgrade-017000' and this object
	I0327 16:46:22.306780    8959 logs.go:123] Gathering logs for describe nodes ...
	I0327 16:46:22.306789    8959 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0327 16:46:22.346208    8959 logs.go:123] Gathering logs for etcd [56ea780761c8] ...
	I0327 16:46:22.346221    8959 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 56ea780761c8"
	I0327 16:46:22.368414    8959 logs.go:123] Gathering logs for kube-scheduler [62111993cdda] ...
	I0327 16:46:22.368427    8959 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 62111993cdda"
	I0327 16:46:22.380105    8959 logs.go:123] Gathering logs for kube-scheduler [c581a3f09ed3] ...
	I0327 16:46:22.380118    8959 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c581a3f09ed3"
	I0327 16:46:22.397764    8959 logs.go:123] Gathering logs for kube-controller-manager [f76badbaa6c8] ...
	I0327 16:46:22.397775    8959 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f76badbaa6c8"
	I0327 16:46:22.415721    8959 logs.go:123] Gathering logs for storage-provisioner [396dc8c2621b] ...
	I0327 16:46:22.415733    8959 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 396dc8c2621b"
	I0327 16:46:22.427794    8959 logs.go:123] Gathering logs for dmesg ...
	I0327 16:46:22.427808    8959 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0327 16:46:22.432034    8959 logs.go:123] Gathering logs for kube-apiserver [e20a2e974eba] ...
	I0327 16:46:22.432042    8959 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e20a2e974eba"
	I0327 16:46:22.469494    8959 logs.go:123] Gathering logs for etcd [d74fd36a706c] ...
	I0327 16:46:22.469505    8959 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d74fd36a706c"
	I0327 16:46:22.483450    8959 logs.go:123] Gathering logs for kube-proxy [0dc2c64bef5d] ...
	I0327 16:46:22.483472    8959 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0dc2c64bef5d"
	I0327 16:46:22.495278    8959 logs.go:123] Gathering logs for kube-controller-manager [fa00948ec909] ...
	I0327 16:46:22.495291    8959 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fa00948ec909"
	I0327 16:46:22.513185    8959 logs.go:123] Gathering logs for container status ...
	I0327 16:46:22.513196    8959 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0327 16:46:22.526860    8959 logs.go:123] Gathering logs for kube-apiserver [aa3afc402172] ...
	I0327 16:46:22.526872    8959 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 aa3afc402172"
	I0327 16:46:22.541173    8959 logs.go:123] Gathering logs for storage-provisioner [ffd8ac20b899] ...
	I0327 16:46:22.541184    8959 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ffd8ac20b899"
	I0327 16:46:22.555923    8959 logs.go:123] Gathering logs for coredns [25c0222d62b6] ...
	I0327 16:46:22.555935    8959 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 25c0222d62b6"
	I0327 16:46:22.567478    8959 logs.go:123] Gathering logs for Docker ...
	I0327 16:46:22.567489    8959 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0327 16:46:22.592313    8959 out.go:304] Setting ErrFile to fd 2...
	I0327 16:46:22.592322    8959 out.go:338] TERM=,COLORTERM=, which probably does not support color
	W0327 16:46:22.592344    8959 out.go:239] X Problems detected in kubelet:
	W0327 16:46:22.592349    8959 out.go:239]   Mar 27 23:45:09 stopped-upgrade-017000 kubelet[1676]: W0327 23:45:09.218698    1676 reflector.go:324] object-"kube-system"/"kube-proxy": failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:stopped-upgrade-017000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'stopped-upgrade-017000' and this object
	W0327 16:46:22.592352    8959 out.go:239]   Mar 27 23:45:09 stopped-upgrade-017000 kubelet[1676]: E0327 23:45:09.218752    1676 reflector.go:138] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:stopped-upgrade-017000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'stopped-upgrade-017000' and this object
	I0327 16:46:22.592356    8959 out.go:304] Setting ErrFile to fd 2...
	I0327 16:46:22.592359    8959 out.go:338] TERM=,COLORTERM=, which probably does not support color
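
Each gathering pass above fans out to the same three sources: docker logs --tail 400 per container, journalctl -n 400 for the kubelet and docker/cri-docker units, and a filtered dmesg. A sketch of those calls as plain os/exec invocations, assuming a local shell rather than the test's SSH runner; the helper names are invented, and the dmesg pipeline is copied verbatim from the log.

package main

import (
	"fmt"
	"os/exec"
)

// tailContainer mirrors `docker logs --tail 400 <id>`; CombinedOutput is
// used because container runtimes often write their logs to stderr.
func tailContainer(id string) (string, error) {
	out, err := exec.Command("docker", "logs", "--tail", "400", id).CombinedOutput()
	return string(out), err
}

// tailUnits mirrors `journalctl -u <unit> ... -n 400`.
func tailUnits(units ...string) (string, error) {
	args := []string{"journalctl", "-n", "400"}
	for _, u := range units {
		args = append(args, "-u", u)
	}
	out, err := exec.Command("sudo", args...).CombinedOutput()
	return string(out), err
}

// recentDmesg keeps the exact pipeline from the log, including the tail,
// by delegating to bash.
func recentDmesg() (string, error) {
	out, err := exec.Command("/bin/bash", "-c",
		`sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400`).CombinedOutput()
	return string(out), err
}

func main() {
	for _, id := range []string{"aa3afc402172", "e20a2e974eba"} { // IDs taken from the log above
		if text, err := tailContainer(id); err == nil {
			fmt.Println(text)
		}
	}
	if j, err := tailUnits("kubelet"); err == nil {
		fmt.Println(j)
	}
	if d, err := recentDmesg(); err == nil {
		fmt.Println(d)
	}
}
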
	I0327 16:46:26.435996    8800 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0327 16:46:26.436040    8800 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0327 16:46:31.436898    8800 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0327 16:46:31.436934    8800 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0327 16:46:32.595506    8959 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0327 16:46:36.437442    8800 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0327 16:46:36.437474    8800 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0327 16:46:37.597609    8959 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0327 16:46:37.597727    8959 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0327 16:46:37.610686    8959 logs.go:276] 2 containers: [aa3afc402172 e20a2e974eba]
	I0327 16:46:37.610764    8959 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0327 16:46:37.627111    8959 logs.go:276] 2 containers: [d74fd36a706c 56ea780761c8]
	I0327 16:46:37.627203    8959 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0327 16:46:37.638093    8959 logs.go:276] 1 containers: [25c0222d62b6]
	I0327 16:46:37.638155    8959 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0327 16:46:37.648924    8959 logs.go:276] 2 containers: [62111993cdda c581a3f09ed3]
	I0327 16:46:37.648987    8959 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0327 16:46:37.659345    8959 logs.go:276] 1 containers: [0dc2c64bef5d]
	I0327 16:46:37.659427    8959 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0327 16:46:37.670444    8959 logs.go:276] 2 containers: [fa00948ec909 f76badbaa6c8]
	I0327 16:46:37.670527    8959 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0327 16:46:37.680966    8959 logs.go:276] 0 containers: []
	W0327 16:46:37.680976    8959 logs.go:278] No container was found matching "kindnet"
	I0327 16:46:37.681029    8959 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0327 16:46:37.691827    8959 logs.go:276] 2 containers: [396dc8c2621b ffd8ac20b899]
	I0327 16:46:37.691847    8959 logs.go:123] Gathering logs for kubelet ...
	I0327 16:46:37.691853    8959 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0327 16:46:37.729431    8959 logs.go:138] Found kubelet problem: Mar 27 23:45:09 stopped-upgrade-017000 kubelet[1676]: W0327 23:45:09.218698    1676 reflector.go:324] object-"kube-system"/"kube-proxy": failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:stopped-upgrade-017000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'stopped-upgrade-017000' and this object
	W0327 16:46:37.729527    8959 logs.go:138] Found kubelet problem: Mar 27 23:45:09 stopped-upgrade-017000 kubelet[1676]: E0327 23:45:09.218752    1676 reflector.go:138] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:stopped-upgrade-017000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'stopped-upgrade-017000' and this object
	I0327 16:46:37.730587    8959 logs.go:123] Gathering logs for coredns [25c0222d62b6] ...
	I0327 16:46:37.730594    8959 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 25c0222d62b6"
	I0327 16:46:37.742190    8959 logs.go:123] Gathering logs for Docker ...
	I0327 16:46:37.742200    8959 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0327 16:46:37.766778    8959 logs.go:123] Gathering logs for storage-provisioner [ffd8ac20b899] ...
	I0327 16:46:37.766786    8959 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ffd8ac20b899"
	I0327 16:46:37.780995    8959 logs.go:123] Gathering logs for kube-apiserver [aa3afc402172] ...
	I0327 16:46:37.781011    8959 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 aa3afc402172"
	I0327 16:46:37.795271    8959 logs.go:123] Gathering logs for kube-controller-manager [fa00948ec909] ...
	I0327 16:46:37.795281    8959 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fa00948ec909"
	I0327 16:46:37.812991    8959 logs.go:123] Gathering logs for kube-controller-manager [f76badbaa6c8] ...
	I0327 16:46:37.813003    8959 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f76badbaa6c8"
	I0327 16:46:37.827181    8959 logs.go:123] Gathering logs for storage-provisioner [396dc8c2621b] ...
	I0327 16:46:37.827192    8959 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 396dc8c2621b"
	I0327 16:46:37.838687    8959 logs.go:123] Gathering logs for kube-scheduler [c581a3f09ed3] ...
	I0327 16:46:37.838702    8959 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c581a3f09ed3"
	I0327 16:46:37.853441    8959 logs.go:123] Gathering logs for kube-proxy [0dc2c64bef5d] ...
	I0327 16:46:37.853451    8959 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0dc2c64bef5d"
	I0327 16:46:37.865120    8959 logs.go:123] Gathering logs for describe nodes ...
	I0327 16:46:37.865131    8959 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0327 16:46:37.899621    8959 logs.go:123] Gathering logs for kube-apiserver [e20a2e974eba] ...
	I0327 16:46:37.899632    8959 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e20a2e974eba"
	I0327 16:46:37.941472    8959 logs.go:123] Gathering logs for etcd [d74fd36a706c] ...
	I0327 16:46:37.941484    8959 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d74fd36a706c"
	I0327 16:46:37.955184    8959 logs.go:123] Gathering logs for kube-scheduler [62111993cdda] ...
	I0327 16:46:37.955193    8959 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 62111993cdda"
	I0327 16:46:37.967025    8959 logs.go:123] Gathering logs for dmesg ...
	I0327 16:46:37.967035    8959 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0327 16:46:37.970965    8959 logs.go:123] Gathering logs for etcd [56ea780761c8] ...
	I0327 16:46:37.970974    8959 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 56ea780761c8"
	I0327 16:46:37.985960    8959 logs.go:123] Gathering logs for container status ...
	I0327 16:46:37.985971    8959 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0327 16:46:37.997584    8959 out.go:304] Setting ErrFile to fd 2...
	I0327 16:46:37.997595    8959 out.go:338] TERM=,COLORTERM=, which probably does not support color
	W0327 16:46:37.997621    8959 out.go:239] X Problems detected in kubelet:
	W0327 16:46:37.997625    8959 out.go:239]   Mar 27 23:45:09 stopped-upgrade-017000 kubelet[1676]: W0327 23:45:09.218698    1676 reflector.go:324] object-"kube-system"/"kube-proxy": failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:stopped-upgrade-017000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'stopped-upgrade-017000' and this object
	W0327 16:46:37.997629    8959 out.go:239]   Mar 27 23:45:09 stopped-upgrade-017000 kubelet[1676]: E0327 23:45:09.218752    1676 reflector.go:138] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:stopped-upgrade-017000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'stopped-upgrade-017000' and this object
	I0327 16:46:37.997633    8959 out.go:304] Setting ErrFile to fd 2...
	I0327 16:46:37.997635    8959 out.go:338] TERM=,COLORTERM=, which probably does not support color
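
The two kubelet problems repeated on every pass are RBAC denials: the node authorizer refuses the node user's list/watch of the kube-proxy ConfigMap because it finds "no relationship found between node ... and this object". As a guess at the scanning step behind the logs.go:138 "Found kubelet problem" lines (the real heuristics are not shown in this report), here is a sketch that flags such lines; findKubeletProblems and its two substring tests are assumptions.

package main

import (
	"bufio"
	"fmt"
	"strings"
)

// findKubeletProblems walks journal output line by line and flags entries
// that look like RBAC denials, the shape of every problem in this report.
func findKubeletProblems(journal string) []string {
	var problems []string
	sc := bufio.NewScanner(strings.NewReader(journal))
	for sc.Scan() {
		line := sc.Text()
		if strings.Contains(line, "is forbidden") || strings.Contains(line, "Failed to watch") {
			problems = append(problems, line)
		}
	}
	return problems
}

func main() {
	sample := `Mar 27 23:45:09 stopped-upgrade-017000 kubelet[1676]: W0327 23:45:09.218698 1676 reflector.go:324] object-"kube-system"/"kube-proxy": failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden`
	for _, p := range findKubeletProblems(sample) {
		fmt.Println("Found kubelet problem:", p)
	}
}
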
	I0327 16:46:41.438786    8800 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0327 16:46:41.438832    8800 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0327 16:46:46.440664    8800 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0327 16:46:46.440769    8800 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0327 16:46:46.469825    8800 logs.go:276] 1 containers: [a8b68fa373a2]
	I0327 16:46:46.469901    8800 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0327 16:46:46.480706    8800 logs.go:276] 1 containers: [b2b36dcbf471]
	I0327 16:46:46.480783    8800 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0327 16:46:46.491410    8800 logs.go:276] 2 containers: [a1738554adca b5329cb28332]
	I0327 16:46:46.491483    8800 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0327 16:46:46.502307    8800 logs.go:276] 1 containers: [5aa5f8fb90cc]
	I0327 16:46:46.502372    8800 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0327 16:46:46.513008    8800 logs.go:276] 1 containers: [4195d96c1f8a]
	I0327 16:46:46.513079    8800 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0327 16:46:46.524245    8800 logs.go:276] 1 containers: [faf151c9cff5]
	I0327 16:46:46.524305    8800 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0327 16:46:46.534292    8800 logs.go:276] 0 containers: []
	W0327 16:46:46.534301    8800 logs.go:278] No container was found matching "kindnet"
	I0327 16:46:46.534351    8800 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0327 16:46:46.544748    8800 logs.go:276] 1 containers: [2c4ccf3e69ae]
	I0327 16:46:46.544767    8800 logs.go:123] Gathering logs for dmesg ...
	I0327 16:46:46.544771    8800 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0327 16:46:46.549678    8800 logs.go:123] Gathering logs for describe nodes ...
	I0327 16:46:46.549687    8800 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0327 16:46:46.586858    8800 logs.go:123] Gathering logs for kube-apiserver [a8b68fa373a2] ...
	I0327 16:46:46.586877    8800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a8b68fa373a2"
	I0327 16:46:46.601846    8800 logs.go:123] Gathering logs for etcd [b2b36dcbf471] ...
	I0327 16:46:46.601861    8800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b2b36dcbf471"
	I0327 16:46:46.616587    8800 logs.go:123] Gathering logs for coredns [b5329cb28332] ...
	I0327 16:46:46.616597    8800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b5329cb28332"
	I0327 16:46:46.628426    8800 logs.go:123] Gathering logs for kube-scheduler [5aa5f8fb90cc] ...
	I0327 16:46:46.628435    8800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5aa5f8fb90cc"
	I0327 16:46:46.642676    8800 logs.go:123] Gathering logs for container status ...
	I0327 16:46:46.642689    8800 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0327 16:46:46.654217    8800 logs.go:123] Gathering logs for kubelet ...
	I0327 16:46:46.654228    8800 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0327 16:46:46.688536    8800 logs.go:123] Gathering logs for coredns [a1738554adca] ...
	I0327 16:46:46.688544    8800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a1738554adca"
	I0327 16:46:46.703398    8800 logs.go:123] Gathering logs for kube-proxy [4195d96c1f8a] ...
	I0327 16:46:46.703409    8800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4195d96c1f8a"
	I0327 16:46:46.715024    8800 logs.go:123] Gathering logs for kube-controller-manager [faf151c9cff5] ...
	I0327 16:46:46.715036    8800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 faf151c9cff5"
	I0327 16:46:46.732459    8800 logs.go:123] Gathering logs for storage-provisioner [2c4ccf3e69ae] ...
	I0327 16:46:46.732476    8800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2c4ccf3e69ae"
	I0327 16:46:46.747358    8800 logs.go:123] Gathering logs for Docker ...
	I0327 16:46:46.747368    8800 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0327 16:46:48.001454    8959 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0327 16:46:49.273734    8800 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0327 16:46:53.003861    8959 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0327 16:46:53.004122    8959 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0327 16:46:53.026691    8959 logs.go:276] 2 containers: [aa3afc402172 e20a2e974eba]
	I0327 16:46:53.026808    8959 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0327 16:46:53.042611    8959 logs.go:276] 2 containers: [d74fd36a706c 56ea780761c8]
	I0327 16:46:53.042708    8959 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0327 16:46:53.055789    8959 logs.go:276] 1 containers: [25c0222d62b6]
	I0327 16:46:53.055859    8959 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0327 16:46:53.067196    8959 logs.go:276] 2 containers: [62111993cdda c581a3f09ed3]
	I0327 16:46:53.067268    8959 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0327 16:46:53.077749    8959 logs.go:276] 1 containers: [0dc2c64bef5d]
	I0327 16:46:53.077828    8959 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0327 16:46:53.089692    8959 logs.go:276] 2 containers: [fa00948ec909 f76badbaa6c8]
	I0327 16:46:53.089765    8959 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0327 16:46:53.100023    8959 logs.go:276] 0 containers: []
	W0327 16:46:53.100034    8959 logs.go:278] No container was found matching "kindnet"
	I0327 16:46:53.100088    8959 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0327 16:46:53.110058    8959 logs.go:276] 2 containers: [396dc8c2621b ffd8ac20b899]
	I0327 16:46:53.110077    8959 logs.go:123] Gathering logs for dmesg ...
	I0327 16:46:53.110082    8959 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0327 16:46:53.114188    8959 logs.go:123] Gathering logs for kube-apiserver [aa3afc402172] ...
	I0327 16:46:53.114194    8959 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 aa3afc402172"
	I0327 16:46:53.128688    8959 logs.go:123] Gathering logs for kube-scheduler [62111993cdda] ...
	I0327 16:46:53.128699    8959 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 62111993cdda"
	I0327 16:46:53.140095    8959 logs.go:123] Gathering logs for storage-provisioner [ffd8ac20b899] ...
	I0327 16:46:53.140105    8959 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ffd8ac20b899"
	I0327 16:46:53.155103    8959 logs.go:123] Gathering logs for Docker ...
	I0327 16:46:53.155116    8959 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0327 16:46:53.179655    8959 logs.go:123] Gathering logs for etcd [d74fd36a706c] ...
	I0327 16:46:53.179664    8959 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d74fd36a706c"
	I0327 16:46:53.194724    8959 logs.go:123] Gathering logs for kube-scheduler [c581a3f09ed3] ...
	I0327 16:46:53.194735    8959 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c581a3f09ed3"
	I0327 16:46:53.210060    8959 logs.go:123] Gathering logs for kube-proxy [0dc2c64bef5d] ...
	I0327 16:46:53.210072    8959 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0dc2c64bef5d"
	I0327 16:46:53.222208    8959 logs.go:123] Gathering logs for kube-controller-manager [fa00948ec909] ...
	I0327 16:46:53.222218    8959 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fa00948ec909"
	I0327 16:46:53.240014    8959 logs.go:123] Gathering logs for kubelet ...
	I0327 16:46:53.240024    8959 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0327 16:46:53.276744    8959 logs.go:138] Found kubelet problem: Mar 27 23:45:09 stopped-upgrade-017000 kubelet[1676]: W0327 23:45:09.218698    1676 reflector.go:324] object-"kube-system"/"kube-proxy": failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:stopped-upgrade-017000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'stopped-upgrade-017000' and this object
	W0327 16:46:53.276838    8959 logs.go:138] Found kubelet problem: Mar 27 23:45:09 stopped-upgrade-017000 kubelet[1676]: E0327 23:45:09.218752    1676 reflector.go:138] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:stopped-upgrade-017000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'stopped-upgrade-017000' and this object
	I0327 16:46:53.277903    8959 logs.go:123] Gathering logs for coredns [25c0222d62b6] ...
	I0327 16:46:53.277907    8959 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 25c0222d62b6"
	I0327 16:46:53.289018    8959 logs.go:123] Gathering logs for describe nodes ...
	I0327 16:46:53.289029    8959 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0327 16:46:53.325072    8959 logs.go:123] Gathering logs for kube-apiserver [e20a2e974eba] ...
	I0327 16:46:53.325083    8959 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e20a2e974eba"
	I0327 16:46:53.367904    8959 logs.go:123] Gathering logs for etcd [56ea780761c8] ...
	I0327 16:46:53.367914    8959 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 56ea780761c8"
	I0327 16:46:53.382206    8959 logs.go:123] Gathering logs for kube-controller-manager [f76badbaa6c8] ...
	I0327 16:46:53.382217    8959 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f76badbaa6c8"
	I0327 16:46:53.395471    8959 logs.go:123] Gathering logs for storage-provisioner [396dc8c2621b] ...
	I0327 16:46:53.395481    8959 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 396dc8c2621b"
	I0327 16:46:53.406840    8959 logs.go:123] Gathering logs for container status ...
	I0327 16:46:53.406853    8959 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0327 16:46:53.419047    8959 out.go:304] Setting ErrFile to fd 2...
	I0327 16:46:53.419059    8959 out.go:338] TERM=,COLORTERM=, which probably does not support color
	W0327 16:46:53.419088    8959 out.go:239] X Problems detected in kubelet:
	W0327 16:46:53.419094    8959 out.go:239]   Mar 27 23:45:09 stopped-upgrade-017000 kubelet[1676]: W0327 23:45:09.218698    1676 reflector.go:324] object-"kube-system"/"kube-proxy": failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:stopped-upgrade-017000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'stopped-upgrade-017000' and this object
	W0327 16:46:53.419097    8959 out.go:239]   Mar 27 23:45:09 stopped-upgrade-017000 kubelet[1676]: E0327 23:45:09.218752    1676 reflector.go:138] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:stopped-upgrade-017000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'stopped-upgrade-017000' and this object
	I0327 16:46:53.419101    8959 out.go:304] Setting ErrFile to fd 2...
	I0327 16:46:53.419104    8959 out.go:338] TERM=,COLORTERM=, which probably does not support color
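
The "container status" gather uses a shell fallback, sudo `which crictl || echo crictl` ps -a || sudo docker ps -a, so it works whether or not crictl is installed in the guest. The same logic expressed in Go, as a sketch only; containerStatus is an invented name.

package main

import (
	"fmt"
	"os/exec"
)

// containerStatus prefers crictl when it is on PATH and falls back to
// docker ps -a, mirroring the shell one-liner in the log.
func containerStatus() (string, error) {
	if _, err := exec.LookPath("crictl"); err == nil {
		if out, err := exec.Command("sudo", "crictl", "ps", "-a").CombinedOutput(); err == nil {
			return string(out), nil
		}
	}
	out, err := exec.Command("sudo", "docker", "ps", "-a").CombinedOutput()
	return string(out), err
}

func main() {
	out, err := containerStatus()
	if err != nil {
		fmt.Println("container status failed:", err)
		return
	}
	fmt.Print(out)
}
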
	I0327 16:46:54.276159    8800 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0327 16:46:54.276334    8800 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0327 16:46:54.295758    8800 logs.go:276] 1 containers: [a8b68fa373a2]
	I0327 16:46:54.295849    8800 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0327 16:46:54.314251    8800 logs.go:276] 1 containers: [b2b36dcbf471]
	I0327 16:46:54.314330    8800 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0327 16:46:54.325861    8800 logs.go:276] 2 containers: [a1738554adca b5329cb28332]
	I0327 16:46:54.325930    8800 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0327 16:46:54.342445    8800 logs.go:276] 1 containers: [5aa5f8fb90cc]
	I0327 16:46:54.342510    8800 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0327 16:46:54.353408    8800 logs.go:276] 1 containers: [4195d96c1f8a]
	I0327 16:46:54.353496    8800 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0327 16:46:54.364202    8800 logs.go:276] 1 containers: [faf151c9cff5]
	I0327 16:46:54.364274    8800 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0327 16:46:54.374788    8800 logs.go:276] 0 containers: []
	W0327 16:46:54.374799    8800 logs.go:278] No container was found matching "kindnet"
	I0327 16:46:54.374859    8800 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0327 16:46:54.385190    8800 logs.go:276] 1 containers: [2c4ccf3e69ae]
	I0327 16:46:54.385203    8800 logs.go:123] Gathering logs for coredns [b5329cb28332] ...
	I0327 16:46:54.385209    8800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b5329cb28332"
	I0327 16:46:54.397135    8800 logs.go:123] Gathering logs for kube-controller-manager [faf151c9cff5] ...
	I0327 16:46:54.397148    8800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 faf151c9cff5"
	I0327 16:46:54.414700    8800 logs.go:123] Gathering logs for Docker ...
	I0327 16:46:54.414711    8800 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0327 16:46:54.439166    8800 logs.go:123] Gathering logs for container status ...
	I0327 16:46:54.439174    8800 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0327 16:46:54.451212    8800 logs.go:123] Gathering logs for kube-apiserver [a8b68fa373a2] ...
	I0327 16:46:54.451221    8800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a8b68fa373a2"
	I0327 16:46:54.465675    8800 logs.go:123] Gathering logs for etcd [b2b36dcbf471] ...
	I0327 16:46:54.465686    8800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b2b36dcbf471"
	I0327 16:46:54.479603    8800 logs.go:123] Gathering logs for coredns [a1738554adca] ...
	I0327 16:46:54.479615    8800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a1738554adca"
	I0327 16:46:54.491905    8800 logs.go:123] Gathering logs for kube-scheduler [5aa5f8fb90cc] ...
	I0327 16:46:54.491917    8800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5aa5f8fb90cc"
	I0327 16:46:54.506952    8800 logs.go:123] Gathering logs for kube-proxy [4195d96c1f8a] ...
	I0327 16:46:54.506967    8800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4195d96c1f8a"
	I0327 16:46:54.520621    8800 logs.go:123] Gathering logs for storage-provisioner [2c4ccf3e69ae] ...
	I0327 16:46:54.520631    8800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2c4ccf3e69ae"
	I0327 16:46:54.535120    8800 logs.go:123] Gathering logs for kubelet ...
	I0327 16:46:54.535131    8800 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0327 16:46:54.571106    8800 logs.go:123] Gathering logs for dmesg ...
	I0327 16:46:54.571119    8800 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0327 16:46:54.575665    8800 logs.go:123] Gathering logs for describe nodes ...
	I0327 16:46:54.575674    8800 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
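
The "describe nodes" gather shells out to the kubectl binary that minikube stages inside the guest, pointed at the guest kubeconfig. A direct sketch of that invocation using the exact paths from the log; note it would have to run inside the guest (or over SSH), not on the host, and describeNodes is an invented wrapper name.

package main

import (
	"fmt"
	"os/exec"
)

// describeNodes invokes the staged kubectl with the guest kubeconfig,
// exactly as the log's ssh_runner line does.
func describeNodes() (string, error) {
	out, err := exec.Command("sudo",
		"/var/lib/minikube/binaries/v1.24.1/kubectl",
		"describe", "nodes",
		"--kubeconfig=/var/lib/minikube/kubeconfig").CombinedOutput()
	return string(out), err
}

func main() {
	out, err := describeNodes()
	if err != nil {
		fmt.Println("describe nodes failed:", err)
		return
	}
	fmt.Print(out)
}
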
	I0327 16:46:57.114178    8800 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0327 16:47:02.115198    8800 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0327 16:47:02.115481    8800 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0327 16:47:02.145718    8800 logs.go:276] 1 containers: [a8b68fa373a2]
	I0327 16:47:02.145835    8800 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0327 16:47:02.161724    8800 logs.go:276] 1 containers: [b2b36dcbf471]
	I0327 16:47:02.161810    8800 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0327 16:47:02.174535    8800 logs.go:276] 2 containers: [a1738554adca b5329cb28332]
	I0327 16:47:02.174607    8800 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0327 16:47:02.186176    8800 logs.go:276] 1 containers: [5aa5f8fb90cc]
	I0327 16:47:02.186242    8800 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0327 16:47:02.196737    8800 logs.go:276] 1 containers: [4195d96c1f8a]
	I0327 16:47:02.196800    8800 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0327 16:47:02.206966    8800 logs.go:276] 1 containers: [faf151c9cff5]
	I0327 16:47:02.207036    8800 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0327 16:47:02.216967    8800 logs.go:276] 0 containers: []
	W0327 16:47:02.216981    8800 logs.go:278] No container was found matching "kindnet"
	I0327 16:47:02.217047    8800 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0327 16:47:02.228020    8800 logs.go:276] 1 containers: [2c4ccf3e69ae]
	I0327 16:47:02.228036    8800 logs.go:123] Gathering logs for coredns [b5329cb28332] ...
	I0327 16:47:02.228041    8800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b5329cb28332"
	I0327 16:47:02.239602    8800 logs.go:123] Gathering logs for Docker ...
	I0327 16:47:02.239617    8800 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0327 16:47:02.263540    8800 logs.go:123] Gathering logs for container status ...
	I0327 16:47:02.263547    8800 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0327 16:47:02.275010    8800 logs.go:123] Gathering logs for etcd [b2b36dcbf471] ...
	I0327 16:47:02.275022    8800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b2b36dcbf471"
	I0327 16:47:02.288728    8800 logs.go:123] Gathering logs for coredns [a1738554adca] ...
	I0327 16:47:02.288739    8800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a1738554adca"
	I0327 16:47:02.300422    8800 logs.go:123] Gathering logs for describe nodes ...
	I0327 16:47:02.300434    8800 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0327 16:47:02.337378    8800 logs.go:123] Gathering logs for kube-apiserver [a8b68fa373a2] ...
	I0327 16:47:02.337388    8800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a8b68fa373a2"
	I0327 16:47:02.351890    8800 logs.go:123] Gathering logs for kube-scheduler [5aa5f8fb90cc] ...
	I0327 16:47:02.351902    8800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5aa5f8fb90cc"
	I0327 16:47:02.366095    8800 logs.go:123] Gathering logs for kube-proxy [4195d96c1f8a] ...
	I0327 16:47:02.366104    8800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4195d96c1f8a"
	I0327 16:47:02.378108    8800 logs.go:123] Gathering logs for kube-controller-manager [faf151c9cff5] ...
	I0327 16:47:02.378121    8800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 faf151c9cff5"
	I0327 16:47:02.395177    8800 logs.go:123] Gathering logs for storage-provisioner [2c4ccf3e69ae] ...
	I0327 16:47:02.395187    8800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2c4ccf3e69ae"
	I0327 16:47:02.406870    8800 logs.go:123] Gathering logs for kubelet ...
	I0327 16:47:02.406883    8800 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0327 16:47:02.439913    8800 logs.go:123] Gathering logs for dmesg ...
	I0327 16:47:02.439926    8800 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0327 16:47:03.421060    8959 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0327 16:47:04.946782    8800 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0327 16:47:08.423461    8959 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0327 16:47:08.423815    8959 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0327 16:47:09.947410    8800 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0327 16:47:09.947605    8800 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0327 16:47:09.970760    8800 logs.go:276] 1 containers: [a8b68fa373a2]
	I0327 16:47:09.970880    8800 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0327 16:47:09.986958    8800 logs.go:276] 1 containers: [b2b36dcbf471]
	I0327 16:47:09.987042    8800 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0327 16:47:10.000609    8800 logs.go:276] 2 containers: [a1738554adca b5329cb28332]
	I0327 16:47:10.000681    8800 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0327 16:47:10.014171    8800 logs.go:276] 1 containers: [5aa5f8fb90cc]
	I0327 16:47:10.014238    8800 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0327 16:47:10.024368    8800 logs.go:276] 1 containers: [4195d96c1f8a]
	I0327 16:47:10.024429    8800 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0327 16:47:10.035355    8800 logs.go:276] 1 containers: [faf151c9cff5]
	I0327 16:47:10.035421    8800 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0327 16:47:10.045543    8800 logs.go:276] 0 containers: []
	W0327 16:47:10.045553    8800 logs.go:278] No container was found matching "kindnet"
	I0327 16:47:10.045601    8800 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0327 16:47:10.056097    8800 logs.go:276] 1 containers: [2c4ccf3e69ae]
	I0327 16:47:10.056113    8800 logs.go:123] Gathering logs for storage-provisioner [2c4ccf3e69ae] ...
	I0327 16:47:10.056118    8800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2c4ccf3e69ae"
	I0327 16:47:10.067743    8800 logs.go:123] Gathering logs for container status ...
	I0327 16:47:10.067753    8800 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0327 16:47:10.079121    8800 logs.go:123] Gathering logs for kubelet ...
	I0327 16:47:10.079133    8800 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0327 16:47:10.112553    8800 logs.go:123] Gathering logs for describe nodes ...
	I0327 16:47:10.112561    8800 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0327 16:47:10.152231    8800 logs.go:123] Gathering logs for kube-apiserver [a8b68fa373a2] ...
	I0327 16:47:10.152243    8800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a8b68fa373a2"
	I0327 16:47:10.166817    8800 logs.go:123] Gathering logs for etcd [b2b36dcbf471] ...
	I0327 16:47:10.166829    8800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b2b36dcbf471"
	I0327 16:47:10.180394    8800 logs.go:123] Gathering logs for coredns [a1738554adca] ...
	I0327 16:47:10.180404    8800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a1738554adca"
	I0327 16:47:10.192268    8800 logs.go:123] Gathering logs for kube-controller-manager [faf151c9cff5] ...
	I0327 16:47:10.192278    8800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 faf151c9cff5"
	I0327 16:47:10.210194    8800 logs.go:123] Gathering logs for dmesg ...
	I0327 16:47:10.210205    8800 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0327 16:47:10.214797    8800 logs.go:123] Gathering logs for coredns [b5329cb28332] ...
	I0327 16:47:10.214803    8800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b5329cb28332"
	I0327 16:47:10.226211    8800 logs.go:123] Gathering logs for kube-scheduler [5aa5f8fb90cc] ...
	I0327 16:47:10.226222    8800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5aa5f8fb90cc"
	I0327 16:47:10.240529    8800 logs.go:123] Gathering logs for kube-proxy [4195d96c1f8a] ...
	I0327 16:47:10.240542    8800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4195d96c1f8a"
	I0327 16:47:10.251992    8800 logs.go:123] Gathering logs for Docker ...
	I0327 16:47:10.252002    8800 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0327 16:47:12.778068    8800 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0327 16:47:08.456120    8959 logs.go:276] 2 containers: [aa3afc402172 e20a2e974eba]
	I0327 16:47:08.456258    8959 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0327 16:47:08.474893    8959 logs.go:276] 2 containers: [d74fd36a706c 56ea780761c8]
	I0327 16:47:08.474980    8959 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0327 16:47:08.489575    8959 logs.go:276] 1 containers: [25c0222d62b6]
	I0327 16:47:08.489655    8959 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0327 16:47:08.501606    8959 logs.go:276] 2 containers: [62111993cdda c581a3f09ed3]
	I0327 16:47:08.501678    8959 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0327 16:47:08.512208    8959 logs.go:276] 1 containers: [0dc2c64bef5d]
	I0327 16:47:08.512276    8959 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0327 16:47:08.522777    8959 logs.go:276] 2 containers: [fa00948ec909 f76badbaa6c8]
	I0327 16:47:08.522853    8959 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0327 16:47:08.532503    8959 logs.go:276] 0 containers: []
	W0327 16:47:08.532517    8959 logs.go:278] No container was found matching "kindnet"
	I0327 16:47:08.532582    8959 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0327 16:47:08.543834    8959 logs.go:276] 2 containers: [396dc8c2621b ffd8ac20b899]
	I0327 16:47:08.543852    8959 logs.go:123] Gathering logs for kube-controller-manager [fa00948ec909] ...
	I0327 16:47:08.543858    8959 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fa00948ec909"
	I0327 16:47:08.561576    8959 logs.go:123] Gathering logs for storage-provisioner [396dc8c2621b] ...
	I0327 16:47:08.561589    8959 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 396dc8c2621b"
	I0327 16:47:08.572791    8959 logs.go:123] Gathering logs for storage-provisioner [ffd8ac20b899] ...
	I0327 16:47:08.572804    8959 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ffd8ac20b899"
	I0327 16:47:08.587109    8959 logs.go:123] Gathering logs for etcd [56ea780761c8] ...
	I0327 16:47:08.587122    8959 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 56ea780761c8"
	I0327 16:47:08.601926    8959 logs.go:123] Gathering logs for kube-scheduler [62111993cdda] ...
	I0327 16:47:08.601943    8959 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 62111993cdda"
	I0327 16:47:08.614609    8959 logs.go:123] Gathering logs for container status ...
	I0327 16:47:08.614622    8959 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0327 16:47:08.626888    8959 logs.go:123] Gathering logs for describe nodes ...
	I0327 16:47:08.626902    8959 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0327 16:47:08.663615    8959 logs.go:123] Gathering logs for kube-scheduler [c581a3f09ed3] ...
	I0327 16:47:08.663625    8959 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c581a3f09ed3"
	I0327 16:47:08.680973    8959 logs.go:123] Gathering logs for etcd [d74fd36a706c] ...
	I0327 16:47:08.680984    8959 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d74fd36a706c"
	I0327 16:47:08.694354    8959 logs.go:123] Gathering logs for kube-proxy [0dc2c64bef5d] ...
	I0327 16:47:08.694365    8959 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0dc2c64bef5d"
	I0327 16:47:08.705867    8959 logs.go:123] Gathering logs for kube-apiserver [aa3afc402172] ...
	I0327 16:47:08.705877    8959 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 aa3afc402172"
	I0327 16:47:08.719483    8959 logs.go:123] Gathering logs for kube-apiserver [e20a2e974eba] ...
	I0327 16:47:08.719494    8959 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e20a2e974eba"
	I0327 16:47:08.770044    8959 logs.go:123] Gathering logs for coredns [25c0222d62b6] ...
	I0327 16:47:08.770054    8959 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 25c0222d62b6"
	I0327 16:47:08.781980    8959 logs.go:123] Gathering logs for kube-controller-manager [f76badbaa6c8] ...
	I0327 16:47:08.781992    8959 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f76badbaa6c8"
	I0327 16:47:08.795118    8959 logs.go:123] Gathering logs for Docker ...
	I0327 16:47:08.795128    8959 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0327 16:47:08.820595    8959 logs.go:123] Gathering logs for kubelet ...
	I0327 16:47:08.820611    8959 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0327 16:47:08.858534    8959 logs.go:138] Found kubelet problem: Mar 27 23:45:09 stopped-upgrade-017000 kubelet[1676]: W0327 23:45:09.218698    1676 reflector.go:324] object-"kube-system"/"kube-proxy": failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:stopped-upgrade-017000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'stopped-upgrade-017000' and this object
	W0327 16:47:08.858633    8959 logs.go:138] Found kubelet problem: Mar 27 23:45:09 stopped-upgrade-017000 kubelet[1676]: E0327 23:45:09.218752    1676 reflector.go:138] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:stopped-upgrade-017000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'stopped-upgrade-017000' and this object
	I0327 16:47:08.859664    8959 logs.go:123] Gathering logs for dmesg ...
	I0327 16:47:08.859669    8959 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0327 16:47:08.864014    8959 out.go:304] Setting ErrFile to fd 2...
	I0327 16:47:08.864021    8959 out.go:338] TERM=,COLORTERM=, which probably does not support color
	W0327 16:47:08.864048    8959 out.go:239] X Problems detected in kubelet:
	W0327 16:47:08.864053    8959 out.go:239]   Mar 27 23:45:09 stopped-upgrade-017000 kubelet[1676]: W0327 23:45:09.218698    1676 reflector.go:324] object-"kube-system"/"kube-proxy": failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:stopped-upgrade-017000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'stopped-upgrade-017000' and this object
	W0327 16:47:08.864056    8959 out.go:239]   Mar 27 23:45:09 stopped-upgrade-017000 kubelet[1676]: E0327 23:45:09.218752    1676 reflector.go:138] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:stopped-upgrade-017000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'stopped-upgrade-017000' and this object
	I0327 16:47:08.864061    8959 out.go:304] Setting ErrFile to fd 2...
	I0327 16:47:08.864063    8959 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0327 16:47:17.780190    8800 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
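The exchange above repeats throughout this trace: api_server.go probes https://10.0.2.15:8443/healthz, the request times out after about five seconds ("Client.Timeout exceeded while awaiting headers"), and the run falls back to gathering logs before retrying. A minimal Go sketch of that probe pattern — the names, intervals, and TLS handling here are illustrative assumptions, not minikube's actual implementation:

    package main

    import (
        "crypto/tls"
        "fmt"
        "net/http"
        "time"
    )

    // pollHealthz issues GET <url> until it returns 200 OK or the overall
    // deadline expires. Each request has its own client-side timeout, which
    // is what produces "Client.Timeout exceeded" when the apiserver is down.
    func pollHealthz(url string, deadline time.Duration) error {
        client := &http.Client{
            Timeout: 5 * time.Second, // per-request timeout, as the log's ~5s gaps suggest
            Transport: &http.Transport{
                // the apiserver certificate is self-signed inside the VM
                TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
            },
        }
        stop := time.Now().Add(deadline)
        for time.Now().Before(stop) {
            resp, err := client.Get(url)
            if err == nil {
                resp.Body.Close()
                if resp.StatusCode == http.StatusOK {
                    return nil
                }
            }
            time.Sleep(2 * time.Second) // back off before the next probe
        }
        return fmt.Errorf("apiserver never became healthy at %s", url)
    }

    func main() {
        if err := pollHealthz("https://10.0.2.15:8443/healthz", time.Minute); err != nil {
            fmt.Println(err)
        }
    }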
	I0327 16:47:17.780356    8800 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0327 16:47:17.796369    8800 logs.go:276] 1 containers: [a8b68fa373a2]
	I0327 16:47:17.796462    8800 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0327 16:47:17.809195    8800 logs.go:276] 1 containers: [b2b36dcbf471]
	I0327 16:47:17.809267    8800 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0327 16:47:17.820185    8800 logs.go:276] 2 containers: [a1738554adca b5329cb28332]
	I0327 16:47:17.820261    8800 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0327 16:47:17.830876    8800 logs.go:276] 1 containers: [5aa5f8fb90cc]
	I0327 16:47:17.830941    8800 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0327 16:47:17.841281    8800 logs.go:276] 1 containers: [4195d96c1f8a]
	I0327 16:47:17.841351    8800 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0327 16:47:17.852254    8800 logs.go:276] 1 containers: [faf151c9cff5]
	I0327 16:47:17.852324    8800 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0327 16:47:17.864779    8800 logs.go:276] 0 containers: []
	W0327 16:47:17.864792    8800 logs.go:278] No container was found matching "kindnet"
	I0327 16:47:17.864852    8800 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0327 16:47:17.875637    8800 logs.go:276] 1 containers: [2c4ccf3e69ae]
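Each retry cycle opens with the discovery pass just above: one "docker ps -a --filter=name=k8s_<component> --format={{.ID}}" per control-plane component, where an empty result produces the W-level 'No container was found matching "kindnet"' line. A self-contained sketch of that step, run locally rather than over SSH as ssh_runner.go does, assuming a docker binary on PATH:

    package main

    import (
        "fmt"
        "os/exec"
        "strings"
    )

    // containerIDs lists all containers (running or exited) whose name
    // matches the k8s_<component> prefix and returns their IDs, mirroring
    // the discovery commands in the trace.
    func containerIDs(component string) ([]string, error) {
        out, err := exec.Command("docker", "ps", "-a",
            "--filter", "name=k8s_"+component,
            "--format", "{{.ID}}").Output()
        if err != nil {
            return nil, err
        }
        // one ID per line; empty output means no containers for this component
        return strings.Fields(string(out)), nil
    }

    func main() {
        ids, err := containerIDs("kindnet")
        if err != nil {
            fmt.Println("docker ps failed:", err)
            return
        }
        if len(ids) == 0 {
            fmt.Println(`No container was found matching "kindnet"`)
        } else {
            fmt.Println(ids)
        }
    }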
	I0327 16:47:17.875652    8800 logs.go:123] Gathering logs for kube-scheduler [5aa5f8fb90cc] ...
	I0327 16:47:17.875657    8800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5aa5f8fb90cc"
	I0327 16:47:17.890006    8800 logs.go:123] Gathering logs for kube-proxy [4195d96c1f8a] ...
	I0327 16:47:17.890017    8800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4195d96c1f8a"
	I0327 16:47:17.901560    8800 logs.go:123] Gathering logs for storage-provisioner [2c4ccf3e69ae] ...
	I0327 16:47:17.901571    8800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2c4ccf3e69ae"
	I0327 16:47:17.913464    8800 logs.go:123] Gathering logs for Docker ...
	I0327 16:47:17.913475    8800 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0327 16:47:17.937644    8800 logs.go:123] Gathering logs for container status ...
	I0327 16:47:17.937656    8800 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
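The "container status" step above wraps its command in a shell fallback: `which crictl || echo crictl` substitutes the literal word crictl when the binary is absent, so the first leg of the pipeline fails and "|| sudo docker ps -a" runs instead. The same preference order expressed in Go, as a sketch assuming at least one of the two binaries is installed:

    package main

    import (
        "fmt"
        "os/exec"
    )

    // containerStatus prefers crictl when it is on PATH and falls back to
    // docker otherwise, reproducing the fallback in the log's shell command.
    func containerStatus() ([]byte, error) {
        if path, err := exec.LookPath("crictl"); err == nil {
            return exec.Command(path, "ps", "-a").CombinedOutput()
        }
        return exec.Command("docker", "ps", "-a").CombinedOutput()
    }

    func main() {
        out, err := containerStatus()
        if err != nil {
            fmt.Println("no container runtime CLI available:", err)
            return
        }
        fmt.Print(string(out))
    }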
	I0327 16:47:17.949387    8800 logs.go:123] Gathering logs for describe nodes ...
	I0327 16:47:17.949397    8800 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0327 16:47:17.983604    8800 logs.go:123] Gathering logs for kube-apiserver [a8b68fa373a2] ...
	I0327 16:47:17.983615    8800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a8b68fa373a2"
	I0327 16:47:17.997970    8800 logs.go:123] Gathering logs for etcd [b2b36dcbf471] ...
	I0327 16:47:17.997980    8800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b2b36dcbf471"
	I0327 16:47:18.013860    8800 logs.go:123] Gathering logs for coredns [b5329cb28332] ...
	I0327 16:47:18.013873    8800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b5329cb28332"
	I0327 16:47:18.024915    8800 logs.go:123] Gathering logs for kube-controller-manager [faf151c9cff5] ...
	I0327 16:47:18.024927    8800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 faf151c9cff5"
	I0327 16:47:18.047410    8800 logs.go:123] Gathering logs for kubelet ...
	I0327 16:47:18.047422    8800 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0327 16:47:18.080346    8800 logs.go:123] Gathering logs for dmesg ...
	I0327 16:47:18.080354    8800 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0327 16:47:18.084838    8800 logs.go:123] Gathering logs for coredns [a1738554adca] ...
	I0327 16:47:18.084844    8800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a1738554adca"
	I0327 16:47:20.598233    8800 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0327 16:47:18.866440    8959 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0327 16:47:25.600439    8800 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0327 16:47:25.600574    8800 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0327 16:47:25.613506    8800 logs.go:276] 1 containers: [a8b68fa373a2]
	I0327 16:47:25.613579    8800 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0327 16:47:25.623781    8800 logs.go:276] 1 containers: [b2b36dcbf471]
	I0327 16:47:25.623847    8800 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0327 16:47:25.634207    8800 logs.go:276] 2 containers: [a1738554adca b5329cb28332]
	I0327 16:47:25.634275    8800 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0327 16:47:25.644814    8800 logs.go:276] 1 containers: [5aa5f8fb90cc]
	I0327 16:47:25.644879    8800 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0327 16:47:25.655311    8800 logs.go:276] 1 containers: [4195d96c1f8a]
	I0327 16:47:25.655378    8800 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0327 16:47:25.668449    8800 logs.go:276] 1 containers: [faf151c9cff5]
	I0327 16:47:25.668515    8800 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0327 16:47:25.678659    8800 logs.go:276] 0 containers: []
	W0327 16:47:25.678672    8800 logs.go:278] No container was found matching "kindnet"
	I0327 16:47:25.678729    8800 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0327 16:47:25.688681    8800 logs.go:276] 1 containers: [2c4ccf3e69ae]
	I0327 16:47:25.688697    8800 logs.go:123] Gathering logs for kube-scheduler [5aa5f8fb90cc] ...
	I0327 16:47:25.688703    8800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5aa5f8fb90cc"
	I0327 16:47:25.703273    8800 logs.go:123] Gathering logs for storage-provisioner [2c4ccf3e69ae] ...
	I0327 16:47:25.703285    8800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2c4ccf3e69ae"
	I0327 16:47:25.715331    8800 logs.go:123] Gathering logs for coredns [b5329cb28332] ...
	I0327 16:47:25.715343    8800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b5329cb28332"
	I0327 16:47:25.730140    8800 logs.go:123] Gathering logs for kube-proxy [4195d96c1f8a] ...
	I0327 16:47:25.730152    8800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4195d96c1f8a"
	I0327 16:47:25.743150    8800 logs.go:123] Gathering logs for kubelet ...
	I0327 16:47:25.743165    8800 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0327 16:47:25.778188    8800 logs.go:123] Gathering logs for dmesg ...
	I0327 16:47:25.778196    8800 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0327 16:47:25.782547    8800 logs.go:123] Gathering logs for describe nodes ...
	I0327 16:47:25.782556    8800 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0327 16:47:25.853381    8800 logs.go:123] Gathering logs for kube-apiserver [a8b68fa373a2] ...
	I0327 16:47:25.853392    8800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a8b68fa373a2"
	I0327 16:47:25.867864    8800 logs.go:123] Gathering logs for etcd [b2b36dcbf471] ...
	I0327 16:47:25.867875    8800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b2b36dcbf471"
	I0327 16:47:25.882144    8800 logs.go:123] Gathering logs for coredns [a1738554adca] ...
	I0327 16:47:25.882154    8800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a1738554adca"
	I0327 16:47:25.893531    8800 logs.go:123] Gathering logs for kube-controller-manager [faf151c9cff5] ...
	I0327 16:47:25.893540    8800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 faf151c9cff5"
	I0327 16:47:25.911093    8800 logs.go:123] Gathering logs for Docker ...
	I0327 16:47:25.911102    8800 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0327 16:47:25.937597    8800 logs.go:123] Gathering logs for container status ...
	I0327 16:47:25.937606    8800 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0327 16:47:23.868607    8959 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0327 16:47:23.868812    8959 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0327 16:47:23.880616    8959 logs.go:276] 2 containers: [aa3afc402172 e20a2e974eba]
	I0327 16:47:23.880702    8959 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0327 16:47:23.891331    8959 logs.go:276] 2 containers: [d74fd36a706c 56ea780761c8]
	I0327 16:47:23.891395    8959 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0327 16:47:23.901870    8959 logs.go:276] 1 containers: [25c0222d62b6]
	I0327 16:47:23.901937    8959 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0327 16:47:23.913198    8959 logs.go:276] 2 containers: [62111993cdda c581a3f09ed3]
	I0327 16:47:23.913274    8959 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0327 16:47:23.924129    8959 logs.go:276] 1 containers: [0dc2c64bef5d]
	I0327 16:47:23.924200    8959 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0327 16:47:23.935179    8959 logs.go:276] 2 containers: [fa00948ec909 f76badbaa6c8]
	I0327 16:47:23.935257    8959 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0327 16:47:23.951255    8959 logs.go:276] 0 containers: []
	W0327 16:47:23.951268    8959 logs.go:278] No container was found matching "kindnet"
	I0327 16:47:23.951326    8959 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0327 16:47:23.962256    8959 logs.go:276] 2 containers: [396dc8c2621b ffd8ac20b899]
	I0327 16:47:23.962277    8959 logs.go:123] Gathering logs for coredns [25c0222d62b6] ...
	I0327 16:47:23.962284    8959 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 25c0222d62b6"
	I0327 16:47:23.973765    8959 logs.go:123] Gathering logs for kube-proxy [0dc2c64bef5d] ...
	I0327 16:47:23.973776    8959 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0dc2c64bef5d"
	I0327 16:47:23.985545    8959 logs.go:123] Gathering logs for kube-controller-manager [f76badbaa6c8] ...
	I0327 16:47:23.985560    8959 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f76badbaa6c8"
	I0327 16:47:23.999111    8959 logs.go:123] Gathering logs for Docker ...
	I0327 16:47:23.999121    8959 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0327 16:47:24.023067    8959 logs.go:123] Gathering logs for kubelet ...
	I0327 16:47:24.023077    8959 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0327 16:47:24.058168    8959 logs.go:138] Found kubelet problem: Mar 27 23:45:09 stopped-upgrade-017000 kubelet[1676]: W0327 23:45:09.218698    1676 reflector.go:324] object-"kube-system"/"kube-proxy": failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:stopped-upgrade-017000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'stopped-upgrade-017000' and this object
	W0327 16:47:24.058261    8959 logs.go:138] Found kubelet problem: Mar 27 23:45:09 stopped-upgrade-017000 kubelet[1676]: E0327 23:45:09.218752    1676 reflector.go:138] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:stopped-upgrade-017000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'stopped-upgrade-017000' and this object
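The two warnings above come from minikube's kubelet-log scan (logs.go:138 in this trace), which tails the kubelet journal and flags lines that look like failures, then echoes them again in the "Problems detected in kubelet" summary. A simplified scanner over the same sample line — the regex here is an illustrative stand-in, not minikube's real matcher:

    package main

    import (
        "bufio"
        "fmt"
        "regexp"
        "strings"
    )

    // problemRe is a deliberately loose pattern for failure-looking lines.
    var problemRe = regexp.MustCompile(`failed to|Failed to|forbidden|error`)

    // scanKubeletLog returns every journal line that matches the pattern.
    func scanKubeletLog(journal string) []string {
        var problems []string
        sc := bufio.NewScanner(strings.NewReader(journal))
        for sc.Scan() {
            if line := sc.Text(); problemRe.MatchString(line) {
                problems = append(problems, line)
            }
        }
        return problems
    }

    func main() {
        sample := `Mar 27 23:45:09 stopped-upgrade-017000 kubelet[1676]: W0327 23:45:09.218698 1676 reflector.go:324] object-"kube-system"/"kube-proxy": failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden`
        for _, p := range scanKubeletLog(sample) {
            fmt.Println("Found kubelet problem:", p)
        }
    }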
	I0327 16:47:24.059309    8959 logs.go:123] Gathering logs for dmesg ...
	I0327 16:47:24.059318    8959 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0327 16:47:24.063804    8959 logs.go:123] Gathering logs for etcd [56ea780761c8] ...
	I0327 16:47:24.063810    8959 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 56ea780761c8"
	I0327 16:47:24.078443    8959 logs.go:123] Gathering logs for kube-apiserver [aa3afc402172] ...
	I0327 16:47:24.078454    8959 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 aa3afc402172"
	I0327 16:47:24.094055    8959 logs.go:123] Gathering logs for kube-scheduler [c581a3f09ed3] ...
	I0327 16:47:24.094067    8959 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c581a3f09ed3"
	I0327 16:47:24.109126    8959 logs.go:123] Gathering logs for storage-provisioner [396dc8c2621b] ...
	I0327 16:47:24.109135    8959 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 396dc8c2621b"
	I0327 16:47:24.120728    8959 logs.go:123] Gathering logs for kube-scheduler [62111993cdda] ...
	I0327 16:47:24.120740    8959 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 62111993cdda"
	I0327 16:47:24.132642    8959 logs.go:123] Gathering logs for kube-controller-manager [fa00948ec909] ...
	I0327 16:47:24.132654    8959 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fa00948ec909"
	I0327 16:47:24.151061    8959 logs.go:123] Gathering logs for storage-provisioner [ffd8ac20b899] ...
	I0327 16:47:24.151073    8959 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ffd8ac20b899"
	I0327 16:47:24.165193    8959 logs.go:123] Gathering logs for container status ...
	I0327 16:47:24.165207    8959 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0327 16:47:24.177109    8959 logs.go:123] Gathering logs for describe nodes ...
	I0327 16:47:24.177118    8959 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0327 16:47:24.213481    8959 logs.go:123] Gathering logs for kube-apiserver [e20a2e974eba] ...
	I0327 16:47:24.213491    8959 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e20a2e974eba"
	I0327 16:47:24.259105    8959 logs.go:123] Gathering logs for etcd [d74fd36a706c] ...
	I0327 16:47:24.259116    8959 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d74fd36a706c"
	I0327 16:47:24.273047    8959 out.go:304] Setting ErrFile to fd 2...
	I0327 16:47:24.273059    8959 out.go:338] TERM=,COLORTERM=, which probably does not support color
	W0327 16:47:24.273082    8959 out.go:239] X Problems detected in kubelet:
	W0327 16:47:24.273085    8959 out.go:239]   Mar 27 23:45:09 stopped-upgrade-017000 kubelet[1676]: W0327 23:45:09.218698    1676 reflector.go:324] object-"kube-system"/"kube-proxy": failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:stopped-upgrade-017000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'stopped-upgrade-017000' and this object
	W0327 16:47:24.273089    8959 out.go:239]   Mar 27 23:45:09 stopped-upgrade-017000 kubelet[1676]: E0327 23:45:09.218752    1676 reflector.go:138] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:stopped-upgrade-017000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'stopped-upgrade-017000' and this object
	I0327 16:47:24.273092    8959 out.go:304] Setting ErrFile to fd 2...
	I0327 16:47:24.273095    8959 out.go:338] TERM=,COLORTERM=, which probably does not support color
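The flagged kubelet lines are Node-authorizer denials: "no relationship found between node ... and this object" means the apiserver's node-to-object graph has no edge tying stopped-upgrade-017000 to the kube-proxy ConfigMap, a state consistent with a control plane that is mid-upgrade and failing its healthz probes, as this trace shows. One way to reproduce the authorization check from outside the kubelet, as a diagnostic sketch assuming kubectl and a reachable cluster (the identity strings are copied from the log):

    package main

    import (
        "fmt"
        "os/exec"
    )

    // canNodeListConfigMaps asks the apiserver whether the node identity from
    // the kubelet errors may list ConfigMaps in kube-system, using kubectl's
    // impersonation flags. kubectl prints "yes" or "no"; a "no" matches the
    // forbidden errors in the journal.
    func canNodeListConfigMaps() (string, error) {
        out, err := exec.Command("kubectl", "auth", "can-i", "list", "configmaps",
            "--namespace", "kube-system",
            "--as", "system:node:stopped-upgrade-017000",
            "--as-group", "system:nodes").CombinedOutput()
        return string(out), err
    }

    func main() {
        out, err := canNodeListConfigMaps()
        fmt.Print(out)
        if err != nil {
            // kubectl auth can-i exits non-zero when the answer is "no"
            fmt.Println("(non-zero exit: access denied or cluster unreachable)")
        }
    }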
	I0327 16:47:28.450853    8800 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0327 16:47:33.451193    8800 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0327 16:47:33.451569    8800 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0327 16:47:33.486125    8800 logs.go:276] 1 containers: [a8b68fa373a2]
	I0327 16:47:33.486267    8800 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0327 16:47:33.507698    8800 logs.go:276] 1 containers: [b2b36dcbf471]
	I0327 16:47:33.507814    8800 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0327 16:47:33.522756    8800 logs.go:276] 2 containers: [a1738554adca b5329cb28332]
	I0327 16:47:33.522852    8800 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0327 16:47:33.535131    8800 logs.go:276] 1 containers: [5aa5f8fb90cc]
	I0327 16:47:33.535203    8800 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0327 16:47:33.545641    8800 logs.go:276] 1 containers: [4195d96c1f8a]
	I0327 16:47:33.545709    8800 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0327 16:47:33.556093    8800 logs.go:276] 1 containers: [faf151c9cff5]
	I0327 16:47:33.556159    8800 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0327 16:47:33.566470    8800 logs.go:276] 0 containers: []
	W0327 16:47:33.566481    8800 logs.go:278] No container was found matching "kindnet"
	I0327 16:47:33.566537    8800 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0327 16:47:33.577045    8800 logs.go:276] 1 containers: [2c4ccf3e69ae]
	I0327 16:47:33.577061    8800 logs.go:123] Gathering logs for kube-scheduler [5aa5f8fb90cc] ...
	I0327 16:47:33.577066    8800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5aa5f8fb90cc"
	I0327 16:47:33.591423    8800 logs.go:123] Gathering logs for kube-proxy [4195d96c1f8a] ...
	I0327 16:47:33.591434    8800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4195d96c1f8a"
	I0327 16:47:33.602994    8800 logs.go:123] Gathering logs for kube-controller-manager [faf151c9cff5] ...
	I0327 16:47:33.603005    8800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 faf151c9cff5"
	I0327 16:47:33.620407    8800 logs.go:123] Gathering logs for storage-provisioner [2c4ccf3e69ae] ...
	I0327 16:47:33.620417    8800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2c4ccf3e69ae"
	I0327 16:47:33.636096    8800 logs.go:123] Gathering logs for container status ...
	I0327 16:47:33.636105    8800 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0327 16:47:33.648973    8800 logs.go:123] Gathering logs for dmesg ...
	I0327 16:47:33.648984    8800 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0327 16:47:33.653741    8800 logs.go:123] Gathering logs for describe nodes ...
	I0327 16:47:33.653751    8800 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0327 16:47:33.690725    8800 logs.go:123] Gathering logs for etcd [b2b36dcbf471] ...
	I0327 16:47:33.690735    8800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b2b36dcbf471"
	I0327 16:47:33.704434    8800 logs.go:123] Gathering logs for coredns [a1738554adca] ...
	I0327 16:47:33.704444    8800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a1738554adca"
	I0327 16:47:33.716061    8800 logs.go:123] Gathering logs for coredns [b5329cb28332] ...
	I0327 16:47:33.716072    8800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b5329cb28332"
	I0327 16:47:33.728166    8800 logs.go:123] Gathering logs for Docker ...
	I0327 16:47:33.728177    8800 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0327 16:47:33.751980    8800 logs.go:123] Gathering logs for kubelet ...
	I0327 16:47:33.751993    8800 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0327 16:47:33.785571    8800 logs.go:123] Gathering logs for kube-apiserver [a8b68fa373a2] ...
	I0327 16:47:33.785584    8800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a8b68fa373a2"
	I0327 16:47:36.301481    8800 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0327 16:47:34.276040    8959 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0327 16:47:41.303590    8800 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0327 16:47:41.303723    8800 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0327 16:47:41.315141    8800 logs.go:276] 1 containers: [a8b68fa373a2]
	I0327 16:47:41.315220    8800 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0327 16:47:41.326183    8800 logs.go:276] 1 containers: [b2b36dcbf471]
	I0327 16:47:41.326253    8800 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0327 16:47:41.337350    8800 logs.go:276] 2 containers: [a1738554adca b5329cb28332]
	I0327 16:47:41.337417    8800 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0327 16:47:41.347978    8800 logs.go:276] 1 containers: [5aa5f8fb90cc]
	I0327 16:47:41.348044    8800 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0327 16:47:41.358877    8800 logs.go:276] 1 containers: [4195d96c1f8a]
	I0327 16:47:41.358943    8800 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0327 16:47:41.369905    8800 logs.go:276] 1 containers: [faf151c9cff5]
	I0327 16:47:41.369971    8800 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0327 16:47:41.379714    8800 logs.go:276] 0 containers: []
	W0327 16:47:41.379727    8800 logs.go:278] No container was found matching "kindnet"
	I0327 16:47:41.379784    8800 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0327 16:47:41.390768    8800 logs.go:276] 1 containers: [2c4ccf3e69ae]
	I0327 16:47:41.390787    8800 logs.go:123] Gathering logs for describe nodes ...
	I0327 16:47:41.390792    8800 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0327 16:47:41.426296    8800 logs.go:123] Gathering logs for kube-apiserver [a8b68fa373a2] ...
	I0327 16:47:41.426307    8800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a8b68fa373a2"
	I0327 16:47:41.441414    8800 logs.go:123] Gathering logs for coredns [b5329cb28332] ...
	I0327 16:47:41.441427    8800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b5329cb28332"
	I0327 16:47:41.453151    8800 logs.go:123] Gathering logs for kube-scheduler [5aa5f8fb90cc] ...
	I0327 16:47:41.453161    8800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5aa5f8fb90cc"
	I0327 16:47:41.468106    8800 logs.go:123] Gathering logs for kube-proxy [4195d96c1f8a] ...
	I0327 16:47:41.468119    8800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4195d96c1f8a"
	I0327 16:47:41.479602    8800 logs.go:123] Gathering logs for Docker ...
	I0327 16:47:41.479613    8800 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0327 16:47:41.504790    8800 logs.go:123] Gathering logs for container status ...
	I0327 16:47:41.504799    8800 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0327 16:47:41.517846    8800 logs.go:123] Gathering logs for dmesg ...
	I0327 16:47:41.517856    8800 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0327 16:47:41.523061    8800 logs.go:123] Gathering logs for etcd [b2b36dcbf471] ...
	I0327 16:47:41.523067    8800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b2b36dcbf471"
	I0327 16:47:41.537191    8800 logs.go:123] Gathering logs for coredns [a1738554adca] ...
	I0327 16:47:41.537200    8800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a1738554adca"
	I0327 16:47:41.548768    8800 logs.go:123] Gathering logs for kube-controller-manager [faf151c9cff5] ...
	I0327 16:47:41.548777    8800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 faf151c9cff5"
	I0327 16:47:41.566822    8800 logs.go:123] Gathering logs for storage-provisioner [2c4ccf3e69ae] ...
	I0327 16:47:41.566835    8800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2c4ccf3e69ae"
	I0327 16:47:41.578404    8800 logs.go:123] Gathering logs for kubelet ...
	I0327 16:47:41.578414    8800 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0327 16:47:39.278214    8959 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0327 16:47:39.278469    8959 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0327 16:47:39.302752    8959 logs.go:276] 2 containers: [aa3afc402172 e20a2e974eba]
	I0327 16:47:39.302856    8959 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0327 16:47:39.319571    8959 logs.go:276] 2 containers: [d74fd36a706c 56ea780761c8]
	I0327 16:47:39.319667    8959 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0327 16:47:39.335204    8959 logs.go:276] 1 containers: [25c0222d62b6]
	I0327 16:47:39.335278    8959 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0327 16:47:39.346646    8959 logs.go:276] 2 containers: [62111993cdda c581a3f09ed3]
	I0327 16:47:39.346718    8959 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0327 16:47:39.356821    8959 logs.go:276] 1 containers: [0dc2c64bef5d]
	I0327 16:47:39.356887    8959 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0327 16:47:39.368070    8959 logs.go:276] 2 containers: [fa00948ec909 f76badbaa6c8]
	I0327 16:47:39.368137    8959 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0327 16:47:39.378297    8959 logs.go:276] 0 containers: []
	W0327 16:47:39.378308    8959 logs.go:278] No container was found matching "kindnet"
	I0327 16:47:39.378365    8959 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0327 16:47:39.388629    8959 logs.go:276] 2 containers: [396dc8c2621b ffd8ac20b899]
	I0327 16:47:39.388664    8959 logs.go:123] Gathering logs for describe nodes ...
	I0327 16:47:39.388672    8959 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0327 16:47:39.426239    8959 logs.go:123] Gathering logs for etcd [d74fd36a706c] ...
	I0327 16:47:39.426251    8959 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d74fd36a706c"
	I0327 16:47:39.439705    8959 logs.go:123] Gathering logs for kube-scheduler [62111993cdda] ...
	I0327 16:47:39.439715    8959 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 62111993cdda"
	I0327 16:47:39.451412    8959 logs.go:123] Gathering logs for storage-provisioner [ffd8ac20b899] ...
	I0327 16:47:39.451425    8959 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ffd8ac20b899"
	I0327 16:47:39.463036    8959 logs.go:123] Gathering logs for container status ...
	I0327 16:47:39.463048    8959 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0327 16:47:39.475134    8959 logs.go:123] Gathering logs for etcd [56ea780761c8] ...
	I0327 16:47:39.475148    8959 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 56ea780761c8"
	I0327 16:47:39.489471    8959 logs.go:123] Gathering logs for coredns [25c0222d62b6] ...
	I0327 16:47:39.489481    8959 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 25c0222d62b6"
	I0327 16:47:39.501074    8959 logs.go:123] Gathering logs for dmesg ...
	I0327 16:47:39.501085    8959 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0327 16:47:39.505488    8959 logs.go:123] Gathering logs for kube-scheduler [c581a3f09ed3] ...
	I0327 16:47:39.505498    8959 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c581a3f09ed3"
	I0327 16:47:39.520612    8959 logs.go:123] Gathering logs for kube-proxy [0dc2c64bef5d] ...
	I0327 16:47:39.520624    8959 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0dc2c64bef5d"
	I0327 16:47:39.532517    8959 logs.go:123] Gathering logs for kube-controller-manager [fa00948ec909] ...
	I0327 16:47:39.532528    8959 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fa00948ec909"
	I0327 16:47:39.550101    8959 logs.go:123] Gathering logs for Docker ...
	I0327 16:47:39.550111    8959 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0327 16:47:39.573155    8959 logs.go:123] Gathering logs for kubelet ...
	I0327 16:47:39.573164    8959 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0327 16:47:39.609770    8959 logs.go:138] Found kubelet problem: Mar 27 23:45:09 stopped-upgrade-017000 kubelet[1676]: W0327 23:45:09.218698    1676 reflector.go:324] object-"kube-system"/"kube-proxy": failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:stopped-upgrade-017000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'stopped-upgrade-017000' and this object
	W0327 16:47:39.609864    8959 logs.go:138] Found kubelet problem: Mar 27 23:45:09 stopped-upgrade-017000 kubelet[1676]: E0327 23:45:09.218752    1676 reflector.go:138] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:stopped-upgrade-017000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'stopped-upgrade-017000' and this object
	I0327 16:47:39.610892    8959 logs.go:123] Gathering logs for kube-apiserver [aa3afc402172] ...
	I0327 16:47:39.610897    8959 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 aa3afc402172"
	I0327 16:47:39.625010    8959 logs.go:123] Gathering logs for kube-apiserver [e20a2e974eba] ...
	I0327 16:47:39.625021    8959 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e20a2e974eba"
	I0327 16:47:39.669384    8959 logs.go:123] Gathering logs for kube-controller-manager [f76badbaa6c8] ...
	I0327 16:47:39.669396    8959 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f76badbaa6c8"
	I0327 16:47:39.682572    8959 logs.go:123] Gathering logs for storage-provisioner [396dc8c2621b] ...
	I0327 16:47:39.682582    8959 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 396dc8c2621b"
	I0327 16:47:39.693962    8959 out.go:304] Setting ErrFile to fd 2...
	I0327 16:47:39.693971    8959 out.go:338] TERM=,COLORTERM=, which probably does not support color
	W0327 16:47:39.694008    8959 out.go:239] X Problems detected in kubelet:
	W0327 16:47:39.694012    8959 out.go:239]   Mar 27 23:45:09 stopped-upgrade-017000 kubelet[1676]: W0327 23:45:09.218698    1676 reflector.go:324] object-"kube-system"/"kube-proxy": failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:stopped-upgrade-017000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'stopped-upgrade-017000' and this object
	W0327 16:47:39.694017    8959 out.go:239]   Mar 27 23:45:09 stopped-upgrade-017000 kubelet[1676]: E0327 23:45:09.218752    1676 reflector.go:138] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:stopped-upgrade-017000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'stopped-upgrade-017000' and this object
	I0327 16:47:39.694022    8959 out.go:304] Setting ErrFile to fd 2...
	I0327 16:47:39.694024    8959 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0327 16:47:44.113381    8800 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0327 16:47:49.115488    8800 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0327 16:47:49.115673    8800 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0327 16:47:49.129793    8800 logs.go:276] 1 containers: [a8b68fa373a2]
	I0327 16:47:49.129868    8800 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0327 16:47:49.141310    8800 logs.go:276] 1 containers: [b2b36dcbf471]
	I0327 16:47:49.141381    8800 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0327 16:47:49.156625    8800 logs.go:276] 2 containers: [a1738554adca b5329cb28332]
	I0327 16:47:49.156692    8800 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0327 16:47:49.169701    8800 logs.go:276] 1 containers: [5aa5f8fb90cc]
	I0327 16:47:49.169777    8800 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0327 16:47:49.180042    8800 logs.go:276] 1 containers: [4195d96c1f8a]
	I0327 16:47:49.180118    8800 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0327 16:47:49.190573    8800 logs.go:276] 1 containers: [faf151c9cff5]
	I0327 16:47:49.190641    8800 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0327 16:47:49.200490    8800 logs.go:276] 0 containers: []
	W0327 16:47:49.200503    8800 logs.go:278] No container was found matching "kindnet"
	I0327 16:47:49.200566    8800 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0327 16:47:49.211524    8800 logs.go:276] 1 containers: [2c4ccf3e69ae]
	I0327 16:47:49.211544    8800 logs.go:123] Gathering logs for kube-proxy [4195d96c1f8a] ...
	I0327 16:47:49.211549    8800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4195d96c1f8a"
	I0327 16:47:49.223399    8800 logs.go:123] Gathering logs for kube-controller-manager [faf151c9cff5] ...
	I0327 16:47:49.223410    8800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 faf151c9cff5"
	I0327 16:47:49.240988    8800 logs.go:123] Gathering logs for Docker ...
	I0327 16:47:49.241001    8800 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0327 16:47:49.265122    8800 logs.go:123] Gathering logs for kubelet ...
	I0327 16:47:49.265128    8800 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0327 16:47:49.298148    8800 logs.go:123] Gathering logs for dmesg ...
	I0327 16:47:49.298155    8800 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0327 16:47:49.302431    8800 logs.go:123] Gathering logs for kube-apiserver [a8b68fa373a2] ...
	I0327 16:47:49.302436    8800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a8b68fa373a2"
	I0327 16:47:49.317133    8800 logs.go:123] Gathering logs for etcd [b2b36dcbf471] ...
	I0327 16:47:49.317148    8800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b2b36dcbf471"
	I0327 16:47:49.331103    8800 logs.go:123] Gathering logs for coredns [b5329cb28332] ...
	I0327 16:47:49.331113    8800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b5329cb28332"
	I0327 16:47:49.343022    8800 logs.go:123] Gathering logs for container status ...
	I0327 16:47:49.343033    8800 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0327 16:47:49.354698    8800 logs.go:123] Gathering logs for describe nodes ...
	I0327 16:47:49.354708    8800 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0327 16:47:49.391002    8800 logs.go:123] Gathering logs for coredns [a1738554adca] ...
	I0327 16:47:49.391013    8800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a1738554adca"
	I0327 16:47:49.402911    8800 logs.go:123] Gathering logs for kube-scheduler [5aa5f8fb90cc] ...
	I0327 16:47:49.402925    8800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5aa5f8fb90cc"
	I0327 16:47:49.423458    8800 logs.go:123] Gathering logs for storage-provisioner [2c4ccf3e69ae] ...
	I0327 16:47:49.423468    8800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2c4ccf3e69ae"
	I0327 16:47:51.936462    8800 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0327 16:47:49.697835    8959 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0327 16:47:56.938340    8800 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0327 16:47:56.938535    8800 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0327 16:47:56.960306    8800 logs.go:276] 1 containers: [a8b68fa373a2]
	I0327 16:47:56.960402    8800 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0327 16:47:56.980245    8800 logs.go:276] 1 containers: [b2b36dcbf471]
	I0327 16:47:56.980325    8800 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0327 16:47:56.992629    8800 logs.go:276] 2 containers: [a1738554adca b5329cb28332]
	I0327 16:47:56.992701    8800 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0327 16:47:57.005532    8800 logs.go:276] 1 containers: [5aa5f8fb90cc]
	I0327 16:47:57.005598    8800 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0327 16:47:57.016183    8800 logs.go:276] 1 containers: [4195d96c1f8a]
	I0327 16:47:57.016260    8800 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0327 16:47:57.026957    8800 logs.go:276] 1 containers: [faf151c9cff5]
	I0327 16:47:57.027023    8800 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0327 16:47:57.037172    8800 logs.go:276] 0 containers: []
	W0327 16:47:57.037181    8800 logs.go:278] No container was found matching "kindnet"
	I0327 16:47:57.037235    8800 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0327 16:47:57.047449    8800 logs.go:276] 1 containers: [2c4ccf3e69ae]
	I0327 16:47:57.047466    8800 logs.go:123] Gathering logs for Docker ...
	I0327 16:47:57.047471    8800 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0327 16:47:57.072607    8800 logs.go:123] Gathering logs for container status ...
	I0327 16:47:57.072622    8800 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0327 16:47:57.084921    8800 logs.go:123] Gathering logs for dmesg ...
	I0327 16:47:57.084936    8800 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0327 16:47:57.089424    8800 logs.go:123] Gathering logs for kube-apiserver [a8b68fa373a2] ...
	I0327 16:47:57.089431    8800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a8b68fa373a2"
	I0327 16:47:57.103335    8800 logs.go:123] Gathering logs for coredns [b5329cb28332] ...
	I0327 16:47:57.103350    8800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b5329cb28332"
	I0327 16:47:57.115374    8800 logs.go:123] Gathering logs for coredns [a1738554adca] ...
	I0327 16:47:57.115385    8800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a1738554adca"
	I0327 16:47:57.129312    8800 logs.go:123] Gathering logs for kube-scheduler [5aa5f8fb90cc] ...
	I0327 16:47:57.129322    8800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5aa5f8fb90cc"
	I0327 16:47:57.151272    8800 logs.go:123] Gathering logs for kube-proxy [4195d96c1f8a] ...
	I0327 16:47:57.151283    8800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4195d96c1f8a"
	I0327 16:47:57.163024    8800 logs.go:123] Gathering logs for kube-controller-manager [faf151c9cff5] ...
	I0327 16:47:57.163035    8800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 faf151c9cff5"
	I0327 16:47:57.180390    8800 logs.go:123] Gathering logs for storage-provisioner [2c4ccf3e69ae] ...
	I0327 16:47:57.180399    8800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2c4ccf3e69ae"
	I0327 16:47:57.191667    8800 logs.go:123] Gathering logs for kubelet ...
	I0327 16:47:57.191677    8800 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0327 16:47:57.224705    8800 logs.go:123] Gathering logs for describe nodes ...
	I0327 16:47:57.224714    8800 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0327 16:47:57.258955    8800 logs.go:123] Gathering logs for etcd [b2b36dcbf471] ...
	I0327 16:47:57.258968    8800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b2b36dcbf471"
	I0327 16:47:54.700113    8959 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0327 16:47:54.700446    8959 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0327 16:47:54.730616    8959 logs.go:276] 2 containers: [aa3afc402172 e20a2e974eba]
	I0327 16:47:54.730730    8959 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0327 16:47:54.748202    8959 logs.go:276] 2 containers: [d74fd36a706c 56ea780761c8]
	I0327 16:47:54.748292    8959 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0327 16:47:54.761976    8959 logs.go:276] 1 containers: [25c0222d62b6]
	I0327 16:47:54.762049    8959 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0327 16:47:54.777246    8959 logs.go:276] 2 containers: [62111993cdda c581a3f09ed3]
	I0327 16:47:54.777323    8959 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0327 16:47:54.787880    8959 logs.go:276] 1 containers: [0dc2c64bef5d]
	I0327 16:47:54.787954    8959 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0327 16:47:54.798382    8959 logs.go:276] 2 containers: [fa00948ec909 f76badbaa6c8]
	I0327 16:47:54.798452    8959 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0327 16:47:54.809232    8959 logs.go:276] 0 containers: []
	W0327 16:47:54.809243    8959 logs.go:278] No container was found matching "kindnet"
	I0327 16:47:54.809302    8959 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0327 16:47:54.821062    8959 logs.go:276] 2 containers: [396dc8c2621b ffd8ac20b899]
	I0327 16:47:54.821080    8959 logs.go:123] Gathering logs for etcd [56ea780761c8] ...
	I0327 16:47:54.821084    8959 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 56ea780761c8"
	I0327 16:47:54.835111    8959 logs.go:123] Gathering logs for coredns [25c0222d62b6] ...
	I0327 16:47:54.835120    8959 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 25c0222d62b6"
	I0327 16:47:54.850684    8959 logs.go:123] Gathering logs for kube-scheduler [c581a3f09ed3] ...
	I0327 16:47:54.850696    8959 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c581a3f09ed3"
	I0327 16:47:54.866194    8959 logs.go:123] Gathering logs for kube-controller-manager [fa00948ec909] ...
	I0327 16:47:54.866204    8959 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fa00948ec909"
	I0327 16:47:54.885468    8959 logs.go:123] Gathering logs for kube-controller-manager [f76badbaa6c8] ...
	I0327 16:47:54.885483    8959 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f76badbaa6c8"
	I0327 16:47:54.899849    8959 logs.go:123] Gathering logs for kube-apiserver [e20a2e974eba] ...
	I0327 16:47:54.899861    8959 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e20a2e974eba"
	I0327 16:47:54.938612    8959 logs.go:123] Gathering logs for etcd [d74fd36a706c] ...
	I0327 16:47:54.938627    8959 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d74fd36a706c"
	I0327 16:47:54.952730    8959 logs.go:123] Gathering logs for container status ...
	I0327 16:47:54.952745    8959 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0327 16:47:54.964776    8959 logs.go:123] Gathering logs for kubelet ...
	I0327 16:47:54.964786    8959 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0327 16:47:55.001111    8959 logs.go:138] Found kubelet problem: Mar 27 23:45:09 stopped-upgrade-017000 kubelet[1676]: W0327 23:45:09.218698    1676 reflector.go:324] object-"kube-system"/"kube-proxy": failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:stopped-upgrade-017000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'stopped-upgrade-017000' and this object
	W0327 16:47:55.001208    8959 logs.go:138] Found kubelet problem: Mar 27 23:45:09 stopped-upgrade-017000 kubelet[1676]: E0327 23:45:09.218752    1676 reflector.go:138] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:stopped-upgrade-017000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'stopped-upgrade-017000' and this object
	I0327 16:47:55.002270    8959 logs.go:123] Gathering logs for kube-apiserver [aa3afc402172] ...
	I0327 16:47:55.002276    8959 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 aa3afc402172"
	I0327 16:47:55.021199    8959 logs.go:123] Gathering logs for kube-scheduler [62111993cdda] ...
	I0327 16:47:55.021209    8959 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 62111993cdda"
	I0327 16:47:55.033017    8959 logs.go:123] Gathering logs for kube-proxy [0dc2c64bef5d] ...
	I0327 16:47:55.033028    8959 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0dc2c64bef5d"
	I0327 16:47:55.044270    8959 logs.go:123] Gathering logs for describe nodes ...
	I0327 16:47:55.044280    8959 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0327 16:47:55.079972    8959 logs.go:123] Gathering logs for storage-provisioner [396dc8c2621b] ...
	I0327 16:47:55.079984    8959 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 396dc8c2621b"
	I0327 16:47:55.091387    8959 logs.go:123] Gathering logs for storage-provisioner [ffd8ac20b899] ...
	I0327 16:47:55.091397    8959 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ffd8ac20b899"
	I0327 16:47:55.102440    8959 logs.go:123] Gathering logs for Docker ...
	I0327 16:47:55.102451    8959 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0327 16:47:55.125141    8959 logs.go:123] Gathering logs for dmesg ...
	I0327 16:47:55.125149    8959 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0327 16:47:55.129055    8959 out.go:304] Setting ErrFile to fd 2...
	I0327 16:47:55.129062    8959 out.go:338] TERM=,COLORTERM=, which probably does not support color
	W0327 16:47:55.129085    8959 out.go:239] X Problems detected in kubelet:
	W0327 16:47:55.129089    8959 out.go:239]   Mar 27 23:45:09 stopped-upgrade-017000 kubelet[1676]: W0327 23:45:09.218698    1676 reflector.go:324] object-"kube-system"/"kube-proxy": failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:stopped-upgrade-017000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'stopped-upgrade-017000' and this object
	W0327 16:47:55.129093    8959 out.go:239]   Mar 27 23:45:09 stopped-upgrade-017000 kubelet[1676]: E0327 23:45:09.218752    1676 reflector.go:138] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:stopped-upgrade-017000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'stopped-upgrade-017000' and this object
	I0327 16:47:55.129097    8959 out.go:304] Setting ErrFile to fd 2...
	I0327 16:47:55.129100    8959 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0327 16:47:59.775087    8800 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0327 16:48:04.777534    8800 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0327 16:48:04.777927    8800 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0327 16:48:04.812541    8800 logs.go:276] 1 containers: [a8b68fa373a2]
	I0327 16:48:04.812687    8800 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0327 16:48:04.833589    8800 logs.go:276] 1 containers: [b2b36dcbf471]
	I0327 16:48:04.833684    8800 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0327 16:48:04.848870    8800 logs.go:276] 4 containers: [d8aeaa1c9b02 a88693a2d6c1 a1738554adca b5329cb28332]
	I0327 16:48:04.848951    8800 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0327 16:48:04.861446    8800 logs.go:276] 1 containers: [5aa5f8fb90cc]
	I0327 16:48:04.861522    8800 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0327 16:48:04.872326    8800 logs.go:276] 1 containers: [4195d96c1f8a]
	I0327 16:48:04.872395    8800 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0327 16:48:04.882783    8800 logs.go:276] 1 containers: [faf151c9cff5]
	I0327 16:48:04.882844    8800 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0327 16:48:04.892936    8800 logs.go:276] 0 containers: []
	W0327 16:48:04.892949    8800 logs.go:278] No container was found matching "kindnet"
	I0327 16:48:04.893010    8800 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0327 16:48:04.906402    8800 logs.go:276] 1 containers: [2c4ccf3e69ae]
	I0327 16:48:04.906420    8800 logs.go:123] Gathering logs for coredns [d8aeaa1c9b02] ...
	I0327 16:48:04.906426    8800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d8aeaa1c9b02"
	I0327 16:48:04.917699    8800 logs.go:123] Gathering logs for kube-controller-manager [faf151c9cff5] ...
	I0327 16:48:04.917711    8800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 faf151c9cff5"
	I0327 16:48:04.935588    8800 logs.go:123] Gathering logs for storage-provisioner [2c4ccf3e69ae] ...
	I0327 16:48:04.935599    8800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2c4ccf3e69ae"
	I0327 16:48:04.949030    8800 logs.go:123] Gathering logs for kubelet ...
	I0327 16:48:04.949040    8800 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0327 16:48:04.983432    8800 logs.go:123] Gathering logs for dmesg ...
	I0327 16:48:04.983439    8800 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0327 16:48:04.987782    8800 logs.go:123] Gathering logs for coredns [b5329cb28332] ...
	I0327 16:48:04.987790    8800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b5329cb28332"
	I0327 16:48:05.005084    8800 logs.go:123] Gathering logs for Docker ...
	I0327 16:48:05.005097    8800 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0327 16:48:05.030716    8800 logs.go:123] Gathering logs for describe nodes ...
	I0327 16:48:05.030726    8800 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0327 16:48:05.066620    8800 logs.go:123] Gathering logs for kube-apiserver [a8b68fa373a2] ...
	I0327 16:48:05.066632    8800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a8b68fa373a2"
	I0327 16:48:05.080666    8800 logs.go:123] Gathering logs for etcd [b2b36dcbf471] ...
	I0327 16:48:05.080676    8800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b2b36dcbf471"
	I0327 16:48:05.095700    8800 logs.go:123] Gathering logs for coredns [a88693a2d6c1] ...
	I0327 16:48:05.095710    8800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a88693a2d6c1"
	I0327 16:48:05.109495    8800 logs.go:123] Gathering logs for coredns [a1738554adca] ...
	I0327 16:48:05.109506    8800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a1738554adca"
	I0327 16:48:05.121498    8800 logs.go:123] Gathering logs for kube-scheduler [5aa5f8fb90cc] ...
	I0327 16:48:05.121508    8800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5aa5f8fb90cc"
	I0327 16:48:05.135477    8800 logs.go:123] Gathering logs for container status ...
	I0327 16:48:05.135488    8800 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
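The container-status command above prefers crictl and falls back to docker via command substitution: if `which crictl` finds nothing it echoes the literal word crictl, that invocation fails, and the `||` branch runs `sudo docker ps -a` instead. A more explicit sketch of the same fallback:

    # Same intent as the logged one-liner, spelled out.
    if command -v crictl >/dev/null 2>&1; then
      sudo crictl ps -a
    else
      sudo docker ps -a
    fi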
	I0327 16:48:05.147299    8800 logs.go:123] Gathering logs for kube-proxy [4195d96c1f8a] ...
	I0327 16:48:05.147309    8800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4195d96c1f8a"
	I0327 16:48:07.660875    8800 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0327 16:48:05.132237    8959 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0327 16:48:12.663343    8800 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0327 16:48:12.663613    8800 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0327 16:48:12.689161    8800 logs.go:276] 1 containers: [a8b68fa373a2]
	I0327 16:48:12.689339    8800 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0327 16:48:12.706119    8800 logs.go:276] 1 containers: [b2b36dcbf471]
	I0327 16:48:12.706197    8800 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0327 16:48:12.720032    8800 logs.go:276] 4 containers: [d8aeaa1c9b02 a88693a2d6c1 a1738554adca b5329cb28332]
	I0327 16:48:12.720102    8800 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0327 16:48:12.731104    8800 logs.go:276] 1 containers: [5aa5f8fb90cc]
	I0327 16:48:12.731166    8800 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0327 16:48:12.741617    8800 logs.go:276] 1 containers: [4195d96c1f8a]
	I0327 16:48:12.741685    8800 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0327 16:48:12.756199    8800 logs.go:276] 1 containers: [faf151c9cff5]
	I0327 16:48:12.756265    8800 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0327 16:48:12.766325    8800 logs.go:276] 0 containers: []
	W0327 16:48:12.766335    8800 logs.go:278] No container was found matching "kindnet"
	I0327 16:48:12.766394    8800 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0327 16:48:12.776350    8800 logs.go:276] 1 containers: [2c4ccf3e69ae]
	I0327 16:48:12.776367    8800 logs.go:123] Gathering logs for etcd [b2b36dcbf471] ...
	I0327 16:48:12.776373    8800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b2b36dcbf471"
	I0327 16:48:12.790590    8800 logs.go:123] Gathering logs for coredns [b5329cb28332] ...
	I0327 16:48:12.790601    8800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b5329cb28332"
	I0327 16:48:12.801891    8800 logs.go:123] Gathering logs for storage-provisioner [2c4ccf3e69ae] ...
	I0327 16:48:12.801902    8800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2c4ccf3e69ae"
	I0327 16:48:12.813508    8800 logs.go:123] Gathering logs for kube-apiserver [a8b68fa373a2] ...
	I0327 16:48:12.813519    8800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a8b68fa373a2"
	I0327 16:48:12.828069    8800 logs.go:123] Gathering logs for kube-proxy [4195d96c1f8a] ...
	I0327 16:48:12.828079    8800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4195d96c1f8a"
	I0327 16:48:12.839407    8800 logs.go:123] Gathering logs for Docker ...
	I0327 16:48:12.839416    8800 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0327 16:48:10.134328    8959 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0327 16:48:10.134570    8959 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0327 16:48:10.160664    8959 logs.go:276] 2 containers: [aa3afc402172 e20a2e974eba]
	I0327 16:48:10.160772    8959 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0327 16:48:10.179341    8959 logs.go:276] 2 containers: [d74fd36a706c 56ea780761c8]
	I0327 16:48:10.179427    8959 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0327 16:48:10.192671    8959 logs.go:276] 1 containers: [25c0222d62b6]
	I0327 16:48:10.192744    8959 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0327 16:48:10.203786    8959 logs.go:276] 2 containers: [62111993cdda c581a3f09ed3]
	I0327 16:48:10.203860    8959 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0327 16:48:10.214303    8959 logs.go:276] 1 containers: [0dc2c64bef5d]
	I0327 16:48:10.214372    8959 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0327 16:48:10.225345    8959 logs.go:276] 2 containers: [fa00948ec909 f76badbaa6c8]
	I0327 16:48:10.225413    8959 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0327 16:48:10.236140    8959 logs.go:276] 0 containers: []
	W0327 16:48:10.236153    8959 logs.go:278] No container was found matching "kindnet"
	I0327 16:48:10.236208    8959 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0327 16:48:10.246578    8959 logs.go:276] 2 containers: [396dc8c2621b ffd8ac20b899]
	I0327 16:48:10.246597    8959 logs.go:123] Gathering logs for kube-apiserver [aa3afc402172] ...
	I0327 16:48:10.246602    8959 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 aa3afc402172"
	I0327 16:48:10.260145    8959 logs.go:123] Gathering logs for etcd [d74fd36a706c] ...
	I0327 16:48:10.260156    8959 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d74fd36a706c"
	I0327 16:48:10.274129    8959 logs.go:123] Gathering logs for etcd [56ea780761c8] ...
	I0327 16:48:10.274139    8959 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 56ea780761c8"
	I0327 16:48:10.289089    8959 logs.go:123] Gathering logs for kube-controller-manager [f76badbaa6c8] ...
	I0327 16:48:10.289098    8959 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f76badbaa6c8"
	I0327 16:48:10.302286    8959 logs.go:123] Gathering logs for storage-provisioner [396dc8c2621b] ...
	I0327 16:48:10.302296    8959 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 396dc8c2621b"
	I0327 16:48:10.316849    8959 logs.go:123] Gathering logs for container status ...
	I0327 16:48:10.316860    8959 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0327 16:48:10.329040    8959 logs.go:123] Gathering logs for kubelet ...
	I0327 16:48:10.329051    8959 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0327 16:48:10.365447    8959 logs.go:138] Found kubelet problem: Mar 27 23:45:09 stopped-upgrade-017000 kubelet[1676]: W0327 23:45:09.218698    1676 reflector.go:324] object-"kube-system"/"kube-proxy": failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:stopped-upgrade-017000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'stopped-upgrade-017000' and this object
	W0327 16:48:10.365547    8959 logs.go:138] Found kubelet problem: Mar 27 23:45:09 stopped-upgrade-017000 kubelet[1676]: E0327 23:45:09.218752    1676 reflector.go:138] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:stopped-upgrade-017000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'stopped-upgrade-017000' and this object
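The two hits above come from scanning the 400-line kubelet journal window for known failure signatures (logs.go:138); the matched lines are replayed later under "Problems detected in kubelet". The same evidence can be re-extracted by hand with the command already shown plus a filter, a sketch that assumes the reflector.go line numbers stay as logged:

    # Re-extract the two flagged reflector errors from the same journal window.
    sudo journalctl -u kubelet -n 400 | grep -E 'reflector\.go:(324|138)'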
	I0327 16:48:10.366607    8959 logs.go:123] Gathering logs for kube-apiserver [e20a2e974eba] ...
	I0327 16:48:10.366616    8959 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e20a2e974eba"
	I0327 16:48:10.404693    8959 logs.go:123] Gathering logs for kube-controller-manager [fa00948ec909] ...
	I0327 16:48:10.404703    8959 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fa00948ec909"
	I0327 16:48:10.422133    8959 logs.go:123] Gathering logs for Docker ...
	I0327 16:48:10.422142    8959 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0327 16:48:10.445614    8959 logs.go:123] Gathering logs for dmesg ...
	I0327 16:48:10.445624    8959 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0327 16:48:10.450154    8959 logs.go:123] Gathering logs for kube-scheduler [62111993cdda] ...
	I0327 16:48:10.450160    8959 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 62111993cdda"
	I0327 16:48:10.462848    8959 logs.go:123] Gathering logs for kube-scheduler [c581a3f09ed3] ...
	I0327 16:48:10.462861    8959 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c581a3f09ed3"
	I0327 16:48:10.478410    8959 logs.go:123] Gathering logs for storage-provisioner [ffd8ac20b899] ...
	I0327 16:48:10.478420    8959 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ffd8ac20b899"
	I0327 16:48:10.497252    8959 logs.go:123] Gathering logs for describe nodes ...
	I0327 16:48:10.497264    8959 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0327 16:48:10.533671    8959 logs.go:123] Gathering logs for coredns [25c0222d62b6] ...
	I0327 16:48:10.533682    8959 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 25c0222d62b6"
	I0327 16:48:10.545946    8959 logs.go:123] Gathering logs for kube-proxy [0dc2c64bef5d] ...
	I0327 16:48:10.545963    8959 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0dc2c64bef5d"
	I0327 16:48:10.558478    8959 out.go:304] Setting ErrFile to fd 2...
	I0327 16:48:10.558489    8959 out.go:338] TERM=,COLORTERM=, which probably does not support color
	W0327 16:48:10.558513    8959 out.go:239] X Problems detected in kubelet:
	W0327 16:48:10.558519    8959 out.go:239]   Mar 27 23:45:09 stopped-upgrade-017000 kubelet[1676]: W0327 23:45:09.218698    1676 reflector.go:324] object-"kube-system"/"kube-proxy": failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:stopped-upgrade-017000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'stopped-upgrade-017000' and this object
	W0327 16:48:10.558525    8959 out.go:239]   Mar 27 23:45:09 stopped-upgrade-017000 kubelet[1676]: E0327 23:45:09.218752    1676 reflector.go:138] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:stopped-upgrade-017000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'stopped-upgrade-017000' and this object
	I0327 16:48:10.558530    8959 out.go:304] Setting ErrFile to fd 2...
	I0327 16:48:10.558533    8959 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0327 16:48:12.863424    8800 logs.go:123] Gathering logs for container status ...
	I0327 16:48:12.863432    8800 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0327 16:48:12.875457    8800 logs.go:123] Gathering logs for kubelet ...
	I0327 16:48:12.875467    8800 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0327 16:48:12.908945    8800 logs.go:123] Gathering logs for dmesg ...
	I0327 16:48:12.908957    8800 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0327 16:48:12.913709    8800 logs.go:123] Gathering logs for describe nodes ...
	I0327 16:48:12.913716    8800 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0327 16:48:12.949973    8800 logs.go:123] Gathering logs for coredns [a1738554adca] ...
	I0327 16:48:12.949984    8800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a1738554adca"
	I0327 16:48:12.962046    8800 logs.go:123] Gathering logs for kube-controller-manager [faf151c9cff5] ...
	I0327 16:48:12.962058    8800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 faf151c9cff5"
	I0327 16:48:12.982989    8800 logs.go:123] Gathering logs for coredns [d8aeaa1c9b02] ...
	I0327 16:48:12.982999    8800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d8aeaa1c9b02"
	I0327 16:48:12.994791    8800 logs.go:123] Gathering logs for coredns [a88693a2d6c1] ...
	I0327 16:48:12.994805    8800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a88693a2d6c1"
	I0327 16:48:13.013853    8800 logs.go:123] Gathering logs for kube-scheduler [5aa5f8fb90cc] ...
	I0327 16:48:13.013865    8800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5aa5f8fb90cc"
	I0327 16:48:15.535733    8800 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0327 16:48:20.536454    8800 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0327 16:48:20.536654    8800 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0327 16:48:20.552192    8800 logs.go:276] 1 containers: [a8b68fa373a2]
	I0327 16:48:20.552277    8800 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0327 16:48:20.565001    8800 logs.go:276] 1 containers: [b2b36dcbf471]
	I0327 16:48:20.565064    8800 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0327 16:48:20.576705    8800 logs.go:276] 4 containers: [d8aeaa1c9b02 a88693a2d6c1 a1738554adca b5329cb28332]
	I0327 16:48:20.576781    8800 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0327 16:48:20.587114    8800 logs.go:276] 1 containers: [5aa5f8fb90cc]
	I0327 16:48:20.587189    8800 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0327 16:48:20.597982    8800 logs.go:276] 1 containers: [4195d96c1f8a]
	I0327 16:48:20.598056    8800 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0327 16:48:20.608667    8800 logs.go:276] 1 containers: [faf151c9cff5]
	I0327 16:48:20.608732    8800 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0327 16:48:20.619642    8800 logs.go:276] 0 containers: []
	W0327 16:48:20.619655    8800 logs.go:278] No container was found matching "kindnet"
	I0327 16:48:20.619708    8800 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0327 16:48:20.634066    8800 logs.go:276] 1 containers: [2c4ccf3e69ae]
	I0327 16:48:20.634082    8800 logs.go:123] Gathering logs for coredns [b5329cb28332] ...
	I0327 16:48:20.634086    8800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b5329cb28332"
	I0327 16:48:20.645823    8800 logs.go:123] Gathering logs for Docker ...
	I0327 16:48:20.645832    8800 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0327 16:48:20.670799    8800 logs.go:123] Gathering logs for container status ...
	I0327 16:48:20.670806    8800 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0327 16:48:20.682596    8800 logs.go:123] Gathering logs for dmesg ...
	I0327 16:48:20.682607    8800 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0327 16:48:20.686919    8800 logs.go:123] Gathering logs for coredns [a88693a2d6c1] ...
	I0327 16:48:20.686926    8800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a88693a2d6c1"
	I0327 16:48:20.698719    8800 logs.go:123] Gathering logs for kube-controller-manager [faf151c9cff5] ...
	I0327 16:48:20.698729    8800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 faf151c9cff5"
	I0327 16:48:20.715625    8800 logs.go:123] Gathering logs for kube-scheduler [5aa5f8fb90cc] ...
	I0327 16:48:20.715637    8800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5aa5f8fb90cc"
	I0327 16:48:20.730103    8800 logs.go:123] Gathering logs for kube-proxy [4195d96c1f8a] ...
	I0327 16:48:20.730115    8800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4195d96c1f8a"
	I0327 16:48:20.742036    8800 logs.go:123] Gathering logs for describe nodes ...
	I0327 16:48:20.742050    8800 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0327 16:48:20.778293    8800 logs.go:123] Gathering logs for kube-apiserver [a8b68fa373a2] ...
	I0327 16:48:20.778306    8800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a8b68fa373a2"
	I0327 16:48:20.792552    8800 logs.go:123] Gathering logs for coredns [d8aeaa1c9b02] ...
	I0327 16:48:20.792564    8800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d8aeaa1c9b02"
	I0327 16:48:20.804371    8800 logs.go:123] Gathering logs for coredns [a1738554adca] ...
	I0327 16:48:20.804387    8800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a1738554adca"
	I0327 16:48:20.817159    8800 logs.go:123] Gathering logs for storage-provisioner [2c4ccf3e69ae] ...
	I0327 16:48:20.817170    8800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2c4ccf3e69ae"
	I0327 16:48:20.829078    8800 logs.go:123] Gathering logs for kubelet ...
	I0327 16:48:20.829089    8800 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0327 16:48:20.863566    8800 logs.go:123] Gathering logs for etcd [b2b36dcbf471] ...
	I0327 16:48:20.863576    8800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b2b36dcbf471"
	I0327 16:48:20.561269    8959 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0327 16:48:23.379949    8800 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0327 16:48:25.563326    8959 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0327 16:48:25.563536    8959 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0327 16:48:25.576449    8959 logs.go:276] 2 containers: [aa3afc402172 e20a2e974eba]
	I0327 16:48:25.576531    8959 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0327 16:48:25.592228    8959 logs.go:276] 2 containers: [d74fd36a706c 56ea780761c8]
	I0327 16:48:25.592295    8959 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0327 16:48:25.602573    8959 logs.go:276] 1 containers: [25c0222d62b6]
	I0327 16:48:25.602645    8959 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0327 16:48:25.612820    8959 logs.go:276] 2 containers: [62111993cdda c581a3f09ed3]
	I0327 16:48:25.612885    8959 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0327 16:48:25.622931    8959 logs.go:276] 1 containers: [0dc2c64bef5d]
	I0327 16:48:25.623001    8959 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0327 16:48:25.633604    8959 logs.go:276] 2 containers: [fa00948ec909 f76badbaa6c8]
	I0327 16:48:25.633678    8959 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0327 16:48:25.644138    8959 logs.go:276] 0 containers: []
	W0327 16:48:25.644148    8959 logs.go:278] No container was found matching "kindnet"
	I0327 16:48:25.644204    8959 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0327 16:48:25.654901    8959 logs.go:276] 2 containers: [396dc8c2621b ffd8ac20b899]
	I0327 16:48:25.654919    8959 logs.go:123] Gathering logs for container status ...
	I0327 16:48:25.654927    8959 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0327 16:48:25.667526    8959 logs.go:123] Gathering logs for etcd [56ea780761c8] ...
	I0327 16:48:25.667536    8959 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 56ea780761c8"
	I0327 16:48:25.682548    8959 logs.go:123] Gathering logs for coredns [25c0222d62b6] ...
	I0327 16:48:25.682558    8959 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 25c0222d62b6"
	I0327 16:48:25.693947    8959 logs.go:123] Gathering logs for kube-controller-manager [fa00948ec909] ...
	I0327 16:48:25.693958    8959 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fa00948ec909"
	I0327 16:48:25.711633    8959 logs.go:123] Gathering logs for storage-provisioner [ffd8ac20b899] ...
	I0327 16:48:25.711643    8959 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ffd8ac20b899"
	I0327 16:48:25.723076    8959 logs.go:123] Gathering logs for storage-provisioner [396dc8c2621b] ...
	I0327 16:48:25.723089    8959 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 396dc8c2621b"
	I0327 16:48:25.734839    8959 logs.go:123] Gathering logs for Docker ...
	I0327 16:48:25.734850    8959 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0327 16:48:25.757690    8959 logs.go:123] Gathering logs for etcd [d74fd36a706c] ...
	I0327 16:48:25.757696    8959 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d74fd36a706c"
	I0327 16:48:25.772859    8959 logs.go:123] Gathering logs for kube-scheduler [62111993cdda] ...
	I0327 16:48:25.772873    8959 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 62111993cdda"
	I0327 16:48:25.784501    8959 logs.go:123] Gathering logs for kube-scheduler [c581a3f09ed3] ...
	I0327 16:48:25.784515    8959 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c581a3f09ed3"
	I0327 16:48:25.799263    8959 logs.go:123] Gathering logs for kubelet ...
	I0327 16:48:25.799274    8959 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0327 16:48:25.835357    8959 logs.go:138] Found kubelet problem: Mar 27 23:45:09 stopped-upgrade-017000 kubelet[1676]: W0327 23:45:09.218698    1676 reflector.go:324] object-"kube-system"/"kube-proxy": failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:stopped-upgrade-017000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'stopped-upgrade-017000' and this object
	W0327 16:48:25.835455    8959 logs.go:138] Found kubelet problem: Mar 27 23:45:09 stopped-upgrade-017000 kubelet[1676]: E0327 23:45:09.218752    1676 reflector.go:138] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:stopped-upgrade-017000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'stopped-upgrade-017000' and this object
	I0327 16:48:25.836454    8959 logs.go:123] Gathering logs for dmesg ...
	I0327 16:48:25.836461    8959 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0327 16:48:25.840721    8959 logs.go:123] Gathering logs for describe nodes ...
	I0327 16:48:25.840726    8959 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0327 16:48:25.876861    8959 logs.go:123] Gathering logs for kube-apiserver [aa3afc402172] ...
	I0327 16:48:25.876875    8959 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 aa3afc402172"
	I0327 16:48:25.890691    8959 logs.go:123] Gathering logs for kube-apiserver [e20a2e974eba] ...
	I0327 16:48:25.890702    8959 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e20a2e974eba"
	I0327 16:48:25.930510    8959 logs.go:123] Gathering logs for kube-proxy [0dc2c64bef5d] ...
	I0327 16:48:25.930520    8959 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0dc2c64bef5d"
	I0327 16:48:25.942956    8959 logs.go:123] Gathering logs for kube-controller-manager [f76badbaa6c8] ...
	I0327 16:48:25.942971    8959 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f76badbaa6c8"
	I0327 16:48:25.956528    8959 out.go:304] Setting ErrFile to fd 2...
	I0327 16:48:25.956540    8959 out.go:338] TERM=,COLORTERM=, which probably does not support color
	W0327 16:48:25.956565    8959 out.go:239] X Problems detected in kubelet:
	W0327 16:48:25.956569    8959 out.go:239]   Mar 27 23:45:09 stopped-upgrade-017000 kubelet[1676]: W0327 23:45:09.218698    1676 reflector.go:324] object-"kube-system"/"kube-proxy": failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:stopped-upgrade-017000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'stopped-upgrade-017000' and this object
	W0327 16:48:25.956573    8959 out.go:239]   Mar 27 23:45:09 stopped-upgrade-017000 kubelet[1676]: E0327 23:45:09.218752    1676 reflector.go:138] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:stopped-upgrade-017000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'stopped-upgrade-017000' and this object
	I0327 16:48:25.956577    8959 out.go:304] Setting ErrFile to fd 2...
	I0327 16:48:25.956581    8959 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0327 16:48:28.382116    8800 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0327 16:48:28.382385    8800 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0327 16:48:28.403747    8800 logs.go:276] 1 containers: [a8b68fa373a2]
	I0327 16:48:28.403855    8800 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0327 16:48:28.419719    8800 logs.go:276] 1 containers: [b2b36dcbf471]
	I0327 16:48:28.419801    8800 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0327 16:48:28.433164    8800 logs.go:276] 4 containers: [d8aeaa1c9b02 a88693a2d6c1 a1738554adca b5329cb28332]
	I0327 16:48:28.433234    8800 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0327 16:48:28.444523    8800 logs.go:276] 1 containers: [5aa5f8fb90cc]
	I0327 16:48:28.444590    8800 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0327 16:48:28.454663    8800 logs.go:276] 1 containers: [4195d96c1f8a]
	I0327 16:48:28.454730    8800 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0327 16:48:28.465631    8800 logs.go:276] 1 containers: [faf151c9cff5]
	I0327 16:48:28.465694    8800 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0327 16:48:28.475496    8800 logs.go:276] 0 containers: []
	W0327 16:48:28.475507    8800 logs.go:278] No container was found matching "kindnet"
	I0327 16:48:28.475571    8800 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0327 16:48:28.486089    8800 logs.go:276] 1 containers: [2c4ccf3e69ae]
	I0327 16:48:28.486109    8800 logs.go:123] Gathering logs for dmesg ...
	I0327 16:48:28.486114    8800 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0327 16:48:28.490708    8800 logs.go:123] Gathering logs for etcd [b2b36dcbf471] ...
	I0327 16:48:28.490718    8800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b2b36dcbf471"
	I0327 16:48:28.504803    8800 logs.go:123] Gathering logs for kube-proxy [4195d96c1f8a] ...
	I0327 16:48:28.504813    8800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4195d96c1f8a"
	I0327 16:48:28.516691    8800 logs.go:123] Gathering logs for kube-apiserver [a8b68fa373a2] ...
	I0327 16:48:28.516703    8800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a8b68fa373a2"
	I0327 16:48:28.531078    8800 logs.go:123] Gathering logs for coredns [a1738554adca] ...
	I0327 16:48:28.531090    8800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a1738554adca"
	I0327 16:48:28.548158    8800 logs.go:123] Gathering logs for coredns [b5329cb28332] ...
	I0327 16:48:28.548170    8800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b5329cb28332"
	I0327 16:48:28.560614    8800 logs.go:123] Gathering logs for kube-controller-manager [faf151c9cff5] ...
	I0327 16:48:28.560624    8800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 faf151c9cff5"
	I0327 16:48:28.578254    8800 logs.go:123] Gathering logs for describe nodes ...
	I0327 16:48:28.578263    8800 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0327 16:48:28.616580    8800 logs.go:123] Gathering logs for coredns [d8aeaa1c9b02] ...
	I0327 16:48:28.616591    8800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d8aeaa1c9b02"
	I0327 16:48:28.628342    8800 logs.go:123] Gathering logs for storage-provisioner [2c4ccf3e69ae] ...
	I0327 16:48:28.628352    8800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2c4ccf3e69ae"
	I0327 16:48:28.639623    8800 logs.go:123] Gathering logs for Docker ...
	I0327 16:48:28.639633    8800 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0327 16:48:28.664037    8800 logs.go:123] Gathering logs for kubelet ...
	I0327 16:48:28.664044    8800 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0327 16:48:28.697462    8800 logs.go:123] Gathering logs for coredns [a88693a2d6c1] ...
	I0327 16:48:28.697471    8800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a88693a2d6c1"
	I0327 16:48:28.712205    8800 logs.go:123] Gathering logs for kube-scheduler [5aa5f8fb90cc] ...
	I0327 16:48:28.712217    8800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5aa5f8fb90cc"
	I0327 16:48:28.726971    8800 logs.go:123] Gathering logs for container status ...
	I0327 16:48:28.726982    8800 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0327 16:48:31.244164    8800 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0327 16:48:36.244365    8800 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0327 16:48:36.244528    8800 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0327 16:48:36.255512    8800 logs.go:276] 1 containers: [a8b68fa373a2]
	I0327 16:48:36.255586    8800 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0327 16:48:36.265695    8800 logs.go:276] 1 containers: [b2b36dcbf471]
	I0327 16:48:36.265758    8800 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0327 16:48:36.276173    8800 logs.go:276] 4 containers: [d8aeaa1c9b02 a88693a2d6c1 a1738554adca b5329cb28332]
	I0327 16:48:36.276247    8800 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0327 16:48:36.286852    8800 logs.go:276] 1 containers: [5aa5f8fb90cc]
	I0327 16:48:36.286925    8800 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0327 16:48:36.297266    8800 logs.go:276] 1 containers: [4195d96c1f8a]
	I0327 16:48:36.297330    8800 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0327 16:48:36.307767    8800 logs.go:276] 1 containers: [faf151c9cff5]
	I0327 16:48:36.307833    8800 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0327 16:48:36.317659    8800 logs.go:276] 0 containers: []
	W0327 16:48:36.317671    8800 logs.go:278] No container was found matching "kindnet"
	I0327 16:48:36.317729    8800 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0327 16:48:36.329049    8800 logs.go:276] 1 containers: [2c4ccf3e69ae]
	I0327 16:48:36.329065    8800 logs.go:123] Gathering logs for Docker ...
	I0327 16:48:36.329069    8800 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0327 16:48:36.354437    8800 logs.go:123] Gathering logs for dmesg ...
	I0327 16:48:36.354446    8800 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0327 16:48:36.358592    8800 logs.go:123] Gathering logs for coredns [a1738554adca] ...
	I0327 16:48:36.358598    8800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a1738554adca"
	I0327 16:48:36.373787    8800 logs.go:123] Gathering logs for kube-apiserver [a8b68fa373a2] ...
	I0327 16:48:36.373798    8800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a8b68fa373a2"
	I0327 16:48:36.388835    8800 logs.go:123] Gathering logs for coredns [d8aeaa1c9b02] ...
	I0327 16:48:36.388845    8800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d8aeaa1c9b02"
	I0327 16:48:36.404512    8800 logs.go:123] Gathering logs for kube-proxy [4195d96c1f8a] ...
	I0327 16:48:36.404523    8800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4195d96c1f8a"
	I0327 16:48:36.415938    8800 logs.go:123] Gathering logs for container status ...
	I0327 16:48:36.415947    8800 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0327 16:48:36.427390    8800 logs.go:123] Gathering logs for describe nodes ...
	I0327 16:48:36.427402    8800 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0327 16:48:36.463936    8800 logs.go:123] Gathering logs for etcd [b2b36dcbf471] ...
	I0327 16:48:36.463948    8800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b2b36dcbf471"
	I0327 16:48:36.478118    8800 logs.go:123] Gathering logs for coredns [a88693a2d6c1] ...
	I0327 16:48:36.478129    8800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a88693a2d6c1"
	I0327 16:48:36.490107    8800 logs.go:123] Gathering logs for coredns [b5329cb28332] ...
	I0327 16:48:36.490118    8800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b5329cb28332"
	I0327 16:48:36.503088    8800 logs.go:123] Gathering logs for kube-scheduler [5aa5f8fb90cc] ...
	I0327 16:48:36.503098    8800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5aa5f8fb90cc"
	I0327 16:48:36.516915    8800 logs.go:123] Gathering logs for kube-controller-manager [faf151c9cff5] ...
	I0327 16:48:36.516929    8800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 faf151c9cff5"
	I0327 16:48:36.534060    8800 logs.go:123] Gathering logs for storage-provisioner [2c4ccf3e69ae] ...
	I0327 16:48:36.534070    8800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2c4ccf3e69ae"
	I0327 16:48:36.549122    8800 logs.go:123] Gathering logs for kubelet ...
	I0327 16:48:36.549133    8800 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0327 16:48:35.958717    8959 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0327 16:48:39.085935    8800 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0327 16:48:40.961190    8959 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0327 16:48:40.961301    8959 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0327 16:48:40.983422    8959 logs.go:276] 2 containers: [aa3afc402172 e20a2e974eba]
	I0327 16:48:40.983506    8959 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0327 16:48:40.994899    8959 logs.go:276] 2 containers: [d74fd36a706c 56ea780761c8]
	I0327 16:48:40.994973    8959 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0327 16:48:41.005870    8959 logs.go:276] 1 containers: [25c0222d62b6]
	I0327 16:48:41.005947    8959 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0327 16:48:41.017097    8959 logs.go:276] 2 containers: [62111993cdda c581a3f09ed3]
	I0327 16:48:41.017165    8959 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0327 16:48:41.031062    8959 logs.go:276] 1 containers: [0dc2c64bef5d]
	I0327 16:48:41.031135    8959 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0327 16:48:41.041797    8959 logs.go:276] 2 containers: [fa00948ec909 f76badbaa6c8]
	I0327 16:48:41.041861    8959 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0327 16:48:41.052604    8959 logs.go:276] 0 containers: []
	W0327 16:48:41.052618    8959 logs.go:278] No container was found matching "kindnet"
	I0327 16:48:41.052672    8959 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0327 16:48:41.066373    8959 logs.go:276] 2 containers: [396dc8c2621b ffd8ac20b899]
	I0327 16:48:41.066389    8959 logs.go:123] Gathering logs for container status ...
	I0327 16:48:41.066396    8959 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0327 16:48:41.078090    8959 logs.go:123] Gathering logs for describe nodes ...
	I0327 16:48:41.078102    8959 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0327 16:48:41.116051    8959 logs.go:123] Gathering logs for kube-scheduler [c581a3f09ed3] ...
	I0327 16:48:41.116068    8959 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c581a3f09ed3"
	I0327 16:48:41.131546    8959 logs.go:123] Gathering logs for storage-provisioner [ffd8ac20b899] ...
	I0327 16:48:41.131558    8959 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ffd8ac20b899"
	I0327 16:48:41.142555    8959 logs.go:123] Gathering logs for storage-provisioner [396dc8c2621b] ...
	I0327 16:48:41.142567    8959 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 396dc8c2621b"
	I0327 16:48:41.154109    8959 logs.go:123] Gathering logs for Docker ...
	I0327 16:48:41.154121    8959 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0327 16:48:41.175683    8959 logs.go:123] Gathering logs for kubelet ...
	I0327 16:48:41.175693    8959 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0327 16:48:41.211176    8959 logs.go:138] Found kubelet problem: Mar 27 23:45:09 stopped-upgrade-017000 kubelet[1676]: W0327 23:45:09.218698    1676 reflector.go:324] object-"kube-system"/"kube-proxy": failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:stopped-upgrade-017000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'stopped-upgrade-017000' and this object
	W0327 16:48:41.211275    8959 logs.go:138] Found kubelet problem: Mar 27 23:45:09 stopped-upgrade-017000 kubelet[1676]: E0327 23:45:09.218752    1676 reflector.go:138] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:stopped-upgrade-017000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'stopped-upgrade-017000' and this object
	I0327 16:48:41.212273    8959 logs.go:123] Gathering logs for kube-apiserver [aa3afc402172] ...
	I0327 16:48:41.212279    8959 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 aa3afc402172"
	I0327 16:48:41.231075    8959 logs.go:123] Gathering logs for etcd [56ea780761c8] ...
	I0327 16:48:41.231091    8959 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 56ea780761c8"
	I0327 16:48:41.245830    8959 logs.go:123] Gathering logs for kube-scheduler [62111993cdda] ...
	I0327 16:48:41.245840    8959 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 62111993cdda"
	I0327 16:48:41.257867    8959 logs.go:123] Gathering logs for kube-controller-manager [fa00948ec909] ...
	I0327 16:48:41.257876    8959 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fa00948ec909"
	I0327 16:48:41.278102    8959 logs.go:123] Gathering logs for coredns [25c0222d62b6] ...
	I0327 16:48:41.278113    8959 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 25c0222d62b6"
	I0327 16:48:41.291986    8959 logs.go:123] Gathering logs for kube-proxy [0dc2c64bef5d] ...
	I0327 16:48:41.291996    8959 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0dc2c64bef5d"
	I0327 16:48:41.303923    8959 logs.go:123] Gathering logs for kube-controller-manager [f76badbaa6c8] ...
	I0327 16:48:41.303934    8959 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f76badbaa6c8"
	I0327 16:48:41.317407    8959 logs.go:123] Gathering logs for dmesg ...
	I0327 16:48:41.317416    8959 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0327 16:48:41.321738    8959 logs.go:123] Gathering logs for kube-apiserver [e20a2e974eba] ...
	I0327 16:48:41.321745    8959 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e20a2e974eba"
	I0327 16:48:41.358978    8959 logs.go:123] Gathering logs for etcd [d74fd36a706c] ...
	I0327 16:48:41.358988    8959 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d74fd36a706c"
	I0327 16:48:41.379581    8959 out.go:304] Setting ErrFile to fd 2...
	I0327 16:48:41.379600    8959 out.go:338] TERM=,COLORTERM=, which probably does not support color
	W0327 16:48:41.379630    8959 out.go:239] X Problems detected in kubelet:
	W0327 16:48:41.379636    8959 out.go:239]   Mar 27 23:45:09 stopped-upgrade-017000 kubelet[1676]: W0327 23:45:09.218698    1676 reflector.go:324] object-"kube-system"/"kube-proxy": failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:stopped-upgrade-017000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'stopped-upgrade-017000' and this object
	W0327 16:48:41.379640    8959 out.go:239]   Mar 27 23:45:09 stopped-upgrade-017000 kubelet[1676]: E0327 23:45:09.218752    1676 reflector.go:138] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:stopped-upgrade-017000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'stopped-upgrade-017000' and this object
	I0327 16:48:41.379647    8959 out.go:304] Setting ErrFile to fd 2...
	I0327 16:48:41.379651    8959 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0327 16:48:44.088066    8800 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0327 16:48:44.088295    8800 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0327 16:48:44.111659    8800 logs.go:276] 1 containers: [a8b68fa373a2]
	I0327 16:48:44.111774    8800 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0327 16:48:44.127332    8800 logs.go:276] 1 containers: [b2b36dcbf471]
	I0327 16:48:44.127412    8800 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0327 16:48:44.144177    8800 logs.go:276] 4 containers: [d8aeaa1c9b02 a88693a2d6c1 a1738554adca b5329cb28332]
	I0327 16:48:44.144247    8800 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0327 16:48:44.154979    8800 logs.go:276] 1 containers: [5aa5f8fb90cc]
	I0327 16:48:44.155045    8800 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0327 16:48:44.165188    8800 logs.go:276] 1 containers: [4195d96c1f8a]
	I0327 16:48:44.165248    8800 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0327 16:48:44.175923    8800 logs.go:276] 1 containers: [faf151c9cff5]
	I0327 16:48:44.176000    8800 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0327 16:48:44.185854    8800 logs.go:276] 0 containers: []
	W0327 16:48:44.185868    8800 logs.go:278] No container was found matching "kindnet"
	I0327 16:48:44.185926    8800 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0327 16:48:44.195918    8800 logs.go:276] 1 containers: [2c4ccf3e69ae]
	I0327 16:48:44.195936    8800 logs.go:123] Gathering logs for etcd [b2b36dcbf471] ...
	I0327 16:48:44.195941    8800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b2b36dcbf471"
	I0327 16:48:44.209778    8800 logs.go:123] Gathering logs for coredns [d8aeaa1c9b02] ...
	I0327 16:48:44.209790    8800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d8aeaa1c9b02"
	I0327 16:48:44.234104    8800 logs.go:123] Gathering logs for coredns [a88693a2d6c1] ...
	I0327 16:48:44.234117    8800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a88693a2d6c1"
	I0327 16:48:44.265293    8800 logs.go:123] Gathering logs for kube-scheduler [5aa5f8fb90cc] ...
	I0327 16:48:44.265307    8800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5aa5f8fb90cc"
	I0327 16:48:44.279362    8800 logs.go:123] Gathering logs for kubelet ...
	I0327 16:48:44.279371    8800 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0327 16:48:44.311963    8800 logs.go:123] Gathering logs for storage-provisioner [2c4ccf3e69ae] ...
	I0327 16:48:44.311971    8800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2c4ccf3e69ae"
	I0327 16:48:44.324384    8800 logs.go:123] Gathering logs for Docker ...
	I0327 16:48:44.324394    8800 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0327 16:48:44.349681    8800 logs.go:123] Gathering logs for container status ...
	I0327 16:48:44.349688    8800 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0327 16:48:44.361292    8800 logs.go:123] Gathering logs for dmesg ...
	I0327 16:48:44.361302    8800 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0327 16:48:44.366055    8800 logs.go:123] Gathering logs for describe nodes ...
	I0327 16:48:44.366061    8800 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0327 16:48:44.401917    8800 logs.go:123] Gathering logs for coredns [a1738554adca] ...
	I0327 16:48:44.401928    8800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a1738554adca"
	I0327 16:48:44.414061    8800 logs.go:123] Gathering logs for kube-controller-manager [faf151c9cff5] ...
	I0327 16:48:44.414071    8800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 faf151c9cff5"
	I0327 16:48:44.431301    8800 logs.go:123] Gathering logs for kube-apiserver [a8b68fa373a2] ...
	I0327 16:48:44.431311    8800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a8b68fa373a2"
	I0327 16:48:44.445351    8800 logs.go:123] Gathering logs for coredns [b5329cb28332] ...
	I0327 16:48:44.445362    8800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b5329cb28332"
	I0327 16:48:44.461502    8800 logs.go:123] Gathering logs for kube-proxy [4195d96c1f8a] ...
	I0327 16:48:44.461513    8800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4195d96c1f8a"
	I0327 16:48:46.975416    8800 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0327 16:48:51.978009    8800 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0327 16:48:51.978416    8800 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0327 16:48:52.017373    8800 logs.go:276] 1 containers: [a8b68fa373a2]
	I0327 16:48:52.017510    8800 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0327 16:48:52.038741    8800 logs.go:276] 1 containers: [b2b36dcbf471]
	I0327 16:48:52.038841    8800 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0327 16:48:52.054039    8800 logs.go:276] 4 containers: [d8aeaa1c9b02 a88693a2d6c1 a1738554adca b5329cb28332]
	I0327 16:48:52.054128    8800 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0327 16:48:52.066402    8800 logs.go:276] 1 containers: [5aa5f8fb90cc]
	I0327 16:48:52.066469    8800 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0327 16:48:52.077349    8800 logs.go:276] 1 containers: [4195d96c1f8a]
	I0327 16:48:52.077423    8800 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0327 16:48:52.088238    8800 logs.go:276] 1 containers: [faf151c9cff5]
	I0327 16:48:52.088306    8800 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0327 16:48:52.099222    8800 logs.go:276] 0 containers: []
	W0327 16:48:52.099233    8800 logs.go:278] No container was found matching "kindnet"
	I0327 16:48:52.099289    8800 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0327 16:48:52.110214    8800 logs.go:276] 1 containers: [2c4ccf3e69ae]
	I0327 16:48:52.110231    8800 logs.go:123] Gathering logs for coredns [d8aeaa1c9b02] ...
	I0327 16:48:52.110236    8800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d8aeaa1c9b02"
	I0327 16:48:52.122134    8800 logs.go:123] Gathering logs for coredns [a88693a2d6c1] ...
	I0327 16:48:52.122147    8800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a88693a2d6c1"
	I0327 16:48:52.138050    8800 logs.go:123] Gathering logs for kube-proxy [4195d96c1f8a] ...
	I0327 16:48:52.138062    8800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4195d96c1f8a"
	I0327 16:48:52.150064    8800 logs.go:123] Gathering logs for Docker ...
	I0327 16:48:52.150075    8800 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0327 16:48:52.174328    8800 logs.go:123] Gathering logs for container status ...
	I0327 16:48:52.174337    8800 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0327 16:48:52.185631    8800 logs.go:123] Gathering logs for dmesg ...
	I0327 16:48:52.185641    8800 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0327 16:48:52.189995    8800 logs.go:123] Gathering logs for describe nodes ...
	I0327 16:48:52.190001    8800 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0327 16:48:52.227072    8800 logs.go:123] Gathering logs for kube-apiserver [a8b68fa373a2] ...
	I0327 16:48:52.227085    8800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a8b68fa373a2"
	I0327 16:48:52.241772    8800 logs.go:123] Gathering logs for coredns [a1738554adca] ...
	I0327 16:48:52.241785    8800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a1738554adca"
	I0327 16:48:52.254746    8800 logs.go:123] Gathering logs for storage-provisioner [2c4ccf3e69ae] ...
	I0327 16:48:52.254758    8800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2c4ccf3e69ae"
	I0327 16:48:52.271242    8800 logs.go:123] Gathering logs for coredns [b5329cb28332] ...
	I0327 16:48:52.271252    8800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b5329cb28332"
	I0327 16:48:52.283165    8800 logs.go:123] Gathering logs for kube-scheduler [5aa5f8fb90cc] ...
	I0327 16:48:52.283178    8800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5aa5f8fb90cc"
	I0327 16:48:52.305499    8800 logs.go:123] Gathering logs for kubelet ...
	I0327 16:48:52.305510    8800 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0327 16:48:52.339478    8800 logs.go:123] Gathering logs for etcd [b2b36dcbf471] ...
	I0327 16:48:52.339490    8800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b2b36dcbf471"
	I0327 16:48:52.354092    8800 logs.go:123] Gathering logs for kube-controller-manager [faf151c9cff5] ...
	I0327 16:48:52.354104    8800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 faf151c9cff5"
	I0327 16:48:51.383351    8959 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0327 16:48:56.385413    8959 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0327 16:48:56.385495    8959 kubeadm.go:591] duration metric: took 4m7.206701625s to restartPrimaryControlPlane
	W0327 16:48:56.385544    8959 out.go:239] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0327 16:48:56.385567    8959 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm reset --cri-socket /var/run/cri-dockerd.sock --force"
	I0327 16:48:57.426024    8959 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm reset --cri-socket /var/run/cri-dockerd.sock --force": (1.04047825s)
	I0327 16:48:57.426106    8959 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0327 16:48:57.430968    8959 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0327 16:48:57.433654    8959 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0327 16:48:57.436470    8959 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0327 16:48:57.436478    8959 kubeadm.go:156] found existing configuration files:
	
	I0327 16:48:57.436503    8959 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:51421 /etc/kubernetes/admin.conf
	I0327 16:48:57.439014    8959 kubeadm.go:162] "https://control-plane.minikube.internal:51421" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:51421 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0327 16:48:57.439037    8959 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0327 16:48:57.441647    8959 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:51421 /etc/kubernetes/kubelet.conf
	I0327 16:48:57.444700    8959 kubeadm.go:162] "https://control-plane.minikube.internal:51421" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:51421 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0327 16:48:57.444724    8959 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0327 16:48:57.448112    8959 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:51421 /etc/kubernetes/controller-manager.conf
	I0327 16:48:57.450916    8959 kubeadm.go:162] "https://control-plane.minikube.internal:51421" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:51421 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0327 16:48:57.450937    8959 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0327 16:48:57.453451    8959 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:51421 /etc/kubernetes/scheduler.conf
	I0327 16:48:57.456483    8959 kubeadm.go:162] "https://control-plane.minikube.internal:51421" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:51421 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0327 16:48:57.456504    8959 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
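
The four grep/rm pairs above are minikube's stale-kubeconfig check: each file under /etc/kubernetes is searched for the expected control-plane endpoint and removed when the check fails (here grep exits with status 2 because none of the files exist, so all four are cleared before kubeadm init regenerates them). Condensed into one loop with the endpoint from this run:

    for f in admin kubelet controller-manager scheduler; do
      sudo grep -q "https://control-plane.minikube.internal:51421" "/etc/kubernetes/${f}.conf" \
        || sudo rm -f "/etc/kubernetes/${f}.conf"
    done
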
	I0327 16:48:57.459544    8959 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0327 16:48:57.477398    8959 kubeadm.go:309] [init] Using Kubernetes version: v1.24.1
	I0327 16:48:57.477435    8959 kubeadm.go:309] [preflight] Running pre-flight checks
	I0327 16:48:57.528531    8959 kubeadm.go:309] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0327 16:48:57.528591    8959 kubeadm.go:309] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0327 16:48:57.528644    8959 kubeadm.go:309] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0327 16:48:57.581082    8959 kubeadm.go:309] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0327 16:48:54.874182    8800 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0327 16:48:57.586244    8959 out.go:204]   - Generating certificates and keys ...
	I0327 16:48:57.586280    8959 kubeadm.go:309] [certs] Using existing ca certificate authority
	I0327 16:48:57.586310    8959 kubeadm.go:309] [certs] Using existing apiserver certificate and key on disk
	I0327 16:48:57.586352    8959 kubeadm.go:309] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0327 16:48:57.586401    8959 kubeadm.go:309] [certs] Using existing front-proxy-ca certificate authority
	I0327 16:48:57.586434    8959 kubeadm.go:309] [certs] Using existing front-proxy-client certificate and key on disk
	I0327 16:48:57.586459    8959 kubeadm.go:309] [certs] Using existing etcd/ca certificate authority
	I0327 16:48:57.586486    8959 kubeadm.go:309] [certs] Using existing etcd/server certificate and key on disk
	I0327 16:48:57.586516    8959 kubeadm.go:309] [certs] Using existing etcd/peer certificate and key on disk
	I0327 16:48:57.586555    8959 kubeadm.go:309] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0327 16:48:57.586599    8959 kubeadm.go:309] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0327 16:48:57.586629    8959 kubeadm.go:309] [certs] Using the existing "sa" key
	I0327 16:48:57.586675    8959 kubeadm.go:309] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0327 16:48:57.742816    8959 kubeadm.go:309] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0327 16:48:57.795684    8959 kubeadm.go:309] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0327 16:48:57.894115    8959 kubeadm.go:309] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0327 16:48:58.061954    8959 kubeadm.go:309] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0327 16:48:58.092231    8959 kubeadm.go:309] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0327 16:48:58.092587    8959 kubeadm.go:309] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0327 16:48:58.092611    8959 kubeadm.go:309] [kubelet-start] Starting the kubelet
	I0327 16:48:58.182370    8959 kubeadm.go:309] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0327 16:48:58.184211    8959 out.go:204]   - Booting up control plane ...
	I0327 16:48:58.184255    8959 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0327 16:48:58.184301    8959 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0327 16:48:58.186492    8959 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0327 16:48:58.186885    8959 kubeadm.go:309] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0327 16:48:58.187789    8959 kubeadm.go:309] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
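
The [control-plane] and [etcd] phases only write static Pod manifests; the kubelet watches /etc/kubernetes/manifests and launches the corresponding containers itself, which is why the wait-control-plane phase allows up to 4m0s. One way to watch that handoff from inside the guest (a sketch, not taken from this log):

    # Manifests kubeadm just wrote ...
    ls /etc/kubernetes/manifests
    # ... and the containers the kubelet started from them.
    sudo docker ps --filter=name=k8s_
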
	I0327 16:48:59.876352    8800 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0327 16:48:59.876474    8800 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0327 16:48:59.887623    8800 logs.go:276] 1 containers: [a8b68fa373a2]
	I0327 16:48:59.887729    8800 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0327 16:48:59.899462    8800 logs.go:276] 1 containers: [b2b36dcbf471]
	I0327 16:48:59.899531    8800 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0327 16:48:59.911024    8800 logs.go:276] 4 containers: [d8aeaa1c9b02 a88693a2d6c1 a1738554adca b5329cb28332]
	I0327 16:48:59.911104    8800 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0327 16:48:59.922848    8800 logs.go:276] 1 containers: [5aa5f8fb90cc]
	I0327 16:48:59.922921    8800 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0327 16:48:59.935747    8800 logs.go:276] 1 containers: [4195d96c1f8a]
	I0327 16:48:59.935818    8800 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0327 16:48:59.947475    8800 logs.go:276] 1 containers: [faf151c9cff5]
	I0327 16:48:59.947549    8800 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0327 16:48:59.958186    8800 logs.go:276] 0 containers: []
	W0327 16:48:59.958197    8800 logs.go:278] No container was found matching "kindnet"
	I0327 16:48:59.958257    8800 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0327 16:48:59.969427    8800 logs.go:276] 1 containers: [2c4ccf3e69ae]
	I0327 16:48:59.969443    8800 logs.go:123] Gathering logs for describe nodes ...
	I0327 16:48:59.969448    8800 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0327 16:49:00.007690    8800 logs.go:123] Gathering logs for kube-apiserver [a8b68fa373a2] ...
	I0327 16:49:00.007703    8800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a8b68fa373a2"
	I0327 16:49:00.023118    8800 logs.go:123] Gathering logs for coredns [b5329cb28332] ...
	I0327 16:49:00.023131    8800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b5329cb28332"
	I0327 16:49:00.036582    8800 logs.go:123] Gathering logs for kube-scheduler [5aa5f8fb90cc] ...
	I0327 16:49:00.036592    8800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5aa5f8fb90cc"
	I0327 16:49:00.052259    8800 logs.go:123] Gathering logs for Docker ...
	I0327 16:49:00.052270    8800 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0327 16:49:00.079073    8800 logs.go:123] Gathering logs for coredns [d8aeaa1c9b02] ...
	I0327 16:49:00.079086    8800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d8aeaa1c9b02"
	I0327 16:49:00.091170    8800 logs.go:123] Gathering logs for container status ...
	I0327 16:49:00.091185    8800 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0327 16:49:00.105764    8800 logs.go:123] Gathering logs for dmesg ...
	I0327 16:49:00.105781    8800 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0327 16:49:00.110403    8800 logs.go:123] Gathering logs for etcd [b2b36dcbf471] ...
	I0327 16:49:00.110415    8800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b2b36dcbf471"
	I0327 16:49:00.124958    8800 logs.go:123] Gathering logs for coredns [a1738554adca] ...
	I0327 16:49:00.124971    8800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a1738554adca"
	I0327 16:49:00.138855    8800 logs.go:123] Gathering logs for kube-controller-manager [faf151c9cff5] ...
	I0327 16:49:00.138867    8800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 faf151c9cff5"
	I0327 16:49:00.157792    8800 logs.go:123] Gathering logs for storage-provisioner [2c4ccf3e69ae] ...
	I0327 16:49:00.157806    8800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2c4ccf3e69ae"
	I0327 16:49:00.171031    8800 logs.go:123] Gathering logs for kubelet ...
	I0327 16:49:00.171044    8800 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0327 16:49:00.208581    8800 logs.go:123] Gathering logs for coredns [a88693a2d6c1] ...
	I0327 16:49:00.208603    8800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a88693a2d6c1"
	I0327 16:49:00.221015    8800 logs.go:123] Gathering logs for kube-proxy [4195d96c1f8a] ...
	I0327 16:49:00.221028    8800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4195d96c1f8a"
	I0327 16:49:02.740420    8800 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0327 16:49:02.189334    8959 kubeadm.go:309] [apiclient] All control plane components are healthy after 4.001352 seconds
	I0327 16:49:02.189399    8959 kubeadm.go:309] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0327 16:49:02.192694    8959 kubeadm.go:309] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0327 16:49:02.703240    8959 kubeadm.go:309] [upload-certs] Skipping phase. Please see --upload-certs
	I0327 16:49:02.703414    8959 kubeadm.go:309] [mark-control-plane] Marking the node stopped-upgrade-017000 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0327 16:49:03.206560    8959 kubeadm.go:309] [bootstrap-token] Using token: jf7d6m.20yewdtyrk7ztvoa
	I0327 16:49:03.212955    8959 out.go:204]   - Configuring RBAC rules ...
	I0327 16:49:03.213019    8959 kubeadm.go:309] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0327 16:49:03.213064    8959 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0327 16:49:03.218532    8959 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0327 16:49:03.219556    8959 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0327 16:49:03.220146    8959 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0327 16:49:03.220985    8959 kubeadm.go:309] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0327 16:49:03.224107    8959 kubeadm.go:309] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0327 16:49:03.388861    8959 kubeadm.go:309] [addons] Applied essential addon: CoreDNS
	I0327 16:49:03.612724    8959 kubeadm.go:309] [addons] Applied essential addon: kube-proxy
	I0327 16:49:03.613095    8959 kubeadm.go:309] 
	I0327 16:49:03.613130    8959 kubeadm.go:309] Your Kubernetes control-plane has initialized successfully!
	I0327 16:49:03.613133    8959 kubeadm.go:309] 
	I0327 16:49:03.613176    8959 kubeadm.go:309] To start using your cluster, you need to run the following as a regular user:
	I0327 16:49:03.613185    8959 kubeadm.go:309] 
	I0327 16:49:03.613197    8959 kubeadm.go:309]   mkdir -p $HOME/.kube
	I0327 16:49:03.613230    8959 kubeadm.go:309]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0327 16:49:03.613258    8959 kubeadm.go:309]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0327 16:49:03.613263    8959 kubeadm.go:309] 
	I0327 16:49:03.613288    8959 kubeadm.go:309] Alternatively, if you are the root user, you can run:
	I0327 16:49:03.613296    8959 kubeadm.go:309] 
	I0327 16:49:03.613331    8959 kubeadm.go:309]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0327 16:49:03.613334    8959 kubeadm.go:309] 
	I0327 16:49:03.613364    8959 kubeadm.go:309] You should now deploy a pod network to the cluster.
	I0327 16:49:03.613399    8959 kubeadm.go:309] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0327 16:49:03.613442    8959 kubeadm.go:309]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0327 16:49:03.613446    8959 kubeadm.go:309] 
	I0327 16:49:03.613495    8959 kubeadm.go:309] You can now join any number of control-plane nodes by copying certificate authorities
	I0327 16:49:03.613550    8959 kubeadm.go:309] and service account keys on each node and then running the following as root:
	I0327 16:49:03.613555    8959 kubeadm.go:309] 
	I0327 16:49:03.613599    8959 kubeadm.go:309]   kubeadm join control-plane.minikube.internal:8443 --token jf7d6m.20yewdtyrk7ztvoa \
	I0327 16:49:03.613660    8959 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:8047b7e049f0384af96cc555849ef1f992fa8884768aff95c9a460200a82d884 \
	I0327 16:49:03.613672    8959 kubeadm.go:309] 	--control-plane 
	I0327 16:49:03.613675    8959 kubeadm.go:309] 
	I0327 16:49:03.613722    8959 kubeadm.go:309] Then you can join any number of worker nodes by running the following on each as root:
	I0327 16:49:03.613726    8959 kubeadm.go:309] 
	I0327 16:49:03.613766    8959 kubeadm.go:309] kubeadm join control-plane.minikube.internal:8443 --token jf7d6m.20yewdtyrk7ztvoa \
	I0327 16:49:03.613831    8959 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:8047b7e049f0384af96cc555849ef1f992fa8884768aff95c9a460200a82d884 
	I0327 16:49:03.614049    8959 kubeadm.go:309] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0327 16:49:03.614058    8959 cni.go:84] Creating CNI manager for ""
	I0327 16:49:03.614066    8959 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0327 16:49:03.618206    8959 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0327 16:49:03.626272    8959 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0327 16:49:03.629185    8959 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
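
The 457-byte payload written to /etc/cni/net.d/1-k8s.conflist is minikube's bridge CNI configuration. The log does not reproduce the file itself, so the following is only an illustrative bridge conflist of the same general shape (a "bridge" plugin with host-local IPAM; the subnet and field values are assumptions, not the bytes minikube wrote):

    sudo tee /etc/cni/net.d/1-k8s.conflist >/dev/null <<'EOF'
    {
      "cniVersion": "0.3.1",
      "name": "bridge",
      "plugins": [
        {
          "type": "bridge",
          "bridge": "bridge",
          "isGateway": true,
          "ipMasq": true,
          "ipam": { "type": "host-local", "subnet": "10.244.0.0/16" }
        }
      ]
    }
    EOF
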
	I0327 16:49:03.634046    8959 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0327 16:49:03.634097    8959 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0327 16:49:03.634146    8959 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes stopped-upgrade-017000 minikube.k8s.io/updated_at=2024_03_27T16_49_03_0700 minikube.k8s.io/version=v1.33.0-beta.0 minikube.k8s.io/commit=1a940f980f77c9bfc8ef93678db8c1f49c9dd79d minikube.k8s.io/name=stopped-upgrade-017000 minikube.k8s.io/primary=true
	I0327 16:49:03.675422    8959 kubeadm.go:1107] duration metric: took 41.366667ms to wait for elevateKubeSystemPrivileges
	I0327 16:49:03.675437    8959 ops.go:34] apiserver oom_adj: -16
	W0327 16:49:03.675527    8959 kubeadm.go:286] apiserver tunnel failed: apiserver port not set
	I0327 16:49:03.675533    8959 kubeadm.go:393] duration metric: took 4m14.510520541s to StartCluster
	I0327 16:49:03.675542    8959 settings.go:142] acquiring lock: {Name:mk7a184fa834ec55a805b998fd083319e6561206 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0327 16:49:03.675626    8959 settings.go:150] Updating kubeconfig:  /Users/jenkins/minikube-integration/18485-6511/kubeconfig
	I0327 16:49:03.676025    8959 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/18485-6511/kubeconfig: {Name:mke46d0809919cfbe0118c5110926d6ce61bf373 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0327 16:49:03.676238    8959 start.go:234] Will wait 6m0s for node &{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0327 16:49:03.679238    8959 out.go:177] * Verifying Kubernetes components...
	I0327 16:49:03.676246    8959 addons.go:502] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false]
	I0327 16:49:03.676316    8959 config.go:182] Loaded profile config "stopped-upgrade-017000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0327 16:49:03.689261    8959 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0327 16:49:03.689281    8959 addons.go:69] Setting storage-provisioner=true in profile "stopped-upgrade-017000"
	I0327 16:49:03.689285    8959 addons.go:69] Setting default-storageclass=true in profile "stopped-upgrade-017000"
	I0327 16:49:03.689296    8959 addons.go:234] Setting addon storage-provisioner=true in "stopped-upgrade-017000"
	I0327 16:49:03.689299    8959 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "stopped-upgrade-017000"
	W0327 16:49:03.689300    8959 addons.go:243] addon storage-provisioner should already be in state true
	I0327 16:49:03.689323    8959 host.go:66] Checking if "stopped-upgrade-017000" exists ...
	I0327 16:49:03.689799    8959 retry.go:31] will retry after 671.004227ms: connect: dial unix /Users/jenkins/minikube-integration/18485-6511/.minikube/machines/stopped-upgrade-017000/monitor: connect: connection refused
	I0327 16:49:03.693177    8959 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0327 16:49:07.742527    8800 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0327 16:49:07.742716    8800 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0327 16:49:07.754864    8800 logs.go:276] 1 containers: [a8b68fa373a2]
	I0327 16:49:07.754945    8800 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0327 16:49:07.765513    8800 logs.go:276] 1 containers: [b2b36dcbf471]
	I0327 16:49:07.765584    8800 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0327 16:49:07.776297    8800 logs.go:276] 4 containers: [d8aeaa1c9b02 a88693a2d6c1 a1738554adca b5329cb28332]
	I0327 16:49:07.776363    8800 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0327 16:49:07.787576    8800 logs.go:276] 1 containers: [5aa5f8fb90cc]
	I0327 16:49:07.787649    8800 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0327 16:49:07.798360    8800 logs.go:276] 1 containers: [4195d96c1f8a]
	I0327 16:49:07.798430    8800 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0327 16:49:07.809543    8800 logs.go:276] 1 containers: [faf151c9cff5]
	I0327 16:49:07.809609    8800 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0327 16:49:07.820056    8800 logs.go:276] 0 containers: []
	W0327 16:49:07.820069    8800 logs.go:278] No container was found matching "kindnet"
	I0327 16:49:07.820125    8800 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0327 16:49:07.830796    8800 logs.go:276] 1 containers: [2c4ccf3e69ae]
	I0327 16:49:07.830814    8800 logs.go:123] Gathering logs for dmesg ...
	I0327 16:49:07.830829    8800 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0327 16:49:07.835502    8800 logs.go:123] Gathering logs for coredns [d8aeaa1c9b02] ...
	I0327 16:49:07.835511    8800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d8aeaa1c9b02"
	I0327 16:49:07.847967    8800 logs.go:123] Gathering logs for kube-scheduler [5aa5f8fb90cc] ...
	I0327 16:49:07.847978    8800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5aa5f8fb90cc"
	I0327 16:49:03.697229    8959 addons.go:426] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0327 16:49:03.697236    8959 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0327 16:49:03.697243    8959 sshutil.go:53] new ssh client: &{IP:localhost Port:51386 SSHKeyPath:/Users/jenkins/minikube-integration/18485-6511/.minikube/machines/stopped-upgrade-017000/id_rsa Username:docker}
	I0327 16:49:03.782643    8959 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0327 16:49:03.787600    8959 api_server.go:52] waiting for apiserver process to appear ...
	I0327 16:49:03.787650    8959 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0327 16:49:03.791658    8959 api_server.go:72] duration metric: took 115.413708ms to wait for apiserver process to appear ...
	I0327 16:49:03.791665    8959 api_server.go:88] waiting for apiserver healthz status ...
	I0327 16:49:03.791672    8959 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0327 16:49:03.829177    8959 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0327 16:49:04.363865    8959 kapi.go:59] client config for stopped-upgrade-017000: &rest.Config{Host:"https://10.0.2.15:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/18485-6511/.minikube/profiles/stopped-upgrade-017000/client.crt", KeyFile:"/Users/jenkins/minikube-integration/18485-6511/.minikube/profiles/stopped-upgrade-017000/client.key", CAFile:"/Users/jenkins/minikube-integration/18485-6511/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x103b96c70), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0327 16:49:04.363994    8959 addons.go:234] Setting addon default-storageclass=true in "stopped-upgrade-017000"
	W0327 16:49:04.364000    8959 addons.go:243] addon default-storageclass should already be in state true
	I0327 16:49:04.364011    8959 host.go:66] Checking if "stopped-upgrade-017000" exists ...
	I0327 16:49:04.364771    8959 addons.go:426] installing /etc/kubernetes/addons/storageclass.yaml
	I0327 16:49:04.364777    8959 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0327 16:49:04.364784    8959 sshutil.go:53] new ssh client: &{IP:localhost Port:51386 SSHKeyPath:/Users/jenkins/minikube-integration/18485-6511/.minikube/machines/stopped-upgrade-017000/id_rsa Username:docker}
	I0327 16:49:04.399189    8959 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
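
Both addons are enabled by the same mechanism: the manifest is copied into /etc/kubernetes/addons over SSH (the "scp memory" lines above) and applied with the bundled kubectl against the local kubeconfig. The storage-provisioner apply, exactly as the log runs it:

    sudo KUBECONFIG=/var/lib/minikube/kubeconfig \
      /var/lib/minikube/binaries/v1.24.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml

With the apiserver as unhealthy as the surrounding healthz probes show, any follow-up query for the resulting pod would stall the same way.
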
	I0327 16:49:07.862640    8800 logs.go:123] Gathering logs for kube-proxy [4195d96c1f8a] ...
	I0327 16:49:07.862653    8800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4195d96c1f8a"
	I0327 16:49:07.874409    8800 logs.go:123] Gathering logs for storage-provisioner [2c4ccf3e69ae] ...
	I0327 16:49:07.874419    8800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2c4ccf3e69ae"
	I0327 16:49:07.885678    8800 logs.go:123] Gathering logs for Docker ...
	I0327 16:49:07.885688    8800 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0327 16:49:07.909204    8800 logs.go:123] Gathering logs for container status ...
	I0327 16:49:07.909214    8800 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0327 16:49:07.920924    8800 logs.go:123] Gathering logs for etcd [b2b36dcbf471] ...
	I0327 16:49:07.920934    8800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b2b36dcbf471"
	I0327 16:49:07.934702    8800 logs.go:123] Gathering logs for coredns [b5329cb28332] ...
	I0327 16:49:07.934712    8800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b5329cb28332"
	I0327 16:49:07.946585    8800 logs.go:123] Gathering logs for kube-controller-manager [faf151c9cff5] ...
	I0327 16:49:07.946597    8800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 faf151c9cff5"
	I0327 16:49:07.965090    8800 logs.go:123] Gathering logs for kubelet ...
	I0327 16:49:07.965103    8800 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0327 16:49:07.999971    8800 logs.go:123] Gathering logs for describe nodes ...
	I0327 16:49:07.999979    8800 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0327 16:49:08.041681    8800 logs.go:123] Gathering logs for kube-apiserver [a8b68fa373a2] ...
	I0327 16:49:08.041692    8800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a8b68fa373a2"
	I0327 16:49:08.056571    8800 logs.go:123] Gathering logs for coredns [a88693a2d6c1] ...
	I0327 16:49:08.056582    8800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a88693a2d6c1"
	I0327 16:49:08.068720    8800 logs.go:123] Gathering logs for coredns [a1738554adca] ...
	I0327 16:49:08.068731    8800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a1738554adca"
	I0327 16:49:10.583972    8800 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0327 16:49:08.793652    8959 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0327 16:49:08.793712    8959 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0327 16:49:15.585982    8800 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0327 16:49:15.586127    8800 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0327 16:49:15.596667    8800 logs.go:276] 1 containers: [a8b68fa373a2]
	I0327 16:49:15.596727    8800 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0327 16:49:15.608230    8800 logs.go:276] 1 containers: [b2b36dcbf471]
	I0327 16:49:15.608324    8800 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0327 16:49:15.619339    8800 logs.go:276] 4 containers: [d8aeaa1c9b02 a88693a2d6c1 a1738554adca b5329cb28332]
	I0327 16:49:15.619411    8800 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0327 16:49:15.630175    8800 logs.go:276] 1 containers: [5aa5f8fb90cc]
	I0327 16:49:15.630256    8800 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0327 16:49:15.640859    8800 logs.go:276] 1 containers: [4195d96c1f8a]
	I0327 16:49:15.640928    8800 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0327 16:49:15.651496    8800 logs.go:276] 1 containers: [faf151c9cff5]
	I0327 16:49:15.651563    8800 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0327 16:49:15.667053    8800 logs.go:276] 0 containers: []
	W0327 16:49:15.667073    8800 logs.go:278] No container was found matching "kindnet"
	I0327 16:49:15.667127    8800 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0327 16:49:15.677869    8800 logs.go:276] 1 containers: [2c4ccf3e69ae]
	I0327 16:49:15.677885    8800 logs.go:123] Gathering logs for kube-scheduler [5aa5f8fb90cc] ...
	I0327 16:49:15.677889    8800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5aa5f8fb90cc"
	I0327 16:49:15.692169    8800 logs.go:123] Gathering logs for kube-controller-manager [faf151c9cff5] ...
	I0327 16:49:15.692180    8800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 faf151c9cff5"
	I0327 16:49:15.709690    8800 logs.go:123] Gathering logs for storage-provisioner [2c4ccf3e69ae] ...
	I0327 16:49:15.709705    8800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2c4ccf3e69ae"
	I0327 16:49:15.721273    8800 logs.go:123] Gathering logs for container status ...
	I0327 16:49:15.721287    8800 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0327 16:49:15.732791    8800 logs.go:123] Gathering logs for dmesg ...
	I0327 16:49:15.732803    8800 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0327 16:49:15.737334    8800 logs.go:123] Gathering logs for describe nodes ...
	I0327 16:49:15.737343    8800 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0327 16:49:15.775921    8800 logs.go:123] Gathering logs for coredns [b5329cb28332] ...
	I0327 16:49:15.775930    8800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b5329cb28332"
	I0327 16:49:15.787267    8800 logs.go:123] Gathering logs for Docker ...
	I0327 16:49:15.787278    8800 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0327 16:49:15.811940    8800 logs.go:123] Gathering logs for kubelet ...
	I0327 16:49:15.811947    8800 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0327 16:49:15.847147    8800 logs.go:123] Gathering logs for coredns [a1738554adca] ...
	I0327 16:49:15.847160    8800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a1738554adca"
	I0327 16:49:15.859348    8800 logs.go:123] Gathering logs for kube-proxy [4195d96c1f8a] ...
	I0327 16:49:15.859360    8800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4195d96c1f8a"
	I0327 16:49:15.877601    8800 logs.go:123] Gathering logs for kube-apiserver [a8b68fa373a2] ...
	I0327 16:49:15.877612    8800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a8b68fa373a2"
	I0327 16:49:15.896309    8800 logs.go:123] Gathering logs for etcd [b2b36dcbf471] ...
	I0327 16:49:15.896319    8800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b2b36dcbf471"
	I0327 16:49:15.910327    8800 logs.go:123] Gathering logs for coredns [d8aeaa1c9b02] ...
	I0327 16:49:15.910336    8800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d8aeaa1c9b02"
	I0327 16:49:15.922035    8800 logs.go:123] Gathering logs for coredns [a88693a2d6c1] ...
	I0327 16:49:15.922044    8800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a88693a2d6c1"
	I0327 16:49:13.793877    8959 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0327 16:49:13.793977    8959 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0327 16:49:18.435012    8800 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0327 16:49:18.794420    8959 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0327 16:49:18.794466    8959 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0327 16:49:23.435242    8800 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0327 16:49:23.435438    8800 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0327 16:49:23.465132    8800 logs.go:276] 1 containers: [a8b68fa373a2]
	I0327 16:49:23.465215    8800 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0327 16:49:23.483210    8800 logs.go:276] 1 containers: [b2b36dcbf471]
	I0327 16:49:23.483314    8800 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0327 16:49:23.498849    8800 logs.go:276] 4 containers: [d8aeaa1c9b02 a88693a2d6c1 a1738554adca b5329cb28332]
	I0327 16:49:23.498924    8800 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0327 16:49:23.510354    8800 logs.go:276] 1 containers: [5aa5f8fb90cc]
	I0327 16:49:23.510429    8800 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0327 16:49:23.521069    8800 logs.go:276] 1 containers: [4195d96c1f8a]
	I0327 16:49:23.521139    8800 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0327 16:49:23.531400    8800 logs.go:276] 1 containers: [faf151c9cff5]
	I0327 16:49:23.531476    8800 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0327 16:49:23.541899    8800 logs.go:276] 0 containers: []
	W0327 16:49:23.541912    8800 logs.go:278] No container was found matching "kindnet"
	I0327 16:49:23.541974    8800 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0327 16:49:23.552475    8800 logs.go:276] 1 containers: [2c4ccf3e69ae]
	I0327 16:49:23.552494    8800 logs.go:123] Gathering logs for coredns [a1738554adca] ...
	I0327 16:49:23.552499    8800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a1738554adca"
	I0327 16:49:23.564529    8800 logs.go:123] Gathering logs for kube-scheduler [5aa5f8fb90cc] ...
	I0327 16:49:23.564539    8800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5aa5f8fb90cc"
	I0327 16:49:23.579119    8800 logs.go:123] Gathering logs for kube-proxy [4195d96c1f8a] ...
	I0327 16:49:23.579129    8800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4195d96c1f8a"
	I0327 16:49:23.591520    8800 logs.go:123] Gathering logs for storage-provisioner [2c4ccf3e69ae] ...
	I0327 16:49:23.591531    8800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2c4ccf3e69ae"
	I0327 16:49:23.603072    8800 logs.go:123] Gathering logs for container status ...
	I0327 16:49:23.603083    8800 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0327 16:49:23.614520    8800 logs.go:123] Gathering logs for describe nodes ...
	I0327 16:49:23.614533    8800 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0327 16:49:23.651130    8800 logs.go:123] Gathering logs for coredns [d8aeaa1c9b02] ...
	I0327 16:49:23.651144    8800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d8aeaa1c9b02"
	I0327 16:49:23.663915    8800 logs.go:123] Gathering logs for coredns [a88693a2d6c1] ...
	I0327 16:49:23.663926    8800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a88693a2d6c1"
	I0327 16:49:23.676067    8800 logs.go:123] Gathering logs for Docker ...
	I0327 16:49:23.676078    8800 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0327 16:49:23.700737    8800 logs.go:123] Gathering logs for coredns [b5329cb28332] ...
	I0327 16:49:23.700746    8800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b5329cb28332"
	I0327 16:49:23.712833    8800 logs.go:123] Gathering logs for kubelet ...
	I0327 16:49:23.712842    8800 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0327 16:49:23.745823    8800 logs.go:123] Gathering logs for dmesg ...
	I0327 16:49:23.745834    8800 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0327 16:49:23.749923    8800 logs.go:123] Gathering logs for kube-apiserver [a8b68fa373a2] ...
	I0327 16:49:23.749931    8800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a8b68fa373a2"
	I0327 16:49:23.765544    8800 logs.go:123] Gathering logs for etcd [b2b36dcbf471] ...
	I0327 16:49:23.765558    8800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b2b36dcbf471"
	I0327 16:49:23.779936    8800 logs.go:123] Gathering logs for kube-controller-manager [faf151c9cff5] ...
	I0327 16:49:23.779946    8800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 faf151c9cff5"
	I0327 16:49:26.300121    8800 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0327 16:49:23.794826    8959 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0327 16:49:23.794847    8959 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0327 16:49:31.302259    8800 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0327 16:49:31.302417    8800 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0327 16:49:31.316521    8800 logs.go:276] 1 containers: [a8b68fa373a2]
	I0327 16:49:31.316601    8800 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0327 16:49:31.327760    8800 logs.go:276] 1 containers: [b2b36dcbf471]
	I0327 16:49:31.327828    8800 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0327 16:49:31.338235    8800 logs.go:276] 4 containers: [d8aeaa1c9b02 a88693a2d6c1 a1738554adca b5329cb28332]
	I0327 16:49:31.338312    8800 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0327 16:49:31.348515    8800 logs.go:276] 1 containers: [5aa5f8fb90cc]
	I0327 16:49:31.348578    8800 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0327 16:49:31.358864    8800 logs.go:276] 1 containers: [4195d96c1f8a]
	I0327 16:49:31.358932    8800 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0327 16:49:31.369253    8800 logs.go:276] 1 containers: [faf151c9cff5]
	I0327 16:49:31.369315    8800 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0327 16:49:31.381215    8800 logs.go:276] 0 containers: []
	W0327 16:49:31.381225    8800 logs.go:278] No container was found matching "kindnet"
	I0327 16:49:31.381283    8800 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0327 16:49:31.391885    8800 logs.go:276] 1 containers: [2c4ccf3e69ae]
	I0327 16:49:31.391901    8800 logs.go:123] Gathering logs for Docker ...
	I0327 16:49:31.391905    8800 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0327 16:49:31.415482    8800 logs.go:123] Gathering logs for kubelet ...
	I0327 16:49:31.415490    8800 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0327 16:49:31.449781    8800 logs.go:123] Gathering logs for dmesg ...
	I0327 16:49:31.449794    8800 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0327 16:49:31.454187    8800 logs.go:123] Gathering logs for etcd [b2b36dcbf471] ...
	I0327 16:49:31.454196    8800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b2b36dcbf471"
	I0327 16:49:31.479781    8800 logs.go:123] Gathering logs for coredns [b5329cb28332] ...
	I0327 16:49:31.479792    8800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b5329cb28332"
	I0327 16:49:31.491799    8800 logs.go:123] Gathering logs for kube-scheduler [5aa5f8fb90cc] ...
	I0327 16:49:31.491813    8800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5aa5f8fb90cc"
	I0327 16:49:31.506689    8800 logs.go:123] Gathering logs for kube-controller-manager [faf151c9cff5] ...
	I0327 16:49:31.506700    8800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 faf151c9cff5"
	I0327 16:49:31.524169    8800 logs.go:123] Gathering logs for container status ...
	I0327 16:49:31.524178    8800 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0327 16:49:31.535907    8800 logs.go:123] Gathering logs for describe nodes ...
	I0327 16:49:31.535917    8800 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0327 16:49:31.571579    8800 logs.go:123] Gathering logs for coredns [d8aeaa1c9b02] ...
	I0327 16:49:31.571591    8800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d8aeaa1c9b02"
	I0327 16:49:31.583994    8800 logs.go:123] Gathering logs for coredns [a1738554adca] ...
	I0327 16:49:31.584007    8800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a1738554adca"
	I0327 16:49:31.595626    8800 logs.go:123] Gathering logs for kube-apiserver [a8b68fa373a2] ...
	I0327 16:49:31.595635    8800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a8b68fa373a2"
	I0327 16:49:31.613240    8800 logs.go:123] Gathering logs for coredns [a88693a2d6c1] ...
	I0327 16:49:31.613249    8800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a88693a2d6c1"
	I0327 16:49:31.624915    8800 logs.go:123] Gathering logs for kube-proxy [4195d96c1f8a] ...
	I0327 16:49:31.624931    8800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4195d96c1f8a"
	I0327 16:49:31.638423    8800 logs.go:123] Gathering logs for storage-provisioner [2c4ccf3e69ae] ...
	I0327 16:49:31.638432    8800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2c4ccf3e69ae"
	I0327 16:49:28.795311    8959 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0327 16:49:28.795356    8959 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0327 16:49:33.795710    8959 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0327 16:49:33.795748    8959 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	W0327 16:49:34.455419    8959 out.go:239] ! Enabling 'default-storageclass' returned an error: running callbacks: [Error making standard the default storage class: Error listing StorageClasses: Get "https://10.0.2.15:8443/apis/storage.k8s.io/v1/storageclasses": dial tcp 10.0.2.15:8443: i/o timeout]
	I0327 16:49:34.459838    8959 out.go:177] * Enabled addons: storage-provisioner
	I0327 16:49:34.157925    8800 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0327 16:49:34.471742    8959 addons.go:505] duration metric: took 30.796492375s for enable addons: enabled=[storage-provisioner]
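
The default-storageclass failure is the apiserver outage seen from another client: the addon callback lists StorageClasses over https://10.0.2.15:8443 and the TCP dial times out, while storage-provisioner is still reported enabled because its manifest apply had been accepted earlier. The same listing issued by hand (a sketch) would fail identically:

    sudo KUBECONFIG=/var/lib/minikube/kubeconfig \
      /var/lib/minikube/binaries/v1.24.1/kubectl get storageclasses
    # Expected here: Unable to connect to the server: dial tcp 10.0.2.15:8443: i/o timeout
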
	I0327 16:49:39.160102    8800 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0327 16:49:39.160290    8800 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0327 16:49:39.190686    8800 logs.go:276] 1 containers: [a8b68fa373a2]
	I0327 16:49:39.190777    8800 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0327 16:49:39.205652    8800 logs.go:276] 1 containers: [b2b36dcbf471]
	I0327 16:49:39.205735    8800 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0327 16:49:39.217556    8800 logs.go:276] 4 containers: [d8aeaa1c9b02 a88693a2d6c1 a1738554adca b5329cb28332]
	I0327 16:49:39.217629    8800 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0327 16:49:39.227953    8800 logs.go:276] 1 containers: [5aa5f8fb90cc]
	I0327 16:49:39.228019    8800 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0327 16:49:39.238680    8800 logs.go:276] 1 containers: [4195d96c1f8a]
	I0327 16:49:39.238748    8800 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0327 16:49:39.249651    8800 logs.go:276] 1 containers: [faf151c9cff5]
	I0327 16:49:39.249718    8800 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0327 16:49:39.263281    8800 logs.go:276] 0 containers: []
	W0327 16:49:39.263292    8800 logs.go:278] No container was found matching "kindnet"
	I0327 16:49:39.263348    8800 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0327 16:49:39.273654    8800 logs.go:276] 1 containers: [2c4ccf3e69ae]
	I0327 16:49:39.273671    8800 logs.go:123] Gathering logs for storage-provisioner [2c4ccf3e69ae] ...
	I0327 16:49:39.273675    8800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2c4ccf3e69ae"
	I0327 16:49:39.285000    8800 logs.go:123] Gathering logs for kube-apiserver [a8b68fa373a2] ...
	I0327 16:49:39.285010    8800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a8b68fa373a2"
	I0327 16:49:39.299826    8800 logs.go:123] Gathering logs for coredns [d8aeaa1c9b02] ...
	I0327 16:49:39.299835    8800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d8aeaa1c9b02"
	I0327 16:49:39.311284    8800 logs.go:123] Gathering logs for kube-proxy [4195d96c1f8a] ...
	I0327 16:49:39.311296    8800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4195d96c1f8a"
	I0327 16:49:39.323897    8800 logs.go:123] Gathering logs for kube-controller-manager [faf151c9cff5] ...
	I0327 16:49:39.323907    8800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 faf151c9cff5"
	I0327 16:49:39.341379    8800 logs.go:123] Gathering logs for dmesg ...
	I0327 16:49:39.341389    8800 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0327 16:49:39.346009    8800 logs.go:123] Gathering logs for coredns [b5329cb28332] ...
	I0327 16:49:39.346015    8800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b5329cb28332"
	I0327 16:49:39.357532    8800 logs.go:123] Gathering logs for Docker ...
	I0327 16:49:39.357543    8800 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0327 16:49:39.379898    8800 logs.go:123] Gathering logs for etcd [b2b36dcbf471] ...
	I0327 16:49:39.379905    8800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b2b36dcbf471"
	I0327 16:49:39.394216    8800 logs.go:123] Gathering logs for coredns [a1738554adca] ...
	I0327 16:49:39.394227    8800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a1738554adca"
	I0327 16:49:39.406666    8800 logs.go:123] Gathering logs for container status ...
	I0327 16:49:39.406680    8800 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0327 16:49:39.418866    8800 logs.go:123] Gathering logs for kubelet ...
	I0327 16:49:39.418876    8800 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0327 16:49:39.453825    8800 logs.go:123] Gathering logs for describe nodes ...
	I0327 16:49:39.453833    8800 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0327 16:49:39.487483    8800 logs.go:123] Gathering logs for coredns [a88693a2d6c1] ...
	I0327 16:49:39.487493    8800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a88693a2d6c1"
	I0327 16:49:39.508056    8800 logs.go:123] Gathering logs for kube-scheduler [5aa5f8fb90cc] ...
	I0327 16:49:39.508068    8800 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5aa5f8fb90cc"
	I0327 16:49:42.031551    8800 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0327 16:49:38.796885    8959 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0327 16:49:38.796931    8959 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0327 16:49:47.031898    8800 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0327 16:49:47.036626    8800 out.go:177] 
	W0327 16:49:47.039662    8800 out.go:239] X Exiting due to GUEST_START: failed to start node: wait 6m0s for node: wait for healthy API server: apiserver healthz never reported healthy: context deadline exceeded
	W0327 16:49:47.039675    8800 out.go:239] * 
	W0327 16:49:47.040553    8800 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0327 16:49:47.052527    8800 out.go:177] 
	I0327 16:49:43.797008    8959 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0327 16:49:43.797037    8959 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0327 16:49:48.798306    8959 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0327 16:49:48.798371    8959 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0327 16:49:53.799982    8959 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0327 16:49:53.800031    8959 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
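Note on the loop above: both minikube processes (pids 8800 and 8959) poll https://10.0.2.15:8443/healthz roughly every five seconds until a 6m deadline, and every attempt times out, which is what produces the GUEST_START failure. As a minimal standalone sketch of that probe, assuming the guest IP, port, and timeouts shown in the log (the real loop lives in minikube's api_server.go):

	package main

	import (
		"crypto/tls"
		"fmt"
		"net/http"
		"time"
	)

	func main() {
		client := &http.Client{
			Timeout: 5 * time.Second,
			Transport: &http.Transport{
				// Test-only: the apiserver serves a self-signed cert.
				TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
			},
		}
		deadline := time.Now().Add(6 * time.Minute)
		for time.Now().Before(deadline) {
			resp, err := client.Get("https://10.0.2.15:8443/healthz")
			if err == nil {
				ok := resp.StatusCode == http.StatusOK
				resp.Body.Close()
				if ok {
					fmt.Println("apiserver healthy")
					return
				}
			}
			time.Sleep(5 * time.Second)
		}
		fmt.Println("apiserver healthz never reported healthy: deadline exceeded")
	}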
	
	
	==> Docker <==
	-- Journal begins at Wed 2024-03-27 23:40:52 UTC, ends at Wed 2024-03-27 23:50:03 UTC. --
	Mar 27 23:49:47 running-upgrade-400000 dockerd[2913]: time="2024-03-27T23:49:47.591706979Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Mar 27 23:49:47 running-upgrade-400000 dockerd[2913]: time="2024-03-27T23:49:47.591755476Z" level=info msg="starting signal loop" namespace=moby path=/run/docker/containerd/daemon/io.containerd.runtime.v2.task/moby/4141296b190446c939683d13e050348e5021ada01ff968d035905e12c5af8b4f pid=18322 runtime=io.containerd.runc.v2
	Mar 27 23:49:47 running-upgrade-400000 cri-dockerd[2755]: time="2024-03-27T23:49:47Z" level=error msg="ContainerStats resp: {0x4000546500 linux}"
	Mar 27 23:49:47 running-upgrade-400000 cri-dockerd[2755]: time="2024-03-27T23:49:47Z" level=error msg="ContainerStats resp: {0x4000547540 linux}"
	Mar 27 23:49:48 running-upgrade-400000 cri-dockerd[2755]: time="2024-03-27T23:49:48Z" level=error msg="ContainerStats resp: {0x4000923040 linux}"
	Mar 27 23:49:49 running-upgrade-400000 cri-dockerd[2755]: time="2024-03-27T23:49:49Z" level=error msg="ContainerStats resp: {0x4000339000 linux}"
	Mar 27 23:49:49 running-upgrade-400000 cri-dockerd[2755]: time="2024-03-27T23:49:49Z" level=error msg="ContainerStats resp: {0x4000339640 linux}"
	Mar 27 23:49:49 running-upgrade-400000 cri-dockerd[2755]: time="2024-03-27T23:49:49Z" level=error msg="ContainerStats resp: {0x4000358640 linux}"
	Mar 27 23:49:49 running-upgrade-400000 cri-dockerd[2755]: time="2024-03-27T23:49:49Z" level=error msg="ContainerStats resp: {0x4000359680 linux}"
	Mar 27 23:49:49 running-upgrade-400000 cri-dockerd[2755]: time="2024-03-27T23:49:49Z" level=error msg="ContainerStats resp: {0x4000484380 linux}"
	Mar 27 23:49:49 running-upgrade-400000 cri-dockerd[2755]: time="2024-03-27T23:49:49Z" level=error msg="ContainerStats resp: {0x40004847c0 linux}"
	Mar 27 23:49:49 running-upgrade-400000 cri-dockerd[2755]: time="2024-03-27T23:49:49Z" level=error msg="ContainerStats resp: {0x40004853c0 linux}"
	Mar 27 23:49:51 running-upgrade-400000 cri-dockerd[2755]: time="2024-03-27T23:49:51Z" level=info msg="Using CNI configuration file /etc/cni/net.d/1-k8s.conflist"
	Mar 27 23:49:56 running-upgrade-400000 cri-dockerd[2755]: time="2024-03-27T23:49:56Z" level=info msg="Using CNI configuration file /etc/cni/net.d/1-k8s.conflist"
	Mar 27 23:49:59 running-upgrade-400000 cri-dockerd[2755]: time="2024-03-27T23:49:59Z" level=error msg="ContainerStats resp: {0x4000689e00 linux}"
	Mar 27 23:49:59 running-upgrade-400000 cri-dockerd[2755]: time="2024-03-27T23:49:59Z" level=error msg="ContainerStats resp: {0x4000339100 linux}"
	Mar 27 23:50:00 running-upgrade-400000 cri-dockerd[2755]: time="2024-03-27T23:50:00Z" level=error msg="ContainerStats resp: {0x4000485240 linux}"
	Mar 27 23:50:01 running-upgrade-400000 cri-dockerd[2755]: time="2024-03-27T23:50:01Z" level=info msg="Using CNI configuration file /etc/cni/net.d/1-k8s.conflist"
	Mar 27 23:50:01 running-upgrade-400000 cri-dockerd[2755]: time="2024-03-27T23:50:01Z" level=error msg="ContainerStats resp: {0x40007b4a80 linux}"
	Mar 27 23:50:01 running-upgrade-400000 cri-dockerd[2755]: time="2024-03-27T23:50:01Z" level=error msg="ContainerStats resp: {0x400075a5c0 linux}"
	Mar 27 23:50:01 running-upgrade-400000 cri-dockerd[2755]: time="2024-03-27T23:50:01Z" level=error msg="ContainerStats resp: {0x40007b5180 linux}"
	Mar 27 23:50:01 running-upgrade-400000 cri-dockerd[2755]: time="2024-03-27T23:50:01Z" level=error msg="ContainerStats resp: {0x400075ae40 linux}"
	Mar 27 23:50:01 running-upgrade-400000 cri-dockerd[2755]: time="2024-03-27T23:50:01Z" level=error msg="ContainerStats resp: {0x40007b5c80 linux}"
	Mar 27 23:50:01 running-upgrade-400000 cri-dockerd[2755]: time="2024-03-27T23:50:01Z" level=error msg="ContainerStats resp: {0x40009140c0 linux}"
	Mar 27 23:50:01 running-upgrade-400000 cri-dockerd[2755]: time="2024-03-27T23:50:01Z" level=error msg="ContainerStats resp: {0x40009541c0 linux}"
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                      ATTEMPT             POD ID
	98e446101ed05       edaa71f2aee88       16 seconds ago      Running             coredns                   2                   55e4bd8cdfffb
	4141296b19044       edaa71f2aee88       16 seconds ago      Running             coredns                   2                   ff36434a339d2
	d8aeaa1c9b024       edaa71f2aee88       2 minutes ago       Exited              coredns                   1                   55e4bd8cdfffb
	a88693a2d6c17       edaa71f2aee88       2 minutes ago       Exited              coredns                   1                   ff36434a339d2
	4195d96c1f8a7       fcbd620bbac08       4 minutes ago       Running             kube-proxy                0                   0198dca963db4
	2c4ccf3e69ae0       66749159455b3       4 minutes ago       Running             storage-provisioner       0                   68c4ecd84f5ef
	b2b36dcbf4716       a9a710bb96df0       4 minutes ago       Running             etcd                      0                   b550b5ac8ab5f
	5aa5f8fb90cc7       000c19baf6bba       4 minutes ago       Running             kube-scheduler            0                   cdf8867fb9ba2
	a8b68fa373a29       7c5896a75862a       4 minutes ago       Running             kube-apiserver            0                   3634ed7ff06ea
	faf151c9cff5d       f61bbe9259d7c       4 minutes ago       Running             kube-controller-manager   0                   a5f076a3d7330
	
	
	==> coredns [4141296b1904] <==
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration MD5 = db32ca3650231d74073ff4cf814959a7
	CoreDNS-1.8.6
	linux/arm64, go1.17.1, 13a9191
	[ERROR] plugin/errors: 2 8416277306589147074.508998161960309148. HINFO: read udp 10.244.0.2:59026->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 8416277306589147074.508998161960309148. HINFO: read udp 10.244.0.2:38023->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 8416277306589147074.508998161960309148. HINFO: read udp 10.244.0.2:58448->10.0.2.3:53: i/o timeout
	
	
	==> coredns [98e446101ed0] <==
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration MD5 = db32ca3650231d74073ff4cf814959a7
	CoreDNS-1.8.6
	linux/arm64, go1.17.1, 13a9191
	[ERROR] plugin/errors: 2 7650899079629308133.8451613730151431141. HINFO: read udp 10.244.0.3:56596->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 7650899079629308133.8451613730151431141. HINFO: read udp 10.244.0.3:44747->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 7650899079629308133.8451613730151431141. HINFO: read udp 10.244.0.3:44203->10.0.2.3:53: i/o timeout
	
	
	==> coredns [a88693a2d6c1] <==
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration MD5 = db32ca3650231d74073ff4cf814959a7
	CoreDNS-1.8.6
	linux/arm64, go1.17.1, 13a9191
	[ERROR] plugin/errors: 2 3555708645378095146.2115792732861438854. HINFO: read udp 10.244.0.2:44359->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 3555708645378095146.2115792732861438854. HINFO: read udp 10.244.0.2:34166->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 3555708645378095146.2115792732861438854. HINFO: read udp 10.244.0.2:49235->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 3555708645378095146.2115792732861438854. HINFO: read udp 10.244.0.2:51589->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 3555708645378095146.2115792732861438854. HINFO: read udp 10.244.0.2:58719->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 3555708645378095146.2115792732861438854. HINFO: read udp 10.244.0.2:50023->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 3555708645378095146.2115792732861438854. HINFO: read udp 10.244.0.2:57322->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 3555708645378095146.2115792732861438854. HINFO: read udp 10.244.0.2:43041->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 3555708645378095146.2115792732861438854. HINFO: read udp 10.244.0.2:56675->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 3555708645378095146.2115792732861438854. HINFO: read udp 10.244.0.2:45551->10.0.2.3:53: i/o timeout
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> coredns [d8aeaa1c9b02] <==
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration MD5 = db32ca3650231d74073ff4cf814959a7
	CoreDNS-1.8.6
	linux/arm64, go1.17.1, 13a9191
	[ERROR] plugin/errors: 2 4515510473553864273.6105649397102379942. HINFO: read udp 10.244.0.3:42909->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 4515510473553864273.6105649397102379942. HINFO: read udp 10.244.0.3:47397->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 4515510473553864273.6105649397102379942. HINFO: read udp 10.244.0.3:42941->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 4515510473553864273.6105649397102379942. HINFO: read udp 10.244.0.3:49345->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 4515510473553864273.6105649397102379942. HINFO: read udp 10.244.0.3:58188->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 4515510473553864273.6105649397102379942. HINFO: read udp 10.244.0.3:54037->10.0.2.3:53: i/o timeout
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
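All four coredns containers fail the same way: the HINFO self-probe to the upstream resolver at 10.0.2.3:53 (QEMU's user-mode DNS forwarder) times out, so DNS never resolves outside the cluster. A small Go sketch that reproduces the probe from inside the guest, assuming the upstream address from the log and a 2s read window:

	package main

	import (
		"context"
		"fmt"
		"net"
		"time"
	)

	func main() {
		r := &net.Resolver{
			PreferGo: true,
			Dial: func(ctx context.Context, network, address string) (net.Conn, error) {
				d := net.Dialer{Timeout: 2 * time.Second}
				// Force the upstream that coredns is timing out on.
				return d.DialContext(ctx, "udp", "10.0.2.3:53")
			},
		}
		ctx, cancel := context.WithTimeout(context.Background(), 2*time.Second)
		defer cancel()
		addrs, err := r.LookupHost(ctx, "kubernetes.io")
		if err != nil {
			// Same symptom as the "read udp ... i/o timeout" lines above.
			fmt.Println("upstream DNS unreachable:", err)
			return
		}
		fmt.Println("upstream DNS ok:", addrs)
	}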
	
	
	==> describe nodes <==
	Name:               running-upgrade-400000
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=running-upgrade-400000
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=1a940f980f77c9bfc8ef93678db8c1f49c9dd79d
	                    minikube.k8s.io/name=running-upgrade-400000
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_03_27T16_45_46_0700
	                    minikube.k8s.io/version=v1.33.0-beta.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 27 Mar 2024 23:45:43 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  running-upgrade-400000
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 27 Mar 2024 23:50:01 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 27 Mar 2024 23:45:46 +0000   Wed, 27 Mar 2024 23:45:42 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 27 Mar 2024 23:45:46 +0000   Wed, 27 Mar 2024 23:45:42 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 27 Mar 2024 23:45:46 +0000   Wed, 27 Mar 2024 23:45:42 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Wed, 27 Mar 2024 23:45:46 +0000   Wed, 27 Mar 2024 23:45:46 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  10.0.2.15
	  Hostname:    running-upgrade-400000
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17784760Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             2148820Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17784760Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             2148820Ki
	  pods:               110
	System Info:
	  Machine ID:                 3da3272196764a70abb5f75801a62d5f
	  System UUID:                3da3272196764a70abb5f75801a62d5f
	  Boot ID:                    bc9be456-4279-46a8-b20d-33e0890f9b28
	  Kernel Version:             5.10.57
	  OS Image:                   Buildroot 2021.02.12
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  docker://20.10.16
	  Kubelet Version:            v1.24.1
	  Kube-Proxy Version:         v1.24.1
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                              CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                              ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-6d4b75cb6d-67zjv                          100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     4m4s
	  kube-system                 coredns-6d4b75cb6d-r48pb                          100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     4m4s
	  kube-system                 etcd-running-upgrade-400000                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         4m19s
	  kube-system                 kube-apiserver-running-upgrade-400000             250m (12%)    0 (0%)      0 (0%)           0 (0%)         4m17s
	  kube-system                 kube-controller-manager-running-upgrade-400000    200m (10%)    0 (0%)      0 (0%)           0 (0%)         4m17s
	  kube-system                 kube-proxy-8w6w6                                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m4s
	  kube-system                 kube-scheduler-running-upgrade-400000             100m (5%)     0 (0%)      0 (0%)           0 (0%)         4m17s
	  kube-system                 storage-provisioner                               0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m17s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                850m (42%)   0 (0%)
	  memory             240Mi (11%)  340Mi (16%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-1Gi      0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	  hugepages-32Mi     0 (0%)       0 (0%)
	  hugepages-64Ki     0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 4m3s                   kube-proxy       
	  Normal  Starting                 4m23s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  4m23s (x5 over 4m23s)  kubelet          Node running-upgrade-400000 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    4m23s (x5 over 4m23s)  kubelet          Node running-upgrade-400000 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     4m23s (x3 over 4m23s)  kubelet          Node running-upgrade-400000 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  4m23s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  Starting                 4m17s                  kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  4m17s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  4m17s                  kubelet          Node running-upgrade-400000 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    4m17s                  kubelet          Node running-upgrade-400000 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     4m17s                  kubelet          Node running-upgrade-400000 status is now: NodeHasSufficientPID
	  Normal  NodeReady                4m17s                  kubelet          Node running-upgrade-400000 status is now: NodeReady
	  Normal  RegisteredNode           4m5s                   node-controller  Node running-upgrade-400000 event: Registered Node running-upgrade-400000 in Controller
	
	
	==> dmesg <==
	[  +1.480057] systemd-fstab-generator[878]: Ignoring "noauto" for root device
	[  +0.068280] systemd-fstab-generator[889]: Ignoring "noauto" for root device
	[  +0.084677] systemd-fstab-generator[900]: Ignoring "noauto" for root device
	[  +0.168584] systemd-fstab-generator[1050]: Ignoring "noauto" for root device
	[  +0.063914] systemd-fstab-generator[1061]: Ignoring "noauto" for root device
	[  +2.493640] systemd-fstab-generator[1286]: Ignoring "noauto" for root device
	[  +0.257995] kauditd_printk_skb: 92 callbacks suppressed
	[ +13.413598] systemd-fstab-generator[1949]: Ignoring "noauto" for root device
	[  +2.689229] systemd-fstab-generator[2231]: Ignoring "noauto" for root device
	[  +0.132299] systemd-fstab-generator[2265]: Ignoring "noauto" for root device
	[  +0.087066] systemd-fstab-generator[2276]: Ignoring "noauto" for root device
	[  +0.100485] systemd-fstab-generator[2289]: Ignoring "noauto" for root device
	[  +1.492600] kauditd_printk_skb: 8 callbacks suppressed
	[  +0.143673] systemd-fstab-generator[2712]: Ignoring "noauto" for root device
	[  +0.068936] systemd-fstab-generator[2723]: Ignoring "noauto" for root device
	[  +0.072485] systemd-fstab-generator[2734]: Ignoring "noauto" for root device
	[  +0.077171] systemd-fstab-generator[2748]: Ignoring "noauto" for root device
	[  +2.089156] systemd-fstab-generator[2900]: Ignoring "noauto" for root device
	[  +5.805112] systemd-fstab-generator[3309]: Ignoring "noauto" for root device
	[  +1.010304] systemd-fstab-generator[3436]: Ignoring "noauto" for root device
	[ +18.431024] kauditd_printk_skb: 68 callbacks suppressed
	[Mar27 23:45] kauditd_printk_skb: 23 callbacks suppressed
	[  +1.409291] systemd-fstab-generator[11648]: Ignoring "noauto" for root device
	[  +5.633256] systemd-fstab-generator[12252]: Ignoring "noauto" for root device
	[  +0.464572] systemd-fstab-generator[12390]: Ignoring "noauto" for root device
	
	
	==> etcd [b2b36dcbf471] <==
	{"level":"info","ts":"2024-03-27T23:45:41.758Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f074a195de705325 switched to configuration voters=(17326651331455243045)"}
	{"level":"info","ts":"2024-03-27T23:45:41.758Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"ef296cf39f5d9d66","local-member-id":"f074a195de705325","added-peer-id":"f074a195de705325","added-peer-peer-urls":["https://10.0.2.15:2380"]}
	{"level":"info","ts":"2024-03-27T23:45:41.758Z","caller":"embed/etcd.go:688","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2024-03-27T23:45:41.758Z","caller":"embed/etcd.go:277","msg":"now serving peer/client/metrics","local-member-id":"f074a195de705325","initial-advertise-peer-urls":["https://10.0.2.15:2380"],"listen-peer-urls":["https://10.0.2.15:2380"],"advertise-client-urls":["https://10.0.2.15:2379"],"listen-client-urls":["https://10.0.2.15:2379","https://127.0.0.1:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2024-03-27T23:45:41.758Z","caller":"embed/etcd.go:763","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2024-03-27T23:45:41.758Z","caller":"embed/etcd.go:581","msg":"serving peer traffic","address":"10.0.2.15:2380"}
	{"level":"info","ts":"2024-03-27T23:45:41.758Z","caller":"embed/etcd.go:553","msg":"cmux::serve","address":"10.0.2.15:2380"}
	{"level":"info","ts":"2024-03-27T23:45:42.303Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f074a195de705325 is starting a new election at term 1"}
	{"level":"info","ts":"2024-03-27T23:45:42.303Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f074a195de705325 became pre-candidate at term 1"}
	{"level":"info","ts":"2024-03-27T23:45:42.303Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f074a195de705325 received MsgPreVoteResp from f074a195de705325 at term 1"}
	{"level":"info","ts":"2024-03-27T23:45:42.303Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f074a195de705325 became candidate at term 2"}
	{"level":"info","ts":"2024-03-27T23:45:42.303Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f074a195de705325 received MsgVoteResp from f074a195de705325 at term 2"}
	{"level":"info","ts":"2024-03-27T23:45:42.303Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f074a195de705325 became leader at term 2"}
	{"level":"info","ts":"2024-03-27T23:45:42.303Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: f074a195de705325 elected leader f074a195de705325 at term 2"}
	{"level":"info","ts":"2024-03-27T23:45:42.303Z","caller":"etcdserver/server.go:2042","msg":"published local member to cluster through raft","local-member-id":"f074a195de705325","local-member-attributes":"{Name:running-upgrade-400000 ClientURLs:[https://10.0.2.15:2379]}","request-path":"/0/members/f074a195de705325/attributes","cluster-id":"ef296cf39f5d9d66","publish-timeout":"7s"}
	{"level":"info","ts":"2024-03-27T23:45:42.303Z","caller":"etcdserver/server.go:2507","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2024-03-27T23:45:42.303Z","caller":"embed/serve.go:98","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-03-27T23:45:42.304Z","caller":"embed/serve.go:188","msg":"serving client traffic securely","address":"10.0.2.15:2379"}
	{"level":"info","ts":"2024-03-27T23:45:42.304Z","caller":"embed/serve.go:98","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-03-27T23:45:42.305Z","caller":"embed/serve.go:188","msg":"serving client traffic securely","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-03-27T23:45:42.313Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-03-27T23:45:42.313Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-03-27T23:45:42.313Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"ef296cf39f5d9d66","local-member-id":"f074a195de705325","cluster-version":"3.5"}
	{"level":"info","ts":"2024-03-27T23:45:42.313Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-03-27T23:45:42.313Z","caller":"etcdserver/server.go:2531","msg":"cluster version is updated","cluster-version":"3.5"}
	
	
	==> kernel <==
	 23:50:03 up 9 min,  0 users,  load average: 0.49, 0.56, 0.33
	Linux running-upgrade-400000 5.10.57 #1 SMP PREEMPT Thu Jun 16 21:01:29 UTC 2022 aarch64 GNU/Linux
	PRETTY_NAME="Buildroot 2021.02.12"
	
	
	==> kube-apiserver [a8b68fa373a2] <==
	I0327 23:45:43.550192       1 shared_informer.go:262] Caches are synced for cluster_authentication_trust_controller
	I0327 23:45:43.550277       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I0327 23:45:43.550298       1 apf_controller.go:322] Running API Priority and Fairness config worker
	I0327 23:45:43.550437       1 cache.go:39] Caches are synced for autoregister controller
	I0327 23:45:43.560190       1 shared_informer.go:262] Caches are synced for crd-autoregister
	I0327 23:45:43.560330       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0327 23:45:43.589985       1 shared_informer.go:262] Caches are synced for node_authorizer
	I0327 23:45:44.288751       1 controller.go:132] OpenAPI AggregationController: action for item k8s_internal_local_delegation_chain_0000000000: Nothing (removed from the queue).
	I0327 23:45:44.456011       1 storage_scheduling.go:95] created PriorityClass system-node-critical with value 2000001000
	I0327 23:45:44.459050       1 storage_scheduling.go:95] created PriorityClass system-cluster-critical with value 2000000000
	I0327 23:45:44.459074       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I0327 23:45:44.613011       1 controller.go:611] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0327 23:45:44.625982       1 controller.go:611] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0327 23:45:44.723486       1 alloc.go:327] "allocated clusterIPs" service="default/kubernetes" clusterIPs=map[IPv4:10.96.0.1]
	W0327 23:45:44.727513       1 lease.go:234] Resetting endpoints for master service "kubernetes" to [10.0.2.15]
	I0327 23:45:44.727963       1 controller.go:611] quota admission added evaluator for: endpoints
	I0327 23:45:44.729280       1 controller.go:611] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0327 23:45:45.579667       1 controller.go:611] quota admission added evaluator for: serviceaccounts
	I0327 23:45:46.129878       1 controller.go:611] quota admission added evaluator for: deployments.apps
	I0327 23:45:46.134101       1 alloc.go:327] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs=map[IPv4:10.96.0.10]
	I0327 23:45:46.138299       1 controller.go:611] quota admission added evaluator for: daemonsets.apps
	I0327 23:45:46.180703       1 controller.go:611] quota admission added evaluator for: leases.coordination.k8s.io
	I0327 23:45:59.237280       1 controller.go:611] quota admission added evaluator for: replicasets.apps
	I0327 23:45:59.283915       1 controller.go:611] quota admission added evaluator for: controllerrevisions.apps
	I0327 23:46:00.079354       1 controller.go:611] quota admission added evaluator for: events.events.k8s.io
	
	
	==> kube-controller-manager [faf151c9cff5] <==
	I0327 23:45:58.332554       1 shared_informer.go:262] Caches are synced for ReplicationController
	I0327 23:45:58.341160       1 shared_informer.go:262] Caches are synced for HPA
	I0327 23:45:58.381498       1 shared_informer.go:262] Caches are synced for cronjob
	I0327 23:45:58.385962       1 shared_informer.go:262] Caches are synced for daemon sets
	I0327 23:45:58.398082       1 shared_informer.go:262] Caches are synced for certificate-csrapproving
	I0327 23:45:58.408743       1 shared_informer.go:262] Caches are synced for certificate-csrsigning-kubelet-serving
	I0327 23:45:58.409838       1 shared_informer.go:262] Caches are synced for certificate-csrsigning-legacy-unknown
	I0327 23:45:58.409846       1 shared_informer.go:262] Caches are synced for certificate-csrsigning-kube-apiserver-client
	I0327 23:45:58.409856       1 shared_informer.go:262] Caches are synced for certificate-csrsigning-kubelet-client
	I0327 23:45:58.445609       1 shared_informer.go:255] Waiting for caches to sync for garbage collector
	I0327 23:45:58.480426       1 shared_informer.go:262] Caches are synced for ephemeral
	I0327 23:45:58.483259       1 shared_informer.go:262] Caches are synced for expand
	I0327 23:45:58.486078       1 shared_informer.go:262] Caches are synced for resource quota
	I0327 23:45:58.500270       1 shared_informer.go:262] Caches are synced for persistent volume
	I0327 23:45:58.504575       1 shared_informer.go:262] Caches are synced for attach detach
	I0327 23:45:58.507707       1 shared_informer.go:262] Caches are synced for stateful set
	I0327 23:45:58.519092       1 shared_informer.go:262] Caches are synced for PVC protection
	I0327 23:45:58.534801       1 shared_informer.go:262] Caches are synced for resource quota
	I0327 23:45:58.947049       1 shared_informer.go:262] Caches are synced for garbage collector
	I0327 23:45:59.032551       1 shared_informer.go:262] Caches are synced for garbage collector
	I0327 23:45:59.032564       1 garbagecollector.go:158] Garbage collector: all resource monitors have synced. Proceeding to collect garbage
	I0327 23:45:59.239044       1 event.go:294] "Event occurred" object="kube-system/coredns" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set coredns-6d4b75cb6d to 2"
	I0327 23:45:59.287463       1 event.go:294] "Event occurred" object="kube-system/kube-proxy" fieldPath="" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kube-proxy-8w6w6"
	I0327 23:45:59.433480       1 event.go:294] "Event occurred" object="kube-system/coredns-6d4b75cb6d" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: coredns-6d4b75cb6d-r48pb"
	I0327 23:45:59.439218       1 event.go:294] "Event occurred" object="kube-system/coredns-6d4b75cb6d" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: coredns-6d4b75cb6d-67zjv"
	
	
	==> kube-proxy [4195d96c1f8a] <==
	I0327 23:46:00.054907       1 node.go:163] Successfully retrieved node IP: 10.0.2.15
	I0327 23:46:00.054951       1 server_others.go:138] "Detected node IP" address="10.0.2.15"
	I0327 23:46:00.054971       1 server_others.go:578] "Unknown proxy mode, assuming iptables proxy" proxyMode=""
	I0327 23:46:00.075886       1 server_others.go:199] "kube-proxy running in single-stack mode, this ipFamily is not supported" ipFamily=IPv6
	I0327 23:46:00.075900       1 server_others.go:206] "Using iptables Proxier"
	I0327 23:46:00.075925       1 proxier.go:259] "Setting route_localnet=1, use nodePortAddresses to filter loopback addresses for NodePorts to skip it https://issues.k8s.io/90259"
	I0327 23:46:00.076068       1 server.go:661] "Version info" version="v1.24.1"
	I0327 23:46:00.076074       1 server.go:663] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0327 23:46:00.077588       1 config.go:317] "Starting service config controller"
	I0327 23:46:00.077644       1 shared_informer.go:255] Waiting for caches to sync for service config
	I0327 23:46:00.077663       1 config.go:226] "Starting endpoint slice config controller"
	I0327 23:46:00.077669       1 shared_informer.go:255] Waiting for caches to sync for endpoint slice config
	I0327 23:46:00.078054       1 config.go:444] "Starting node config controller"
	I0327 23:46:00.078065       1 shared_informer.go:255] Waiting for caches to sync for node config
	I0327 23:46:00.178554       1 shared_informer.go:262] Caches are synced for node config
	I0327 23:46:00.178571       1 shared_informer.go:262] Caches are synced for service config
	I0327 23:46:00.178587       1 shared_informer.go:262] Caches are synced for endpoint slice config
	
	
	==> kube-scheduler [5aa5f8fb90cc] <==
	W0327 23:45:43.513031       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0327 23:45:43.513976       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W0327 23:45:43.513046       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0327 23:45:43.513981       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W0327 23:45:43.513061       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0327 23:45:43.513985       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W0327 23:45:43.513071       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0327 23:45:43.513990       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W0327 23:45:43.513256       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0327 23:45:43.513994       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W0327 23:45:43.513271       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0327 23:45:43.513999       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	W0327 23:45:43.513281       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0327 23:45:43.514003       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W0327 23:45:43.513306       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0327 23:45:43.514007       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W0327 23:45:43.514562       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0327 23:45:43.514571       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	W0327 23:45:43.514618       1 reflector.go:324] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0327 23:45:43.514639       1 reflector.go:138] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	W0327 23:45:44.336552       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0327 23:45:44.336585       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W0327 23:45:44.450317       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0327 23:45:44.450508       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	I0327 23:45:45.014638       1 shared_informer.go:262] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	-- Journal begins at Wed 2024-03-27 23:40:52 UTC, ends at Wed 2024-03-27 23:50:03 UTC. --
	Mar 27 23:45:48 running-upgrade-400000 kubelet[12258]: E0327 23:45:48.159956   12258 kubelet.go:1690] "Failed creating a mirror pod for" err="pods \"kube-apiserver-running-upgrade-400000\" already exists" pod="kube-system/kube-apiserver-running-upgrade-400000"
	Mar 27 23:45:48 running-upgrade-400000 kubelet[12258]: I0327 23:45:48.357951   12258 request.go:601] Waited for 1.118821915s due to client-side throttling, not priority and fairness, request: POST:https://control-plane.minikube.internal:8443/api/v1/namespaces/kube-system/pods
	Mar 27 23:45:48 running-upgrade-400000 kubelet[12258]: E0327 23:45:48.360780   12258 kubelet.go:1690] "Failed creating a mirror pod for" err="pods \"etcd-running-upgrade-400000\" already exists" pod="kube-system/etcd-running-upgrade-400000"
	Mar 27 23:45:58 running-upgrade-400000 kubelet[12258]: I0327 23:45:58.307284   12258 topology_manager.go:200] "Topology Admit Handler"
	Mar 27 23:45:58 running-upgrade-400000 kubelet[12258]: I0327 23:45:58.375664   12258 kuberuntime_manager.go:1095] "Updating runtime config through cri with podcidr" CIDR="10.244.0.0/24"
	Mar 27 23:45:58 running-upgrade-400000 kubelet[12258]: I0327 23:45:58.375674   12258 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mbjsx\" (UniqueName: \"kubernetes.io/projected/49196067-6de3-4435-8cc4-07cf65ca849d-kube-api-access-mbjsx\") pod \"storage-provisioner\" (UID: \"49196067-6de3-4435-8cc4-07cf65ca849d\") " pod="kube-system/storage-provisioner"
	Mar 27 23:45:58 running-upgrade-400000 kubelet[12258]: I0327 23:45:58.375835   12258 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/49196067-6de3-4435-8cc4-07cf65ca849d-tmp\") pod \"storage-provisioner\" (UID: \"49196067-6de3-4435-8cc4-07cf65ca849d\") " pod="kube-system/storage-provisioner"
	Mar 27 23:45:58 running-upgrade-400000 kubelet[12258]: I0327 23:45:58.376034   12258 kubelet_network.go:60] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.244.0.0/24"
	Mar 27 23:45:58 running-upgrade-400000 kubelet[12258]: E0327 23:45:58.479340   12258 projected.go:286] Couldn't get configMap kube-system/kube-root-ca.crt: configmap "kube-root-ca.crt" not found
	Mar 27 23:45:58 running-upgrade-400000 kubelet[12258]: E0327 23:45:58.479361   12258 projected.go:192] Error preparing data for projected volume kube-api-access-mbjsx for pod kube-system/storage-provisioner: configmap "kube-root-ca.crt" not found
	Mar 27 23:45:58 running-upgrade-400000 kubelet[12258]: E0327 23:45:58.479398   12258 nestedpendingoperations.go:335] Operation for "{volumeName:kubernetes.io/projected/49196067-6de3-4435-8cc4-07cf65ca849d-kube-api-access-mbjsx podName:49196067-6de3-4435-8cc4-07cf65ca849d nodeName:}" failed. No retries permitted until 2024-03-27 23:45:58.979384178 +0000 UTC m=+12.860523570 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-mbjsx" (UniqueName: "kubernetes.io/projected/49196067-6de3-4435-8cc4-07cf65ca849d-kube-api-access-mbjsx") pod "storage-provisioner" (UID: "49196067-6de3-4435-8cc4-07cf65ca849d") : configmap "kube-root-ca.crt" not found
	Mar 27 23:45:59 running-upgrade-400000 kubelet[12258]: I0327 23:45:59.290008   12258 topology_manager.go:200] "Topology Admit Handler"
	Mar 27 23:45:59 running-upgrade-400000 kubelet[12258]: I0327 23:45:59.346226   12258 pod_container_deletor.go:79] "Container not found in pod's containers" containerID="68c4ecd84f5efc9cf79a719af5db50b9b06e495391f78e0cac1656261df8f1e5"
	Mar 27 23:45:59 running-upgrade-400000 kubelet[12258]: I0327 23:45:59.436177   12258 topology_manager.go:200] "Topology Admit Handler"
	Mar 27 23:45:59 running-upgrade-400000 kubelet[12258]: I0327 23:45:59.444855   12258 topology_manager.go:200] "Topology Admit Handler"
	Mar 27 23:45:59 running-upgrade-400000 kubelet[12258]: I0327 23:45:59.485639   12258 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/3e486eb3-00b7-4346-af32-bb598cb3d408-kube-proxy\") pod \"kube-proxy-8w6w6\" (UID: \"3e486eb3-00b7-4346-af32-bb598cb3d408\") " pod="kube-system/kube-proxy-8w6w6"
	Mar 27 23:45:59 running-upgrade-400000 kubelet[12258]: I0327 23:45:59.485702   12258 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/3e486eb3-00b7-4346-af32-bb598cb3d408-xtables-lock\") pod \"kube-proxy-8w6w6\" (UID: \"3e486eb3-00b7-4346-af32-bb598cb3d408\") " pod="kube-system/kube-proxy-8w6w6"
	Mar 27 23:45:59 running-upgrade-400000 kubelet[12258]: I0327 23:45:59.485719   12258 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/3e486eb3-00b7-4346-af32-bb598cb3d408-lib-modules\") pod \"kube-proxy-8w6w6\" (UID: \"3e486eb3-00b7-4346-af32-bb598cb3d408\") " pod="kube-system/kube-proxy-8w6w6"
	Mar 27 23:45:59 running-upgrade-400000 kubelet[12258]: I0327 23:45:59.485731   12258 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7fsrd\" (UniqueName: \"kubernetes.io/projected/3e486eb3-00b7-4346-af32-bb598cb3d408-kube-api-access-7fsrd\") pod \"kube-proxy-8w6w6\" (UID: \"3e486eb3-00b7-4346-af32-bb598cb3d408\") " pod="kube-system/kube-proxy-8w6w6"
	Mar 27 23:45:59 running-upgrade-400000 kubelet[12258]: I0327 23:45:59.586550   12258 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/332ee383-1872-4aad-a39e-5200e9aa6976-config-volume\") pod \"coredns-6d4b75cb6d-r48pb\" (UID: \"332ee383-1872-4aad-a39e-5200e9aa6976\") " pod="kube-system/coredns-6d4b75cb6d-r48pb"
	Mar 27 23:45:59 running-upgrade-400000 kubelet[12258]: I0327 23:45:59.586578   12258 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b9f4h\" (UniqueName: \"kubernetes.io/projected/332ee383-1872-4aad-a39e-5200e9aa6976-kube-api-access-b9f4h\") pod \"coredns-6d4b75cb6d-r48pb\" (UID: \"332ee383-1872-4aad-a39e-5200e9aa6976\") " pod="kube-system/coredns-6d4b75cb6d-r48pb"
	Mar 27 23:45:59 running-upgrade-400000 kubelet[12258]: I0327 23:45:59.586591   12258 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tl2rn\" (UniqueName: \"kubernetes.io/projected/ddcd519f-e792-4f0b-ad5d-d11ae9c6fd77-kube-api-access-tl2rn\") pod \"coredns-6d4b75cb6d-67zjv\" (UID: \"ddcd519f-e792-4f0b-ad5d-d11ae9c6fd77\") " pod="kube-system/coredns-6d4b75cb6d-67zjv"
	Mar 27 23:45:59 running-upgrade-400000 kubelet[12258]: I0327 23:45:59.586627   12258 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/ddcd519f-e792-4f0b-ad5d-d11ae9c6fd77-config-volume\") pod \"coredns-6d4b75cb6d-67zjv\" (UID: \"ddcd519f-e792-4f0b-ad5d-d11ae9c6fd77\") " pod="kube-system/coredns-6d4b75cb6d-67zjv"
	Mar 27 23:49:47 running-upgrade-400000 kubelet[12258]: I0327 23:49:47.640176   12258 scope.go:110] "RemoveContainer" containerID="b5329cb283320d9d2d86b3c0028066b891231a2080609743bf8a880751da7a68"
	Mar 27 23:49:47 running-upgrade-400000 kubelet[12258]: I0327 23:49:47.662092   12258 scope.go:110] "RemoveContainer" containerID="a1738554adcaa9b177ec41cd3b8211039985ea6227aadabe6f3cea21045cc8c8"
	
	
	==> storage-provisioner [2c4ccf3e69ae] <==
	I0327 23:45:59.397817       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0327 23:45:59.402835       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0327 23:45:59.402897       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0327 23:45:59.405764       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0327 23:45:59.405878       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_running-upgrade-400000_2b6e2d25-af71-4158-b3b8-4ecf432bcd75!
	I0327 23:45:59.406131       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"98ad061f-3a73-422e-ae6c-70ec26e25e34", APIVersion:"v1", ResourceVersion:"359", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' running-upgrade-400000_2b6e2d25-af71-4158-b3b8-4ecf432bcd75 became leader
	I0327 23:45:59.507703       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_running-upgrade-400000_2b6e2d25-af71-4158-b3b8-4ecf432bcd75!
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.APIServer}} -p running-upgrade-400000 -n running-upgrade-400000
helpers_test.go:254: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.APIServer}} -p running-upgrade-400000 -n running-upgrade-400000: exit status 2 (15.7285495s)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:254: status error: exit status 2 (may be ok)
helpers_test.go:256: "running-upgrade-400000" apiserver is not running, skipping kubectl commands (state="Stopped")
helpers_test.go:175: Cleaning up "running-upgrade-400000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p running-upgrade-400000
--- FAIL: TestRunningBinaryUpgrade (620.20s)
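The status probe at helpers_test.go:254 renders cluster state through a Go text/template, which is why `--format={{.APIServer}}` prints only `Stopped`. A hedged sketch of that rendering; the field names here are illustrative, the real struct lives in minikube's status command:

	package main

	import (
		"os"
		"text/template"
	)

	// Status mirrors the fields the log output implies; illustrative only.
	type Status struct {
		Host, Kubelet, APIServer string
	}

	func main() {
		st := Status{Host: "Running", Kubelet: "Running", APIServer: "Stopped"}
		tmpl := template.Must(template.New("status").Parse("{{.APIServer}}\n"))
		tmpl.Execute(os.Stdout, st) // prints: Stopped
	}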

                                                
                                    
TestKubernetesUpgrade (17.5s)

                                                
                                                
=== RUN   TestKubernetesUpgrade
=== PAUSE TestKubernetesUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:222: (dbg) Run:  out/minikube-darwin-arm64 start -p kubernetes-upgrade-236000 --memory=2200 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=qemu2 
version_upgrade_test.go:222: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p kubernetes-upgrade-236000 --memory=2200 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=qemu2 : exit status 80 (9.949287083s)

                                                
                                                
-- stdout --
	* [kubernetes-upgrade-236000] minikube v1.33.0-beta.0 on Darwin 14.3.1 (arm64)
	  - MINIKUBE_LOCATION=18485
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18485-6511/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18485-6511/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "kubernetes-upgrade-236000" primary control-plane node in "kubernetes-upgrade-236000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "kubernetes-upgrade-236000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
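Both VM creation attempts fail before Kubernetes is ever involved: qemu cannot reach the socket_vmnet daemon's unix socket, so networking setup dies with "Connection refused". A quick pre-flight check in Go, assuming the socket path shown in the config dump below (/var/run/socket_vmnet):

	package main

	import (
		"fmt"
		"net"
		"time"
	)

	func main() {
		conn, err := net.DialTimeout("unix", "/var/run/socket_vmnet", 2*time.Second)
		if err != nil {
			// Matches the failure above: nothing is listening on the socket,
			// so qemu's vmnet connection is refused. Restart the socket_vmnet
			// service (the exact command depends on how it was installed).
			fmt.Println("socket_vmnet not reachable:", err)
			return
		}
		conn.Close()
		fmt.Println("socket_vmnet is accepting connections")
	}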
** stderr ** 
	I0327 16:42:59.327312    8873 out.go:291] Setting OutFile to fd 1 ...
	I0327 16:42:59.327454    8873 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0327 16:42:59.327458    8873 out.go:304] Setting ErrFile to fd 2...
	I0327 16:42:59.327460    8873 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0327 16:42:59.327600    8873 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18485-6511/.minikube/bin
	I0327 16:42:59.328612    8873 out.go:298] Setting JSON to false
	I0327 16:42:59.346186    8873 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":6150,"bootTime":1711576829,"procs":480,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.3.1","kernelVersion":"23.3.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0327 16:42:59.346249    8873 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0327 16:42:59.352267    8873 out.go:177] * [kubernetes-upgrade-236000] minikube v1.33.0-beta.0 on Darwin 14.3.1 (arm64)
	I0327 16:42:59.360222    8873 out.go:177]   - MINIKUBE_LOCATION=18485
	I0327 16:42:59.360294    8873 notify.go:220] Checking for updates...
	I0327 16:42:59.368150    8873 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/18485-6511/kubeconfig
	I0327 16:42:59.371169    8873 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0327 16:42:59.374171    8873 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0327 16:42:59.377182    8873 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18485-6511/.minikube
	I0327 16:42:59.380212    8873 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0327 16:42:59.381908    8873 config.go:182] Loaded profile config "multinode-266000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.29.3
	I0327 16:42:59.381975    8873 config.go:182] Loaded profile config "running-upgrade-400000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0327 16:42:59.382028    8873 driver.go:392] Setting default libvirt URI to qemu:///system
	I0327 16:42:59.386153    8873 out.go:177] * Using the qemu2 driver based on user configuration
	I0327 16:42:59.390615    8873 start.go:297] selected driver: qemu2
	I0327 16:42:59.390620    8873 start.go:901] validating driver "qemu2" against <nil>
	I0327 16:42:59.390624    8873 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0327 16:42:59.392879    8873 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0327 16:42:59.396173    8873 out.go:177] * Automatically selected the socket_vmnet network
	I0327 16:42:59.399260    8873 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0327 16:42:59.399290    8873 cni.go:84] Creating CNI manager for ""
	I0327 16:42:59.399296    8873 cni.go:162] CNI unnecessary in this configuration, recommending no CNI
	I0327 16:42:59.399318    8873 start.go:340] cluster config:
	{Name:kubernetes-upgrade-236000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1711559786-18485@sha256:2dcab64da240d825290a528fa79ad3c32db45fe5f8be5150468234a7114eff82 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:kubernetes-upgrade-236000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0327 16:42:59.403789    8873 iso.go:125] acquiring lock: {Name:mk26a7f41e004242919e6d5076ec0f9645f820ad Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0327 16:42:59.411218    8873 out.go:177] * Starting "kubernetes-upgrade-236000" primary control-plane node in "kubernetes-upgrade-236000" cluster
	I0327 16:42:59.415180    8873 preload.go:132] Checking if preload exists for k8s version v1.20.0 and runtime docker
	I0327 16:42:59.415194    8873 preload.go:147] Found local preload: /Users/jenkins/minikube-integration/18485-6511/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4
	I0327 16:42:59.415199    8873 cache.go:56] Caching tarball of preloaded images
	I0327 16:42:59.415250    8873 preload.go:173] Found /Users/jenkins/minikube-integration/18485-6511/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0327 16:42:59.415255    8873 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on docker
	I0327 16:42:59.415309    8873 profile.go:142] Saving config to /Users/jenkins/minikube-integration/18485-6511/.minikube/profiles/kubernetes-upgrade-236000/config.json ...
	I0327 16:42:59.415319    8873 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/18485-6511/.minikube/profiles/kubernetes-upgrade-236000/config.json: {Name:mkda71e3f21765d5da3135d76b24ca45c64abd9d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0327 16:42:59.415528    8873 start.go:360] acquireMachinesLock for kubernetes-upgrade-236000: {Name:mk6b67ca98ff36f470ae98389325f11bd4950dc8 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0327 16:42:59.415562    8873 start.go:364] duration metric: took 26.375µs to acquireMachinesLock for "kubernetes-upgrade-236000"
	I0327 16:42:59.415575    8873 start.go:93] Provisioning new machine with config: &{Name:kubernetes-upgrade-236000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18485/minikube-v1.33.0-1711559712-18485-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1711559786-18485@sha256:2dcab64da240d825290a528fa79ad3c32db45fe5f8be5150468234a7114eff82 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:kubernetes-upgrade-236000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0327 16:42:59.415609    8873 start.go:125] createHost starting for "" (driver="qemu2")
	I0327 16:42:59.424220    8873 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0327 16:42:59.447965    8873 start.go:159] libmachine.API.Create for "kubernetes-upgrade-236000" (driver="qemu2")
	I0327 16:42:59.447995    8873 client.go:168] LocalClient.Create starting
	I0327 16:42:59.448071    8873 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18485-6511/.minikube/certs/ca.pem
	I0327 16:42:59.448106    8873 main.go:141] libmachine: Decoding PEM data...
	I0327 16:42:59.448117    8873 main.go:141] libmachine: Parsing certificate...
	I0327 16:42:59.448159    8873 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18485-6511/.minikube/certs/cert.pem
	I0327 16:42:59.448184    8873 main.go:141] libmachine: Decoding PEM data...
	I0327 16:42:59.448190    8873 main.go:141] libmachine: Parsing certificate...
	I0327 16:42:59.448531    8873 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/18485-6511/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/18485-6511/.minikube/cache/iso/arm64/minikube-v1.33.0-1711559712-18485-arm64.iso...
	I0327 16:42:59.595091    8873 main.go:141] libmachine: Creating SSH key...
	I0327 16:42:59.715506    8873 main.go:141] libmachine: Creating Disk image...
	I0327 16:42:59.715513    8873 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0327 16:42:59.715692    8873 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/18485-6511/.minikube/machines/kubernetes-upgrade-236000/disk.qcow2.raw /Users/jenkins/minikube-integration/18485-6511/.minikube/machines/kubernetes-upgrade-236000/disk.qcow2
	I0327 16:42:59.728777    8873 main.go:141] libmachine: STDOUT: 
	I0327 16:42:59.728799    8873 main.go:141] libmachine: STDERR: 
	I0327 16:42:59.728869    8873 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/18485-6511/.minikube/machines/kubernetes-upgrade-236000/disk.qcow2 +20000M
	I0327 16:42:59.739944    8873 main.go:141] libmachine: STDOUT: Image resized.
	
	I0327 16:42:59.739971    8873 main.go:141] libmachine: STDERR: 
	I0327 16:42:59.739991    8873 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/18485-6511/.minikube/machines/kubernetes-upgrade-236000/disk.qcow2.raw and /Users/jenkins/minikube-integration/18485-6511/.minikube/machines/kubernetes-upgrade-236000/disk.qcow2
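
	The two qemu-img calls above are the entire disk workflow: convert the raw scratch file to qcow2, then grow it by 20000 MB. A quick sanity check of the result, a sketch using the standard qemu-img info subcommand on the path from this log:

	    # "virtual size" should reflect the +20000M resize applied above
	    qemu-img info /Users/jenkins/minikube-integration/18485-6511/.minikube/machines/kubernetes-upgrade-236000/disk.qcow2
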
	I0327 16:42:59.739996    8873 main.go:141] libmachine: Starting QEMU VM...
	I0327 16:42:59.740027    8873 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18485-6511/.minikube/machines/kubernetes-upgrade-236000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18485-6511/.minikube/machines/kubernetes-upgrade-236000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18485-6511/.minikube/machines/kubernetes-upgrade-236000/qemu.pid -device virtio-net-pci,netdev=net0,mac=de:8c:59:6e:8f:a6 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18485-6511/.minikube/machines/kubernetes-upgrade-236000/disk.qcow2
	I0327 16:42:59.741798    8873 main.go:141] libmachine: STDOUT: 
	I0327 16:42:59.741813    8873 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0327 16:42:59.741833    8873 client.go:171] duration metric: took 293.8395ms to LocalClient.Create
	I0327 16:43:01.743997    8873 start.go:128] duration metric: took 2.3284245s to createHost
	I0327 16:43:01.744085    8873 start.go:83] releasing machines lock for "kubernetes-upgrade-236000", held for 2.328580167s
	W0327 16:43:01.744224    8873 start.go:713] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0327 16:43:01.762406    8873 out.go:177] * Deleting "kubernetes-upgrade-236000" in qemu2 ...
	W0327 16:43:01.786547    8873 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0327 16:43:01.786775    8873 start.go:728] Will try again in 5 seconds ...
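
	Both the initial create and the scheduled retry die the same way: socket_vmnet_client exits because nothing is accepting connections on /var/run/socket_vmnet. A minimal host-side triage, assuming socket_vmnet was installed via Homebrew as in minikube's qemu2 driver docs (the socket and client paths are taken from this log; the service invocation is an assumption about this host's setup):

	    # is the daemon holding the socket the client tries to reach?
	    ls -l /var/run/socket_vmnet
	    # (re)start the daemon; it must run as root to create the vmnet interface
	    sudo brew services restart socket_vmnet

	Until that daemon is up, every VM launch in this run that goes through /opt/socket_vmnet/bin/socket_vmnet_client can be expected to fail identically.
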
	I0327 16:43:06.788151    8873 start.go:360] acquireMachinesLock for kubernetes-upgrade-236000: {Name:mk6b67ca98ff36f470ae98389325f11bd4950dc8 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0327 16:43:06.788646    8873 start.go:364] duration metric: took 392.75µs to acquireMachinesLock for "kubernetes-upgrade-236000"
	I0327 16:43:06.788736    8873 start.go:93] Provisioning new machine with config: &{Name:kubernetes-upgrade-236000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18485/minikube-v1.33.0-1711559712-18485-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1711559786-18485@sha256:2dcab64da240d825290a528fa79ad3c32db45fe5f8be5150468234a7114eff82 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:kubernetes-upgrade-236000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0327 16:43:06.788999    8873 start.go:125] createHost starting for "" (driver="qemu2")
	I0327 16:43:06.798669    8873 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0327 16:43:06.850037    8873 start.go:159] libmachine.API.Create for "kubernetes-upgrade-236000" (driver="qemu2")
	I0327 16:43:06.850097    8873 client.go:168] LocalClient.Create starting
	I0327 16:43:06.850233    8873 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18485-6511/.minikube/certs/ca.pem
	I0327 16:43:06.850305    8873 main.go:141] libmachine: Decoding PEM data...
	I0327 16:43:06.850321    8873 main.go:141] libmachine: Parsing certificate...
	I0327 16:43:06.850385    8873 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18485-6511/.minikube/certs/cert.pem
	I0327 16:43:06.850430    8873 main.go:141] libmachine: Decoding PEM data...
	I0327 16:43:06.850443    8873 main.go:141] libmachine: Parsing certificate...
	I0327 16:43:06.851021    8873 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/18485-6511/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/18485-6511/.minikube/cache/iso/arm64/minikube-v1.33.0-1711559712-18485-arm64.iso...
	I0327 16:43:07.001480    8873 main.go:141] libmachine: Creating SSH key...
	I0327 16:43:07.180591    8873 main.go:141] libmachine: Creating Disk image...
	I0327 16:43:07.180610    8873 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0327 16:43:07.180796    8873 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/18485-6511/.minikube/machines/kubernetes-upgrade-236000/disk.qcow2.raw /Users/jenkins/minikube-integration/18485-6511/.minikube/machines/kubernetes-upgrade-236000/disk.qcow2
	I0327 16:43:07.193466    8873 main.go:141] libmachine: STDOUT: 
	I0327 16:43:07.193486    8873 main.go:141] libmachine: STDERR: 
	I0327 16:43:07.193549    8873 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/18485-6511/.minikube/machines/kubernetes-upgrade-236000/disk.qcow2 +20000M
	I0327 16:43:07.204546    8873 main.go:141] libmachine: STDOUT: Image resized.
	
	I0327 16:43:07.204567    8873 main.go:141] libmachine: STDERR: 
	I0327 16:43:07.204579    8873 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/18485-6511/.minikube/machines/kubernetes-upgrade-236000/disk.qcow2.raw and /Users/jenkins/minikube-integration/18485-6511/.minikube/machines/kubernetes-upgrade-236000/disk.qcow2
	I0327 16:43:07.204584    8873 main.go:141] libmachine: Starting QEMU VM...
	I0327 16:43:07.204617    8873 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18485-6511/.minikube/machines/kubernetes-upgrade-236000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18485-6511/.minikube/machines/kubernetes-upgrade-236000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18485-6511/.minikube/machines/kubernetes-upgrade-236000/qemu.pid -device virtio-net-pci,netdev=net0,mac=1e:23:44:ad:79:2f -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18485-6511/.minikube/machines/kubernetes-upgrade-236000/disk.qcow2
	I0327 16:43:07.206369    8873 main.go:141] libmachine: STDOUT: 
	I0327 16:43:07.206385    8873 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0327 16:43:07.206406    8873 client.go:171] duration metric: took 356.314625ms to LocalClient.Create
	I0327 16:43:09.208449    8873 start.go:128] duration metric: took 2.4194935s to createHost
	I0327 16:43:09.208482    8873 start.go:83] releasing machines lock for "kubernetes-upgrade-236000", held for 2.41988675s
	W0327 16:43:09.208654    8873 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p kubernetes-upgrade-236000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p kubernetes-upgrade-236000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0327 16:43:09.220076    8873 out.go:177] 
	W0327 16:43:09.224106    8873 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0327 16:43:09.224124    8873 out.go:239] * 
	* 
	W0327 16:43:09.225436    8873 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0327 16:43:09.236067    8873 out.go:177] 

** /stderr **
version_upgrade_test.go:224: failed to start minikube HEAD with oldest k8s version: out/minikube-darwin-arm64 start -p kubernetes-upgrade-236000 --memory=2200 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=qemu2 : exit status 80
version_upgrade_test.go:227: (dbg) Run:  out/minikube-darwin-arm64 stop -p kubernetes-upgrade-236000
version_upgrade_test.go:227: (dbg) Done: out/minikube-darwin-arm64 stop -p kubernetes-upgrade-236000: (2.143035792s)
version_upgrade_test.go:232: (dbg) Run:  out/minikube-darwin-arm64 -p kubernetes-upgrade-236000 status --format={{.Host}}
version_upgrade_test.go:232: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p kubernetes-upgrade-236000 status --format={{.Host}}: exit status 7 (60.590875ms)

-- stdout --
	Stopped

-- /stdout --
version_upgrade_test.go:234: status error: exit status 7 (may be ok)
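
--format={{.Host}} renders a single field of minikube's status struct through a Go template. When the one-field view is ambiguous, the whole struct can be dumped instead; a hypothetical equivalent using the status command's JSON output mode:

    # shows Host, Kubelet, APIServer and Kubeconfig states together
    out/minikube-darwin-arm64 -p kubernetes-upgrade-236000 status -o json
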
version_upgrade_test.go:243: (dbg) Run:  out/minikube-darwin-arm64 start -p kubernetes-upgrade-236000 --memory=2200 --kubernetes-version=v1.30.0-beta.0 --alsologtostderr -v=1 --driver=qemu2 
version_upgrade_test.go:243: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p kubernetes-upgrade-236000 --memory=2200 --kubernetes-version=v1.30.0-beta.0 --alsologtostderr -v=1 --driver=qemu2 : exit status 80 (5.170942s)

-- stdout --
	* [kubernetes-upgrade-236000] minikube v1.33.0-beta.0 on Darwin 14.3.1 (arm64)
	  - MINIKUBE_LOCATION=18485
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18485-6511/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18485-6511/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting "kubernetes-upgrade-236000" primary control-plane node in "kubernetes-upgrade-236000" cluster
	* Restarting existing qemu2 VM for "kubernetes-upgrade-236000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "kubernetes-upgrade-236000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused

-- /stdout --
** stderr ** 
	I0327 16:43:11.483835    8901 out.go:291] Setting OutFile to fd 1 ...
	I0327 16:43:11.483995    8901 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0327 16:43:11.483999    8901 out.go:304] Setting ErrFile to fd 2...
	I0327 16:43:11.484001    8901 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0327 16:43:11.484129    8901 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18485-6511/.minikube/bin
	I0327 16:43:11.485189    8901 out.go:298] Setting JSON to false
	I0327 16:43:11.501567    8901 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":6162,"bootTime":1711576829,"procs":479,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.3.1","kernelVersion":"23.3.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0327 16:43:11.501631    8901 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0327 16:43:11.506536    8901 out.go:177] * [kubernetes-upgrade-236000] minikube v1.33.0-beta.0 on Darwin 14.3.1 (arm64)
	I0327 16:43:11.512409    8901 out.go:177]   - MINIKUBE_LOCATION=18485
	I0327 16:43:11.516401    8901 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/18485-6511/kubeconfig
	I0327 16:43:11.512464    8901 notify.go:220] Checking for updates...
	I0327 16:43:11.522343    8901 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0327 16:43:11.525419    8901 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0327 16:43:11.528411    8901 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18485-6511/.minikube
	I0327 16:43:11.531429    8901 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0327 16:43:11.534751    8901 config.go:182] Loaded profile config "kubernetes-upgrade-236000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.20.0
	I0327 16:43:11.535021    8901 driver.go:392] Setting default libvirt URI to qemu:///system
	I0327 16:43:11.539348    8901 out.go:177] * Using the qemu2 driver based on existing profile
	I0327 16:43:11.546404    8901 start.go:297] selected driver: qemu2
	I0327 16:43:11.546411    8901 start.go:901] validating driver "qemu2" against &{Name:kubernetes-upgrade-236000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18485/minikube-v1.33.0-1711559712-18485-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1711559786-18485@sha256:2dcab64da240d825290a528fa79ad3c32db45fe5f8be5150468234a7114eff82 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:kubernetes-upgrade-236000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0327 16:43:11.546483    8901 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0327 16:43:11.548801    8901 cni.go:84] Creating CNI manager for ""
	I0327 16:43:11.548820    8901 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0327 16:43:11.548853    8901 start.go:340] cluster config:
	{Name:kubernetes-upgrade-236000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18485/minikube-v1.33.0-1711559712-18485-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1711559786-18485@sha256:2dcab64da240d825290a528fa79ad3c32db45fe5f8be5150468234a7114eff82 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0-beta.0 ClusterName:kubernetes-upgrade-236000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.0-beta.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0327 16:43:11.553398    8901 iso.go:125] acquiring lock: {Name:mk26a7f41e004242919e6d5076ec0f9645f820ad Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0327 16:43:11.560393    8901 out.go:177] * Starting "kubernetes-upgrade-236000" primary control-plane node in "kubernetes-upgrade-236000" cluster
	I0327 16:43:11.564413    8901 preload.go:132] Checking if preload exists for k8s version v1.30.0-beta.0 and runtime docker
	I0327 16:43:11.564434    8901 preload.go:147] Found local preload: /Users/jenkins/minikube-integration/18485-6511/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.0-beta.0-docker-overlay2-arm64.tar.lz4
	I0327 16:43:11.564443    8901 cache.go:56] Caching tarball of preloaded images
	I0327 16:43:11.564508    8901 preload.go:173] Found /Users/jenkins/minikube-integration/18485-6511/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.0-beta.0-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0327 16:43:11.564514    8901 cache.go:59] Finished verifying existence of preloaded tar for v1.30.0-beta.0 on docker
	I0327 16:43:11.564586    8901 profile.go:142] Saving config to /Users/jenkins/minikube-integration/18485-6511/.minikube/profiles/kubernetes-upgrade-236000/config.json ...
	I0327 16:43:11.565064    8901 start.go:360] acquireMachinesLock for kubernetes-upgrade-236000: {Name:mk6b67ca98ff36f470ae98389325f11bd4950dc8 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0327 16:43:11.565091    8901 start.go:364] duration metric: took 21.625µs to acquireMachinesLock for "kubernetes-upgrade-236000"
	I0327 16:43:11.565100    8901 start.go:96] Skipping create...Using existing machine configuration
	I0327 16:43:11.565107    8901 fix.go:54] fixHost starting: 
	I0327 16:43:11.565223    8901 fix.go:112] recreateIfNeeded on kubernetes-upgrade-236000: state=Stopped err=<nil>
	W0327 16:43:11.565231    8901 fix.go:138] unexpected machine state, will restart: <nil>
	I0327 16:43:11.573409    8901 out.go:177] * Restarting existing qemu2 VM for "kubernetes-upgrade-236000" ...
	I0327 16:43:11.577390    8901 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18485-6511/.minikube/machines/kubernetes-upgrade-236000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18485-6511/.minikube/machines/kubernetes-upgrade-236000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18485-6511/.minikube/machines/kubernetes-upgrade-236000/qemu.pid -device virtio-net-pci,netdev=net0,mac=1e:23:44:ad:79:2f -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18485-6511/.minikube/machines/kubernetes-upgrade-236000/disk.qcow2
	I0327 16:43:11.579506    8901 main.go:141] libmachine: STDOUT: 
	I0327 16:43:11.579527    8901 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0327 16:43:11.579557    8901 fix.go:56] duration metric: took 14.450333ms for fixHost
	I0327 16:43:11.579563    8901 start.go:83] releasing machines lock for "kubernetes-upgrade-236000", held for 14.467625ms
	W0327 16:43:11.579569    8901 start.go:713] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0327 16:43:11.579600    8901 out.go:239] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0327 16:43:11.579606    8901 start.go:728] Will try again in 5 seconds ...
	I0327 16:43:16.579604    8901 start.go:360] acquireMachinesLock for kubernetes-upgrade-236000: {Name:mk6b67ca98ff36f470ae98389325f11bd4950dc8 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0327 16:43:16.579691    8901 start.go:364] duration metric: took 67.208µs to acquireMachinesLock for "kubernetes-upgrade-236000"
	I0327 16:43:16.579709    8901 start.go:96] Skipping create...Using existing machine configuration
	I0327 16:43:16.579714    8901 fix.go:54] fixHost starting: 
	I0327 16:43:16.579851    8901 fix.go:112] recreateIfNeeded on kubernetes-upgrade-236000: state=Stopped err=<nil>
	W0327 16:43:16.579856    8901 fix.go:138] unexpected machine state, will restart: <nil>
	I0327 16:43:16.584007    8901 out.go:177] * Restarting existing qemu2 VM for "kubernetes-upgrade-236000" ...
	I0327 16:43:16.590060    8901 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18485-6511/.minikube/machines/kubernetes-upgrade-236000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18485-6511/.minikube/machines/kubernetes-upgrade-236000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18485-6511/.minikube/machines/kubernetes-upgrade-236000/qemu.pid -device virtio-net-pci,netdev=net0,mac=1e:23:44:ad:79:2f -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18485-6511/.minikube/machines/kubernetes-upgrade-236000/disk.qcow2
	I0327 16:43:16.592410    8901 main.go:141] libmachine: STDOUT: 
	I0327 16:43:16.592429    8901 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0327 16:43:16.592449    8901 fix.go:56] duration metric: took 12.735167ms for fixHost
	I0327 16:43:16.592454    8901 start.go:83] releasing machines lock for "kubernetes-upgrade-236000", held for 12.756833ms
	W0327 16:43:16.592498    8901 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p kubernetes-upgrade-236000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p kubernetes-upgrade-236000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0327 16:43:16.599995    8901 out.go:177] 
	W0327 16:43:16.603081    8901 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0327 16:43:16.603101    8901 out.go:239] * 
	* 
	W0327 16:43:16.603627    8901 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0327 16:43:16.614013    8901 out.go:177] 

** /stderr **
version_upgrade_test.go:245: failed to upgrade with newest k8s version. args: out/minikube-darwin-arm64 start -p kubernetes-upgrade-236000 --memory=2200 --kubernetes-version=v1.30.0-beta.0 --alsologtostderr -v=1 --driver=qemu2  : exit status 80
version_upgrade_test.go:248: (dbg) Run:  kubectl --context kubernetes-upgrade-236000 version --output=json
version_upgrade_test.go:248: (dbg) Non-zero exit: kubectl --context kubernetes-upgrade-236000 version --output=json: exit status 1 (31.799292ms)

** stderr ** 
	error: context "kubernetes-upgrade-236000" does not exist

** /stderr **
version_upgrade_test.go:250: error running kubectl: exit status 1
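
The kubectl failure follows directly from the failed start: minikube only writes a kubeconfig context once a cluster actually comes up, so --context kubernetes-upgrade-236000 has nothing to resolve. A standard way to confirm:

    # the kubernetes-upgrade-236000 entry will be absent, so any --context call exits 1
    kubectl config get-contexts
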
panic.go:626: *** TestKubernetesUpgrade FAILED at 2024-03-27 16:43:16.655562 -0700 PDT m=+1018.145529501
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p kubernetes-upgrade-236000 -n kubernetes-upgrade-236000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p kubernetes-upgrade-236000 -n kubernetes-upgrade-236000: exit status 7 (34.462041ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "kubernetes-upgrade-236000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:175: Cleaning up "kubernetes-upgrade-236000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p kubernetes-upgrade-236000
--- FAIL: TestKubernetesUpgrade (17.50s)

TestHyperkitDriverSkipUpgrade/upgrade-v1.11.0-to-current (1.52s)

=== RUN   TestHyperkitDriverSkipUpgrade/upgrade-v1.11.0-to-current
* minikube v1.33.0-beta.0 on darwin (arm64)
- MINIKUBE_LOCATION=18485
- KUBECONFIG=/Users/jenkins/minikube-integration/18485-6511/kubeconfig
- MINIKUBE_BIN=out/minikube-darwin-arm64
- MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
- MINIKUBE_FORCE_SYSTEMD=
- MINIKUBE_HOME=/var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestHyperkitDriverSkipUpgradeupgrade-v1.11.0-to-current1179977409/001
* Using the hyperkit driver based on user configuration

X Exiting due to DRV_UNSUPPORTED_OS: The driver 'hyperkit' is not supported on darwin/arm64

driver_install_or_update_test.go:209: failed to run minikube. got: exit status 56
--- FAIL: TestHyperkitDriverSkipUpgrade/upgrade-v1.11.0-to-current (1.52s)

TestHyperkitDriverSkipUpgrade/upgrade-v1.2.0-to-current (1.18s)

=== RUN   TestHyperkitDriverSkipUpgrade/upgrade-v1.2.0-to-current
* minikube v1.33.0-beta.0 on darwin (arm64)
- MINIKUBE_LOCATION=18485
- KUBECONFIG=/Users/jenkins/minikube-integration/18485-6511/kubeconfig
- MINIKUBE_BIN=out/minikube-darwin-arm64
- MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
- MINIKUBE_FORCE_SYSTEMD=
- MINIKUBE_HOME=/var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestHyperkitDriverSkipUpgradeupgrade-v1.2.0-to-current2395226508/001
* Using the hyperkit driver based on user configuration

X Exiting due to DRV_UNSUPPORTED_OS: The driver 'hyperkit' is not supported on darwin/arm64

driver_install_or_update_test.go:209: failed to run minikube. got: exit status 56
--- FAIL: TestHyperkitDriverSkipUpgrade/upgrade-v1.2.0-to-current (1.18s)
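
Both hyperkit skip-upgrade subtests fail for the same structural reason rather than a regression: hyperkit is an Intel-only hypervisor and this agent is Apple silicon, so minikube refuses the driver before attempting anything. The condition is reproducible on the host:

    # prints arm64 on this agent; hyperkit needs x86_64, hence DRV_UNSUPPORTED_OS
    uname -m
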
TestStoppedBinaryUpgrade/Upgrade (586.77s)

=== RUN   TestStoppedBinaryUpgrade/Upgrade
version_upgrade_test.go:183: (dbg) Run:  /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/minikube-v1.26.0.20164522 start -p stopped-upgrade-017000 --memory=2200 --vm-driver=qemu2 
version_upgrade_test.go:183: (dbg) Done: /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/minikube-v1.26.0.20164522 start -p stopped-upgrade-017000 --memory=2200 --vm-driver=qemu2 : (44.567628s)
version_upgrade_test.go:192: (dbg) Run:  /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/minikube-v1.26.0.20164522 -p stopped-upgrade-017000 stop
version_upgrade_test.go:192: (dbg) Done: /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/minikube-v1.26.0.20164522 -p stopped-upgrade-017000 stop: (12.11646775s)
version_upgrade_test.go:198: (dbg) Run:  out/minikube-darwin-arm64 start -p stopped-upgrade-017000 --memory=2200 --alsologtostderr -v=1 --driver=qemu2 
version_upgrade_test.go:198: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p stopped-upgrade-017000 --memory=2200 --alsologtostderr -v=1 --driver=qemu2 : exit status 80 (8m50.031441625s)

-- stdout --
	* [stopped-upgrade-017000] minikube v1.33.0-beta.0 on Darwin 14.3.1 (arm64)
	  - MINIKUBE_LOCATION=18485
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18485-6511/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18485-6511/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Kubernetes 1.29.3 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.29.3
	* Using the qemu2 driver based on existing profile
	* Starting "stopped-upgrade-017000" primary control-plane node in "stopped-upgrade-017000" cluster
	* Restarting existing qemu2 VM for "stopped-upgrade-017000" ...
	* Preparing Kubernetes v1.24.1 on Docker 20.10.16 ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	  - Configuring RBAC rules ...
	* Configuring bridge CNI (Container Networking Interface) ...
	* Verifying Kubernetes components...
	  - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	* Enabled addons: storage-provisioner

-- /stdout --
** stderr ** 
	I0327 16:44:18.451832    8959 out.go:291] Setting OutFile to fd 1 ...
	I0327 16:44:18.452013    8959 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0327 16:44:18.452017    8959 out.go:304] Setting ErrFile to fd 2...
	I0327 16:44:18.452020    8959 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0327 16:44:18.452176    8959 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18485-6511/.minikube/bin
	I0327 16:44:18.453332    8959 out.go:298] Setting JSON to false
	I0327 16:44:18.471952    8959 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":6229,"bootTime":1711576829,"procs":481,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.3.1","kernelVersion":"23.3.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0327 16:44:18.472038    8959 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0327 16:44:18.477052    8959 out.go:177] * [stopped-upgrade-017000] minikube v1.33.0-beta.0 on Darwin 14.3.1 (arm64)
	I0327 16:44:18.485086    8959 out.go:177]   - MINIKUBE_LOCATION=18485
	I0327 16:44:18.489154    8959 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/18485-6511/kubeconfig
	I0327 16:44:18.485132    8959 notify.go:220] Checking for updates...
	I0327 16:44:18.494977    8959 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0327 16:44:18.498085    8959 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0327 16:44:18.499494    8959 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18485-6511/.minikube
	I0327 16:44:18.503010    8959 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0327 16:44:18.506321    8959 config.go:182] Loaded profile config "stopped-upgrade-017000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0327 16:44:18.510052    8959 out.go:177] * Kubernetes 1.29.3 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.29.3
	I0327 16:44:18.513013    8959 driver.go:392] Setting default libvirt URI to qemu:///system
	I0327 16:44:18.517063    8959 out.go:177] * Using the qemu2 driver based on existing profile
	I0327 16:44:18.523976    8959 start.go:297] selected driver: qemu2
	I0327 16:44:18.523983    8959 start.go:901] validating driver "qemu2" against &{Name:stopped-upgrade-017000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.26.0-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:51421 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.1 ClusterName:stopped-upgrade-017000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:0s}
	I0327 16:44:18.524033    8959 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0327 16:44:18.526734    8959 cni.go:84] Creating CNI manager for ""
	I0327 16:44:18.526756    8959 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0327 16:44:18.526788    8959 start.go:340] cluster config:
	{Name:stopped-upgrade-017000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.26.0-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:51421 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.1 ClusterName:stopped-upgrade-017000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:0s}
	I0327 16:44:18.526856    8959 iso.go:125] acquiring lock: {Name:mk26a7f41e004242919e6d5076ec0f9645f820ad Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0327 16:44:18.535016    8959 out.go:177] * Starting "stopped-upgrade-017000" primary control-plane node in "stopped-upgrade-017000" cluster
	I0327 16:44:18.539024    8959 preload.go:132] Checking if preload exists for k8s version v1.24.1 and runtime docker
	I0327 16:44:18.539040    8959 preload.go:147] Found local preload: /Users/jenkins/minikube-integration/18485-6511/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.1-docker-overlay2-arm64.tar.lz4
	I0327 16:44:18.539053    8959 cache.go:56] Caching tarball of preloaded images
	I0327 16:44:18.539107    8959 preload.go:173] Found /Users/jenkins/minikube-integration/18485-6511/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0327 16:44:18.539115    8959 cache.go:59] Finished verifying existence of preloaded tar for v1.24.1 on docker
	I0327 16:44:18.539175    8959 profile.go:142] Saving config to /Users/jenkins/minikube-integration/18485-6511/.minikube/profiles/stopped-upgrade-017000/config.json ...
	I0327 16:44:18.539744    8959 start.go:360] acquireMachinesLock for stopped-upgrade-017000: {Name:mk6b67ca98ff36f470ae98389325f11bd4950dc8 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0327 16:44:18.539775    8959 start.go:364] duration metric: took 23.584µs to acquireMachinesLock for "stopped-upgrade-017000"
	I0327 16:44:18.539787    8959 start.go:96] Skipping create...Using existing machine configuration
	I0327 16:44:18.539793    8959 fix.go:54] fixHost starting: 
	I0327 16:44:18.539913    8959 fix.go:112] recreateIfNeeded on stopped-upgrade-017000: state=Stopped err=<nil>
	W0327 16:44:18.539922    8959 fix.go:138] unexpected machine state, will restart: <nil>
	I0327 16:44:18.548044    8959 out.go:177] * Restarting existing qemu2 VM for "stopped-upgrade-017000" ...
	I0327 16:44:18.552079    8959 main.go:141] libmachine: executing: qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/Cellar/qemu/8.2.1/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18485-6511/.minikube/machines/stopped-upgrade-017000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18485-6511/.minikube/machines/stopped-upgrade-017000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18485-6511/.minikube/machines/stopped-upgrade-017000/qemu.pid -nic user,model=virtio,hostfwd=tcp::51386-:22,hostfwd=tcp::51387-:2376,hostname=stopped-upgrade-017000 -daemonize /Users/jenkins/minikube-integration/18485-6511/.minikube/machines/stopped-upgrade-017000/disk.qcow2
	I0327 16:44:18.600996    8959 main.go:141] libmachine: STDOUT: 
	I0327 16:44:18.601025    8959 main.go:141] libmachine: STDERR: 
	I0327 16:44:18.601032    8959 main.go:141] libmachine: Waiting for VM to start (ssh -p 51386 docker@127.0.0.1)...
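
	Unlike every socket_vmnet launch earlier in this report, this VM boots: the profile was created by minikube v1.26.0 with QEMU user-mode networking (-nic user,...,hostfwd=tcp::51386-:22), which needs no host daemon and forwards the guest's sshd to localhost. An illustrative reachability probe, ports taken from this log:

	    # user-mode hostfwd maps guest port 22 to localhost:51386
	    nc -z 127.0.0.1 51386 && echo "guest sshd reachable"
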
	I0327 16:44:38.833952    8959 profile.go:142] Saving config to /Users/jenkins/minikube-integration/18485-6511/.minikube/profiles/stopped-upgrade-017000/config.json ...
	I0327 16:44:38.834872    8959 machine.go:94] provisionDockerMachine start ...
	I0327 16:44:38.835096    8959 main.go:141] libmachine: Using SSH client type: native
	I0327 16:44:38.835514    8959 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1028a5bf0] 0x1028a8450 <nil>  [] 0s} localhost 51386 <nil> <nil>}
	I0327 16:44:38.835528    8959 main.go:141] libmachine: About to run SSH command:
	hostname
	I0327 16:44:38.919638    8959 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0327 16:44:38.919684    8959 buildroot.go:166] provisioning hostname "stopped-upgrade-017000"
	I0327 16:44:38.919818    8959 main.go:141] libmachine: Using SSH client type: native
	I0327 16:44:38.920051    8959 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1028a5bf0] 0x1028a8450 <nil>  [] 0s} localhost 51386 <nil> <nil>}
	I0327 16:44:38.920062    8959 main.go:141] libmachine: About to run SSH command:
	sudo hostname stopped-upgrade-017000 && echo "stopped-upgrade-017000" | sudo tee /etc/hostname
	I0327 16:44:38.993932    8959 main.go:141] libmachine: SSH cmd err, output: <nil>: stopped-upgrade-017000
	
	I0327 16:44:38.994000    8959 main.go:141] libmachine: Using SSH client type: native
	I0327 16:44:38.994145    8959 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1028a5bf0] 0x1028a8450 <nil>  [] 0s} localhost 51386 <nil> <nil>}
	I0327 16:44:38.994159    8959 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sstopped-upgrade-017000' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 stopped-upgrade-017000/g' /etc/hosts;
				else 
					echo '127.0.1.1 stopped-upgrade-017000' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0327 16:44:39.062354    8959 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0327 16:44:39.062366    8959 buildroot.go:172] set auth options {CertDir:/Users/jenkins/minikube-integration/18485-6511/.minikube CaCertPath:/Users/jenkins/minikube-integration/18485-6511/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/18485-6511/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/18485-6511/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/18485-6511/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/18485-6511/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/18485-6511/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/18485-6511/.minikube}
	I0327 16:44:39.062393    8959 buildroot.go:174] setting up certificates
	I0327 16:44:39.062401    8959 provision.go:84] configureAuth start
	I0327 16:44:39.062410    8959 provision.go:143] copyHostCerts
	I0327 16:44:39.062491    8959 exec_runner.go:144] found /Users/jenkins/minikube-integration/18485-6511/.minikube/cert.pem, removing ...
	I0327 16:44:39.062499    8959 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/18485-6511/.minikube/cert.pem
	I0327 16:44:39.062632    8959 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/18485-6511/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/18485-6511/.minikube/cert.pem (1123 bytes)
	I0327 16:44:39.062871    8959 exec_runner.go:144] found /Users/jenkins/minikube-integration/18485-6511/.minikube/key.pem, removing ...
	I0327 16:44:39.062880    8959 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/18485-6511/.minikube/key.pem
	I0327 16:44:39.062983    8959 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/18485-6511/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/18485-6511/.minikube/key.pem (1675 bytes)
	I0327 16:44:39.063141    8959 exec_runner.go:144] found /Users/jenkins/minikube-integration/18485-6511/.minikube/ca.pem, removing ...
	I0327 16:44:39.063147    8959 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/18485-6511/.minikube/ca.pem
	I0327 16:44:39.063222    8959 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/18485-6511/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/18485-6511/.minikube/ca.pem (1078 bytes)
	I0327 16:44:39.063343    8959 provision.go:117] generating server cert: /Users/jenkins/minikube-integration/18485-6511/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/18485-6511/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/18485-6511/.minikube/certs/ca-key.pem org=jenkins.stopped-upgrade-017000 san=[127.0.0.1 localhost minikube stopped-upgrade-017000]
	I0327 16:44:39.333840    8959 provision.go:177] copyRemoteCerts
	I0327 16:44:39.333893    8959 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0327 16:44:39.333901    8959 sshutil.go:53] new ssh client: &{IP:localhost Port:51386 SSHKeyPath:/Users/jenkins/minikube-integration/18485-6511/.minikube/machines/stopped-upgrade-017000/id_rsa Username:docker}
	I0327 16:44:39.368601    8959 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18485-6511/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0327 16:44:39.375203    8959 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18485-6511/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0327 16:44:39.381979    8959 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18485-6511/.minikube/machines/server.pem --> /etc/docker/server.pem (1224 bytes)
	I0327 16:44:39.389185    8959 provision.go:87] duration metric: took 326.785292ms to configureAuth
	I0327 16:44:39.389195    8959 buildroot.go:189] setting minikube options for container-runtime
	I0327 16:44:39.389290    8959 config.go:182] Loaded profile config "stopped-upgrade-017000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0327 16:44:39.389324    8959 main.go:141] libmachine: Using SSH client type: native
	I0327 16:44:39.389412    8959 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1028a5bf0] 0x1028a8450 <nil>  [] 0s} localhost 51386 <nil> <nil>}
	I0327 16:44:39.389417    8959 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0327 16:44:39.447062    8959 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0327 16:44:39.447069    8959 buildroot.go:70] root file system type: tmpfs
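
The probe above is just `df --output=fstype / | tail -n 1` run over SSH; on Linux the same answer is available from statfs(2). A Linux-only sketch, with TMPFS_MAGIC assumed from linux/magic.h:

package main

import (
	"fmt"
	"syscall"
)

const tmpfsMagic = 0x01021994 // TMPFS_MAGIC from linux/magic.h

func main() {
	var st syscall.Statfs_t
	if err := syscall.Statfs("/", &st); err != nil {
		panic(err)
	}
	// st.Type carries the filesystem magic number on Linux.
	fmt.Println("root is tmpfs:", st.Type == tmpfsMagic)
}
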
	I0327 16:44:39.447132    8959 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0327 16:44:39.447172    8959 main.go:141] libmachine: Using SSH client type: native
	I0327 16:44:39.447267    8959 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1028a5bf0] 0x1028a8450 <nil>  [] 0s} localhost 51386 <nil> <nil>}
	I0327 16:44:39.447302    8959 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=qemu2 --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0327 16:44:39.510334    8959 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=qemu2 --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0327 16:44:39.510380    8959 main.go:141] libmachine: Using SSH client type: native
	I0327 16:44:39.510478    8959 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1028a5bf0] 0x1028a8450 <nil>  [] 0s} localhost 51386 <nil> <nil>}
	I0327 16:44:39.510486    8959 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0327 16:44:39.862254    8959 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
	I0327 16:44:39.862269    8959 machine.go:97] duration metric: took 1.027412291s to provisionDockerMachine
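
The update that just completed is gated on a diff: `diff -u` exits non-zero when the installed docker.service differs from the freshly rendered .new file (or, as the output shows, does not exist yet), and only in that case is the new file moved into place and the daemon reloaded, enabled, and restarted. A sketch of that idiom (installUnit is a hypothetical helper, shelling out the way the log does):

package main

import (
	"os"
	"os/exec"
)

// installUnit replaces the live unit only when the candidate differs,
// then reloads, enables, and restarts docker -- the `diff || { ... }` idiom.
func installUnit(current, candidate string) error {
	// diff exits non-zero when the files differ or current is missing;
	// that is exactly the case where the candidate must be installed.
	if exec.Command("diff", "-u", current, candidate).Run() == nil {
		return nil // identical: nothing to do
	}
	if err := os.Rename(candidate, current); err != nil {
		return err
	}
	for _, args := range [][]string{
		{"systemctl", "-f", "daemon-reload"},
		{"systemctl", "-f", "enable", "docker"},
		{"systemctl", "-f", "restart", "docker"},
	} {
		if err := exec.Command(args[0], args[1:]...).Run(); err != nil {
			return err
		}
	}
	return nil
}

func main() {
	_ = installUnit("/lib/systemd/system/docker.service",
		"/lib/systemd/system/docker.service.new")
}
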
	I0327 16:44:39.862276    8959 start.go:293] postStartSetup for "stopped-upgrade-017000" (driver="qemu2")
	I0327 16:44:39.862283    8959 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0327 16:44:39.862343    8959 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0327 16:44:39.862353    8959 sshutil.go:53] new ssh client: &{IP:localhost Port:51386 SSHKeyPath:/Users/jenkins/minikube-integration/18485-6511/.minikube/machines/stopped-upgrade-017000/id_rsa Username:docker}
	I0327 16:44:39.894697    8959 ssh_runner.go:195] Run: cat /etc/os-release
	I0327 16:44:39.896555    8959 info.go:137] Remote host: Buildroot 2021.02.12
	I0327 16:44:39.896563    8959 filesync.go:126] Scanning /Users/jenkins/minikube-integration/18485-6511/.minikube/addons for local assets ...
	I0327 16:44:39.896637    8959 filesync.go:126] Scanning /Users/jenkins/minikube-integration/18485-6511/.minikube/files for local assets ...
	I0327 16:44:39.896749    8959 filesync.go:149] local asset: /Users/jenkins/minikube-integration/18485-6511/.minikube/files/etc/ssl/certs/69262.pem -> 69262.pem in /etc/ssl/certs
	I0327 16:44:39.896873    8959 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0327 16:44:39.899601    8959 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18485-6511/.minikube/files/etc/ssl/certs/69262.pem --> /etc/ssl/certs/69262.pem (1708 bytes)
	I0327 16:44:39.907524    8959 start.go:296] duration metric: took 45.240625ms for postStartSetup
	I0327 16:44:39.907544    8959 fix.go:56] duration metric: took 21.368390167s for fixHost
	I0327 16:44:39.907613    8959 main.go:141] libmachine: Using SSH client type: native
	I0327 16:44:39.907719    8959 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1028a5bf0] 0x1028a8450 <nil>  [] 0s} localhost 51386 <nil> <nil>}
	I0327 16:44:39.907725    8959 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0327 16:44:39.965053    8959 main.go:141] libmachine: SSH cmd err, output: <nil>: 1711583080.295743379
	
	I0327 16:44:39.965064    8959 fix.go:216] guest clock: 1711583080.295743379
	I0327 16:44:39.965068    8959 fix.go:229] Guest: 2024-03-27 16:44:40.295743379 -0700 PDT Remote: 2024-03-27 16:44:39.907546 -0700 PDT m=+21.490555709 (delta=388.197379ms)
	I0327 16:44:39.965081    8959 fix.go:200] guest clock delta is within tolerance: 388.197379ms
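
fix.go compares the guest's `date +%s.%N` reading against the host clock and accepts the skew when it is under a tolerance. A self-contained sketch of that check; only the 388ms delta comes from the log, and the 2s tolerance is an assumption for illustration:

package main

import (
	"fmt"
	"time"
)

// clockDeltaOK reports the absolute guest/host skew and whether it falls
// inside the allowed tolerance.
func clockDeltaOK(guest, host time.Time, tolerance time.Duration) (time.Duration, bool) {
	delta := guest.Sub(host)
	if delta < 0 {
		delta = -delta
	}
	return delta, delta <= tolerance
}

func main() {
	host := time.Now()
	guest := host.Add(388197379 * time.Nanosecond) // the delta seen in the log
	d, ok := clockDeltaOK(guest, host, 2*time.Second)
	fmt.Printf("delta=%v within tolerance: %v\n", d, ok)
}
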
	I0327 16:44:39.965083    8959 start.go:83] releasing machines lock for "stopped-upgrade-017000", held for 21.425942916s
	I0327 16:44:39.965148    8959 ssh_runner.go:195] Run: cat /version.json
	I0327 16:44:39.965158    8959 sshutil.go:53] new ssh client: &{IP:localhost Port:51386 SSHKeyPath:/Users/jenkins/minikube-integration/18485-6511/.minikube/machines/stopped-upgrade-017000/id_rsa Username:docker}
	I0327 16:44:39.965148    8959 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0327 16:44:39.965192    8959 sshutil.go:53] new ssh client: &{IP:localhost Port:51386 SSHKeyPath:/Users/jenkins/minikube-integration/18485-6511/.minikube/machines/stopped-upgrade-017000/id_rsa Username:docker}
	W0327 16:44:39.965746    8959 sshutil.go:64] dial failure (will retry): dial tcp [::1]:51386: connect: connection refused
	I0327 16:44:39.965768    8959 retry.go:31] will retry after 180.184309ms: dial tcp [::1]:51386: connect: connection refused
	W0327 16:44:40.186285    8959 start.go:419] Unable to open version.json: cat /version.json: Process exited with status 1
	stdout:
	
	stderr:
	cat: /version.json: No such file or directory
	I0327 16:44:40.186384    8959 ssh_runner.go:195] Run: systemctl --version
	I0327 16:44:40.189720    8959 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0327 16:44:40.192394    8959 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0327 16:44:40.192450    8959 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *bridge* -not -name *podman* -not -name *.mk_disabled -printf "%p, " -exec sh -c "sudo sed -i -r -e '/"dst": ".*:.*"/d' -e 's|^(.*)"dst": (.*)[,*]$|\1"dst": \2|g' -e '/"subnet": ".*:.*"/d' -e 's|^(.*)"subnet": ".*"(.*)[,*]$|\1"subnet": "10.244.0.0/16"\2|g' {}" ;
	I0327 16:44:40.197009    8959 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *podman* -not -name *.mk_disabled -printf "%p, " -exec sh -c "sudo sed -i -r -e 's|^(.*)"subnet": ".*"(.*)$|\1"subnet": "10.244.0.0/16"\2|g' -e 's|^(.*)"gateway": ".*"(.*)$|\1"gateway": "10.244.0.1"\2|g' {}" ;
	I0327 16:44:40.203606    8959 cni.go:308] configured [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
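
The find/sed invocations above rewrite any bridge or podman CNI config so its subnet becomes the pod CIDR 10.244.0.0/16 (and its gateway 10.244.0.1). The same edit expressed structurally rather than textually, as a sketch over a made-up conflist:

package main

import (
	"encoding/json"
	"fmt"
)

func main() {
	// A made-up bridge conflist standing in for 87-podman-bridge.conflist.
	raw := []byte(`{"plugins":[{"type":"bridge","ipam":{"subnet":"10.88.0.0/16","gateway":"10.88.0.1"}}]}`)
	var conf map[string]any
	if err := json.Unmarshal(raw, &conf); err != nil {
		panic(err)
	}
	for _, p := range conf["plugins"].([]any) {
		if ipam, ok := p.(map[string]any)["ipam"].(map[string]any); ok {
			ipam["subnet"] = "10.244.0.0/16" // pod CIDR, as in the sed above
			ipam["gateway"] = "10.244.0.1"
		}
	}
	out, _ := json.Marshal(conf)
	fmt.Println(string(out))
}
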
	I0327 16:44:40.203623    8959 start.go:494] detecting cgroup driver to use...
	I0327 16:44:40.203713    8959 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0327 16:44:40.212835    8959 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.7"|' /etc/containerd/config.toml"
	I0327 16:44:40.216677    8959 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0327 16:44:40.220053    8959 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0327 16:44:40.220081    8959 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0327 16:44:40.223092    8959 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0327 16:44:40.225931    8959 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0327 16:44:40.229062    8959 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0327 16:44:40.232241    8959 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0327 16:44:40.235128    8959 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0327 16:44:40.238023    8959 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0327 16:44:40.241464    8959 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0327 16:44:40.244986    8959 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0327 16:44:40.247719    8959 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0327 16:44:40.250222    8959 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0327 16:44:40.332312    8959 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0327 16:44:40.339420    8959 start.go:494] detecting cgroup driver to use...
	I0327 16:44:40.339498    8959 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0327 16:44:40.344426    8959 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0327 16:44:40.349467    8959 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0327 16:44:40.360574    8959 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0327 16:44:40.365142    8959 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0327 16:44:40.369896    8959 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0327 16:44:40.427776    8959 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0327 16:44:40.432517    8959 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0327 16:44:40.438052    8959 ssh_runner.go:195] Run: which cri-dockerd
	I0327 16:44:40.439356    8959 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0327 16:44:40.442084    8959 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I0327 16:44:40.447408    8959 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0327 16:44:40.517399    8959 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0327 16:44:40.583008    8959 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0327 16:44:40.583069    8959 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0327 16:44:40.588469    8959 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0327 16:44:40.654887    8959 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0327 16:44:41.795098    8959 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1.140220167s)
	I0327 16:44:41.795161    8959 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I0327 16:44:41.799597    8959 ssh_runner.go:195] Run: sudo systemctl stop cri-docker.socket
	I0327 16:44:41.806031    8959 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0327 16:44:41.811023    8959 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0327 16:44:41.880272    8959 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0327 16:44:41.954137    8959 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0327 16:44:42.035292    8959 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0327 16:44:42.040829    8959 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0327 16:44:42.045200    8959 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0327 16:44:42.123882    8959 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I0327 16:44:42.162871    8959 start.go:541] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0327 16:44:42.162957    8959 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0327 16:44:42.166584    8959 start.go:562] Will wait 60s for crictl version
	I0327 16:44:42.166638    8959 ssh_runner.go:195] Run: which crictl
	I0327 16:44:42.167879    8959 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0327 16:44:42.182888    8959 start.go:578] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  20.10.16
	RuntimeApiVersion:  1.41.0
	I0327 16:44:42.182969    8959 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0327 16:44:42.199767    8959 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0327 16:44:42.220078    8959 out.go:204] * Preparing Kubernetes v1.24.1 on Docker 20.10.16 ...
	I0327 16:44:42.220190    8959 ssh_runner.go:195] Run: grep 10.0.2.2	host.minikube.internal$ /etc/hosts
	I0327 16:44:42.221432    8959 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "10.0.2.2	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0327 16:44:42.225415    8959 kubeadm.go:877] updating cluster {Name:stopped-upgrade-017000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18485/minikube-v1.33.0-1711559712-18485-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:51421 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.1 ClusterName:stopped-upgrade-017000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:0s} ...
	I0327 16:44:42.225468    8959 preload.go:132] Checking if preload exists for k8s version v1.24.1 and runtime docker
	I0327 16:44:42.225510    8959 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0327 16:44:42.237521    8959 docker.go:685] Got preloaded images: -- stdout --
	k8s.gcr.io/kube-apiserver:v1.24.1
	k8s.gcr.io/kube-proxy:v1.24.1
	k8s.gcr.io/kube-controller-manager:v1.24.1
	k8s.gcr.io/kube-scheduler:v1.24.1
	k8s.gcr.io/etcd:3.5.3-0
	k8s.gcr.io/pause:3.7
	k8s.gcr.io/coredns/coredns:v1.8.6
	k8s.gcr.io/pause:3.6
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0327 16:44:42.237538    8959 docker.go:691] registry.k8s.io/kube-apiserver:v1.24.1 wasn't preloaded
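
The stdout block above lists the preloaded images under the old k8s.gcr.io registry, while the check looks for registry.k8s.io names, so the preload is reported as missing and minikube falls back to loading cached images individually. A hypothetical normalizing comparison that would treat the renamed registry as equivalent (illustrative only, not what minikube does here):

package main

import (
	"fmt"
	"strings"
)

// normalizeRegistry maps the pre-rename registry host onto the current one
// so that k8s.gcr.io/foo and registry.k8s.io/foo compare equal.
func normalizeRegistry(ref string) string {
	return strings.Replace(ref, "k8s.gcr.io/", "registry.k8s.io/", 1)
}

func preloaded(want string, have []string) bool {
	for _, h := range have {
		if normalizeRegistry(h) == normalizeRegistry(want) {
			return true
		}
	}
	return false
}

func main() {
	have := []string{"k8s.gcr.io/kube-apiserver:v1.24.1"} // from the stdout above
	fmt.Println(preloaded("registry.k8s.io/kube-apiserver:v1.24.1", have)) // true
}
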
	I0327 16:44:42.237594    8959 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0327 16:44:42.241353    8959 ssh_runner.go:195] Run: which lz4
	I0327 16:44:42.242704    8959 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0327 16:44:42.243947    8959 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0327 16:44:42.243957    8959 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18485-6511/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.1-docker-overlay2-arm64.tar.lz4 --> /preloaded.tar.lz4 (359514331 bytes)
	I0327 16:44:42.976490    8959 docker.go:649] duration metric: took 733.839208ms to copy over tarball
	I0327 16:44:42.976550    8959 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0327 16:44:44.171809    8959 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (1.195280458s)
	I0327 16:44:44.171822    8959 ssh_runner.go:146] rm: /preloaded.tar.lz4
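
The extraction step shells out to GNU tar with lz4 decompression and xattr preservation, exactly as logged. The same invocation driven from Go, as a sketch (assumes tar, lz4, and the preload file are present on the machine running it):

package main

import (
	"os"
	"os/exec"
)

func main() {
	// Mirror the logged command: decompress through lz4 and preserve the
	// security.capability xattrs while unpacking into /var.
	cmd := exec.Command("sudo", "tar",
		"--xattrs", "--xattrs-include", "security.capability",
		"-I", "lz4", "-C", "/var", "-xf", "/preloaded.tar.lz4")
	cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
	if err := cmd.Run(); err != nil {
		os.Exit(1)
	}
}
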
	I0327 16:44:44.187305    8959 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0327 16:44:44.190077    8959 ssh_runner.go:362] scp memory --> /var/lib/docker/image/overlay2/repositories.json (2523 bytes)
	I0327 16:44:44.195267    8959 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0327 16:44:44.271281    8959 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0327 16:44:45.855638    8959 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1.584384583s)
	I0327 16:44:45.855730    8959 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0327 16:44:45.867305    8959 docker.go:685] Got preloaded images: -- stdout --
	k8s.gcr.io/kube-apiserver:v1.24.1
	k8s.gcr.io/kube-proxy:v1.24.1
	k8s.gcr.io/kube-controller-manager:v1.24.1
	k8s.gcr.io/kube-scheduler:v1.24.1
	k8s.gcr.io/etcd:3.5.3-0
	k8s.gcr.io/pause:3.7
	k8s.gcr.io/coredns/coredns:v1.8.6
	<none>:<none>
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0327 16:44:45.867315    8959 docker.go:691] registry.k8s.io/kube-apiserver:v1.24.1 wasn't preloaded
	I0327 16:44:45.867321    8959 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.24.1 registry.k8s.io/kube-controller-manager:v1.24.1 registry.k8s.io/kube-scheduler:v1.24.1 registry.k8s.io/kube-proxy:v1.24.1 registry.k8s.io/pause:3.7 registry.k8s.io/etcd:3.5.3-0 registry.k8s.io/coredns/coredns:v1.8.6 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0327 16:44:45.877073    8959 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.24.1
	I0327 16:44:45.877211    8959 image.go:134] retrieving image: registry.k8s.io/pause:3.7
	I0327 16:44:45.877337    8959 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.24.1
	I0327 16:44:45.877403    8959 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.24.1
	I0327 16:44:45.877458    8959 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.24.1
	I0327 16:44:45.877626    8959 image.go:134] retrieving image: registry.k8s.io/coredns/coredns:v1.8.6
	I0327 16:44:45.877634    8959 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0327 16:44:45.877927    8959 image.go:134] retrieving image: registry.k8s.io/etcd:3.5.3-0
	I0327 16:44:45.887642    8959 image.go:177] daemon lookup for registry.k8s.io/etcd:3.5.3-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.3-0
	I0327 16:44:45.887718    8959 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.24.1: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.24.1
	I0327 16:44:45.887777    8959 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0327 16:44:45.887839    8959 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.24.1: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.24.1
	I0327 16:44:45.887942    8959 image.go:177] daemon lookup for registry.k8s.io/coredns/coredns:v1.8.6: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.8.6
	I0327 16:44:45.888012    8959 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.24.1: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.24.1
	I0327 16:44:45.888173    8959 image.go:177] daemon lookup for registry.k8s.io/pause:3.7: Error response from daemon: No such image: registry.k8s.io/pause:3.7
	I0327 16:44:45.888354    8959 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.24.1: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.24.1
	I0327 16:44:47.888858    8959 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.24.1
	I0327 16:44:47.926996    8959 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.24.1" needs transfer: "registry.k8s.io/kube-scheduler:v1.24.1" does not exist at hash "000c19baf6bba51ff7ae5449f4c7a16d9190cef0263f58070dbf62cea9c4982f" in container runtime
	I0327 16:44:47.927051    8959 docker.go:337] Removing image: registry.k8s.io/kube-scheduler:v1.24.1
	I0327 16:44:47.927146    8959 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-scheduler:v1.24.1
	I0327 16:44:47.940997    8959 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/etcd:3.5.3-0
	I0327 16:44:47.949909    8959 cache_images.go:286] Loading image from: /Users/jenkins/minikube-integration/18485-6511/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.24.1
	I0327 16:44:47.961764    8959 cache_images.go:116] "registry.k8s.io/etcd:3.5.3-0" needs transfer: "registry.k8s.io/etcd:3.5.3-0" does not exist at hash "a9a710bb96df080e6b9c720eb85dc5b832ff84abf77263548d74fedec6466a5a" in container runtime
	I0327 16:44:47.961801    8959 docker.go:337] Removing image: registry.k8s.io/etcd:3.5.3-0
	I0327 16:44:47.961865    8959 ssh_runner.go:195] Run: docker rmi registry.k8s.io/etcd:3.5.3-0
	I0327 16:44:47.974282    8959 cache_images.go:286] Loading image from: /Users/jenkins/minikube-integration/18485-6511/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.3-0
	W0327 16:44:47.992103    8959 image.go:265] image registry.k8s.io/coredns/coredns:v1.8.6 arch mismatch: want arm64 got amd64. fixing
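
The W-line flags that the cached coredns image was built for amd64 while this arm64 guest needs arm64, so minikube re-resolves it for the right platform. A trivial stand-in for that check; imageArch would come from the image's config blob in real code and is a hypothetical input here:

package main

import (
	"fmt"
	"runtime"
)

// needsArchFix reports whether an image's architecture differs from the
// host's.
func needsArchFix(imageArch string) bool {
	return imageArch != runtime.GOARCH
}

func main() {
	fmt.Println(needsArchFix("amd64")) // true on the arm64 guest in this run
}
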
	I0327 16:44:47.992221    8959 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.24.1
	I0327 16:44:47.992245    8959 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/coredns/coredns:v1.8.6
	I0327 16:44:48.004558    8959 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/pause:3.7
	I0327 16:44:48.006260    8959 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.24.1
	I0327 16:44:48.006464    8959 cache_images.go:116] "registry.k8s.io/coredns/coredns:v1.8.6" needs transfer: "registry.k8s.io/coredns/coredns:v1.8.6" does not exist at hash "6af7f860a8197bfa3fdb7dec2061aa33870253e87a1e91c492d55b8a4fd38d14" in container runtime
	I0327 16:44:48.006482    8959 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.24.1" needs transfer: "registry.k8s.io/kube-controller-manager:v1.24.1" does not exist at hash "f61bbe9259d7caa580deb6c8e4bfd1780c7b5887efe3aa3adc7cc74f68a27c1b" in container runtime
	I0327 16:44:48.006495    8959 docker.go:337] Removing image: registry.k8s.io/kube-controller-manager:v1.24.1
	I0327 16:44:48.006519    8959 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-controller-manager:v1.24.1
	I0327 16:44:48.006483    8959 docker.go:337] Removing image: registry.k8s.io/coredns/coredns:v1.8.6
	I0327 16:44:48.006604    8959 ssh_runner.go:195] Run: docker rmi registry.k8s.io/coredns/coredns:v1.8.6
	I0327 16:44:48.014206    8959 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.24.1
	I0327 16:44:48.017145    8959 cache_images.go:116] "registry.k8s.io/pause:3.7" needs transfer: "registry.k8s.io/pause:3.7" does not exist at hash "e5a475a0380575fb5df454b2e32bdec93e1ec0094d8a61e895b41567cb884550" in container runtime
	I0327 16:44:48.017165    8959 docker.go:337] Removing image: registry.k8s.io/pause:3.7
	I0327 16:44:48.017206    8959 ssh_runner.go:195] Run: docker rmi registry.k8s.io/pause:3.7
	I0327 16:44:48.029505    8959 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.24.1" needs transfer: "registry.k8s.io/kube-apiserver:v1.24.1" does not exist at hash "7c5896a75862a8bc252122185a929cec1393db2c525f6440137d4fbf46bbf6f9" in container runtime
	I0327 16:44:48.029527    8959 docker.go:337] Removing image: registry.k8s.io/kube-apiserver:v1.24.1
	I0327 16:44:48.029592    8959 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-apiserver:v1.24.1
	I0327 16:44:48.037182    8959 cache_images.go:286] Loading image from: /Users/jenkins/minikube-integration/18485-6511/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.24.1
	I0327 16:44:48.039748    8959 cache_images.go:286] Loading image from: /Users/jenkins/minikube-integration/18485-6511/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6
	I0327 16:44:48.039847    8959 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.8.6
	I0327 16:44:48.045309    8959 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.24.1" needs transfer: "registry.k8s.io/kube-proxy:v1.24.1" does not exist at hash "fcbd620bbac080b658602c597709b377cb2b3fec134a097a27f94cba9b2ed2fa" in container runtime
	I0327 16:44:48.045332    8959 docker.go:337] Removing image: registry.k8s.io/kube-proxy:v1.24.1
	I0327 16:44:48.045391    8959 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-proxy:v1.24.1
	I0327 16:44:48.048405    8959 cache_images.go:286] Loading image from: /Users/jenkins/minikube-integration/18485-6511/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7
	I0327 16:44:48.048504    8959 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/pause_3.7
	I0327 16:44:48.055411    8959 cache_images.go:286] Loading image from: /Users/jenkins/minikube-integration/18485-6511/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.24.1
	I0327 16:44:48.055449    8959 ssh_runner.go:352] existence check for /var/lib/minikube/images/coredns_v1.8.6: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.8.6: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/coredns_v1.8.6': No such file or directory
	I0327 16:44:48.055462    8959 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18485-6511/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6 --> /var/lib/minikube/images/coredns_v1.8.6 (12318720 bytes)
	I0327 16:44:48.070704    8959 cache_images.go:286] Loading image from: /Users/jenkins/minikube-integration/18485-6511/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.24.1
	I0327 16:44:48.070758    8959 ssh_runner.go:352] existence check for /var/lib/minikube/images/pause_3.7: stat -c "%s %y" /var/lib/minikube/images/pause_3.7: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/pause_3.7': No such file or directory
	I0327 16:44:48.070773    8959 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18485-6511/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7 --> /var/lib/minikube/images/pause_3.7 (268288 bytes)
	I0327 16:44:48.089676    8959 docker.go:304] Loading image: /var/lib/minikube/images/pause_3.7
	I0327 16:44:48.089690    8959 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/pause_3.7 | docker load"
	I0327 16:44:48.125799    8959 cache_images.go:315] Transferred and loaded /Users/jenkins/minikube-integration/18485-6511/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7 from cache
	I0327 16:44:48.125821    8959 docker.go:304] Loading image: /var/lib/minikube/images/coredns_v1.8.6
	I0327 16:44:48.125834    8959 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/coredns_v1.8.6 | docker load"
	I0327 16:44:48.162138    8959 cache_images.go:315] Transferred and loaded /Users/jenkins/minikube-integration/18485-6511/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6 from cache
	W0327 16:44:48.460401    8959 image.go:265] image gcr.io/k8s-minikube/storage-provisioner:v5 arch mismatch: want arm64 got amd64. fixing
	I0327 16:44:48.460598    8959 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0327 16:44:48.475936    8959 cache_images.go:116] "gcr.io/k8s-minikube/storage-provisioner:v5" needs transfer: "gcr.io/k8s-minikube/storage-provisioner:v5" does not exist at hash "66749159455b3f08c8318fe0233122f54d0f5889f9c5fdfb73c3fd9d99895b51" in container runtime
	I0327 16:44:48.475968    8959 docker.go:337] Removing image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0327 16:44:48.476029    8959 ssh_runner.go:195] Run: docker rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I0327 16:44:48.491957    8959 cache_images.go:286] Loading image from: /Users/jenkins/minikube-integration/18485-6511/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5
	I0327 16:44:48.492069    8959 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5
	I0327 16:44:48.493491    8959 ssh_runner.go:352] existence check for /var/lib/minikube/images/storage-provisioner_v5: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/storage-provisioner_v5': No such file or directory
	I0327 16:44:48.493503    8959 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18485-6511/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 --> /var/lib/minikube/images/storage-provisioner_v5 (8035840 bytes)
	I0327 16:44:48.516557    8959 docker.go:304] Loading image: /var/lib/minikube/images/storage-provisioner_v5
	I0327 16:44:48.516570    8959 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/storage-provisioner_v5 | docker load"
	I0327 16:44:48.757885    8959 cache_images.go:315] Transferred and loaded /Users/jenkins/minikube-integration/18485-6511/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 from cache
	I0327 16:44:48.757924    8959 cache_images.go:92] duration metric: took 2.890682292s to LoadCachedImages
	W0327 16:44:48.757964    8959 out.go:239] X Unable to load cached images: loading cached images: stat /Users/jenkins/minikube-integration/18485-6511/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.24.1: no such file or directory
	X Unable to load cached images: loading cached images: stat /Users/jenkins/minikube-integration/18485-6511/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.24.1: no such file or directory
	I0327 16:44:48.757972    8959 kubeadm.go:928] updating node { 10.0.2.15 8443 v1.24.1 docker true true} ...
	I0327 16:44:48.758017    8959 kubeadm.go:940] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.24.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/cri-dockerd.sock --hostname-override=stopped-upgrade-017000 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=10.0.2.15
	
	[Install]
	 config:
	{KubernetesVersion:v1.24.1 ClusterName:stopped-upgrade-017000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0327 16:44:48.758076    8959 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0327 16:44:48.775574    8959 cni.go:84] Creating CNI manager for ""
	I0327 16:44:48.775586    8959 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0327 16:44:48.775591    8959 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0327 16:44:48.775599    8959 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:10.0.2.15 APIServerPort:8443 KubernetesVersion:v1.24.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:stopped-upgrade-017000 NodeName:stopped-upgrade-017000 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "10.0.2.15"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:10.0.2.15 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0327 16:44:48.775664    8959 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 10.0.2.15
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "stopped-upgrade-017000"
	  kubeletExtraArgs:
	    node-ip: 10.0.2.15
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "10.0.2.15"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.24.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0327 16:44:48.775721    8959 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.24.1
	I0327 16:44:48.778546    8959 binaries.go:44] Found k8s binaries, skipping transfer
	I0327 16:44:48.778577    8959 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0327 16:44:48.781443    8959 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (380 bytes)
	I0327 16:44:48.786738    8959 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0327 16:44:48.791488    8959 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2096 bytes)
	I0327 16:44:48.796779    8959 ssh_runner.go:195] Run: grep 10.0.2.15	control-plane.minikube.internal$ /etc/hosts
	I0327 16:44:48.798114    8959 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "10.0.2.15	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0327 16:44:48.801671    8959 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0327 16:44:48.883997    8959 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0327 16:44:48.889091    8959 certs.go:68] Setting up /Users/jenkins/minikube-integration/18485-6511/.minikube/profiles/stopped-upgrade-017000 for IP: 10.0.2.15
	I0327 16:44:48.889099    8959 certs.go:194] generating shared ca certs ...
	I0327 16:44:48.889109    8959 certs.go:226] acquiring lock for ca certs: {Name:mkc9ab23ce08863badc46de64236358969dc1820 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0327 16:44:48.889265    8959 certs.go:235] skipping valid "minikubeCA" ca cert: /Users/jenkins/minikube-integration/18485-6511/.minikube/ca.key
	I0327 16:44:48.889985    8959 certs.go:235] skipping valid "proxyClientCA" ca cert: /Users/jenkins/minikube-integration/18485-6511/.minikube/proxy-client-ca.key
	I0327 16:44:48.889998    8959 certs.go:256] generating profile certs ...
	I0327 16:44:48.890212    8959 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /Users/jenkins/minikube-integration/18485-6511/.minikube/profiles/stopped-upgrade-017000/client.key
	I0327 16:44:48.890232    8959 certs.go:363] generating signed profile cert for "minikube": /Users/jenkins/minikube-integration/18485-6511/.minikube/profiles/stopped-upgrade-017000/apiserver.key.8f6b5052
	I0327 16:44:48.890242    8959 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/18485-6511/.minikube/profiles/stopped-upgrade-017000/apiserver.crt.8f6b5052 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 10.0.2.15]
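
crypto.go is generating the apiserver serving certificate with the four IP SANs listed. A standard-library sketch of issuing a certificate carrying those SANs (self-signed here for brevity, whereas minikube signs with its CA; the 26280h lifetime echoes the CertExpiration value in the config above):

package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"math/big"
	"net"
	"os"
	"time"
)

func main() {
	key, err := rsa.GenerateKey(rand.Reader, 2048)
	if err != nil {
		panic(err)
	}
	tmpl := &x509.Certificate{
		SerialNumber: big.NewInt(1),
		Subject:      pkix.Name{CommonName: "minikube"},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(26280 * time.Hour),
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		// The IP SANs from the log line above.
		IPAddresses: []net.IP{
			net.ParseIP("10.96.0.1"), net.ParseIP("127.0.0.1"),
			net.ParseIP("10.0.0.1"), net.ParseIP("10.0.2.15"),
		},
	}
	der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
	if err != nil {
		panic(err)
	}
	pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
}
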
	I0327 16:44:49.052840    8959 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/18485-6511/.minikube/profiles/stopped-upgrade-017000/apiserver.crt.8f6b5052 ...
	I0327 16:44:49.052854    8959 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/18485-6511/.minikube/profiles/stopped-upgrade-017000/apiserver.crt.8f6b5052: {Name:mk8d7707cb630a39abbe89752f9a5ea56e816c47 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0327 16:44:49.053162    8959 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/18485-6511/.minikube/profiles/stopped-upgrade-017000/apiserver.key.8f6b5052 ...
	I0327 16:44:49.053175    8959 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/18485-6511/.minikube/profiles/stopped-upgrade-017000/apiserver.key.8f6b5052: {Name:mk3561b92d4c8a3b5e6623cdb8994719c866fa1d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0327 16:44:49.053322    8959 certs.go:381] copying /Users/jenkins/minikube-integration/18485-6511/.minikube/profiles/stopped-upgrade-017000/apiserver.crt.8f6b5052 -> /Users/jenkins/minikube-integration/18485-6511/.minikube/profiles/stopped-upgrade-017000/apiserver.crt
	I0327 16:44:49.053904    8959 certs.go:385] copying /Users/jenkins/minikube-integration/18485-6511/.minikube/profiles/stopped-upgrade-017000/apiserver.key.8f6b5052 -> /Users/jenkins/minikube-integration/18485-6511/.minikube/profiles/stopped-upgrade-017000/apiserver.key
	I0327 16:44:49.054263    8959 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /Users/jenkins/minikube-integration/18485-6511/.minikube/profiles/stopped-upgrade-017000/proxy-client.key
	I0327 16:44:49.054445    8959 certs.go:484] found cert: /Users/jenkins/minikube-integration/18485-6511/.minikube/certs/6926.pem (1338 bytes)
	W0327 16:44:49.054664    8959 certs.go:480] ignoring /Users/jenkins/minikube-integration/18485-6511/.minikube/certs/6926_empty.pem, impossibly tiny 0 bytes
	I0327 16:44:49.054673    8959 certs.go:484] found cert: /Users/jenkins/minikube-integration/18485-6511/.minikube/certs/ca-key.pem (1679 bytes)
	I0327 16:44:49.054700    8959 certs.go:484] found cert: /Users/jenkins/minikube-integration/18485-6511/.minikube/certs/ca.pem (1078 bytes)
	I0327 16:44:49.054720    8959 certs.go:484] found cert: /Users/jenkins/minikube-integration/18485-6511/.minikube/certs/cert.pem (1123 bytes)
	I0327 16:44:49.054738    8959 certs.go:484] found cert: /Users/jenkins/minikube-integration/18485-6511/.minikube/certs/key.pem (1675 bytes)
	I0327 16:44:49.054777    8959 certs.go:484] found cert: /Users/jenkins/minikube-integration/18485-6511/.minikube/files/etc/ssl/certs/69262.pem (1708 bytes)
	I0327 16:44:49.055130    8959 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18485-6511/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0327 16:44:49.062039    8959 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18485-6511/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0327 16:44:49.069437    8959 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18485-6511/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0327 16:44:49.077677    8959 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18485-6511/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0327 16:44:49.085256    8959 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18485-6511/.minikube/profiles/stopped-upgrade-017000/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I0327 16:44:49.092808    8959 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18485-6511/.minikube/profiles/stopped-upgrade-017000/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0327 16:44:49.099591    8959 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18485-6511/.minikube/profiles/stopped-upgrade-017000/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0327 16:44:49.106548    8959 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18485-6511/.minikube/profiles/stopped-upgrade-017000/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0327 16:44:49.113625    8959 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18485-6511/.minikube/files/etc/ssl/certs/69262.pem --> /usr/share/ca-certificates/69262.pem (1708 bytes)
	I0327 16:44:49.120217    8959 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18485-6511/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0327 16:44:49.127147    8959 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18485-6511/.minikube/certs/6926.pem --> /usr/share/ca-certificates/6926.pem (1338 bytes)
	I0327 16:44:49.133683    8959 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (752 bytes)
	I0327 16:44:49.138975    8959 ssh_runner.go:195] Run: openssl version
	I0327 16:44:49.140729    8959 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0327 16:44:49.143676    8959 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0327 16:44:49.145147    8959 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Mar 27 23:41 /usr/share/ca-certificates/minikubeCA.pem
	I0327 16:44:49.145171    8959 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0327 16:44:49.147044    8959 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0327 16:44:49.150049    8959 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/6926.pem && ln -fs /usr/share/ca-certificates/6926.pem /etc/ssl/certs/6926.pem"
	I0327 16:44:49.153484    8959 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/6926.pem
	I0327 16:44:49.155013    8959 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Mar 27 23:28 /usr/share/ca-certificates/6926.pem
	I0327 16:44:49.155035    8959 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/6926.pem
	I0327 16:44:49.156799    8959 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/6926.pem /etc/ssl/certs/51391683.0"
	I0327 16:44:49.160018    8959 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/69262.pem && ln -fs /usr/share/ca-certificates/69262.pem /etc/ssl/certs/69262.pem"
	I0327 16:44:49.162818    8959 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/69262.pem
	I0327 16:44:49.164125    8959 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Mar 27 23:28 /usr/share/ca-certificates/69262.pem
	I0327 16:44:49.164142    8959 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/69262.pem
	I0327 16:44:49.165933    8959 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/69262.pem /etc/ssl/certs/3ec20f2e.0"
	I0327 16:44:49.169192    8959 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0327 16:44:49.170996    8959 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0327 16:44:49.172914    8959 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0327 16:44:49.175001    8959 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0327 16:44:49.176872    8959 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0327 16:44:49.178871    8959 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0327 16:44:49.180572    8959 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
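
Each `openssl x509 ... -checkend 86400` run asks whether a certificate expires within the next 86400 seconds (24 hours), signalling via exit status. The equivalent test in Go, as a sketch:

package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

// expiresWithin reports whether the PEM certificate at path expires inside
// the given window, matching openssl's -checkend semantics.
func expiresWithin(path string, window time.Duration) (bool, error) {
	data, err := os.ReadFile(path)
	if err != nil {
		return false, err
	}
	block, _ := pem.Decode(data)
	if block == nil {
		return false, fmt.Errorf("no PEM data in %s", path)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return false, err
	}
	return time.Now().Add(window).After(cert.NotAfter), nil
}

func main() {
	soon, err := expiresWithin("/var/lib/minikube/certs/front-proxy-client.crt", 24*time.Hour)
	fmt.Println(soon, err)
}
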
	I0327 16:44:49.182415    8959 kubeadm.go:391] StartCluster: {Name:stopped-upgrade-017000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18485/minikube-v1.33.0-1711559712-18485-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:51421 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.1 ClusterName:stopped-upgrade-017000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:0s}
	I0327 16:44:49.182484    8959 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0327 16:44:49.192603    8959 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	W0327 16:44:49.195939    8959 kubeadm.go:404] apiserver tunnel failed: apiserver port not set
	I0327 16:44:49.195944    8959 kubeadm.go:407] found existing configuration files, will attempt cluster restart
	I0327 16:44:49.195947    8959 kubeadm.go:587] restartPrimaryControlPlane start ...
	I0327 16:44:49.195968    8959 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0327 16:44:49.198779    8959 kubeadm.go:129] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0327 16:44:49.199070    8959 kubeconfig.go:47] verify endpoint returned: get endpoint: "stopped-upgrade-017000" does not appear in /Users/jenkins/minikube-integration/18485-6511/kubeconfig
	I0327 16:44:49.199165    8959 kubeconfig.go:62] /Users/jenkins/minikube-integration/18485-6511/kubeconfig needs updating (will repair): [kubeconfig missing "stopped-upgrade-017000" cluster setting kubeconfig missing "stopped-upgrade-017000" context setting]
	I0327 16:44:49.199376    8959 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/18485-6511/kubeconfig: {Name:mke46d0809919cfbe0118c5110926d6ce61bf373 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
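
The kubeconfig.go:47/62 lines show minikube noticing that the "stopped-upgrade-017000" profile is missing from the kubeconfig and repairing the file under a write lock. A sketch of the detection half using client-go's clientcmd, with the kubeconfig path taken from the log; this is illustrative, not minikube's actual code path:

    package main

    import (
        "fmt"

        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        path := "/Users/jenkins/minikube-integration/18485-6511/kubeconfig" // path from the log
        cfg, err := clientcmd.LoadFromFile(path)
        if err != nil {
            panic(err)
        }
        name := "stopped-upgrade-017000"
        _, hasCluster := cfg.Clusters[name]
        _, hasContext := cfg.Contexts[name]
        if !hasCluster || !hasContext {
            // minikube would now rewrite the file under the lock seen at lock.go:35
            fmt.Printf("%s needs updating: cluster=%v context=%v\n", path, hasCluster, hasContext)
        }
    }
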
	I0327 16:44:49.199807    8959 kapi.go:59] client config for stopped-upgrade-017000: &rest.Config{Host:"https://10.0.2.15:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/18485-6511/.minikube/profiles/stopped-upgrade-017000/client.crt", KeyFile:"/Users/jenkins/minikube-integration/18485-6511/.minikube/profiles/stopped-upgrade-017000/client.key", CAFile:"/Users/jenkins/minikube-integration/18485-6511/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x103b96c70), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0327 16:44:49.200224    8959 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0327 16:44:49.202921    8959 kubeadm.go:634] detected kubeadm config drift (will reconfigure cluster from new /var/tmp/minikube/kubeadm.yaml):
	-- stdout --
	--- /var/tmp/minikube/kubeadm.yaml
	+++ /var/tmp/minikube/kubeadm.yaml.new
	@@ -11,7 +11,7 @@
	       - signing
	       - authentication
	 nodeRegistration:
	-  criSocket: /var/run/cri-dockerd.sock
	+  criSocket: unix:///var/run/cri-dockerd.sock
	   name: "stopped-upgrade-017000"
	   kubeletExtraArgs:
	     node-ip: 10.0.2.15
	@@ -49,7 +49,9 @@
	 authentication:
	   x509:
	     clientCAFile: /var/lib/minikube/certs/ca.crt
	-cgroupDriver: systemd
	+cgroupDriver: cgroupfs
	+hairpinMode: hairpin-veth
	+runtimeRequestTimeout: 15m
	 clusterDomain: "cluster.local"
	 # disable disk resource management by default
	 imageGCHighThresholdPercent: 100
	
	-- /stdout --
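
kubeadm.go:634 decides to reconfigure because the freshly rendered /var/tmp/minikube/kubeadm.yaml.new differs from what is on disk; the diff above shows the CRI socket gaining a unix:// scheme and the cgroup driver moving from systemd to cgroupfs. A hedged sketch of that drift check built on diff -u's exit status (diff exits 0 when files match, 1 when they differ, >1 on error):

    package main

    import (
        "fmt"
        "os/exec"
    )

    // kubeadmConfigDrifted mimics the log's `sudo diff -u old new` check.
    func kubeadmConfigDrifted(oldPath, newPath string) (bool, error) {
        out, err := exec.Command("diff", "-u", oldPath, newPath).CombinedOutput()
        if err == nil {
            return false, nil // identical: no reconfiguration needed
        }
        if ee, ok := err.(*exec.ExitError); ok && ee.ExitCode() == 1 {
            fmt.Printf("detected config drift:\n%s", out)
            return true, nil
        }
        return false, err // exit code >1: diff itself failed
    }

    func main() {
        drifted, err := kubeadmConfigDrifted(
            "/var/tmp/minikube/kubeadm.yaml", "/var/tmp/minikube/kubeadm.yaml.new")
        fmt.Println(drifted, err)
    }
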
	I0327 16:44:49.202927    8959 kubeadm.go:1154] stopping kube-system containers ...
	I0327 16:44:49.202972    8959 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0327 16:44:49.214145    8959 docker.go:483] Stopping containers: [f76badbaa6c8 c581a3f09ed3 56ea780761c8 c482501fc6ea e20a2e974eba 259c6c590ab2 32d18ef2c823 9262298e88bb]
	I0327 16:44:49.214211    8959 ssh_runner.go:195] Run: docker stop f76badbaa6c8 c581a3f09ed3 56ea780761c8 c482501fc6ea e20a2e974eba 259c6c590ab2 32d18ef2c823 9262298e88bb
	I0327 16:44:49.225042    8959 ssh_runner.go:195] Run: sudo systemctl stop kubelet
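
docker.go:483 first lists every kube-system container (running or not) by minikube's k8s_*_(kube-system)_ naming convention, stops the whole batch in one invocation, then stops kubelet so it cannot restart the static pods mid-reconfigure. A rough standalone equivalent of those two steps, with error handling deliberately elided:

    package main

    import (
        "os/exec"
        "strings"
    )

    func main() {
        // list all kube-system pod containers, matching the filter in the log
        out, _ := exec.Command("docker", "ps", "-a",
            "--filter", "name=k8s_.*_(kube-system)_", "--format", "{{.ID}}").Output()
        ids := strings.Fields(string(out))
        if len(ids) > 0 {
            // stop the whole batch in a single `docker stop`, as the log shows
            exec.Command("docker", append([]string{"stop"}, ids...)...).Run()
        }
        // keep kubelet from restarting the static pods while reconfiguring
        exec.Command("sudo", "systemctl", "stop", "kubelet").Run()
    }
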
	I0327 16:44:49.230321    8959 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0327 16:44:49.233363    8959 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0327 16:44:49.233375    8959 kubeadm.go:156] found existing configuration files:
	
	I0327 16:44:49.233399    8959 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:51421 /etc/kubernetes/admin.conf
	I0327 16:44:49.236419    8959 kubeadm.go:162] "https://control-plane.minikube.internal:51421" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:51421 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0327 16:44:49.236443    8959 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0327 16:44:49.238959    8959 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:51421 /etc/kubernetes/kubelet.conf
	I0327 16:44:49.241445    8959 kubeadm.go:162] "https://control-plane.minikube.internal:51421" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:51421 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0327 16:44:49.241465    8959 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0327 16:44:49.244386    8959 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:51421 /etc/kubernetes/controller-manager.conf
	I0327 16:44:49.246798    8959 kubeadm.go:162] "https://control-plane.minikube.internal:51421" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:51421 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0327 16:44:49.246821    8959 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0327 16:44:49.249560    8959 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:51421 /etc/kubernetes/scheduler.conf
	I0327 16:44:49.252519    8959 kubeadm.go:162] "https://control-plane.minikube.internal:51421" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:51421 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0327 16:44:49.252539    8959 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0327 16:44:49.255273    8959 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0327 16:44:49.257845    8959 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0327 16:44:49.281301    8959 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0327 16:44:50.067492    8959 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0327 16:44:50.200876    8959 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0327 16:44:50.223766    8959 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
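
The restart path does not rerun a full kubeadm init; it replays the individual phases (certs, kubeconfig, kubelet-start, control-plane, etcd) against the repaired config, each under the versioned PATH prefix from the log. A sketch of that sequence as a loop, under the assumption that each phase is independent enough to abort on first failure:

    package main

    import (
        "fmt"
        "os/exec"
    )

    func main() {
        phases := []string{
            "certs all", "kubeconfig all", "kubelet-start", "control-plane all", "etcd local",
        }
        for _, p := range phases {
            // same command shape as the ssh_runner.go lines above
            cmd := fmt.Sprintf(
                `sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase %s --config /var/tmp/minikube/kubeadm.yaml`, p)
            if err := exec.Command("/bin/bash", "-c", cmd).Run(); err != nil {
                panic(fmt.Sprintf("phase %q failed: %v", p, err))
            }
        }
    }
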
	I0327 16:44:50.249021    8959 api_server.go:52] waiting for apiserver process to appear ...
	I0327 16:44:50.249268    8959 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0327 16:44:50.751187    8959 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0327 16:44:51.251149    8959 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0327 16:44:51.255304    8959 api_server.go:72] duration metric: took 1.006313958s to wait for apiserver process to appear ...
	I0327 16:44:51.255315    8959 api_server.go:88] waiting for apiserver healthz status ...
	I0327 16:44:51.255328    8959 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0327 16:44:56.257358    8959 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0327 16:44:56.257395    8959 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0327 16:45:01.257489    8959 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0327 16:45:01.257524    8959 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0327 16:45:06.257680    8959 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0327 16:45:06.257734    8959 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0327 16:45:11.258063    8959 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0327 16:45:11.258118    8959 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0327 16:45:16.258546    8959 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0327 16:45:16.258609    8959 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0327 16:45:21.259368    8959 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0327 16:45:21.259478    8959 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0327 16:45:26.260652    8959 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0327 16:45:26.260700    8959 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0327 16:45:31.259941    8959 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0327 16:45:31.259988    8959 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0327 16:45:36.259579    8959 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0327 16:45:36.259629    8959 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0327 16:45:41.258741    8959 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0327 16:45:41.258768    8959 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0327 16:45:46.259730    8959 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0327 16:45:46.259751    8959 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0327 16:45:51.261006    8959 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
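
api_server.go:253 probes https://10.0.2.15:8443/healthz with a short per-request timeout; every probe above times out after roughly five seconds, which is why the run now falls back to collecting component logs. A minimal version of that probe loop, assuming a 5s client timeout and skipping verification of the apiserver's self-signed certificate (the overall two-minute budget here is illustrative):

    package main

    import (
        "crypto/tls"
        "fmt"
        "net/http"
        "time"
    )

    func main() {
        client := &http.Client{
            Timeout: 5 * time.Second, // matches the ~5s gap between probes in the log
            Transport: &http.Transport{
                TLSClientConfig: &tls.Config{InsecureSkipVerify: true}, // self-signed apiserver cert
            },
        }
        deadline := time.Now().Add(2 * time.Minute)
        for time.Now().Before(deadline) {
            resp, err := client.Get("https://10.0.2.15:8443/healthz")
            if err == nil {
                if resp.StatusCode == http.StatusOK {
                    resp.Body.Close()
                    fmt.Println("apiserver healthy")
                    return
                }
                resp.Body.Close()
            }
            time.Sleep(500 * time.Millisecond)
        }
        fmt.Println("apiserver never became healthy")
    }
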
	I0327 16:45:51.261142    8959 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0327 16:45:51.272237    8959 logs.go:276] 2 containers: [aa3afc402172 e20a2e974eba]
	I0327 16:45:51.272324    8959 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0327 16:45:51.282809    8959 logs.go:276] 2 containers: [d74fd36a706c 56ea780761c8]
	I0327 16:45:51.282883    8959 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0327 16:45:51.293656    8959 logs.go:276] 1 containers: [25c0222d62b6]
	I0327 16:45:51.293717    8959 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0327 16:45:51.303876    8959 logs.go:276] 2 containers: [62111993cdda c581a3f09ed3]
	I0327 16:45:51.303936    8959 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0327 16:45:51.314276    8959 logs.go:276] 1 containers: [0dc2c64bef5d]
	I0327 16:45:51.314343    8959 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0327 16:45:51.325719    8959 logs.go:276] 2 containers: [fa00948ec909 f76badbaa6c8]
	I0327 16:45:51.325781    8959 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0327 16:45:51.337154    8959 logs.go:276] 0 containers: []
	W0327 16:45:51.337166    8959 logs.go:278] No container was found matching "kindnet"
	I0327 16:45:51.337222    8959 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0327 16:45:51.356195    8959 logs.go:276] 2 containers: [396dc8c2621b ffd8ac20b899]
	I0327 16:45:51.356212    8959 logs.go:123] Gathering logs for container status ...
	I0327 16:45:51.356218    8959 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
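
The container-status probe uses a small shell fallback: `which crictl || echo crictl` keeps the pipeline alive when crictl is absent, so the `|| sudo docker ps -a` branch can take over. The same tool selection expressed in Go would use exec.LookPath (a sketch, not minikube's logs.go code):

    package main

    import (
        "fmt"
        "os/exec"
    )

    func main() {
        // prefer crictl when it is on PATH, otherwise fall back to docker
        tool := "docker"
        if _, err := exec.LookPath("crictl"); err == nil {
            tool = "crictl"
        }
        out, err := exec.Command("sudo", tool, "ps", "-a").CombinedOutput()
        fmt.Println(string(out), err)
    }
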
	I0327 16:45:51.369067    8959 logs.go:123] Gathering logs for etcd [d74fd36a706c] ...
	I0327 16:45:51.369078    8959 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d74fd36a706c"
	I0327 16:45:51.383007    8959 logs.go:123] Gathering logs for kube-controller-manager [fa00948ec909] ...
	I0327 16:45:51.383021    8959 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fa00948ec909"
	I0327 16:45:51.400901    8959 logs.go:123] Gathering logs for storage-provisioner [396dc8c2621b] ...
	I0327 16:45:51.400912    8959 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 396dc8c2621b"
	I0327 16:45:51.413380    8959 logs.go:123] Gathering logs for describe nodes ...
	I0327 16:45:51.413391    8959 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0327 16:45:51.528133    8959 logs.go:123] Gathering logs for kube-scheduler [62111993cdda] ...
	I0327 16:45:51.528147    8959 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 62111993cdda"
	I0327 16:45:51.540984    8959 logs.go:123] Gathering logs for kube-controller-manager [f76badbaa6c8] ...
	I0327 16:45:51.540994    8959 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f76badbaa6c8"
	I0327 16:45:51.555174    8959 logs.go:123] Gathering logs for storage-provisioner [ffd8ac20b899] ...
	I0327 16:45:51.555189    8959 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ffd8ac20b899"
	I0327 16:45:51.566491    8959 logs.go:123] Gathering logs for dmesg ...
	I0327 16:45:51.566502    8959 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0327 16:45:51.571180    8959 logs.go:123] Gathering logs for kube-apiserver [aa3afc402172] ...
	I0327 16:45:51.571188    8959 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 aa3afc402172"
	I0327 16:45:51.585028    8959 logs.go:123] Gathering logs for kube-scheduler [c581a3f09ed3] ...
	I0327 16:45:51.585039    8959 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c581a3f09ed3"
	I0327 16:45:51.600186    8959 logs.go:123] Gathering logs for coredns [25c0222d62b6] ...
	I0327 16:45:51.600196    8959 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 25c0222d62b6"
	I0327 16:45:51.611616    8959 logs.go:123] Gathering logs for kube-proxy [0dc2c64bef5d] ...
	I0327 16:45:51.611626    8959 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0dc2c64bef5d"
	I0327 16:45:51.623528    8959 logs.go:123] Gathering logs for Docker ...
	I0327 16:45:51.623537    8959 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0327 16:45:51.649714    8959 logs.go:123] Gathering logs for kubelet ...
	I0327 16:45:51.649737    8959 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0327 16:45:51.687016    8959 logs.go:138] Found kubelet problem: Mar 27 23:45:09 stopped-upgrade-017000 kubelet[1676]: W0327 23:45:09.218698    1676 reflector.go:324] object-"kube-system"/"kube-proxy": failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:stopped-upgrade-017000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'stopped-upgrade-017000' and this object
	W0327 16:45:51.687109    8959 logs.go:138] Found kubelet problem: Mar 27 23:45:09 stopped-upgrade-017000 kubelet[1676]: E0327 23:45:09.218752    1676 reflector.go:138] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:stopped-upgrade-017000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'stopped-upgrade-017000' and this object
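
The only kubelet problem found is an RBAC denial: the Node authorizer refuses system:node:stopped-upgrade-017000 access to the kube-proxy ConfigMap because it sees no pod on that node referencing it. One way to reproduce the denial out of band is a SelfSubjectAccessReview while impersonating the node user; this is a sketch assuming the admin kubeconfig from the log and that it is privileged enough to impersonate system: users:

    package main

    import (
        "context"
        "fmt"

        authv1 "k8s.io/api/authorization/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/rest"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("",
            "/Users/jenkins/minikube-integration/18485-6511/kubeconfig")
        if err != nil {
            panic(err)
        }
        // ask "can I list the kube-proxy ConfigMap?" as the node user from the log
        cfg.Impersonate = rest.ImpersonationConfig{UserName: "system:node:stopped-upgrade-017000"}
        clientset, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            panic(err)
        }
        review := &authv1.SelfSubjectAccessReview{
            Spec: authv1.SelfSubjectAccessReviewSpec{
                ResourceAttributes: &authv1.ResourceAttributes{
                    Namespace: "kube-system", Verb: "list", Resource: "configmaps", Name: "kube-proxy",
                },
            },
        }
        resp, err := clientset.AuthorizationV1().SelfSubjectAccessReviews().Create(
            context.Background(), review, metav1.CreateOptions{})
        if err != nil {
            panic(err)
        }
        fmt.Printf("allowed=%v reason=%q\n", resp.Status.Allowed, resp.Status.Reason)
    }
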
	I0327 16:45:51.688114    8959 logs.go:123] Gathering logs for kube-apiserver [e20a2e974eba] ...
	I0327 16:45:51.688121    8959 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e20a2e974eba"
	I0327 16:45:51.730301    8959 logs.go:123] Gathering logs for etcd [56ea780761c8] ...
	I0327 16:45:51.730311    8959 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 56ea780761c8"
	I0327 16:45:51.745852    8959 out.go:304] Setting ErrFile to fd 2...
	I0327 16:45:51.745861    8959 out.go:338] TERM=,COLORTERM=, which probably does not support color
	W0327 16:45:51.745894    8959 out.go:239] X Problems detected in kubelet:
	X Problems detected in kubelet:
	W0327 16:45:51.745901    8959 out.go:239]   Mar 27 23:45:09 stopped-upgrade-017000 kubelet[1676]: W0327 23:45:09.218698    1676 reflector.go:324] object-"kube-system"/"kube-proxy": failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:stopped-upgrade-017000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'stopped-upgrade-017000' and this object
	  Mar 27 23:45:09 stopped-upgrade-017000 kubelet[1676]: W0327 23:45:09.218698    1676 reflector.go:324] object-"kube-system"/"kube-proxy": failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:stopped-upgrade-017000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'stopped-upgrade-017000' and this object
	W0327 16:45:51.745905    8959 out.go:239]   Mar 27 23:45:09 stopped-upgrade-017000 kubelet[1676]: E0327 23:45:09.218752    1676 reflector.go:138] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:stopped-upgrade-017000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'stopped-upgrade-017000' and this object
	  Mar 27 23:45:09 stopped-upgrade-017000 kubelet[1676]: E0327 23:45:09.218752    1676 reflector.go:138] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:stopped-upgrade-017000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'stopped-upgrade-017000' and this object
	I0327 16:45:51.745911    8959 out.go:304] Setting ErrFile to fd 2...
	I0327 16:45:51.745913    8959 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0327 16:46:01.747015    8959 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0327 16:46:06.746949    8959 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0327 16:46:06.747095    8959 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0327 16:46:06.761512    8959 logs.go:276] 2 containers: [aa3afc402172 e20a2e974eba]
	I0327 16:46:06.761595    8959 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0327 16:46:06.781524    8959 logs.go:276] 2 containers: [d74fd36a706c 56ea780761c8]
	I0327 16:46:06.781598    8959 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0327 16:46:06.792224    8959 logs.go:276] 1 containers: [25c0222d62b6]
	I0327 16:46:06.792293    8959 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0327 16:46:06.805526    8959 logs.go:276] 2 containers: [62111993cdda c581a3f09ed3]
	I0327 16:46:06.805598    8959 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0327 16:46:06.816166    8959 logs.go:276] 1 containers: [0dc2c64bef5d]
	I0327 16:46:06.816235    8959 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0327 16:46:06.828311    8959 logs.go:276] 2 containers: [fa00948ec909 f76badbaa6c8]
	I0327 16:46:06.828378    8959 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0327 16:46:06.838597    8959 logs.go:276] 0 containers: []
	W0327 16:46:06.838610    8959 logs.go:278] No container was found matching "kindnet"
	I0327 16:46:06.838670    8959 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0327 16:46:06.849722    8959 logs.go:276] 2 containers: [396dc8c2621b ffd8ac20b899]
	I0327 16:46:06.849739    8959 logs.go:123] Gathering logs for dmesg ...
	I0327 16:46:06.849744    8959 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0327 16:46:06.854438    8959 logs.go:123] Gathering logs for describe nodes ...
	I0327 16:46:06.854445    8959 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0327 16:46:06.889652    8959 logs.go:123] Gathering logs for coredns [25c0222d62b6] ...
	I0327 16:46:06.889663    8959 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 25c0222d62b6"
	I0327 16:46:06.901312    8959 logs.go:123] Gathering logs for kube-scheduler [62111993cdda] ...
	I0327 16:46:06.901323    8959 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 62111993cdda"
	I0327 16:46:06.914598    8959 logs.go:123] Gathering logs for kube-scheduler [c581a3f09ed3] ...
	I0327 16:46:06.914609    8959 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c581a3f09ed3"
	I0327 16:46:06.931247    8959 logs.go:123] Gathering logs for etcd [56ea780761c8] ...
	I0327 16:46:06.931258    8959 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 56ea780761c8"
	I0327 16:46:06.945644    8959 logs.go:123] Gathering logs for kube-controller-manager [f76badbaa6c8] ...
	I0327 16:46:06.945654    8959 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f76badbaa6c8"
	I0327 16:46:06.960242    8959 logs.go:123] Gathering logs for storage-provisioner [396dc8c2621b] ...
	I0327 16:46:06.960252    8959 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 396dc8c2621b"
	I0327 16:46:06.972084    8959 logs.go:123] Gathering logs for Docker ...
	I0327 16:46:06.972094    8959 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0327 16:46:06.997671    8959 logs.go:123] Gathering logs for kube-controller-manager [fa00948ec909] ...
	I0327 16:46:06.997680    8959 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fa00948ec909"
	I0327 16:46:07.015134    8959 logs.go:123] Gathering logs for kubelet ...
	I0327 16:46:07.015144    8959 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0327 16:46:07.051258    8959 logs.go:138] Found kubelet problem: Mar 27 23:45:09 stopped-upgrade-017000 kubelet[1676]: W0327 23:45:09.218698    1676 reflector.go:324] object-"kube-system"/"kube-proxy": failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:stopped-upgrade-017000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'stopped-upgrade-017000' and this object
	W0327 16:46:07.051352    8959 logs.go:138] Found kubelet problem: Mar 27 23:45:09 stopped-upgrade-017000 kubelet[1676]: E0327 23:45:09.218752    1676 reflector.go:138] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:stopped-upgrade-017000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'stopped-upgrade-017000' and this object
	I0327 16:46:07.052345    8959 logs.go:123] Gathering logs for kube-apiserver [aa3afc402172] ...
	I0327 16:46:07.052351    8959 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 aa3afc402172"
	I0327 16:46:07.066878    8959 logs.go:123] Gathering logs for kube-apiserver [e20a2e974eba] ...
	I0327 16:46:07.066889    8959 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e20a2e974eba"
	I0327 16:46:07.105763    8959 logs.go:123] Gathering logs for etcd [d74fd36a706c] ...
	I0327 16:46:07.105774    8959 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d74fd36a706c"
	I0327 16:46:07.119531    8959 logs.go:123] Gathering logs for kube-proxy [0dc2c64bef5d] ...
	I0327 16:46:07.119545    8959 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0dc2c64bef5d"
	I0327 16:46:07.132083    8959 logs.go:123] Gathering logs for storage-provisioner [ffd8ac20b899] ...
	I0327 16:46:07.132094    8959 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ffd8ac20b899"
	I0327 16:46:07.143524    8959 logs.go:123] Gathering logs for container status ...
	I0327 16:46:07.143536    8959 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0327 16:46:07.157421    8959 out.go:304] Setting ErrFile to fd 2...
	I0327 16:46:07.157434    8959 out.go:338] TERM=,COLORTERM=, which probably does not support color
	W0327 16:46:07.157460    8959 out.go:239] X Problems detected in kubelet:
	X Problems detected in kubelet:
	W0327 16:46:07.157464    8959 out.go:239]   Mar 27 23:45:09 stopped-upgrade-017000 kubelet[1676]: W0327 23:45:09.218698    1676 reflector.go:324] object-"kube-system"/"kube-proxy": failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:stopped-upgrade-017000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'stopped-upgrade-017000' and this object
	  Mar 27 23:45:09 stopped-upgrade-017000 kubelet[1676]: W0327 23:45:09.218698    1676 reflector.go:324] object-"kube-system"/"kube-proxy": failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:stopped-upgrade-017000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'stopped-upgrade-017000' and this object
	W0327 16:46:07.157468    8959 out.go:239]   Mar 27 23:45:09 stopped-upgrade-017000 kubelet[1676]: E0327 23:45:09.218752    1676 reflector.go:138] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:stopped-upgrade-017000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'stopped-upgrade-017000' and this object
	  Mar 27 23:45:09 stopped-upgrade-017000 kubelet[1676]: E0327 23:45:09.218752    1676 reflector.go:138] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:stopped-upgrade-017000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'stopped-upgrade-017000' and this object
	I0327 16:46:07.157473    8959 out.go:304] Setting ErrFile to fd 2...
	I0327 16:46:07.157475    8959 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0327 16:46:17.159098    8959 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0327 16:46:22.159617    8959 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0327 16:46:22.159828    8959 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0327 16:46:22.181812    8959 logs.go:276] 2 containers: [aa3afc402172 e20a2e974eba]
	I0327 16:46:22.181906    8959 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0327 16:46:22.196094    8959 logs.go:276] 2 containers: [d74fd36a706c 56ea780761c8]
	I0327 16:46:22.196168    8959 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0327 16:46:22.208391    8959 logs.go:276] 1 containers: [25c0222d62b6]
	I0327 16:46:22.208456    8959 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0327 16:46:22.219379    8959 logs.go:276] 2 containers: [62111993cdda c581a3f09ed3]
	I0327 16:46:22.219456    8959 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0327 16:46:22.230153    8959 logs.go:276] 1 containers: [0dc2c64bef5d]
	I0327 16:46:22.230220    8959 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0327 16:46:22.245464    8959 logs.go:276] 2 containers: [fa00948ec909 f76badbaa6c8]
	I0327 16:46:22.245532    8959 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0327 16:46:22.255680    8959 logs.go:276] 0 containers: []
	W0327 16:46:22.255691    8959 logs.go:278] No container was found matching "kindnet"
	I0327 16:46:22.255751    8959 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0327 16:46:22.266028    8959 logs.go:276] 2 containers: [396dc8c2621b ffd8ac20b899]
	I0327 16:46:22.266047    8959 logs.go:123] Gathering logs for kubelet ...
	I0327 16:46:22.266052    8959 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0327 16:46:22.305454    8959 logs.go:138] Found kubelet problem: Mar 27 23:45:09 stopped-upgrade-017000 kubelet[1676]: W0327 23:45:09.218698    1676 reflector.go:324] object-"kube-system"/"kube-proxy": failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:stopped-upgrade-017000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'stopped-upgrade-017000' and this object
	W0327 16:46:22.305565    8959 logs.go:138] Found kubelet problem: Mar 27 23:45:09 stopped-upgrade-017000 kubelet[1676]: E0327 23:45:09.218752    1676 reflector.go:138] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:stopped-upgrade-017000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'stopped-upgrade-017000' and this object
	I0327 16:46:22.306780    8959 logs.go:123] Gathering logs for describe nodes ...
	I0327 16:46:22.306789    8959 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0327 16:46:22.346208    8959 logs.go:123] Gathering logs for etcd [56ea780761c8] ...
	I0327 16:46:22.346221    8959 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 56ea780761c8"
	I0327 16:46:22.368414    8959 logs.go:123] Gathering logs for kube-scheduler [62111993cdda] ...
	I0327 16:46:22.368427    8959 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 62111993cdda"
	I0327 16:46:22.380105    8959 logs.go:123] Gathering logs for kube-scheduler [c581a3f09ed3] ...
	I0327 16:46:22.380118    8959 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c581a3f09ed3"
	I0327 16:46:22.397764    8959 logs.go:123] Gathering logs for kube-controller-manager [f76badbaa6c8] ...
	I0327 16:46:22.397775    8959 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f76badbaa6c8"
	I0327 16:46:22.415721    8959 logs.go:123] Gathering logs for storage-provisioner [396dc8c2621b] ...
	I0327 16:46:22.415733    8959 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 396dc8c2621b"
	I0327 16:46:22.427794    8959 logs.go:123] Gathering logs for dmesg ...
	I0327 16:46:22.427808    8959 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0327 16:46:22.432034    8959 logs.go:123] Gathering logs for kube-apiserver [e20a2e974eba] ...
	I0327 16:46:22.432042    8959 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e20a2e974eba"
	I0327 16:46:22.469494    8959 logs.go:123] Gathering logs for etcd [d74fd36a706c] ...
	I0327 16:46:22.469505    8959 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d74fd36a706c"
	I0327 16:46:22.483450    8959 logs.go:123] Gathering logs for kube-proxy [0dc2c64bef5d] ...
	I0327 16:46:22.483472    8959 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0dc2c64bef5d"
	I0327 16:46:22.495278    8959 logs.go:123] Gathering logs for kube-controller-manager [fa00948ec909] ...
	I0327 16:46:22.495291    8959 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fa00948ec909"
	I0327 16:46:22.513185    8959 logs.go:123] Gathering logs for container status ...
	I0327 16:46:22.513196    8959 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0327 16:46:22.526860    8959 logs.go:123] Gathering logs for kube-apiserver [aa3afc402172] ...
	I0327 16:46:22.526872    8959 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 aa3afc402172"
	I0327 16:46:22.541173    8959 logs.go:123] Gathering logs for storage-provisioner [ffd8ac20b899] ...
	I0327 16:46:22.541184    8959 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ffd8ac20b899"
	I0327 16:46:22.555923    8959 logs.go:123] Gathering logs for coredns [25c0222d62b6] ...
	I0327 16:46:22.555935    8959 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 25c0222d62b6"
	I0327 16:46:22.567478    8959 logs.go:123] Gathering logs for Docker ...
	I0327 16:46:22.567489    8959 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0327 16:46:22.592313    8959 out.go:304] Setting ErrFile to fd 2...
	I0327 16:46:22.592322    8959 out.go:338] TERM=,COLORTERM=, which probably does not support color
	W0327 16:46:22.592344    8959 out.go:239] X Problems detected in kubelet:
	X Problems detected in kubelet:
	W0327 16:46:22.592349    8959 out.go:239]   Mar 27 23:45:09 stopped-upgrade-017000 kubelet[1676]: W0327 23:45:09.218698    1676 reflector.go:324] object-"kube-system"/"kube-proxy": failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:stopped-upgrade-017000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'stopped-upgrade-017000' and this object
	  Mar 27 23:45:09 stopped-upgrade-017000 kubelet[1676]: W0327 23:45:09.218698    1676 reflector.go:324] object-"kube-system"/"kube-proxy": failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:stopped-upgrade-017000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'stopped-upgrade-017000' and this object
	W0327 16:46:22.592352    8959 out.go:239]   Mar 27 23:45:09 stopped-upgrade-017000 kubelet[1676]: E0327 23:45:09.218752    1676 reflector.go:138] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:stopped-upgrade-017000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'stopped-upgrade-017000' and this object
	  Mar 27 23:45:09 stopped-upgrade-017000 kubelet[1676]: E0327 23:45:09.218752    1676 reflector.go:138] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:stopped-upgrade-017000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'stopped-upgrade-017000' and this object
	I0327 16:46:22.592356    8959 out.go:304] Setting ErrFile to fd 2...
	I0327 16:46:22.592359    8959 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0327 16:46:32.595506    8959 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0327 16:46:37.597609    8959 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0327 16:46:37.597727    8959 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0327 16:46:37.610686    8959 logs.go:276] 2 containers: [aa3afc402172 e20a2e974eba]
	I0327 16:46:37.610764    8959 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0327 16:46:37.627111    8959 logs.go:276] 2 containers: [d74fd36a706c 56ea780761c8]
	I0327 16:46:37.627203    8959 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0327 16:46:37.638093    8959 logs.go:276] 1 containers: [25c0222d62b6]
	I0327 16:46:37.638155    8959 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0327 16:46:37.648924    8959 logs.go:276] 2 containers: [62111993cdda c581a3f09ed3]
	I0327 16:46:37.648987    8959 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0327 16:46:37.659345    8959 logs.go:276] 1 containers: [0dc2c64bef5d]
	I0327 16:46:37.659427    8959 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0327 16:46:37.670444    8959 logs.go:276] 2 containers: [fa00948ec909 f76badbaa6c8]
	I0327 16:46:37.670527    8959 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0327 16:46:37.680966    8959 logs.go:276] 0 containers: []
	W0327 16:46:37.680976    8959 logs.go:278] No container was found matching "kindnet"
	I0327 16:46:37.681029    8959 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0327 16:46:37.691827    8959 logs.go:276] 2 containers: [396dc8c2621b ffd8ac20b899]
	I0327 16:46:37.691847    8959 logs.go:123] Gathering logs for kubelet ...
	I0327 16:46:37.691853    8959 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0327 16:46:37.729431    8959 logs.go:138] Found kubelet problem: Mar 27 23:45:09 stopped-upgrade-017000 kubelet[1676]: W0327 23:45:09.218698    1676 reflector.go:324] object-"kube-system"/"kube-proxy": failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:stopped-upgrade-017000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'stopped-upgrade-017000' and this object
	W0327 16:46:37.729527    8959 logs.go:138] Found kubelet problem: Mar 27 23:45:09 stopped-upgrade-017000 kubelet[1676]: E0327 23:45:09.218752    1676 reflector.go:138] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:stopped-upgrade-017000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'stopped-upgrade-017000' and this object
	I0327 16:46:37.730587    8959 logs.go:123] Gathering logs for coredns [25c0222d62b6] ...
	I0327 16:46:37.730594    8959 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 25c0222d62b6"
	I0327 16:46:37.742190    8959 logs.go:123] Gathering logs for Docker ...
	I0327 16:46:37.742200    8959 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0327 16:46:37.766778    8959 logs.go:123] Gathering logs for storage-provisioner [ffd8ac20b899] ...
	I0327 16:46:37.766786    8959 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ffd8ac20b899"
	I0327 16:46:37.780995    8959 logs.go:123] Gathering logs for kube-apiserver [aa3afc402172] ...
	I0327 16:46:37.781011    8959 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 aa3afc402172"
	I0327 16:46:37.795271    8959 logs.go:123] Gathering logs for kube-controller-manager [fa00948ec909] ...
	I0327 16:46:37.795281    8959 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fa00948ec909"
	I0327 16:46:37.812991    8959 logs.go:123] Gathering logs for kube-controller-manager [f76badbaa6c8] ...
	I0327 16:46:37.813003    8959 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f76badbaa6c8"
	I0327 16:46:37.827181    8959 logs.go:123] Gathering logs for storage-provisioner [396dc8c2621b] ...
	I0327 16:46:37.827192    8959 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 396dc8c2621b"
	I0327 16:46:37.838687    8959 logs.go:123] Gathering logs for kube-scheduler [c581a3f09ed3] ...
	I0327 16:46:37.838702    8959 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c581a3f09ed3"
	I0327 16:46:37.853441    8959 logs.go:123] Gathering logs for kube-proxy [0dc2c64bef5d] ...
	I0327 16:46:37.853451    8959 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0dc2c64bef5d"
	I0327 16:46:37.865120    8959 logs.go:123] Gathering logs for describe nodes ...
	I0327 16:46:37.865131    8959 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0327 16:46:37.899621    8959 logs.go:123] Gathering logs for kube-apiserver [e20a2e974eba] ...
	I0327 16:46:37.899632    8959 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e20a2e974eba"
	I0327 16:46:37.941472    8959 logs.go:123] Gathering logs for etcd [d74fd36a706c] ...
	I0327 16:46:37.941484    8959 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d74fd36a706c"
	I0327 16:46:37.955184    8959 logs.go:123] Gathering logs for kube-scheduler [62111993cdda] ...
	I0327 16:46:37.955193    8959 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 62111993cdda"
	I0327 16:46:37.967025    8959 logs.go:123] Gathering logs for dmesg ...
	I0327 16:46:37.967035    8959 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0327 16:46:37.970965    8959 logs.go:123] Gathering logs for etcd [56ea780761c8] ...
	I0327 16:46:37.970974    8959 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 56ea780761c8"
	I0327 16:46:37.985960    8959 logs.go:123] Gathering logs for container status ...
	I0327 16:46:37.985971    8959 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0327 16:46:37.997584    8959 out.go:304] Setting ErrFile to fd 2...
	I0327 16:46:37.997595    8959 out.go:338] TERM=,COLORTERM=, which probably does not support color
	W0327 16:46:37.997621    8959 out.go:239] X Problems detected in kubelet:
	X Problems detected in kubelet:
	W0327 16:46:37.997625    8959 out.go:239]   Mar 27 23:45:09 stopped-upgrade-017000 kubelet[1676]: W0327 23:45:09.218698    1676 reflector.go:324] object-"kube-system"/"kube-proxy": failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:stopped-upgrade-017000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'stopped-upgrade-017000' and this object
	  Mar 27 23:45:09 stopped-upgrade-017000 kubelet[1676]: W0327 23:45:09.218698    1676 reflector.go:324] object-"kube-system"/"kube-proxy": failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:stopped-upgrade-017000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'stopped-upgrade-017000' and this object
	W0327 16:46:37.997629    8959 out.go:239]   Mar 27 23:45:09 stopped-upgrade-017000 kubelet[1676]: E0327 23:45:09.218752    1676 reflector.go:138] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:stopped-upgrade-017000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'stopped-upgrade-017000' and this object
	  Mar 27 23:45:09 stopped-upgrade-017000 kubelet[1676]: E0327 23:45:09.218752    1676 reflector.go:138] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:stopped-upgrade-017000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'stopped-upgrade-017000' and this object
	I0327 16:46:37.997633    8959 out.go:304] Setting ErrFile to fd 2...
	I0327 16:46:37.997635    8959 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0327 16:46:48.001454    8959 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0327 16:46:53.003861    8959 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0327 16:46:53.004122    8959 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0327 16:46:53.026691    8959 logs.go:276] 2 containers: [aa3afc402172 e20a2e974eba]
	I0327 16:46:53.026808    8959 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0327 16:46:53.042611    8959 logs.go:276] 2 containers: [d74fd36a706c 56ea780761c8]
	I0327 16:46:53.042708    8959 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0327 16:46:53.055789    8959 logs.go:276] 1 containers: [25c0222d62b6]
	I0327 16:46:53.055859    8959 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0327 16:46:53.067196    8959 logs.go:276] 2 containers: [62111993cdda c581a3f09ed3]
	I0327 16:46:53.067268    8959 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0327 16:46:53.077749    8959 logs.go:276] 1 containers: [0dc2c64bef5d]
	I0327 16:46:53.077828    8959 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0327 16:46:53.089692    8959 logs.go:276] 2 containers: [fa00948ec909 f76badbaa6c8]
	I0327 16:46:53.089765    8959 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0327 16:46:53.100023    8959 logs.go:276] 0 containers: []
	W0327 16:46:53.100034    8959 logs.go:278] No container was found matching "kindnet"
	I0327 16:46:53.100088    8959 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0327 16:46:53.110058    8959 logs.go:276] 2 containers: [396dc8c2621b ffd8ac20b899]
	I0327 16:46:53.110077    8959 logs.go:123] Gathering logs for dmesg ...
	I0327 16:46:53.110082    8959 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0327 16:46:53.114188    8959 logs.go:123] Gathering logs for kube-apiserver [aa3afc402172] ...
	I0327 16:46:53.114194    8959 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 aa3afc402172"
	I0327 16:46:53.128688    8959 logs.go:123] Gathering logs for kube-scheduler [62111993cdda] ...
	I0327 16:46:53.128699    8959 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 62111993cdda"
	I0327 16:46:53.140095    8959 logs.go:123] Gathering logs for storage-provisioner [ffd8ac20b899] ...
	I0327 16:46:53.140105    8959 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ffd8ac20b899"
	I0327 16:46:53.155103    8959 logs.go:123] Gathering logs for Docker ...
	I0327 16:46:53.155116    8959 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0327 16:46:53.179655    8959 logs.go:123] Gathering logs for etcd [d74fd36a706c] ...
	I0327 16:46:53.179664    8959 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d74fd36a706c"
	I0327 16:46:53.194724    8959 logs.go:123] Gathering logs for kube-scheduler [c581a3f09ed3] ...
	I0327 16:46:53.194735    8959 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c581a3f09ed3"
	I0327 16:46:53.210060    8959 logs.go:123] Gathering logs for kube-proxy [0dc2c64bef5d] ...
	I0327 16:46:53.210072    8959 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0dc2c64bef5d"
	I0327 16:46:53.222208    8959 logs.go:123] Gathering logs for kube-controller-manager [fa00948ec909] ...
	I0327 16:46:53.222218    8959 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fa00948ec909"
	I0327 16:46:53.240014    8959 logs.go:123] Gathering logs for kubelet ...
	I0327 16:46:53.240024    8959 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0327 16:46:53.276744    8959 logs.go:138] Found kubelet problem: Mar 27 23:45:09 stopped-upgrade-017000 kubelet[1676]: W0327 23:45:09.218698    1676 reflector.go:324] object-"kube-system"/"kube-proxy": failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:stopped-upgrade-017000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'stopped-upgrade-017000' and this object
	W0327 16:46:53.276838    8959 logs.go:138] Found kubelet problem: Mar 27 23:45:09 stopped-upgrade-017000 kubelet[1676]: E0327 23:45:09.218752    1676 reflector.go:138] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:stopped-upgrade-017000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'stopped-upgrade-017000' and this object
	I0327 16:46:53.277903    8959 logs.go:123] Gathering logs for coredns [25c0222d62b6] ...
	I0327 16:46:53.277907    8959 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 25c0222d62b6"
	I0327 16:46:53.289018    8959 logs.go:123] Gathering logs for describe nodes ...
	I0327 16:46:53.289029    8959 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0327 16:46:53.325072    8959 logs.go:123] Gathering logs for kube-apiserver [e20a2e974eba] ...
	I0327 16:46:53.325083    8959 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e20a2e974eba"
	I0327 16:46:53.367904    8959 logs.go:123] Gathering logs for etcd [56ea780761c8] ...
	I0327 16:46:53.367914    8959 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 56ea780761c8"
	I0327 16:46:53.382206    8959 logs.go:123] Gathering logs for kube-controller-manager [f76badbaa6c8] ...
	I0327 16:46:53.382217    8959 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f76badbaa6c8"
	I0327 16:46:53.395471    8959 logs.go:123] Gathering logs for storage-provisioner [396dc8c2621b] ...
	I0327 16:46:53.395481    8959 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 396dc8c2621b"
	I0327 16:46:53.406840    8959 logs.go:123] Gathering logs for container status ...
	I0327 16:46:53.406853    8959 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0327 16:46:53.419047    8959 out.go:304] Setting ErrFile to fd 2...
	I0327 16:46:53.419059    8959 out.go:338] TERM=,COLORTERM=, which probably does not support color
	W0327 16:46:53.419088    8959 out.go:239] X Problems detected in kubelet:
	X Problems detected in kubelet:
	W0327 16:46:53.419094    8959 out.go:239]   Mar 27 23:45:09 stopped-upgrade-017000 kubelet[1676]: W0327 23:45:09.218698    1676 reflector.go:324] object-"kube-system"/"kube-proxy": failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:stopped-upgrade-017000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'stopped-upgrade-017000' and this object
	  Mar 27 23:45:09 stopped-upgrade-017000 kubelet[1676]: W0327 23:45:09.218698    1676 reflector.go:324] object-"kube-system"/"kube-proxy": failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:stopped-upgrade-017000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'stopped-upgrade-017000' and this object
	W0327 16:46:53.419097    8959 out.go:239]   Mar 27 23:45:09 stopped-upgrade-017000 kubelet[1676]: E0327 23:45:09.218752    1676 reflector.go:138] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:stopped-upgrade-017000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'stopped-upgrade-017000' and this object
	  Mar 27 23:45:09 stopped-upgrade-017000 kubelet[1676]: E0327 23:45:09.218752    1676 reflector.go:138] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:stopped-upgrade-017000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'stopped-upgrade-017000' and this object
	I0327 16:46:53.419101    8959 out.go:304] Setting ErrFile to fd 2...
	I0327 16:46:53.419104    8959 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0327 16:47:03.421060    8959 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0327 16:47:08.423461    8959 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0327 16:47:08.423815    8959 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0327 16:47:08.456120    8959 logs.go:276] 2 containers: [aa3afc402172 e20a2e974eba]
	I0327 16:47:08.456258    8959 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0327 16:47:08.474893    8959 logs.go:276] 2 containers: [d74fd36a706c 56ea780761c8]
	I0327 16:47:08.474980    8959 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0327 16:47:08.489575    8959 logs.go:276] 1 containers: [25c0222d62b6]
	I0327 16:47:08.489655    8959 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0327 16:47:08.501606    8959 logs.go:276] 2 containers: [62111993cdda c581a3f09ed3]
	I0327 16:47:08.501678    8959 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0327 16:47:08.512208    8959 logs.go:276] 1 containers: [0dc2c64bef5d]
	I0327 16:47:08.512276    8959 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0327 16:47:08.522777    8959 logs.go:276] 2 containers: [fa00948ec909 f76badbaa6c8]
	I0327 16:47:08.522853    8959 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0327 16:47:08.532503    8959 logs.go:276] 0 containers: []
	W0327 16:47:08.532517    8959 logs.go:278] No container was found matching "kindnet"
	I0327 16:47:08.532582    8959 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0327 16:47:08.543834    8959 logs.go:276] 2 containers: [396dc8c2621b ffd8ac20b899]
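	Each failed probe triggers the same evidence-gathering cycle, and every cycle opens with container discovery: one docker ps query per control-plane component, matching the kubeadm container-naming convention k8s_<component>_<pod>_<namespace>_... . A sketch of that step (component list copied from the queries above):

	    # List all containers, running or exited, for each control-plane component.
	    for c in kube-apiserver etcd coredns kube-scheduler kube-proxy \
	             kube-controller-manager kindnet storage-provisioner; do
	      ids=$(docker ps -a --filter "name=k8s_${c}" --format '{{.ID}}')
	      # Two IDs per component (as above) means an exited attempt plus a restart.
	      echo "${c}: $(echo ${ids} | wc -w) containers: [${ids}]"
	    done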
	I0327 16:47:08.543852    8959 logs.go:123] Gathering logs for kube-controller-manager [fa00948ec909] ...
	I0327 16:47:08.543858    8959 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fa00948ec909"
	I0327 16:47:08.561576    8959 logs.go:123] Gathering logs for storage-provisioner [396dc8c2621b] ...
	I0327 16:47:08.561589    8959 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 396dc8c2621b"
	I0327 16:47:08.572791    8959 logs.go:123] Gathering logs for storage-provisioner [ffd8ac20b899] ...
	I0327 16:47:08.572804    8959 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ffd8ac20b899"
	I0327 16:47:08.587109    8959 logs.go:123] Gathering logs for etcd [56ea780761c8] ...
	I0327 16:47:08.587122    8959 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 56ea780761c8"
	I0327 16:47:08.601926    8959 logs.go:123] Gathering logs for kube-scheduler [62111993cdda] ...
	I0327 16:47:08.601943    8959 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 62111993cdda"
	I0327 16:47:08.614609    8959 logs.go:123] Gathering logs for container status ...
	I0327 16:47:08.614622    8959 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0327 16:47:08.626888    8959 logs.go:123] Gathering logs for describe nodes ...
	I0327 16:47:08.626902    8959 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0327 16:47:08.663615    8959 logs.go:123] Gathering logs for kube-scheduler [c581a3f09ed3] ...
	I0327 16:47:08.663625    8959 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c581a3f09ed3"
	I0327 16:47:08.680973    8959 logs.go:123] Gathering logs for etcd [d74fd36a706c] ...
	I0327 16:47:08.680984    8959 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d74fd36a706c"
	I0327 16:47:08.694354    8959 logs.go:123] Gathering logs for kube-proxy [0dc2c64bef5d] ...
	I0327 16:47:08.694365    8959 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0dc2c64bef5d"
	I0327 16:47:08.705867    8959 logs.go:123] Gathering logs for kube-apiserver [aa3afc402172] ...
	I0327 16:47:08.705877    8959 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 aa3afc402172"
	I0327 16:47:08.719483    8959 logs.go:123] Gathering logs for kube-apiserver [e20a2e974eba] ...
	I0327 16:47:08.719494    8959 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e20a2e974eba"
	I0327 16:47:08.770044    8959 logs.go:123] Gathering logs for coredns [25c0222d62b6] ...
	I0327 16:47:08.770054    8959 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 25c0222d62b6"
	I0327 16:47:08.781980    8959 logs.go:123] Gathering logs for kube-controller-manager [f76badbaa6c8] ...
	I0327 16:47:08.781992    8959 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f76badbaa6c8"
	I0327 16:47:08.795118    8959 logs.go:123] Gathering logs for Docker ...
	I0327 16:47:08.795128    8959 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0327 16:47:08.820595    8959 logs.go:123] Gathering logs for kubelet ...
	I0327 16:47:08.820611    8959 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0327 16:47:08.858534    8959 logs.go:138] Found kubelet problem: Mar 27 23:45:09 stopped-upgrade-017000 kubelet[1676]: W0327 23:45:09.218698    1676 reflector.go:324] object-"kube-system"/"kube-proxy": failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:stopped-upgrade-017000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'stopped-upgrade-017000' and this object
	W0327 16:47:08.858633    8959 logs.go:138] Found kubelet problem: Mar 27 23:45:09 stopped-upgrade-017000 kubelet[1676]: E0327 23:45:09.218752    1676 reflector.go:138] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:stopped-upgrade-017000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'stopped-upgrade-017000' and this object
	I0327 16:47:08.859664    8959 logs.go:123] Gathering logs for dmesg ...
	I0327 16:47:08.859669    8959 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0327 16:47:08.864014    8959 out.go:304] Setting ErrFile to fd 2...
	I0327 16:47:08.864021    8959 out.go:338] TERM=,COLORTERM=, which probably does not support color
	W0327 16:47:08.864048    8959 out.go:239] X Problems detected in kubelet:
	W0327 16:47:08.864053    8959 out.go:239]   Mar 27 23:45:09 stopped-upgrade-017000 kubelet[1676]: W0327 23:45:09.218698    1676 reflector.go:324] object-"kube-system"/"kube-proxy": failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:stopped-upgrade-017000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'stopped-upgrade-017000' and this object
	W0327 16:47:08.864056    8959 out.go:239]   Mar 27 23:45:09 stopped-upgrade-017000 kubelet[1676]: E0327 23:45:09.218752    1676 reflector.go:138] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:stopped-upgrade-017000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'stopped-upgrade-017000' and this object
	I0327 16:47:08.864061    8959 out.go:304] Setting ErrFile to fd 2...
	I0327 16:47:08.864063    8959 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0327 16:47:18.866440    8959 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0327 16:47:23.868607    8959 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0327 16:47:23.868812    8959 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0327 16:47:23.880616    8959 logs.go:276] 2 containers: [aa3afc402172 e20a2e974eba]
	I0327 16:47:23.880702    8959 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0327 16:47:23.891331    8959 logs.go:276] 2 containers: [d74fd36a706c 56ea780761c8]
	I0327 16:47:23.891395    8959 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0327 16:47:23.901870    8959 logs.go:276] 1 containers: [25c0222d62b6]
	I0327 16:47:23.901937    8959 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0327 16:47:23.913198    8959 logs.go:276] 2 containers: [62111993cdda c581a3f09ed3]
	I0327 16:47:23.913274    8959 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0327 16:47:23.924129    8959 logs.go:276] 1 containers: [0dc2c64bef5d]
	I0327 16:47:23.924200    8959 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0327 16:47:23.935179    8959 logs.go:276] 2 containers: [fa00948ec909 f76badbaa6c8]
	I0327 16:47:23.935257    8959 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0327 16:47:23.951255    8959 logs.go:276] 0 containers: []
	W0327 16:47:23.951268    8959 logs.go:278] No container was found matching "kindnet"
	I0327 16:47:23.951326    8959 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0327 16:47:23.962256    8959 logs.go:276] 2 containers: [396dc8c2621b ffd8ac20b899]
	I0327 16:47:23.962277    8959 logs.go:123] Gathering logs for coredns [25c0222d62b6] ...
	I0327 16:47:23.962284    8959 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 25c0222d62b6"
	I0327 16:47:23.973765    8959 logs.go:123] Gathering logs for kube-proxy [0dc2c64bef5d] ...
	I0327 16:47:23.973776    8959 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0dc2c64bef5d"
	I0327 16:47:23.985545    8959 logs.go:123] Gathering logs for kube-controller-manager [f76badbaa6c8] ...
	I0327 16:47:23.985560    8959 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f76badbaa6c8"
	I0327 16:47:23.999111    8959 logs.go:123] Gathering logs for Docker ...
	I0327 16:47:23.999121    8959 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0327 16:47:24.023067    8959 logs.go:123] Gathering logs for kubelet ...
	I0327 16:47:24.023077    8959 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0327 16:47:24.058168    8959 logs.go:138] Found kubelet problem: Mar 27 23:45:09 stopped-upgrade-017000 kubelet[1676]: W0327 23:45:09.218698    1676 reflector.go:324] object-"kube-system"/"kube-proxy": failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:stopped-upgrade-017000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'stopped-upgrade-017000' and this object
	W0327 16:47:24.058261    8959 logs.go:138] Found kubelet problem: Mar 27 23:45:09 stopped-upgrade-017000 kubelet[1676]: E0327 23:45:09.218752    1676 reflector.go:138] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:stopped-upgrade-017000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'stopped-upgrade-017000' and this object
	I0327 16:47:24.059309    8959 logs.go:123] Gathering logs for dmesg ...
	I0327 16:47:24.059318    8959 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0327 16:47:24.063804    8959 logs.go:123] Gathering logs for etcd [56ea780761c8] ...
	I0327 16:47:24.063810    8959 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 56ea780761c8"
	I0327 16:47:24.078443    8959 logs.go:123] Gathering logs for kube-apiserver [aa3afc402172] ...
	I0327 16:47:24.078454    8959 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 aa3afc402172"
	I0327 16:47:24.094055    8959 logs.go:123] Gathering logs for kube-scheduler [c581a3f09ed3] ...
	I0327 16:47:24.094067    8959 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c581a3f09ed3"
	I0327 16:47:24.109126    8959 logs.go:123] Gathering logs for storage-provisioner [396dc8c2621b] ...
	I0327 16:47:24.109135    8959 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 396dc8c2621b"
	I0327 16:47:24.120728    8959 logs.go:123] Gathering logs for kube-scheduler [62111993cdda] ...
	I0327 16:47:24.120740    8959 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 62111993cdda"
	I0327 16:47:24.132642    8959 logs.go:123] Gathering logs for kube-controller-manager [fa00948ec909] ...
	I0327 16:47:24.132654    8959 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fa00948ec909"
	I0327 16:47:24.151061    8959 logs.go:123] Gathering logs for storage-provisioner [ffd8ac20b899] ...
	I0327 16:47:24.151073    8959 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ffd8ac20b899"
	I0327 16:47:24.165193    8959 logs.go:123] Gathering logs for container status ...
	I0327 16:47:24.165207    8959 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0327 16:47:24.177109    8959 logs.go:123] Gathering logs for describe nodes ...
	I0327 16:47:24.177118    8959 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0327 16:47:24.213481    8959 logs.go:123] Gathering logs for kube-apiserver [e20a2e974eba] ...
	I0327 16:47:24.213491    8959 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e20a2e974eba"
	I0327 16:47:24.259105    8959 logs.go:123] Gathering logs for etcd [d74fd36a706c] ...
	I0327 16:47:24.259116    8959 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d74fd36a706c"
	I0327 16:47:24.273047    8959 out.go:304] Setting ErrFile to fd 2...
	I0327 16:47:24.273059    8959 out.go:338] TERM=,COLORTERM=, which probably does not support color
	W0327 16:47:24.273082    8959 out.go:239] X Problems detected in kubelet:
	W0327 16:47:24.273085    8959 out.go:239]   Mar 27 23:45:09 stopped-upgrade-017000 kubelet[1676]: W0327 23:45:09.218698    1676 reflector.go:324] object-"kube-system"/"kube-proxy": failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:stopped-upgrade-017000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'stopped-upgrade-017000' and this object
	W0327 16:47:24.273089    8959 out.go:239]   Mar 27 23:45:09 stopped-upgrade-017000 kubelet[1676]: E0327 23:45:09.218752    1676 reflector.go:138] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:stopped-upgrade-017000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'stopped-upgrade-017000' and this object
	I0327 16:47:24.273092    8959 out.go:304] Setting ErrFile to fd 2...
	I0327 16:47:24.273095    8959 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0327 16:47:34.276040    8959 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0327 16:47:39.278214    8959 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0327 16:47:39.278469    8959 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0327 16:47:39.302752    8959 logs.go:276] 2 containers: [aa3afc402172 e20a2e974eba]
	I0327 16:47:39.302856    8959 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0327 16:47:39.319571    8959 logs.go:276] 2 containers: [d74fd36a706c 56ea780761c8]
	I0327 16:47:39.319667    8959 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0327 16:47:39.335204    8959 logs.go:276] 1 containers: [25c0222d62b6]
	I0327 16:47:39.335278    8959 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0327 16:47:39.346646    8959 logs.go:276] 2 containers: [62111993cdda c581a3f09ed3]
	I0327 16:47:39.346718    8959 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0327 16:47:39.356821    8959 logs.go:276] 1 containers: [0dc2c64bef5d]
	I0327 16:47:39.356887    8959 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0327 16:47:39.368070    8959 logs.go:276] 2 containers: [fa00948ec909 f76badbaa6c8]
	I0327 16:47:39.368137    8959 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0327 16:47:39.378297    8959 logs.go:276] 0 containers: []
	W0327 16:47:39.378308    8959 logs.go:278] No container was found matching "kindnet"
	I0327 16:47:39.378365    8959 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0327 16:47:39.388629    8959 logs.go:276] 2 containers: [396dc8c2621b ffd8ac20b899]
	I0327 16:47:39.388664    8959 logs.go:123] Gathering logs for describe nodes ...
	I0327 16:47:39.388672    8959 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0327 16:47:39.426239    8959 logs.go:123] Gathering logs for etcd [d74fd36a706c] ...
	I0327 16:47:39.426251    8959 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d74fd36a706c"
	I0327 16:47:39.439705    8959 logs.go:123] Gathering logs for kube-scheduler [62111993cdda] ...
	I0327 16:47:39.439715    8959 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 62111993cdda"
	I0327 16:47:39.451412    8959 logs.go:123] Gathering logs for storage-provisioner [ffd8ac20b899] ...
	I0327 16:47:39.451425    8959 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ffd8ac20b899"
	I0327 16:47:39.463036    8959 logs.go:123] Gathering logs for container status ...
	I0327 16:47:39.463048    8959 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0327 16:47:39.475134    8959 logs.go:123] Gathering logs for etcd [56ea780761c8] ...
	I0327 16:47:39.475148    8959 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 56ea780761c8"
	I0327 16:47:39.489471    8959 logs.go:123] Gathering logs for coredns [25c0222d62b6] ...
	I0327 16:47:39.489481    8959 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 25c0222d62b6"
	I0327 16:47:39.501074    8959 logs.go:123] Gathering logs for dmesg ...
	I0327 16:47:39.501085    8959 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0327 16:47:39.505488    8959 logs.go:123] Gathering logs for kube-scheduler [c581a3f09ed3] ...
	I0327 16:47:39.505498    8959 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c581a3f09ed3"
	I0327 16:47:39.520612    8959 logs.go:123] Gathering logs for kube-proxy [0dc2c64bef5d] ...
	I0327 16:47:39.520624    8959 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0dc2c64bef5d"
	I0327 16:47:39.532517    8959 logs.go:123] Gathering logs for kube-controller-manager [fa00948ec909] ...
	I0327 16:47:39.532528    8959 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fa00948ec909"
	I0327 16:47:39.550101    8959 logs.go:123] Gathering logs for Docker ...
	I0327 16:47:39.550111    8959 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0327 16:47:39.573155    8959 logs.go:123] Gathering logs for kubelet ...
	I0327 16:47:39.573164    8959 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0327 16:47:39.609770    8959 logs.go:138] Found kubelet problem: Mar 27 23:45:09 stopped-upgrade-017000 kubelet[1676]: W0327 23:45:09.218698    1676 reflector.go:324] object-"kube-system"/"kube-proxy": failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:stopped-upgrade-017000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'stopped-upgrade-017000' and this object
	W0327 16:47:39.609864    8959 logs.go:138] Found kubelet problem: Mar 27 23:45:09 stopped-upgrade-017000 kubelet[1676]: E0327 23:45:09.218752    1676 reflector.go:138] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:stopped-upgrade-017000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'stopped-upgrade-017000' and this object
	I0327 16:47:39.610892    8959 logs.go:123] Gathering logs for kube-apiserver [aa3afc402172] ...
	I0327 16:47:39.610897    8959 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 aa3afc402172"
	I0327 16:47:39.625010    8959 logs.go:123] Gathering logs for kube-apiserver [e20a2e974eba] ...
	I0327 16:47:39.625021    8959 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e20a2e974eba"
	I0327 16:47:39.669384    8959 logs.go:123] Gathering logs for kube-controller-manager [f76badbaa6c8] ...
	I0327 16:47:39.669396    8959 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f76badbaa6c8"
	I0327 16:47:39.682572    8959 logs.go:123] Gathering logs for storage-provisioner [396dc8c2621b] ...
	I0327 16:47:39.682582    8959 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 396dc8c2621b"
	I0327 16:47:39.693962    8959 out.go:304] Setting ErrFile to fd 2...
	I0327 16:47:39.693971    8959 out.go:338] TERM=,COLORTERM=, which probably does not support color
	W0327 16:47:39.694008    8959 out.go:239] X Problems detected in kubelet:
	W0327 16:47:39.694012    8959 out.go:239]   Mar 27 23:45:09 stopped-upgrade-017000 kubelet[1676]: W0327 23:45:09.218698    1676 reflector.go:324] object-"kube-system"/"kube-proxy": failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:stopped-upgrade-017000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'stopped-upgrade-017000' and this object
	W0327 16:47:39.694017    8959 out.go:239]   Mar 27 23:45:09 stopped-upgrade-017000 kubelet[1676]: E0327 23:45:09.218752    1676 reflector.go:138] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:stopped-upgrade-017000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'stopped-upgrade-017000' and this object
	I0327 16:47:39.694022    8959 out.go:304] Setting ErrFile to fd 2...
	I0327 16:47:39.694024    8959 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0327 16:47:49.697835    8959 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0327 16:47:54.700113    8959 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0327 16:47:54.700446    8959 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0327 16:47:54.730616    8959 logs.go:276] 2 containers: [aa3afc402172 e20a2e974eba]
	I0327 16:47:54.730730    8959 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0327 16:47:54.748202    8959 logs.go:276] 2 containers: [d74fd36a706c 56ea780761c8]
	I0327 16:47:54.748292    8959 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0327 16:47:54.761976    8959 logs.go:276] 1 containers: [25c0222d62b6]
	I0327 16:47:54.762049    8959 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0327 16:47:54.777246    8959 logs.go:276] 2 containers: [62111993cdda c581a3f09ed3]
	I0327 16:47:54.777323    8959 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0327 16:47:54.787880    8959 logs.go:276] 1 containers: [0dc2c64bef5d]
	I0327 16:47:54.787954    8959 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0327 16:47:54.798382    8959 logs.go:276] 2 containers: [fa00948ec909 f76badbaa6c8]
	I0327 16:47:54.798452    8959 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0327 16:47:54.809232    8959 logs.go:276] 0 containers: []
	W0327 16:47:54.809243    8959 logs.go:278] No container was found matching "kindnet"
	I0327 16:47:54.809302    8959 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0327 16:47:54.821062    8959 logs.go:276] 2 containers: [396dc8c2621b ffd8ac20b899]
	I0327 16:47:54.821080    8959 logs.go:123] Gathering logs for etcd [56ea780761c8] ...
	I0327 16:47:54.821084    8959 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 56ea780761c8"
	I0327 16:47:54.835111    8959 logs.go:123] Gathering logs for coredns [25c0222d62b6] ...
	I0327 16:47:54.835120    8959 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 25c0222d62b6"
	I0327 16:47:54.850684    8959 logs.go:123] Gathering logs for kube-scheduler [c581a3f09ed3] ...
	I0327 16:47:54.850696    8959 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c581a3f09ed3"
	I0327 16:47:54.866194    8959 logs.go:123] Gathering logs for kube-controller-manager [fa00948ec909] ...
	I0327 16:47:54.866204    8959 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fa00948ec909"
	I0327 16:47:54.885468    8959 logs.go:123] Gathering logs for kube-controller-manager [f76badbaa6c8] ...
	I0327 16:47:54.885483    8959 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f76badbaa6c8"
	I0327 16:47:54.899849    8959 logs.go:123] Gathering logs for kube-apiserver [e20a2e974eba] ...
	I0327 16:47:54.899861    8959 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e20a2e974eba"
	I0327 16:47:54.938612    8959 logs.go:123] Gathering logs for etcd [d74fd36a706c] ...
	I0327 16:47:54.938627    8959 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d74fd36a706c"
	I0327 16:47:54.952730    8959 logs.go:123] Gathering logs for container status ...
	I0327 16:47:54.952745    8959 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0327 16:47:54.964776    8959 logs.go:123] Gathering logs for kubelet ...
	I0327 16:47:54.964786    8959 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0327 16:47:55.001111    8959 logs.go:138] Found kubelet problem: Mar 27 23:45:09 stopped-upgrade-017000 kubelet[1676]: W0327 23:45:09.218698    1676 reflector.go:324] object-"kube-system"/"kube-proxy": failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:stopped-upgrade-017000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'stopped-upgrade-017000' and this object
	W0327 16:47:55.001208    8959 logs.go:138] Found kubelet problem: Mar 27 23:45:09 stopped-upgrade-017000 kubelet[1676]: E0327 23:45:09.218752    1676 reflector.go:138] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:stopped-upgrade-017000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'stopped-upgrade-017000' and this object
	I0327 16:47:55.002270    8959 logs.go:123] Gathering logs for kube-apiserver [aa3afc402172] ...
	I0327 16:47:55.002276    8959 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 aa3afc402172"
	I0327 16:47:55.021199    8959 logs.go:123] Gathering logs for kube-scheduler [62111993cdda] ...
	I0327 16:47:55.021209    8959 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 62111993cdda"
	I0327 16:47:55.033017    8959 logs.go:123] Gathering logs for kube-proxy [0dc2c64bef5d] ...
	I0327 16:47:55.033028    8959 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0dc2c64bef5d"
	I0327 16:47:55.044270    8959 logs.go:123] Gathering logs for describe nodes ...
	I0327 16:47:55.044280    8959 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0327 16:47:55.079972    8959 logs.go:123] Gathering logs for storage-provisioner [396dc8c2621b] ...
	I0327 16:47:55.079984    8959 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 396dc8c2621b"
	I0327 16:47:55.091387    8959 logs.go:123] Gathering logs for storage-provisioner [ffd8ac20b899] ...
	I0327 16:47:55.091397    8959 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ffd8ac20b899"
	I0327 16:47:55.102440    8959 logs.go:123] Gathering logs for Docker ...
	I0327 16:47:55.102451    8959 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0327 16:47:55.125141    8959 logs.go:123] Gathering logs for dmesg ...
	I0327 16:47:55.125149    8959 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0327 16:47:55.129055    8959 out.go:304] Setting ErrFile to fd 2...
	I0327 16:47:55.129062    8959 out.go:338] TERM=,COLORTERM=, which probably does not support color
	W0327 16:47:55.129085    8959 out.go:239] X Problems detected in kubelet:
	W0327 16:47:55.129089    8959 out.go:239]   Mar 27 23:45:09 stopped-upgrade-017000 kubelet[1676]: W0327 23:45:09.218698    1676 reflector.go:324] object-"kube-system"/"kube-proxy": failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:stopped-upgrade-017000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'stopped-upgrade-017000' and this object
	W0327 16:47:55.129093    8959 out.go:239]   Mar 27 23:45:09 stopped-upgrade-017000 kubelet[1676]: E0327 23:45:09.218752    1676 reflector.go:138] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:stopped-upgrade-017000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'stopped-upgrade-017000' and this object
	I0327 16:47:55.129097    8959 out.go:304] Setting ErrFile to fd 2...
	I0327 16:47:55.129100    8959 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0327 16:48:05.132237    8959 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0327 16:48:10.134328    8959 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0327 16:48:10.134570    8959 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0327 16:48:10.160664    8959 logs.go:276] 2 containers: [aa3afc402172 e20a2e974eba]
	I0327 16:48:10.160772    8959 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0327 16:48:10.179341    8959 logs.go:276] 2 containers: [d74fd36a706c 56ea780761c8]
	I0327 16:48:10.179427    8959 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0327 16:48:10.192671    8959 logs.go:276] 1 containers: [25c0222d62b6]
	I0327 16:48:10.192744    8959 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0327 16:48:10.203786    8959 logs.go:276] 2 containers: [62111993cdda c581a3f09ed3]
	I0327 16:48:10.203860    8959 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0327 16:48:10.214303    8959 logs.go:276] 1 containers: [0dc2c64bef5d]
	I0327 16:48:10.214372    8959 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0327 16:48:10.225345    8959 logs.go:276] 2 containers: [fa00948ec909 f76badbaa6c8]
	I0327 16:48:10.225413    8959 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0327 16:48:10.236140    8959 logs.go:276] 0 containers: []
	W0327 16:48:10.236153    8959 logs.go:278] No container was found matching "kindnet"
	I0327 16:48:10.236208    8959 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0327 16:48:10.246578    8959 logs.go:276] 2 containers: [396dc8c2621b ffd8ac20b899]
	I0327 16:48:10.246597    8959 logs.go:123] Gathering logs for kube-apiserver [aa3afc402172] ...
	I0327 16:48:10.246602    8959 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 aa3afc402172"
	I0327 16:48:10.260145    8959 logs.go:123] Gathering logs for etcd [d74fd36a706c] ...
	I0327 16:48:10.260156    8959 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d74fd36a706c"
	I0327 16:48:10.274129    8959 logs.go:123] Gathering logs for etcd [56ea780761c8] ...
	I0327 16:48:10.274139    8959 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 56ea780761c8"
	I0327 16:48:10.289089    8959 logs.go:123] Gathering logs for kube-controller-manager [f76badbaa6c8] ...
	I0327 16:48:10.289098    8959 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f76badbaa6c8"
	I0327 16:48:10.302286    8959 logs.go:123] Gathering logs for storage-provisioner [396dc8c2621b] ...
	I0327 16:48:10.302296    8959 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 396dc8c2621b"
	I0327 16:48:10.316849    8959 logs.go:123] Gathering logs for container status ...
	I0327 16:48:10.316860    8959 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0327 16:48:10.329040    8959 logs.go:123] Gathering logs for kubelet ...
	I0327 16:48:10.329051    8959 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0327 16:48:10.365447    8959 logs.go:138] Found kubelet problem: Mar 27 23:45:09 stopped-upgrade-017000 kubelet[1676]: W0327 23:45:09.218698    1676 reflector.go:324] object-"kube-system"/"kube-proxy": failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:stopped-upgrade-017000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'stopped-upgrade-017000' and this object
	W0327 16:48:10.365547    8959 logs.go:138] Found kubelet problem: Mar 27 23:45:09 stopped-upgrade-017000 kubelet[1676]: E0327 23:45:09.218752    1676 reflector.go:138] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:stopped-upgrade-017000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'stopped-upgrade-017000' and this object
	I0327 16:48:10.366607    8959 logs.go:123] Gathering logs for kube-apiserver [e20a2e974eba] ...
	I0327 16:48:10.366616    8959 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e20a2e974eba"
	I0327 16:48:10.404693    8959 logs.go:123] Gathering logs for kube-controller-manager [fa00948ec909] ...
	I0327 16:48:10.404703    8959 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fa00948ec909"
	I0327 16:48:10.422133    8959 logs.go:123] Gathering logs for Docker ...
	I0327 16:48:10.422142    8959 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0327 16:48:10.445614    8959 logs.go:123] Gathering logs for dmesg ...
	I0327 16:48:10.445624    8959 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0327 16:48:10.450154    8959 logs.go:123] Gathering logs for kube-scheduler [62111993cdda] ...
	I0327 16:48:10.450160    8959 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 62111993cdda"
	I0327 16:48:10.462848    8959 logs.go:123] Gathering logs for kube-scheduler [c581a3f09ed3] ...
	I0327 16:48:10.462861    8959 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c581a3f09ed3"
	I0327 16:48:10.478410    8959 logs.go:123] Gathering logs for storage-provisioner [ffd8ac20b899] ...
	I0327 16:48:10.478420    8959 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ffd8ac20b899"
	I0327 16:48:10.497252    8959 logs.go:123] Gathering logs for describe nodes ...
	I0327 16:48:10.497264    8959 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0327 16:48:10.533671    8959 logs.go:123] Gathering logs for coredns [25c0222d62b6] ...
	I0327 16:48:10.533682    8959 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 25c0222d62b6"
	I0327 16:48:10.545946    8959 logs.go:123] Gathering logs for kube-proxy [0dc2c64bef5d] ...
	I0327 16:48:10.545963    8959 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0dc2c64bef5d"
	I0327 16:48:10.558478    8959 out.go:304] Setting ErrFile to fd 2...
	I0327 16:48:10.558489    8959 out.go:338] TERM=,COLORTERM=, which probably does not support color
	W0327 16:48:10.558513    8959 out.go:239] X Problems detected in kubelet:
	W0327 16:48:10.558519    8959 out.go:239]   Mar 27 23:45:09 stopped-upgrade-017000 kubelet[1676]: W0327 23:45:09.218698    1676 reflector.go:324] object-"kube-system"/"kube-proxy": failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:stopped-upgrade-017000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'stopped-upgrade-017000' and this object
	W0327 16:48:10.558525    8959 out.go:239]   Mar 27 23:45:09 stopped-upgrade-017000 kubelet[1676]: E0327 23:45:09.218752    1676 reflector.go:138] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:stopped-upgrade-017000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'stopped-upgrade-017000' and this object
	I0327 16:48:10.558530    8959 out.go:304] Setting ErrFile to fd 2...
	I0327 16:48:10.558533    8959 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0327 16:48:20.561269    8959 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0327 16:48:25.563326    8959 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0327 16:48:25.563536    8959 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0327 16:48:25.576449    8959 logs.go:276] 2 containers: [aa3afc402172 e20a2e974eba]
	I0327 16:48:25.576531    8959 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0327 16:48:25.592228    8959 logs.go:276] 2 containers: [d74fd36a706c 56ea780761c8]
	I0327 16:48:25.592295    8959 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0327 16:48:25.602573    8959 logs.go:276] 1 containers: [25c0222d62b6]
	I0327 16:48:25.602645    8959 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0327 16:48:25.612820    8959 logs.go:276] 2 containers: [62111993cdda c581a3f09ed3]
	I0327 16:48:25.612885    8959 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0327 16:48:25.622931    8959 logs.go:276] 1 containers: [0dc2c64bef5d]
	I0327 16:48:25.623001    8959 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0327 16:48:25.633604    8959 logs.go:276] 2 containers: [fa00948ec909 f76badbaa6c8]
	I0327 16:48:25.633678    8959 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0327 16:48:25.644138    8959 logs.go:276] 0 containers: []
	W0327 16:48:25.644148    8959 logs.go:278] No container was found matching "kindnet"
	I0327 16:48:25.644204    8959 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0327 16:48:25.654901    8959 logs.go:276] 2 containers: [396dc8c2621b ffd8ac20b899]
	I0327 16:48:25.654919    8959 logs.go:123] Gathering logs for container status ...
	I0327 16:48:25.654927    8959 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0327 16:48:25.667526    8959 logs.go:123] Gathering logs for etcd [56ea780761c8] ...
	I0327 16:48:25.667536    8959 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 56ea780761c8"
	I0327 16:48:25.682548    8959 logs.go:123] Gathering logs for coredns [25c0222d62b6] ...
	I0327 16:48:25.682558    8959 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 25c0222d62b6"
	I0327 16:48:25.693947    8959 logs.go:123] Gathering logs for kube-controller-manager [fa00948ec909] ...
	I0327 16:48:25.693958    8959 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fa00948ec909"
	I0327 16:48:25.711633    8959 logs.go:123] Gathering logs for storage-provisioner [ffd8ac20b899] ...
	I0327 16:48:25.711643    8959 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ffd8ac20b899"
	I0327 16:48:25.723076    8959 logs.go:123] Gathering logs for storage-provisioner [396dc8c2621b] ...
	I0327 16:48:25.723089    8959 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 396dc8c2621b"
	I0327 16:48:25.734839    8959 logs.go:123] Gathering logs for Docker ...
	I0327 16:48:25.734850    8959 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0327 16:48:25.757690    8959 logs.go:123] Gathering logs for etcd [d74fd36a706c] ...
	I0327 16:48:25.757696    8959 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d74fd36a706c"
	I0327 16:48:25.772859    8959 logs.go:123] Gathering logs for kube-scheduler [62111993cdda] ...
	I0327 16:48:25.772873    8959 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 62111993cdda"
	I0327 16:48:25.784501    8959 logs.go:123] Gathering logs for kube-scheduler [c581a3f09ed3] ...
	I0327 16:48:25.784515    8959 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c581a3f09ed3"
	I0327 16:48:25.799263    8959 logs.go:123] Gathering logs for kubelet ...
	I0327 16:48:25.799274    8959 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0327 16:48:25.835357    8959 logs.go:138] Found kubelet problem: Mar 27 23:45:09 stopped-upgrade-017000 kubelet[1676]: W0327 23:45:09.218698    1676 reflector.go:324] object-"kube-system"/"kube-proxy": failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:stopped-upgrade-017000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'stopped-upgrade-017000' and this object
	W0327 16:48:25.835455    8959 logs.go:138] Found kubelet problem: Mar 27 23:45:09 stopped-upgrade-017000 kubelet[1676]: E0327 23:45:09.218752    1676 reflector.go:138] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:stopped-upgrade-017000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'stopped-upgrade-017000' and this object
	I0327 16:48:25.836454    8959 logs.go:123] Gathering logs for dmesg ...
	I0327 16:48:25.836461    8959 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0327 16:48:25.840721    8959 logs.go:123] Gathering logs for describe nodes ...
	I0327 16:48:25.840726    8959 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0327 16:48:25.876861    8959 logs.go:123] Gathering logs for kube-apiserver [aa3afc402172] ...
	I0327 16:48:25.876875    8959 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 aa3afc402172"
	I0327 16:48:25.890691    8959 logs.go:123] Gathering logs for kube-apiserver [e20a2e974eba] ...
	I0327 16:48:25.890702    8959 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e20a2e974eba"
	I0327 16:48:25.930510    8959 logs.go:123] Gathering logs for kube-proxy [0dc2c64bef5d] ...
	I0327 16:48:25.930520    8959 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0dc2c64bef5d"
	I0327 16:48:25.942956    8959 logs.go:123] Gathering logs for kube-controller-manager [f76badbaa6c8] ...
	I0327 16:48:25.942971    8959 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f76badbaa6c8"
	I0327 16:48:25.956528    8959 out.go:304] Setting ErrFile to fd 2...
	I0327 16:48:25.956540    8959 out.go:338] TERM=,COLORTERM=, which probably does not support color
	W0327 16:48:25.956565    8959 out.go:239] X Problems detected in kubelet:
	W0327 16:48:25.956569    8959 out.go:239]   Mar 27 23:45:09 stopped-upgrade-017000 kubelet[1676]: W0327 23:45:09.218698    1676 reflector.go:324] object-"kube-system"/"kube-proxy": failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:stopped-upgrade-017000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'stopped-upgrade-017000' and this object
	W0327 16:48:25.956573    8959 out.go:239]   Mar 27 23:45:09 stopped-upgrade-017000 kubelet[1676]: E0327 23:45:09.218752    1676 reflector.go:138] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:stopped-upgrade-017000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'stopped-upgrade-017000' and this object
	I0327 16:48:25.956577    8959 out.go:304] Setting ErrFile to fd 2...
	I0327 16:48:25.956581    8959 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0327 16:48:35.958717    8959 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0327 16:48:40.961190    8959 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0327 16:48:40.961301    8959 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0327 16:48:40.983422    8959 logs.go:276] 2 containers: [aa3afc402172 e20a2e974eba]
	I0327 16:48:40.983506    8959 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0327 16:48:40.994899    8959 logs.go:276] 2 containers: [d74fd36a706c 56ea780761c8]
	I0327 16:48:40.994973    8959 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0327 16:48:41.005870    8959 logs.go:276] 1 containers: [25c0222d62b6]
	I0327 16:48:41.005947    8959 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0327 16:48:41.017097    8959 logs.go:276] 2 containers: [62111993cdda c581a3f09ed3]
	I0327 16:48:41.017165    8959 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0327 16:48:41.031062    8959 logs.go:276] 1 containers: [0dc2c64bef5d]
	I0327 16:48:41.031135    8959 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0327 16:48:41.041797    8959 logs.go:276] 2 containers: [fa00948ec909 f76badbaa6c8]
	I0327 16:48:41.041861    8959 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0327 16:48:41.052604    8959 logs.go:276] 0 containers: []
	W0327 16:48:41.052618    8959 logs.go:278] No container was found matching "kindnet"
	I0327 16:48:41.052672    8959 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0327 16:48:41.066373    8959 logs.go:276] 2 containers: [396dc8c2621b ffd8ac20b899]
	I0327 16:48:41.066389    8959 logs.go:123] Gathering logs for container status ...
	I0327 16:48:41.066396    8959 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0327 16:48:41.078090    8959 logs.go:123] Gathering logs for describe nodes ...
	I0327 16:48:41.078102    8959 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0327 16:48:41.116051    8959 logs.go:123] Gathering logs for kube-scheduler [c581a3f09ed3] ...
	I0327 16:48:41.116068    8959 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c581a3f09ed3"
	I0327 16:48:41.131546    8959 logs.go:123] Gathering logs for storage-provisioner [ffd8ac20b899] ...
	I0327 16:48:41.131558    8959 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ffd8ac20b899"
	I0327 16:48:41.142555    8959 logs.go:123] Gathering logs for storage-provisioner [396dc8c2621b] ...
	I0327 16:48:41.142567    8959 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 396dc8c2621b"
	I0327 16:48:41.154109    8959 logs.go:123] Gathering logs for Docker ...
	I0327 16:48:41.154121    8959 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0327 16:48:41.175683    8959 logs.go:123] Gathering logs for kubelet ...
	I0327 16:48:41.175693    8959 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0327 16:48:41.211176    8959 logs.go:138] Found kubelet problem: Mar 27 23:45:09 stopped-upgrade-017000 kubelet[1676]: W0327 23:45:09.218698    1676 reflector.go:324] object-"kube-system"/"kube-proxy": failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:stopped-upgrade-017000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'stopped-upgrade-017000' and this object
	W0327 16:48:41.211275    8959 logs.go:138] Found kubelet problem: Mar 27 23:45:09 stopped-upgrade-017000 kubelet[1676]: E0327 23:45:09.218752    1676 reflector.go:138] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:stopped-upgrade-017000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'stopped-upgrade-017000' and this object
	I0327 16:48:41.212273    8959 logs.go:123] Gathering logs for kube-apiserver [aa3afc402172] ...
	I0327 16:48:41.212279    8959 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 aa3afc402172"
	I0327 16:48:41.231075    8959 logs.go:123] Gathering logs for etcd [56ea780761c8] ...
	I0327 16:48:41.231091    8959 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 56ea780761c8"
	I0327 16:48:41.245830    8959 logs.go:123] Gathering logs for kube-scheduler [62111993cdda] ...
	I0327 16:48:41.245840    8959 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 62111993cdda"
	I0327 16:48:41.257867    8959 logs.go:123] Gathering logs for kube-controller-manager [fa00948ec909] ...
	I0327 16:48:41.257876    8959 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fa00948ec909"
	I0327 16:48:41.278102    8959 logs.go:123] Gathering logs for coredns [25c0222d62b6] ...
	I0327 16:48:41.278113    8959 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 25c0222d62b6"
	I0327 16:48:41.291986    8959 logs.go:123] Gathering logs for kube-proxy [0dc2c64bef5d] ...
	I0327 16:48:41.291996    8959 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0dc2c64bef5d"
	I0327 16:48:41.303923    8959 logs.go:123] Gathering logs for kube-controller-manager [f76badbaa6c8] ...
	I0327 16:48:41.303934    8959 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f76badbaa6c8"
	I0327 16:48:41.317407    8959 logs.go:123] Gathering logs for dmesg ...
	I0327 16:48:41.317416    8959 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0327 16:48:41.321738    8959 logs.go:123] Gathering logs for kube-apiserver [e20a2e974eba] ...
	I0327 16:48:41.321745    8959 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e20a2e974eba"
	I0327 16:48:41.358978    8959 logs.go:123] Gathering logs for etcd [d74fd36a706c] ...
	I0327 16:48:41.358988    8959 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d74fd36a706c"
	I0327 16:48:41.379581    8959 out.go:304] Setting ErrFile to fd 2...
	I0327 16:48:41.379600    8959 out.go:338] TERM=,COLORTERM=, which probably does not support color
	W0327 16:48:41.379630    8959 out.go:239] X Problems detected in kubelet:
	W0327 16:48:41.379636    8959 out.go:239]   Mar 27 23:45:09 stopped-upgrade-017000 kubelet[1676]: W0327 23:45:09.218698    1676 reflector.go:324] object-"kube-system"/"kube-proxy": failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:stopped-upgrade-017000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'stopped-upgrade-017000' and this object
	W0327 16:48:41.379640    8959 out.go:239]   Mar 27 23:45:09 stopped-upgrade-017000 kubelet[1676]: E0327 23:45:09.218752    1676 reflector.go:138] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:stopped-upgrade-017000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'stopped-upgrade-017000' and this object
	I0327 16:48:41.379647    8959 out.go:304] Setting ErrFile to fd 2...
	I0327 16:48:41.379651    8959 out.go:338] TERM=,COLORTERM=, which probably does not support color
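Editor's note: the repeated kubelet problem is an RBAC symptom. The node identity "system:node:stopped-upgrade-017000" is denied the ConfigMap list by the node authorizer because it finds no relationship between the node and the object, which is expected while the control plane is being rebuilt. A hedged client-go sketch that reproduces the same kind of check (the kubeconfig path is an assumption):

package main

import (
	"context"
	"fmt"

	apierrors "k8s.io/apimachinery/pkg/api/errors"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Assumed kubeconfig path; substitute the identity you want to test.
	cfg, err := clientcmd.BuildConfigFromFlags("", "/etc/kubernetes/kubelet.conf")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	_, err = cs.CoreV1().ConfigMaps("kube-system").Get(
		context.Background(), "kube-proxy", metav1.GetOptions{})
	if apierrors.IsForbidden(err) {
		// The condition the log reports for the node identity.
		fmt.Println("forbidden: node has no relationship to the object yet")
	} else if err != nil {
		fmt.Println("other error:", err)
	} else {
		fmt.Println("kube-proxy ConfigMap is readable")
	}
}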
	I0327 16:48:51.383351    8959 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0327 16:48:56.385413    8959 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0327 16:48:56.385495    8959 kubeadm.go:591] duration metric: took 4m7.206701625s to restartPrimaryControlPlane
	W0327 16:48:56.385544    8959 out.go:239] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0327 16:48:56.385567    8959 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm reset --cri-socket /var/run/cri-dockerd.sock --force"
	I0327 16:48:57.426024    8959 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm reset --cri-socket /var/run/cri-dockerd.sock --force": (1.04047825s)
	I0327 16:48:57.426106    8959 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0327 16:48:57.430968    8959 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0327 16:48:57.433654    8959 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0327 16:48:57.436470    8959 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0327 16:48:57.436478    8959 kubeadm.go:156] found existing configuration files:
	
	I0327 16:48:57.436503    8959 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:51421 /etc/kubernetes/admin.conf
	I0327 16:48:57.439014    8959 kubeadm.go:162] "https://control-plane.minikube.internal:51421" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:51421 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0327 16:48:57.439037    8959 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0327 16:48:57.441647    8959 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:51421 /etc/kubernetes/kubelet.conf
	I0327 16:48:57.444700    8959 kubeadm.go:162] "https://control-plane.minikube.internal:51421" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:51421 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0327 16:48:57.444724    8959 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0327 16:48:57.448112    8959 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:51421 /etc/kubernetes/controller-manager.conf
	I0327 16:48:57.450916    8959 kubeadm.go:162] "https://control-plane.minikube.internal:51421" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:51421 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0327 16:48:57.450937    8959 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0327 16:48:57.453451    8959 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:51421 /etc/kubernetes/scheduler.conf
	I0327 16:48:57.456483    8959 kubeadm.go:162] "https://control-plane.minikube.internal:51421" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:51421 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0327 16:48:57.456504    8959 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
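Editor's note: the grep/rm pairs above implement a simple invariant: each kubeconfig under /etc/kubernetes must reference the expected control-plane endpoint, otherwise it is removed before kubeadm init. minikube runs the check remotely via ssh_runner; the following local sketch only illustrates the check-then-remove logic.

package main

import (
	"fmt"
	"os"
	"strings"
)

func main() {
	endpoint := "https://control-plane.minikube.internal:51421"
	confs := []string{
		"/etc/kubernetes/admin.conf",
		"/etc/kubernetes/kubelet.conf",
		"/etc/kubernetes/controller-manager.conf",
		"/etc/kubernetes/scheduler.conf",
	}
	for _, c := range confs {
		data, err := os.ReadFile(c)
		// Missing file or wrong endpoint: either way the config is stale.
		if err != nil || !strings.Contains(string(data), endpoint) {
			if rmErr := os.Remove(c); rmErr != nil && !os.IsNotExist(rmErr) {
				fmt.Println("remove failed:", rmErr)
			}
			continue
		}
		fmt.Println("keeping", c)
	}
}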
	I0327 16:48:57.459544    8959 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0327 16:48:57.477398    8959 kubeadm.go:309] [init] Using Kubernetes version: v1.24.1
	I0327 16:48:57.477435    8959 kubeadm.go:309] [preflight] Running pre-flight checks
	I0327 16:48:57.528531    8959 kubeadm.go:309] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0327 16:48:57.528591    8959 kubeadm.go:309] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0327 16:48:57.528644    8959 kubeadm.go:309] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0327 16:48:57.581082    8959 kubeadm.go:309] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0327 16:48:57.586244    8959 out.go:204]   - Generating certificates and keys ...
	I0327 16:48:57.586280    8959 kubeadm.go:309] [certs] Using existing ca certificate authority
	I0327 16:48:57.586310    8959 kubeadm.go:309] [certs] Using existing apiserver certificate and key on disk
	I0327 16:48:57.586352    8959 kubeadm.go:309] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0327 16:48:57.586401    8959 kubeadm.go:309] [certs] Using existing front-proxy-ca certificate authority
	I0327 16:48:57.586434    8959 kubeadm.go:309] [certs] Using existing front-proxy-client certificate and key on disk
	I0327 16:48:57.586459    8959 kubeadm.go:309] [certs] Using existing etcd/ca certificate authority
	I0327 16:48:57.586486    8959 kubeadm.go:309] [certs] Using existing etcd/server certificate and key on disk
	I0327 16:48:57.586516    8959 kubeadm.go:309] [certs] Using existing etcd/peer certificate and key on disk
	I0327 16:48:57.586555    8959 kubeadm.go:309] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0327 16:48:57.586599    8959 kubeadm.go:309] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0327 16:48:57.586629    8959 kubeadm.go:309] [certs] Using the existing "sa" key
	I0327 16:48:57.586675    8959 kubeadm.go:309] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0327 16:48:57.742816    8959 kubeadm.go:309] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0327 16:48:57.795684    8959 kubeadm.go:309] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0327 16:48:57.894115    8959 kubeadm.go:309] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0327 16:48:58.061954    8959 kubeadm.go:309] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0327 16:48:58.092231    8959 kubeadm.go:309] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0327 16:48:58.092587    8959 kubeadm.go:309] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0327 16:48:58.092611    8959 kubeadm.go:309] [kubelet-start] Starting the kubelet
	I0327 16:48:58.182370    8959 kubeadm.go:309] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0327 16:48:58.184211    8959 out.go:204]   - Booting up control plane ...
	I0327 16:48:58.184255    8959 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0327 16:48:58.184301    8959 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0327 16:48:58.186492    8959 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0327 16:48:58.186885    8959 kubeadm.go:309] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0327 16:48:58.187789    8959 kubeadm.go:309] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0327 16:49:02.189334    8959 kubeadm.go:309] [apiclient] All control plane components are healthy after 4.001352 seconds
	I0327 16:49:02.189399    8959 kubeadm.go:309] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0327 16:49:02.192694    8959 kubeadm.go:309] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0327 16:49:02.703240    8959 kubeadm.go:309] [upload-certs] Skipping phase. Please see --upload-certs
	I0327 16:49:02.703414    8959 kubeadm.go:309] [mark-control-plane] Marking the node stopped-upgrade-017000 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0327 16:49:03.206560    8959 kubeadm.go:309] [bootstrap-token] Using token: jf7d6m.20yewdtyrk7ztvoa
	I0327 16:49:03.212955    8959 out.go:204]   - Configuring RBAC rules ...
	I0327 16:49:03.213019    8959 kubeadm.go:309] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0327 16:49:03.213064    8959 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0327 16:49:03.218532    8959 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0327 16:49:03.219556    8959 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0327 16:49:03.220146    8959 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0327 16:49:03.220985    8959 kubeadm.go:309] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0327 16:49:03.224107    8959 kubeadm.go:309] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0327 16:49:03.388861    8959 kubeadm.go:309] [addons] Applied essential addon: CoreDNS
	I0327 16:49:03.612724    8959 kubeadm.go:309] [addons] Applied essential addon: kube-proxy
	I0327 16:49:03.613095    8959 kubeadm.go:309] 
	I0327 16:49:03.613130    8959 kubeadm.go:309] Your Kubernetes control-plane has initialized successfully!
	I0327 16:49:03.613133    8959 kubeadm.go:309] 
	I0327 16:49:03.613176    8959 kubeadm.go:309] To start using your cluster, you need to run the following as a regular user:
	I0327 16:49:03.613185    8959 kubeadm.go:309] 
	I0327 16:49:03.613197    8959 kubeadm.go:309]   mkdir -p $HOME/.kube
	I0327 16:49:03.613230    8959 kubeadm.go:309]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0327 16:49:03.613258    8959 kubeadm.go:309]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0327 16:49:03.613263    8959 kubeadm.go:309] 
	I0327 16:49:03.613288    8959 kubeadm.go:309] Alternatively, if you are the root user, you can run:
	I0327 16:49:03.613296    8959 kubeadm.go:309] 
	I0327 16:49:03.613331    8959 kubeadm.go:309]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0327 16:49:03.613334    8959 kubeadm.go:309] 
	I0327 16:49:03.613364    8959 kubeadm.go:309] You should now deploy a pod network to the cluster.
	I0327 16:49:03.613399    8959 kubeadm.go:309] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0327 16:49:03.613442    8959 kubeadm.go:309]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0327 16:49:03.613446    8959 kubeadm.go:309] 
	I0327 16:49:03.613495    8959 kubeadm.go:309] You can now join any number of control-plane nodes by copying certificate authorities
	I0327 16:49:03.613550    8959 kubeadm.go:309] and service account keys on each node and then running the following as root:
	I0327 16:49:03.613555    8959 kubeadm.go:309] 
	I0327 16:49:03.613599    8959 kubeadm.go:309]   kubeadm join control-plane.minikube.internal:8443 --token jf7d6m.20yewdtyrk7ztvoa \
	I0327 16:49:03.613660    8959 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:8047b7e049f0384af96cc555849ef1f992fa8884768aff95c9a460200a82d884 \
	I0327 16:49:03.613672    8959 kubeadm.go:309] 	--control-plane 
	I0327 16:49:03.613675    8959 kubeadm.go:309] 
	I0327 16:49:03.613722    8959 kubeadm.go:309] Then you can join any number of worker nodes by running the following on each as root:
	I0327 16:49:03.613726    8959 kubeadm.go:309] 
	I0327 16:49:03.613766    8959 kubeadm.go:309] kubeadm join control-plane.minikube.internal:8443 --token jf7d6m.20yewdtyrk7ztvoa \
	I0327 16:49:03.613831    8959 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:8047b7e049f0384af96cc555849ef1f992fa8884768aff95c9a460200a82d884 
	I0327 16:49:03.614049    8959 kubeadm.go:309] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0327 16:49:03.614058    8959 cni.go:84] Creating CNI manager for ""
	I0327 16:49:03.614066    8959 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0327 16:49:03.618206    8959 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0327 16:49:03.626272    8959 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0327 16:49:03.629185    8959 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
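Editor's note: the 457-byte payload written to /etc/cni/net.d/1-k8s.conflist is not shown in the log. For orientation, a generic bridge config list in the CNI network-configuration format looks roughly like the sketch below; the subnet and plugin options are assumptions, not minikube's exact template. The program writes it out the way the scp step above does.

package main

import "os"

// Generic bridge CNI config list (CNI spec format). Illustrative only;
// minikube generates its own 1-k8s.conflist and the contents may differ.
const conflist = `{
  "cniVersion": "0.3.1",
  "name": "bridge",
  "plugins": [
    {
      "type": "bridge",
      "bridge": "bridge",
      "isDefaultGateway": true,
      "ipMasq": true,
      "ipam": { "type": "host-local", "subnet": "10.244.0.0/16" }
    },
    { "type": "portmap", "capabilities": { "portMappings": true } }
  ]
}`

func main() {
	if err := os.WriteFile("/etc/cni/net.d/1-k8s.conflist", []byte(conflist), 0o644); err != nil {
		panic(err)
	}
}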
	I0327 16:49:03.634046    8959 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0327 16:49:03.634097    8959 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0327 16:49:03.634146    8959 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes stopped-upgrade-017000 minikube.k8s.io/updated_at=2024_03_27T16_49_03_0700 minikube.k8s.io/version=v1.33.0-beta.0 minikube.k8s.io/commit=1a940f980f77c9bfc8ef93678db8c1f49c9dd79d minikube.k8s.io/name=stopped-upgrade-017000 minikube.k8s.io/primary=true
	I0327 16:49:03.675422    8959 kubeadm.go:1107] duration metric: took 41.366667ms to wait for elevateKubeSystemPrivileges
	I0327 16:49:03.675437    8959 ops.go:34] apiserver oom_adj: -16
	W0327 16:49:03.675527    8959 kubeadm.go:286] apiserver tunnel failed: apiserver port not set
	I0327 16:49:03.675533    8959 kubeadm.go:393] duration metric: took 4m14.510520541s to StartCluster
	I0327 16:49:03.675542    8959 settings.go:142] acquiring lock: {Name:mk7a184fa834ec55a805b998fd083319e6561206 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0327 16:49:03.675626    8959 settings.go:150] Updating kubeconfig:  /Users/jenkins/minikube-integration/18485-6511/kubeconfig
	I0327 16:49:03.676025    8959 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/18485-6511/kubeconfig: {Name:mke46d0809919cfbe0118c5110926d6ce61bf373 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0327 16:49:03.676238    8959 start.go:234] Will wait 6m0s for node &{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0327 16:49:03.679238    8959 out.go:177] * Verifying Kubernetes components...
	I0327 16:49:03.676246    8959 addons.go:502] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false]
	I0327 16:49:03.676316    8959 config.go:182] Loaded profile config "stopped-upgrade-017000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0327 16:49:03.689261    8959 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0327 16:49:03.689281    8959 addons.go:69] Setting storage-provisioner=true in profile "stopped-upgrade-017000"
	I0327 16:49:03.689285    8959 addons.go:69] Setting default-storageclass=true in profile "stopped-upgrade-017000"
	I0327 16:49:03.689296    8959 addons.go:234] Setting addon storage-provisioner=true in "stopped-upgrade-017000"
	I0327 16:49:03.689299    8959 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "stopped-upgrade-017000"
	W0327 16:49:03.689300    8959 addons.go:243] addon storage-provisioner should already be in state true
	I0327 16:49:03.689323    8959 host.go:66] Checking if "stopped-upgrade-017000" exists ...
	I0327 16:49:03.689799    8959 retry.go:31] will retry after 671.004227ms: connect: dial unix /Users/jenkins/minikube-integration/18485-6511/.minikube/machines/stopped-upgrade-017000/monitor: connect: connection refused
	I0327 16:49:03.693177    8959 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0327 16:49:03.697229    8959 addons.go:426] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0327 16:49:03.697236    8959 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0327 16:49:03.697243    8959 sshutil.go:53] new ssh client: &{IP:localhost Port:51386 SSHKeyPath:/Users/jenkins/minikube-integration/18485-6511/.minikube/machines/stopped-upgrade-017000/id_rsa Username:docker}
	I0327 16:49:03.782643    8959 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0327 16:49:03.787600    8959 api_server.go:52] waiting for apiserver process to appear ...
	I0327 16:49:03.787650    8959 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0327 16:49:03.791658    8959 api_server.go:72] duration metric: took 115.413708ms to wait for apiserver process to appear ...
	I0327 16:49:03.791665    8959 api_server.go:88] waiting for apiserver healthz status ...
	I0327 16:49:03.791672    8959 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0327 16:49:03.829177    8959 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0327 16:49:04.363865    8959 kapi.go:59] client config for stopped-upgrade-017000: &rest.Config{Host:"https://10.0.2.15:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/18485-6511/.minikube/profiles/stopped-upgrade-017000/client.crt", KeyFile:"/Users/jenkins/minikube-integration/18485-6511/.minikube/profiles/stopped-upgrade-017000/client.key", CAFile:"/Users/jenkins/minikube-integration/18485-6511/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x103b96c70), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0327 16:49:04.363994    8959 addons.go:234] Setting addon default-storageclass=true in "stopped-upgrade-017000"
	W0327 16:49:04.364000    8959 addons.go:243] addon default-storageclass should already be in state true
	I0327 16:49:04.364011    8959 host.go:66] Checking if "stopped-upgrade-017000" exists ...
	I0327 16:49:04.364771    8959 addons.go:426] installing /etc/kubernetes/addons/storageclass.yaml
	I0327 16:49:04.364777    8959 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0327 16:49:04.364784    8959 sshutil.go:53] new ssh client: &{IP:localhost Port:51386 SSHKeyPath:/Users/jenkins/minikube-integration/18485-6511/.minikube/machines/stopped-upgrade-017000/id_rsa Username:docker}
	I0327 16:49:04.399189    8959 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0327 16:49:08.793652    8959 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0327 16:49:08.793712    8959 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0327 16:49:13.793877    8959 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0327 16:49:13.793977    8959 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0327 16:49:18.794420    8959 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0327 16:49:18.794466    8959 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0327 16:49:23.794826    8959 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0327 16:49:23.794847    8959 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0327 16:49:28.795311    8959 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0327 16:49:28.795356    8959 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0327 16:49:33.795710    8959 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0327 16:49:33.795748    8959 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	W0327 16:49:34.455419    8959 out.go:239] ! Enabling 'default-storageclass' returned an error: running callbacks: [Error making standard the default storage class: Error listing StorageClasses: Get "https://10.0.2.15:8443/apis/storage.k8s.io/v1/storageclasses": dial tcp 10.0.2.15:8443: i/o timeout]
	I0327 16:49:34.459838    8959 out.go:177] * Enabled addons: storage-provisioner
	I0327 16:49:34.471742    8959 addons.go:505] duration metric: took 30.796492375s for enable addons: enabled=[storage-provisioner]
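Editor's note: the default-storageclass failure above is only the StorageClass list call timing out against the unhealthy apiserver; the operation being attempted is a one-annotation patch. A hedged client-go sketch of marking a class default (the class name "standard" and the kubeconfig path are assumptions):

package main

import (
	"context"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/types"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	// The well-known annotation that makes a StorageClass the default.
	patch := []byte(`{"metadata":{"annotations":{"storageclass.kubernetes.io/is-default-class":"true"}}}`)
	if _, err := cs.StorageV1().StorageClasses().Patch(
		context.Background(), "standard", types.StrategicMergePatchType,
		patch, metav1.PatchOptions{}); err != nil {
		panic(err)
	}
}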
	I0327 16:49:38.796885    8959 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0327 16:49:38.796931    8959 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0327 16:49:43.797008    8959 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0327 16:49:43.797037    8959 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0327 16:49:48.798306    8959 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0327 16:49:48.798371    8959 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0327 16:49:53.799982    8959 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0327 16:49:53.800031    8959 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0327 16:49:58.800433    8959 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0327 16:49:58.800476    8959 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0327 16:50:03.802571    8959 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
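Editor's note: the Checking/stopped pairs above are a fixed-interval healthz probe: a GET to https://10.0.2.15:8443/healthz with a short client timeout, retried until an overall deadline. A minimal standalone version of that loop; the insecure TLS setting is an assumption for a throwaway VM, whereas minikube verifies against its own CA.

package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
	"time"
)

func main() {
	client := &http.Client{
		Timeout: 5 * time.Second, // matches the ~5s gap before each "stopped:" line
		Transport: &http.Transport{
			// Assumption for a lab VM; production code should pin the cluster CA.
			TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
		},
	}
	for i := 0; i < 10; i++ {
		resp, err := client.Get("https://10.0.2.15:8443/healthz")
		if err != nil {
			fmt.Println("stopped:", err)
		} else {
			resp.Body.Close()
			fmt.Println("healthz:", resp.Status)
			return
		}
		time.Sleep(10 * time.Second)
	}
}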
	I0327 16:50:03.802721    8959 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0327 16:50:03.815192    8959 logs.go:276] 1 containers: [468c26aa74b2]
	I0327 16:50:03.815263    8959 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0327 16:50:03.825556    8959 logs.go:276] 1 containers: [3dc8a850726c]
	I0327 16:50:03.825619    8959 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0327 16:50:03.835993    8959 logs.go:276] 2 containers: [e91004f12c96 e4e006d1c1aa]
	I0327 16:50:03.836064    8959 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0327 16:50:03.846726    8959 logs.go:276] 1 containers: [9bf0505a569f]
	I0327 16:50:03.846798    8959 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0327 16:50:03.857022    8959 logs.go:276] 1 containers: [7ceb3f2f4d36]
	I0327 16:50:03.857080    8959 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0327 16:50:03.868335    8959 logs.go:276] 1 containers: [938134fd49c1]
	I0327 16:50:03.868401    8959 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0327 16:50:03.878175    8959 logs.go:276] 0 containers: []
	W0327 16:50:03.878187    8959 logs.go:278] No container was found matching "kindnet"
	I0327 16:50:03.878247    8959 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0327 16:50:03.888510    8959 logs.go:276] 1 containers: [24d43651f94a]
	I0327 16:50:03.888526    8959 logs.go:123] Gathering logs for kube-controller-manager [938134fd49c1] ...
	I0327 16:50:03.888531    8959 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 938134fd49c1"
	I0327 16:50:03.906792    8959 logs.go:123] Gathering logs for Docker ...
	I0327 16:50:03.906802    8959 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0327 16:50:03.930111    8959 logs.go:123] Gathering logs for kubelet ...
	I0327 16:50:03.930120    8959 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0327 16:50:03.946560    8959 logs.go:138] Found kubelet problem: Mar 27 23:45:09 stopped-upgrade-017000 kubelet[1676]: W0327 23:45:09.218698    1676 reflector.go:324] object-"kube-system"/"kube-proxy": failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:stopped-upgrade-017000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'stopped-upgrade-017000' and this object
	W0327 16:50:03.946654    8959 logs.go:138] Found kubelet problem: Mar 27 23:45:09 stopped-upgrade-017000 kubelet[1676]: E0327 23:45:09.218752    1676 reflector.go:138] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:stopped-upgrade-017000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'stopped-upgrade-017000' and this object
	I0327 16:50:03.966297    8959 logs.go:123] Gathering logs for dmesg ...
	I0327 16:50:03.966303    8959 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0327 16:50:03.970510    8959 logs.go:123] Gathering logs for describe nodes ...
	I0327 16:50:03.970517    8959 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0327 16:50:04.006429    8959 logs.go:123] Gathering logs for etcd [3dc8a850726c] ...
	I0327 16:50:04.006440    8959 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3dc8a850726c"
	I0327 16:50:04.021385    8959 logs.go:123] Gathering logs for coredns [e4e006d1c1aa] ...
	I0327 16:50:04.021401    8959 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e4e006d1c1aa"
	I0327 16:50:04.033636    8959 logs.go:123] Gathering logs for kube-proxy [7ceb3f2f4d36] ...
	I0327 16:50:04.033649    8959 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7ceb3f2f4d36"
	I0327 16:50:04.045314    8959 logs.go:123] Gathering logs for kube-apiserver [468c26aa74b2] ...
	I0327 16:50:04.045326    8959 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 468c26aa74b2"
	I0327 16:50:04.059209    8959 logs.go:123] Gathering logs for coredns [e91004f12c96] ...
	I0327 16:50:04.059219    8959 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e91004f12c96"
	I0327 16:50:04.070959    8959 logs.go:123] Gathering logs for kube-scheduler [9bf0505a569f] ...
	I0327 16:50:04.070970    8959 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9bf0505a569f"
	I0327 16:50:04.086077    8959 logs.go:123] Gathering logs for storage-provisioner [24d43651f94a] ...
	I0327 16:50:04.086090    8959 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 24d43651f94a"
	I0327 16:50:04.097718    8959 logs.go:123] Gathering logs for container status ...
	I0327 16:50:04.097729    8959 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0327 16:50:04.109220    8959 out.go:304] Setting ErrFile to fd 2...
	I0327 16:50:04.109231    8959 out.go:338] TERM=,COLORTERM=, which probably does not support color
	W0327 16:50:04.109255    8959 out.go:239] X Problems detected in kubelet:
	W0327 16:50:04.109261    8959 out.go:239]   Mar 27 23:45:09 stopped-upgrade-017000 kubelet[1676]: W0327 23:45:09.218698    1676 reflector.go:324] object-"kube-system"/"kube-proxy": failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:stopped-upgrade-017000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'stopped-upgrade-017000' and this object
	W0327 16:50:04.109268    8959 out.go:239]   Mar 27 23:45:09 stopped-upgrade-017000 kubelet[1676]: E0327 23:45:09.218752    1676 reflector.go:138] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:stopped-upgrade-017000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'stopped-upgrade-017000' and this object
	I0327 16:50:04.109274    8959 out.go:304] Setting ErrFile to fd 2...
	I0327 16:50:04.109277    8959 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0327 16:50:14.113117    8959 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0327 16:50:19.115315    8959 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0327 16:50:19.115612    8959 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0327 16:50:19.140581    8959 logs.go:276] 1 containers: [468c26aa74b2]
	I0327 16:50:19.140709    8959 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0327 16:50:19.157582    8959 logs.go:276] 1 containers: [3dc8a850726c]
	I0327 16:50:19.157665    8959 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0327 16:50:19.171031    8959 logs.go:276] 2 containers: [e91004f12c96 e4e006d1c1aa]
	I0327 16:50:19.171111    8959 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0327 16:50:19.182137    8959 logs.go:276] 1 containers: [9bf0505a569f]
	I0327 16:50:19.182206    8959 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0327 16:50:19.192967    8959 logs.go:276] 1 containers: [7ceb3f2f4d36]
	I0327 16:50:19.193043    8959 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0327 16:50:19.206568    8959 logs.go:276] 1 containers: [938134fd49c1]
	I0327 16:50:19.206634    8959 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0327 16:50:19.218787    8959 logs.go:276] 0 containers: []
	W0327 16:50:19.218798    8959 logs.go:278] No container was found matching "kindnet"
	I0327 16:50:19.218855    8959 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0327 16:50:19.229189    8959 logs.go:276] 1 containers: [24d43651f94a]
	I0327 16:50:19.229206    8959 logs.go:123] Gathering logs for etcd [3dc8a850726c] ...
	I0327 16:50:19.229212    8959 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3dc8a850726c"
	I0327 16:50:19.243514    8959 logs.go:123] Gathering logs for coredns [e4e006d1c1aa] ...
	I0327 16:50:19.243524    8959 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e4e006d1c1aa"
	I0327 16:50:19.256272    8959 logs.go:123] Gathering logs for kube-scheduler [9bf0505a569f] ...
	I0327 16:50:19.256287    8959 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9bf0505a569f"
	I0327 16:50:19.273489    8959 logs.go:123] Gathering logs for storage-provisioner [24d43651f94a] ...
	I0327 16:50:19.273507    8959 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 24d43651f94a"
	I0327 16:50:19.286168    8959 logs.go:123] Gathering logs for container status ...
	I0327 16:50:19.286180    8959 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0327 16:50:19.298322    8959 logs.go:123] Gathering logs for kubelet ...
	I0327 16:50:19.298334    8959 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0327 16:50:19.316046    8959 logs.go:138] Found kubelet problem: Mar 27 23:45:09 stopped-upgrade-017000 kubelet[1676]: W0327 23:45:09.218698    1676 reflector.go:324] object-"kube-system"/"kube-proxy": failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:stopped-upgrade-017000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'stopped-upgrade-017000' and this object
	W0327 16:50:19.316147    8959 logs.go:138] Found kubelet problem: Mar 27 23:45:09 stopped-upgrade-017000 kubelet[1676]: E0327 23:45:09.218752    1676 reflector.go:138] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:stopped-upgrade-017000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'stopped-upgrade-017000' and this object
	I0327 16:50:19.337115    8959 logs.go:123] Gathering logs for dmesg ...
	I0327 16:50:19.337131    8959 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0327 16:50:19.341503    8959 logs.go:123] Gathering logs for describe nodes ...
	I0327 16:50:19.341511    8959 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0327 16:50:19.378032    8959 logs.go:123] Gathering logs for kube-controller-manager [938134fd49c1] ...
	I0327 16:50:19.378041    8959 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 938134fd49c1"
	I0327 16:50:19.398218    8959 logs.go:123] Gathering logs for Docker ...
	I0327 16:50:19.398230    8959 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0327 16:50:19.423337    8959 logs.go:123] Gathering logs for kube-apiserver [468c26aa74b2] ...
	I0327 16:50:19.423346    8959 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 468c26aa74b2"
	I0327 16:50:19.442685    8959 logs.go:123] Gathering logs for coredns [e91004f12c96] ...
	I0327 16:50:19.442700    8959 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e91004f12c96"
	I0327 16:50:19.462056    8959 logs.go:123] Gathering logs for kube-proxy [7ceb3f2f4d36] ...
	I0327 16:50:19.462067    8959 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7ceb3f2f4d36"
	I0327 16:50:19.479560    8959 out.go:304] Setting ErrFile to fd 2...
	I0327 16:50:19.479570    8959 out.go:338] TERM=,COLORTERM=, which probably does not support color
	W0327 16:50:19.479598    8959 out.go:239] X Problems detected in kubelet:
	W0327 16:50:19.479603    8959 out.go:239]   Mar 27 23:45:09 stopped-upgrade-017000 kubelet[1676]: W0327 23:45:09.218698    1676 reflector.go:324] object-"kube-system"/"kube-proxy": failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:stopped-upgrade-017000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'stopped-upgrade-017000' and this object
	W0327 16:50:19.479607    8959 out.go:239]   Mar 27 23:45:09 stopped-upgrade-017000 kubelet[1676]: E0327 23:45:09.218752    1676 reflector.go:138] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:stopped-upgrade-017000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'stopped-upgrade-017000' and this object
	I0327 16:50:19.479611    8959 out.go:304] Setting ErrFile to fd 2...
	I0327 16:50:19.479616    8959 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0327 16:50:29.481860    8959 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0327 16:50:34.483956    8959 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0327 16:50:34.484437    8959 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0327 16:50:34.524555    8959 logs.go:276] 1 containers: [468c26aa74b2]
	I0327 16:50:34.524677    8959 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0327 16:50:34.546090    8959 logs.go:276] 1 containers: [3dc8a850726c]
	I0327 16:50:34.546248    8959 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0327 16:50:34.561034    8959 logs.go:276] 2 containers: [e91004f12c96 e4e006d1c1aa]
	I0327 16:50:34.561105    8959 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0327 16:50:34.573122    8959 logs.go:276] 1 containers: [9bf0505a569f]
	I0327 16:50:34.573184    8959 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0327 16:50:34.583922    8959 logs.go:276] 1 containers: [7ceb3f2f4d36]
	I0327 16:50:34.584000    8959 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0327 16:50:34.594270    8959 logs.go:276] 1 containers: [938134fd49c1]
	I0327 16:50:34.594336    8959 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0327 16:50:34.604796    8959 logs.go:276] 0 containers: []
	W0327 16:50:34.604808    8959 logs.go:278] No container was found matching "kindnet"
	I0327 16:50:34.604859    8959 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0327 16:50:34.615881    8959 logs.go:276] 1 containers: [24d43651f94a]
	I0327 16:50:34.615903    8959 logs.go:123] Gathering logs for Docker ...
	I0327 16:50:34.615908    8959 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0327 16:50:34.640666    8959 logs.go:123] Gathering logs for kubelet ...
	I0327 16:50:34.640674    8959 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0327 16:50:34.658419    8959 logs.go:138] Found kubelet problem: Mar 27 23:45:09 stopped-upgrade-017000 kubelet[1676]: W0327 23:45:09.218698    1676 reflector.go:324] object-"kube-system"/"kube-proxy": failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:stopped-upgrade-017000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'stopped-upgrade-017000' and this object
	W0327 16:50:34.658515    8959 logs.go:138] Found kubelet problem: Mar 27 23:45:09 stopped-upgrade-017000 kubelet[1676]: E0327 23:45:09.218752    1676 reflector.go:138] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:stopped-upgrade-017000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'stopped-upgrade-017000' and this object
	I0327 16:50:34.678565    8959 logs.go:123] Gathering logs for describe nodes ...
	I0327 16:50:34.678571    8959 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0327 16:50:34.718395    8959 logs.go:123] Gathering logs for etcd [3dc8a850726c] ...
	I0327 16:50:34.718406    8959 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3dc8a850726c"
	I0327 16:50:34.732795    8959 logs.go:123] Gathering logs for kube-scheduler [9bf0505a569f] ...
	I0327 16:50:34.732803    8959 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9bf0505a569f"
	I0327 16:50:34.748321    8959 logs.go:123] Gathering logs for storage-provisioner [24d43651f94a] ...
	I0327 16:50:34.748335    8959 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 24d43651f94a"
	I0327 16:50:34.759869    8959 logs.go:123] Gathering logs for kube-controller-manager [938134fd49c1] ...
	I0327 16:50:34.759881    8959 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 938134fd49c1"
	I0327 16:50:34.777867    8959 logs.go:123] Gathering logs for container status ...
	I0327 16:50:34.777880    8959 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0327 16:50:34.791796    8959 logs.go:123] Gathering logs for dmesg ...
	I0327 16:50:34.791808    8959 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0327 16:50:34.796599    8959 logs.go:123] Gathering logs for kube-apiserver [468c26aa74b2] ...
	I0327 16:50:34.796607    8959 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 468c26aa74b2"
	I0327 16:50:34.811108    8959 logs.go:123] Gathering logs for coredns [e91004f12c96] ...
	I0327 16:50:34.811117    8959 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e91004f12c96"
	I0327 16:50:34.822767    8959 logs.go:123] Gathering logs for coredns [e4e006d1c1aa] ...
	I0327 16:50:34.822781    8959 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e4e006d1c1aa"
	I0327 16:50:34.834179    8959 logs.go:123] Gathering logs for kube-proxy [7ceb3f2f4d36] ...
	I0327 16:50:34.834189    8959 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7ceb3f2f4d36"
	I0327 16:50:34.845237    8959 out.go:304] Setting ErrFile to fd 2...
	I0327 16:50:34.845246    8959 out.go:338] TERM=,COLORTERM=, which probably does not support color
	W0327 16:50:34.845273    8959 out.go:239] X Problems detected in kubelet:
	W0327 16:50:34.845276    8959 out.go:239]   Mar 27 23:45:09 stopped-upgrade-017000 kubelet[1676]: W0327 23:45:09.218698    1676 reflector.go:324] object-"kube-system"/"kube-proxy": failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:stopped-upgrade-017000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'stopped-upgrade-017000' and this object
	W0327 16:50:34.845279    8959 out.go:239]   Mar 27 23:45:09 stopped-upgrade-017000 kubelet[1676]: E0327 23:45:09.218752    1676 reflector.go:138] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:stopped-upgrade-017000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'stopped-upgrade-017000' and this object
	I0327 16:50:34.845283    8959 out.go:304] Setting ErrFile to fd 2...
	I0327 16:50:34.845286    8959 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0327 16:50:44.847349    8959 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0327 16:50:49.849974    8959 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0327 16:50:49.850679    8959 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0327 16:50:49.877176    8959 logs.go:276] 1 containers: [468c26aa74b2]
	I0327 16:50:49.877311    8959 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0327 16:50:49.894840    8959 logs.go:276] 1 containers: [3dc8a850726c]
	I0327 16:50:49.894925    8959 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0327 16:50:49.908996    8959 logs.go:276] 2 containers: [e91004f12c96 e4e006d1c1aa]
	I0327 16:50:49.909067    8959 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0327 16:50:49.921108    8959 logs.go:276] 1 containers: [9bf0505a569f]
	I0327 16:50:49.921176    8959 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0327 16:50:49.936797    8959 logs.go:276] 1 containers: [7ceb3f2f4d36]
	I0327 16:50:49.936866    8959 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0327 16:50:49.947416    8959 logs.go:276] 1 containers: [938134fd49c1]
	I0327 16:50:49.947488    8959 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0327 16:50:49.962007    8959 logs.go:276] 0 containers: []
	W0327 16:50:49.962020    8959 logs.go:278] No container was found matching "kindnet"
	I0327 16:50:49.962073    8959 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0327 16:50:49.973992    8959 logs.go:276] 1 containers: [24d43651f94a]
	I0327 16:50:49.974006    8959 logs.go:123] Gathering logs for coredns [e4e006d1c1aa] ...
	I0327 16:50:49.974011    8959 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e4e006d1c1aa"
	I0327 16:50:49.994634    8959 logs.go:123] Gathering logs for kube-proxy [7ceb3f2f4d36] ...
	I0327 16:50:49.994642    8959 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7ceb3f2f4d36"
	I0327 16:50:50.012775    8959 logs.go:123] Gathering logs for kube-controller-manager [938134fd49c1] ...
	I0327 16:50:50.012786    8959 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 938134fd49c1"
	I0327 16:50:50.030726    8959 logs.go:123] Gathering logs for kube-scheduler [9bf0505a569f] ...
	I0327 16:50:50.030734    8959 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9bf0505a569f"
	I0327 16:50:50.045774    8959 logs.go:123] Gathering logs for storage-provisioner [24d43651f94a] ...
	I0327 16:50:50.045789    8959 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 24d43651f94a"
	I0327 16:50:50.059288    8959 logs.go:123] Gathering logs for kubelet ...
	I0327 16:50:50.059302    8959 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0327 16:50:50.075518    8959 logs.go:138] Found kubelet problem: Mar 27 23:45:09 stopped-upgrade-017000 kubelet[1676]: W0327 23:45:09.218698    1676 reflector.go:324] object-"kube-system"/"kube-proxy": failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:stopped-upgrade-017000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'stopped-upgrade-017000' and this object
	W0327 16:50:50.075610    8959 logs.go:138] Found kubelet problem: Mar 27 23:45:09 stopped-upgrade-017000 kubelet[1676]: E0327 23:45:09.218752    1676 reflector.go:138] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:stopped-upgrade-017000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'stopped-upgrade-017000' and this object
	I0327 16:50:50.095805    8959 logs.go:123] Gathering logs for dmesg ...
	I0327 16:50:50.095811    8959 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0327 16:50:50.100669    8959 logs.go:123] Gathering logs for describe nodes ...
	I0327 16:50:50.100678    8959 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0327 16:50:50.135616    8959 logs.go:123] Gathering logs for kube-apiserver [468c26aa74b2] ...
	I0327 16:50:50.135630    8959 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 468c26aa74b2"
	I0327 16:50:50.154607    8959 logs.go:123] Gathering logs for etcd [3dc8a850726c] ...
	I0327 16:50:50.154617    8959 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3dc8a850726c"
	I0327 16:50:50.168331    8959 logs.go:123] Gathering logs for coredns [e91004f12c96] ...
	I0327 16:50:50.168340    8959 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e91004f12c96"
	I0327 16:50:50.181066    8959 logs.go:123] Gathering logs for Docker ...
	I0327 16:50:50.181078    8959 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0327 16:50:50.205482    8959 logs.go:123] Gathering logs for container status ...
	I0327 16:50:50.205490    8959 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0327 16:50:50.216509    8959 out.go:304] Setting ErrFile to fd 2...
	I0327 16:50:50.216518    8959 out.go:338] TERM=,COLORTERM=, which probably does not support color
	W0327 16:50:50.216546    8959 out.go:239] X Problems detected in kubelet:
	W0327 16:50:50.216553    8959 out.go:239]   Mar 27 23:45:09 stopped-upgrade-017000 kubelet[1676]: W0327 23:45:09.218698    1676 reflector.go:324] object-"kube-system"/"kube-proxy": failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:stopped-upgrade-017000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'stopped-upgrade-017000' and this object
	W0327 16:50:50.216559    8959 out.go:239]   Mar 27 23:45:09 stopped-upgrade-017000 kubelet[1676]: E0327 23:45:09.218752    1676 reflector.go:138] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:stopped-upgrade-017000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'stopped-upgrade-017000' and this object
	I0327 16:50:50.216570    8959 out.go:304] Setting ErrFile to fd 2...
	I0327 16:50:50.216573    8959 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0327 16:51:00.220404    8959 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0327 16:51:05.222759    8959 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0327 16:51:05.223170    8959 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0327 16:51:05.258493    8959 logs.go:276] 1 containers: [468c26aa74b2]
	I0327 16:51:05.258607    8959 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0327 16:51:05.278907    8959 logs.go:276] 1 containers: [3dc8a850726c]
	I0327 16:51:05.279011    8959 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0327 16:51:05.293460    8959 logs.go:276] 2 containers: [e91004f12c96 e4e006d1c1aa]
	I0327 16:51:05.293538    8959 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0327 16:51:05.305377    8959 logs.go:276] 1 containers: [9bf0505a569f]
	I0327 16:51:05.305448    8959 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0327 16:51:05.316448    8959 logs.go:276] 1 containers: [7ceb3f2f4d36]
	I0327 16:51:05.316522    8959 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0327 16:51:05.330173    8959 logs.go:276] 1 containers: [938134fd49c1]
	I0327 16:51:05.330245    8959 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0327 16:51:05.340708    8959 logs.go:276] 0 containers: []
	W0327 16:51:05.340723    8959 logs.go:278] No container was found matching "kindnet"
	I0327 16:51:05.340784    8959 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0327 16:51:05.350957    8959 logs.go:276] 1 containers: [24d43651f94a]
	I0327 16:51:05.350970    8959 logs.go:123] Gathering logs for dmesg ...
	I0327 16:51:05.350975    8959 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0327 16:51:05.355764    8959 logs.go:123] Gathering logs for kube-apiserver [468c26aa74b2] ...
	I0327 16:51:05.355770    8959 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 468c26aa74b2"
	I0327 16:51:05.370328    8959 logs.go:123] Gathering logs for coredns [e91004f12c96] ...
	I0327 16:51:05.370342    8959 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e91004f12c96"
	I0327 16:51:05.386631    8959 logs.go:123] Gathering logs for kube-scheduler [9bf0505a569f] ...
	I0327 16:51:05.386643    8959 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9bf0505a569f"
	I0327 16:51:05.404862    8959 logs.go:123] Gathering logs for storage-provisioner [24d43651f94a] ...
	I0327 16:51:05.404872    8959 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 24d43651f94a"
	I0327 16:51:05.416454    8959 logs.go:123] Gathering logs for kubelet ...
	I0327 16:51:05.416465    8959 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0327 16:51:05.432811    8959 logs.go:138] Found kubelet problem: Mar 27 23:45:09 stopped-upgrade-017000 kubelet[1676]: W0327 23:45:09.218698    1676 reflector.go:324] object-"kube-system"/"kube-proxy": failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:stopped-upgrade-017000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'stopped-upgrade-017000' and this object
	W0327 16:51:05.432902    8959 logs.go:138] Found kubelet problem: Mar 27 23:45:09 stopped-upgrade-017000 kubelet[1676]: E0327 23:45:09.218752    1676 reflector.go:138] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:stopped-upgrade-017000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'stopped-upgrade-017000' and this object
	I0327 16:51:05.452754    8959 logs.go:123] Gathering logs for etcd [3dc8a850726c] ...
	I0327 16:51:05.452759    8959 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3dc8a850726c"
	I0327 16:51:05.466488    8959 logs.go:123] Gathering logs for coredns [e4e006d1c1aa] ...
	I0327 16:51:05.466498    8959 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e4e006d1c1aa"
	I0327 16:51:05.477961    8959 logs.go:123] Gathering logs for kube-proxy [7ceb3f2f4d36] ...
	I0327 16:51:05.477972    8959 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7ceb3f2f4d36"
	I0327 16:51:05.489531    8959 logs.go:123] Gathering logs for kube-controller-manager [938134fd49c1] ...
	I0327 16:51:05.489542    8959 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 938134fd49c1"
	I0327 16:51:05.507764    8959 logs.go:123] Gathering logs for Docker ...
	I0327 16:51:05.507774    8959 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0327 16:51:05.532518    8959 logs.go:123] Gathering logs for container status ...
	I0327 16:51:05.532526    8959 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0327 16:51:05.545118    8959 logs.go:123] Gathering logs for describe nodes ...
	I0327 16:51:05.545129    8959 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0327 16:51:05.583017    8959 out.go:304] Setting ErrFile to fd 2...
	I0327 16:51:05.583027    8959 out.go:338] TERM=,COLORTERM=, which probably does not support color
	W0327 16:51:05.583051    8959 out.go:239] X Problems detected in kubelet:
	W0327 16:51:05.583055    8959 out.go:239]   Mar 27 23:45:09 stopped-upgrade-017000 kubelet[1676]: W0327 23:45:09.218698    1676 reflector.go:324] object-"kube-system"/"kube-proxy": failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:stopped-upgrade-017000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'stopped-upgrade-017000' and this object
	W0327 16:51:05.583058    8959 out.go:239]   Mar 27 23:45:09 stopped-upgrade-017000 kubelet[1676]: E0327 23:45:09.218752    1676 reflector.go:138] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:stopped-upgrade-017000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'stopped-upgrade-017000' and this object
	I0327 16:51:05.583063    8959 out.go:304] Setting ErrFile to fd 2...
	I0327 16:51:05.583065    8959 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0327 16:51:15.587017    8959 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0327 16:51:20.589457    8959 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0327 16:51:20.589900    8959 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0327 16:51:20.625880    8959 logs.go:276] 1 containers: [468c26aa74b2]
	I0327 16:51:20.625999    8959 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0327 16:51:20.645722    8959 logs.go:276] 1 containers: [3dc8a850726c]
	I0327 16:51:20.645810    8959 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0327 16:51:20.660106    8959 logs.go:276] 4 containers: [4d5fe04a2de1 c5312da391dc e91004f12c96 e4e006d1c1aa]
	I0327 16:51:20.660181    8959 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0327 16:51:20.671878    8959 logs.go:276] 1 containers: [9bf0505a569f]
	I0327 16:51:20.671948    8959 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0327 16:51:20.682488    8959 logs.go:276] 1 containers: [7ceb3f2f4d36]
	I0327 16:51:20.682553    8959 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0327 16:51:20.693087    8959 logs.go:276] 1 containers: [938134fd49c1]
	I0327 16:51:20.693160    8959 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0327 16:51:20.703035    8959 logs.go:276] 0 containers: []
	W0327 16:51:20.703047    8959 logs.go:278] No container was found matching "kindnet"
	I0327 16:51:20.703105    8959 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0327 16:51:20.713565    8959 logs.go:276] 1 containers: [24d43651f94a]
	I0327 16:51:20.713583    8959 logs.go:123] Gathering logs for kubelet ...
	I0327 16:51:20.713589    8959 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0327 16:51:20.729688    8959 logs.go:138] Found kubelet problem: Mar 27 23:45:09 stopped-upgrade-017000 kubelet[1676]: W0327 23:45:09.218698    1676 reflector.go:324] object-"kube-system"/"kube-proxy": failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:stopped-upgrade-017000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'stopped-upgrade-017000' and this object
	W0327 16:51:20.729781    8959 logs.go:138] Found kubelet problem: Mar 27 23:45:09 stopped-upgrade-017000 kubelet[1676]: E0327 23:45:09.218752    1676 reflector.go:138] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:stopped-upgrade-017000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'stopped-upgrade-017000' and this object
	I0327 16:51:20.749581    8959 logs.go:123] Gathering logs for kube-scheduler [9bf0505a569f] ...
	I0327 16:51:20.749586    8959 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9bf0505a569f"
	I0327 16:51:20.764560    8959 logs.go:123] Gathering logs for dmesg ...
	I0327 16:51:20.764573    8959 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0327 16:51:20.770251    8959 logs.go:123] Gathering logs for describe nodes ...
	I0327 16:51:20.770262    8959 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0327 16:51:20.804843    8959 logs.go:123] Gathering logs for kube-apiserver [468c26aa74b2] ...
	I0327 16:51:20.804856    8959 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 468c26aa74b2"
	I0327 16:51:20.818731    8959 logs.go:123] Gathering logs for coredns [c5312da391dc] ...
	I0327 16:51:20.818739    8959 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c5312da391dc"
	I0327 16:51:20.829727    8959 logs.go:123] Gathering logs for coredns [4d5fe04a2de1] ...
	I0327 16:51:20.829736    8959 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4d5fe04a2de1"
	I0327 16:51:20.851984    8959 logs.go:123] Gathering logs for kube-controller-manager [938134fd49c1] ...
	I0327 16:51:20.851998    8959 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 938134fd49c1"
	I0327 16:51:20.869534    8959 logs.go:123] Gathering logs for storage-provisioner [24d43651f94a] ...
	I0327 16:51:20.869544    8959 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 24d43651f94a"
	I0327 16:51:20.881461    8959 logs.go:123] Gathering logs for Docker ...
	I0327 16:51:20.881473    8959 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0327 16:51:20.905931    8959 logs.go:123] Gathering logs for container status ...
	I0327 16:51:20.905939    8959 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0327 16:51:20.917805    8959 logs.go:123] Gathering logs for etcd [3dc8a850726c] ...
	I0327 16:51:20.917817    8959 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3dc8a850726c"
	I0327 16:51:20.931657    8959 logs.go:123] Gathering logs for coredns [e91004f12c96] ...
	I0327 16:51:20.931665    8959 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e91004f12c96"
	I0327 16:51:20.947976    8959 logs.go:123] Gathering logs for coredns [e4e006d1c1aa] ...
	I0327 16:51:20.947985    8959 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e4e006d1c1aa"
	I0327 16:51:20.964142    8959 logs.go:123] Gathering logs for kube-proxy [7ceb3f2f4d36] ...
	I0327 16:51:20.964152    8959 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7ceb3f2f4d36"
	I0327 16:51:20.975945    8959 out.go:304] Setting ErrFile to fd 2...
	I0327 16:51:20.975958    8959 out.go:338] TERM=,COLORTERM=, which probably does not support color
	W0327 16:51:20.975984    8959 out.go:239] X Problems detected in kubelet:
	W0327 16:51:20.975989    8959 out.go:239]   Mar 27 23:45:09 stopped-upgrade-017000 kubelet[1676]: W0327 23:45:09.218698    1676 reflector.go:324] object-"kube-system"/"kube-proxy": failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:stopped-upgrade-017000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'stopped-upgrade-017000' and this object
	W0327 16:51:20.975992    8959 out.go:239]   Mar 27 23:45:09 stopped-upgrade-017000 kubelet[1676]: E0327 23:45:09.218752    1676 reflector.go:138] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:stopped-upgrade-017000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'stopped-upgrade-017000' and this object
	I0327 16:51:20.975996    8959 out.go:304] Setting ErrFile to fd 2...
	I0327 16:51:20.975999    8959 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0327 16:51:30.979923    8959 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0327 16:51:35.981462    8959 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0327 16:51:35.981880    8959 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0327 16:51:36.021335    8959 logs.go:276] 1 containers: [468c26aa74b2]
	I0327 16:51:36.021497    8959 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0327 16:51:36.043386    8959 logs.go:276] 1 containers: [3dc8a850726c]
	I0327 16:51:36.043519    8959 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0327 16:51:36.058833    8959 logs.go:276] 4 containers: [4d5fe04a2de1 c5312da391dc e91004f12c96 e4e006d1c1aa]
	I0327 16:51:36.058938    8959 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0327 16:51:36.071914    8959 logs.go:276] 1 containers: [9bf0505a569f]
	I0327 16:51:36.071999    8959 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0327 16:51:36.082940    8959 logs.go:276] 1 containers: [7ceb3f2f4d36]
	I0327 16:51:36.083011    8959 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0327 16:51:36.093785    8959 logs.go:276] 1 containers: [938134fd49c1]
	I0327 16:51:36.093845    8959 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0327 16:51:36.103796    8959 logs.go:276] 0 containers: []
	W0327 16:51:36.103811    8959 logs.go:278] No container was found matching "kindnet"
	I0327 16:51:36.103883    8959 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0327 16:51:36.114703    8959 logs.go:276] 1 containers: [24d43651f94a]
	I0327 16:51:36.114723    8959 logs.go:123] Gathering logs for kubelet ...
	I0327 16:51:36.114729    8959 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0327 16:51:36.133163    8959 logs.go:138] Found kubelet problem: Mar 27 23:45:09 stopped-upgrade-017000 kubelet[1676]: W0327 23:45:09.218698    1676 reflector.go:324] object-"kube-system"/"kube-proxy": failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:stopped-upgrade-017000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'stopped-upgrade-017000' and this object
	W0327 16:51:36.133256    8959 logs.go:138] Found kubelet problem: Mar 27 23:45:09 stopped-upgrade-017000 kubelet[1676]: E0327 23:45:09.218752    1676 reflector.go:138] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:stopped-upgrade-017000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'stopped-upgrade-017000' and this object
	I0327 16:51:36.153184    8959 logs.go:123] Gathering logs for etcd [3dc8a850726c] ...
	I0327 16:51:36.153189    8959 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3dc8a850726c"
	I0327 16:51:36.166999    8959 logs.go:123] Gathering logs for kube-proxy [7ceb3f2f4d36] ...
	I0327 16:51:36.167009    8959 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7ceb3f2f4d36"
	I0327 16:51:36.178548    8959 logs.go:123] Gathering logs for kube-controller-manager [938134fd49c1] ...
	I0327 16:51:36.178558    8959 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 938134fd49c1"
	I0327 16:51:36.200116    8959 logs.go:123] Gathering logs for container status ...
	I0327 16:51:36.200127    8959 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0327 16:51:36.212369    8959 logs.go:123] Gathering logs for describe nodes ...
	I0327 16:51:36.212378    8959 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0327 16:51:36.248680    8959 logs.go:123] Gathering logs for kube-apiserver [468c26aa74b2] ...
	I0327 16:51:36.248689    8959 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 468c26aa74b2"
	I0327 16:51:36.263296    8959 logs.go:123] Gathering logs for coredns [4d5fe04a2de1] ...
	I0327 16:51:36.263304    8959 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4d5fe04a2de1"
	I0327 16:51:36.274529    8959 logs.go:123] Gathering logs for coredns [c5312da391dc] ...
	I0327 16:51:36.274538    8959 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c5312da391dc"
	I0327 16:51:36.286410    8959 logs.go:123] Gathering logs for storage-provisioner [24d43651f94a] ...
	I0327 16:51:36.286422    8959 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 24d43651f94a"
	I0327 16:51:36.298299    8959 logs.go:123] Gathering logs for Docker ...
	I0327 16:51:36.298309    8959 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0327 16:51:36.322757    8959 logs.go:123] Gathering logs for dmesg ...
	I0327 16:51:36.322764    8959 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0327 16:51:36.327392    8959 logs.go:123] Gathering logs for coredns [e91004f12c96] ...
	I0327 16:51:36.327399    8959 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e91004f12c96"
	I0327 16:51:36.342747    8959 logs.go:123] Gathering logs for coredns [e4e006d1c1aa] ...
	I0327 16:51:36.342758    8959 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e4e006d1c1aa"
	I0327 16:51:36.354068    8959 logs.go:123] Gathering logs for kube-scheduler [9bf0505a569f] ...
	I0327 16:51:36.354078    8959 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9bf0505a569f"
	I0327 16:51:36.369124    8959 out.go:304] Setting ErrFile to fd 2...
	I0327 16:51:36.369133    8959 out.go:338] TERM=,COLORTERM=, which probably does not support color
	W0327 16:51:36.369155    8959 out.go:239] X Problems detected in kubelet:
	W0327 16:51:36.369161    8959 out.go:239]   Mar 27 23:45:09 stopped-upgrade-017000 kubelet[1676]: W0327 23:45:09.218698    1676 reflector.go:324] object-"kube-system"/"kube-proxy": failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:stopped-upgrade-017000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'stopped-upgrade-017000' and this object
	W0327 16:51:36.369165    8959 out.go:239]   Mar 27 23:45:09 stopped-upgrade-017000 kubelet[1676]: E0327 23:45:09.218752    1676 reflector.go:138] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:stopped-upgrade-017000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'stopped-upgrade-017000' and this object
	I0327 16:51:36.369169    8959 out.go:304] Setting ErrFile to fd 2...
	I0327 16:51:36.369171    8959 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0327 16:51:46.373088    8959 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0327 16:51:51.375544    8959 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0327 16:51:51.375727    8959 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0327 16:51:51.400767    8959 logs.go:276] 1 containers: [468c26aa74b2]
	I0327 16:51:51.400884    8959 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0327 16:51:51.417671    8959 logs.go:276] 1 containers: [3dc8a850726c]
	I0327 16:51:51.417757    8959 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0327 16:51:51.430938    8959 logs.go:276] 4 containers: [4d5fe04a2de1 c5312da391dc e91004f12c96 e4e006d1c1aa]
	I0327 16:51:51.431010    8959 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0327 16:51:51.442081    8959 logs.go:276] 1 containers: [9bf0505a569f]
	I0327 16:51:51.442145    8959 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0327 16:51:51.453006    8959 logs.go:276] 1 containers: [7ceb3f2f4d36]
	I0327 16:51:51.453076    8959 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0327 16:51:51.463802    8959 logs.go:276] 1 containers: [938134fd49c1]
	I0327 16:51:51.463863    8959 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0327 16:51:51.474476    8959 logs.go:276] 0 containers: []
	W0327 16:51:51.474489    8959 logs.go:278] No container was found matching "kindnet"
	I0327 16:51:51.474540    8959 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0327 16:51:51.485864    8959 logs.go:276] 1 containers: [24d43651f94a]
	I0327 16:51:51.485882    8959 logs.go:123] Gathering logs for dmesg ...
	I0327 16:51:51.485887    8959 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0327 16:51:51.491490    8959 logs.go:123] Gathering logs for coredns [e91004f12c96] ...
	I0327 16:51:51.491498    8959 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e91004f12c96"
	I0327 16:51:51.504033    8959 logs.go:123] Gathering logs for kube-controller-manager [938134fd49c1] ...
	I0327 16:51:51.504042    8959 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 938134fd49c1"
	I0327 16:51:51.521249    8959 logs.go:123] Gathering logs for kubelet ...
	I0327 16:51:51.521260    8959 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0327 16:51:51.538125    8959 logs.go:138] Found kubelet problem: Mar 27 23:45:09 stopped-upgrade-017000 kubelet[1676]: W0327 23:45:09.218698    1676 reflector.go:324] object-"kube-system"/"kube-proxy": failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:stopped-upgrade-017000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'stopped-upgrade-017000' and this object
	W0327 16:51:51.538220    8959 logs.go:138] Found kubelet problem: Mar 27 23:45:09 stopped-upgrade-017000 kubelet[1676]: E0327 23:45:09.218752    1676 reflector.go:138] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:stopped-upgrade-017000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'stopped-upgrade-017000' and this object
	I0327 16:51:51.557854    8959 logs.go:123] Gathering logs for kube-apiserver [468c26aa74b2] ...
	I0327 16:51:51.557861    8959 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 468c26aa74b2"
	I0327 16:51:51.572892    8959 logs.go:123] Gathering logs for coredns [e4e006d1c1aa] ...
	I0327 16:51:51.572903    8959 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e4e006d1c1aa"
	I0327 16:51:51.586084    8959 logs.go:123] Gathering logs for etcd [3dc8a850726c] ...
	I0327 16:51:51.586095    8959 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3dc8a850726c"
	I0327 16:51:51.600821    8959 logs.go:123] Gathering logs for coredns [4d5fe04a2de1] ...
	I0327 16:51:51.600833    8959 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4d5fe04a2de1"
	I0327 16:51:51.613165    8959 logs.go:123] Gathering logs for Docker ...
	I0327 16:51:51.613176    8959 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0327 16:51:51.637131    8959 logs.go:123] Gathering logs for container status ...
	I0327 16:51:51.637140    8959 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0327 16:51:51.649424    8959 logs.go:123] Gathering logs for storage-provisioner [24d43651f94a] ...
	I0327 16:51:51.649437    8959 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 24d43651f94a"
	I0327 16:51:51.665104    8959 logs.go:123] Gathering logs for describe nodes ...
	I0327 16:51:51.665114    8959 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0327 16:51:51.708205    8959 logs.go:123] Gathering logs for coredns [c5312da391dc] ...
	I0327 16:51:51.708215    8959 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c5312da391dc"
	I0327 16:51:51.720326    8959 logs.go:123] Gathering logs for kube-scheduler [9bf0505a569f] ...
	I0327 16:51:51.720337    8959 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9bf0505a569f"
	I0327 16:51:51.735193    8959 logs.go:123] Gathering logs for kube-proxy [7ceb3f2f4d36] ...
	I0327 16:51:51.735204    8959 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7ceb3f2f4d36"
	I0327 16:51:51.747143    8959 out.go:304] Setting ErrFile to fd 2...
	I0327 16:51:51.747153    8959 out.go:338] TERM=,COLORTERM=, which probably does not support color
	W0327 16:51:51.747179    8959 out.go:239] X Problems detected in kubelet:
	W0327 16:51:51.747183    8959 out.go:239]   Mar 27 23:45:09 stopped-upgrade-017000 kubelet[1676]: W0327 23:45:09.218698    1676 reflector.go:324] object-"kube-system"/"kube-proxy": failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:stopped-upgrade-017000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'stopped-upgrade-017000' and this object
	W0327 16:51:51.747188    8959 out.go:239]   Mar 27 23:45:09 stopped-upgrade-017000 kubelet[1676]: E0327 23:45:09.218752    1676 reflector.go:138] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:stopped-upgrade-017000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'stopped-upgrade-017000' and this object
	I0327 16:51:51.747193    8959 out.go:304] Setting ErrFile to fd 2...
	I0327 16:51:51.747195    8959 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0327 16:52:01.750204    8959 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0327 16:52:06.752494    8959 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0327 16:52:06.752558    8959 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0327 16:52:06.764051    8959 logs.go:276] 1 containers: [468c26aa74b2]
	I0327 16:52:06.764111    8959 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0327 16:52:06.775233    8959 logs.go:276] 1 containers: [3dc8a850726c]
	I0327 16:52:06.775319    8959 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0327 16:52:06.791704    8959 logs.go:276] 4 containers: [4d5fe04a2de1 c5312da391dc e91004f12c96 e4e006d1c1aa]
	I0327 16:52:06.791790    8959 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0327 16:52:06.803864    8959 logs.go:276] 1 containers: [9bf0505a569f]
	I0327 16:52:06.803908    8959 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0327 16:52:06.814271    8959 logs.go:276] 1 containers: [7ceb3f2f4d36]
	I0327 16:52:06.814335    8959 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0327 16:52:06.825626    8959 logs.go:276] 1 containers: [938134fd49c1]
	I0327 16:52:06.825682    8959 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0327 16:52:06.837292    8959 logs.go:276] 0 containers: []
	W0327 16:52:06.837307    8959 logs.go:278] No container was found matching "kindnet"
	I0327 16:52:06.837377    8959 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0327 16:52:06.850311    8959 logs.go:276] 1 containers: [24d43651f94a]
	I0327 16:52:06.850324    8959 logs.go:123] Gathering logs for kube-apiserver [468c26aa74b2] ...
	I0327 16:52:06.850328    8959 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 468c26aa74b2"
	I0327 16:52:06.864817    8959 logs.go:123] Gathering logs for kube-proxy [7ceb3f2f4d36] ...
	I0327 16:52:06.864829    8959 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7ceb3f2f4d36"
	I0327 16:52:06.877386    8959 logs.go:123] Gathering logs for kube-controller-manager [938134fd49c1] ...
	I0327 16:52:06.877398    8959 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 938134fd49c1"
	I0327 16:52:06.897260    8959 logs.go:123] Gathering logs for kubelet ...
	I0327 16:52:06.897269    8959 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0327 16:52:06.913766    8959 logs.go:138] Found kubelet problem: Mar 27 23:45:09 stopped-upgrade-017000 kubelet[1676]: W0327 23:45:09.218698    1676 reflector.go:324] object-"kube-system"/"kube-proxy": failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:stopped-upgrade-017000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'stopped-upgrade-017000' and this object
	W0327 16:52:06.913867    8959 logs.go:138] Found kubelet problem: Mar 27 23:45:09 stopped-upgrade-017000 kubelet[1676]: E0327 23:45:09.218752    1676 reflector.go:138] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:stopped-upgrade-017000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'stopped-upgrade-017000' and this object
	I0327 16:52:06.934526    8959 logs.go:123] Gathering logs for dmesg ...
	I0327 16:52:06.934542    8959 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0327 16:52:06.943051    8959 logs.go:123] Gathering logs for coredns [4d5fe04a2de1] ...
	I0327 16:52:06.943060    8959 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4d5fe04a2de1"
	I0327 16:52:06.961377    8959 logs.go:123] Gathering logs for coredns [e4e006d1c1aa] ...
	I0327 16:52:06.961389    8959 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e4e006d1c1aa"
	I0327 16:52:06.974072    8959 logs.go:123] Gathering logs for storage-provisioner [24d43651f94a] ...
	I0327 16:52:06.974083    8959 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 24d43651f94a"
	I0327 16:52:06.986383    8959 logs.go:123] Gathering logs for Docker ...
	I0327 16:52:06.986398    8959 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0327 16:52:07.011124    8959 logs.go:123] Gathering logs for coredns [c5312da391dc] ...
	I0327 16:52:07.011141    8959 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c5312da391dc"
	I0327 16:52:07.023886    8959 logs.go:123] Gathering logs for describe nodes ...
	I0327 16:52:07.023898    8959 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0327 16:52:07.060084    8959 logs.go:123] Gathering logs for etcd [3dc8a850726c] ...
	I0327 16:52:07.060094    8959 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3dc8a850726c"
	I0327 16:52:07.075253    8959 logs.go:123] Gathering logs for coredns [e91004f12c96] ...
	I0327 16:52:07.075262    8959 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e91004f12c96"
	I0327 16:52:07.089374    8959 logs.go:123] Gathering logs for kube-scheduler [9bf0505a569f] ...
	I0327 16:52:07.089384    8959 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9bf0505a569f"
	I0327 16:52:07.105033    8959 logs.go:123] Gathering logs for container status ...
	I0327 16:52:07.105045    8959 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0327 16:52:07.117183    8959 out.go:304] Setting ErrFile to fd 2...
	I0327 16:52:07.117191    8959 out.go:338] TERM=,COLORTERM=, which probably does not support color
	W0327 16:52:07.117217    8959 out.go:239] X Problems detected in kubelet:
	W0327 16:52:07.117221    8959 out.go:239]   Mar 27 23:45:09 stopped-upgrade-017000 kubelet[1676]: W0327 23:45:09.218698    1676 reflector.go:324] object-"kube-system"/"kube-proxy": failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:stopped-upgrade-017000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'stopped-upgrade-017000' and this object
	W0327 16:52:07.117224    8959 out.go:239]   Mar 27 23:45:09 stopped-upgrade-017000 kubelet[1676]: E0327 23:45:09.218752    1676 reflector.go:138] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:stopped-upgrade-017000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'stopped-upgrade-017000' and this object
	I0327 16:52:07.117229    8959 out.go:304] Setting ErrFile to fd 2...
	I0327 16:52:07.117232    8959 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0327 16:52:17.121144    8959 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0327 16:52:22.123342    8959 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0327 16:52:22.123817    8959 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0327 16:52:22.166653    8959 logs.go:276] 1 containers: [468c26aa74b2]
	I0327 16:52:22.166783    8959 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0327 16:52:22.192561    8959 logs.go:276] 1 containers: [3dc8a850726c]
	I0327 16:52:22.192659    8959 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0327 16:52:22.206935    8959 logs.go:276] 4 containers: [4d5fe04a2de1 c5312da391dc e91004f12c96 e4e006d1c1aa]
	I0327 16:52:22.207019    8959 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0327 16:52:22.219633    8959 logs.go:276] 1 containers: [9bf0505a569f]
	I0327 16:52:22.219699    8959 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0327 16:52:22.230729    8959 logs.go:276] 1 containers: [7ceb3f2f4d36]
	I0327 16:52:22.230798    8959 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0327 16:52:22.241502    8959 logs.go:276] 1 containers: [938134fd49c1]
	I0327 16:52:22.241561    8959 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0327 16:52:22.252474    8959 logs.go:276] 0 containers: []
	W0327 16:52:22.252485    8959 logs.go:278] No container was found matching "kindnet"
	I0327 16:52:22.252539    8959 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0327 16:52:22.269262    8959 logs.go:276] 1 containers: [24d43651f94a]
	I0327 16:52:22.269282    8959 logs.go:123] Gathering logs for etcd [3dc8a850726c] ...
	I0327 16:52:22.269288    8959 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3dc8a850726c"
	I0327 16:52:22.303969    8959 logs.go:123] Gathering logs for coredns [4d5fe04a2de1] ...
	I0327 16:52:22.303981    8959 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4d5fe04a2de1"
	I0327 16:52:22.316475    8959 logs.go:123] Gathering logs for coredns [c5312da391dc] ...
	I0327 16:52:22.316482    8959 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c5312da391dc"
	I0327 16:52:22.328417    8959 logs.go:123] Gathering logs for coredns [e4e006d1c1aa] ...
	I0327 16:52:22.328446    8959 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e4e006d1c1aa"
	I0327 16:52:22.340486    8959 logs.go:123] Gathering logs for container status ...
	I0327 16:52:22.340496    8959 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0327 16:52:22.353077    8959 logs.go:123] Gathering logs for dmesg ...
	I0327 16:52:22.353088    8959 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0327 16:52:22.357426    8959 logs.go:123] Gathering logs for describe nodes ...
	I0327 16:52:22.357434    8959 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0327 16:52:22.393629    8959 logs.go:123] Gathering logs for kube-apiserver [468c26aa74b2] ...
	I0327 16:52:22.393642    8959 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 468c26aa74b2"
	I0327 16:52:22.408359    8959 logs.go:123] Gathering logs for kube-scheduler [9bf0505a569f] ...
	I0327 16:52:22.408371    8959 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9bf0505a569f"
	I0327 16:52:22.423602    8959 logs.go:123] Gathering logs for kube-controller-manager [938134fd49c1] ...
	I0327 16:52:22.423614    8959 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 938134fd49c1"
	I0327 16:52:22.441943    8959 logs.go:123] Gathering logs for kubelet ...
	I0327 16:52:22.441953    8959 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0327 16:52:22.460414    8959 logs.go:138] Found kubelet problem: Mar 27 23:45:09 stopped-upgrade-017000 kubelet[1676]: W0327 23:45:09.218698    1676 reflector.go:324] object-"kube-system"/"kube-proxy": failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:stopped-upgrade-017000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'stopped-upgrade-017000' and this object
	W0327 16:52:22.460507    8959 logs.go:138] Found kubelet problem: Mar 27 23:45:09 stopped-upgrade-017000 kubelet[1676]: E0327 23:45:09.218752    1676 reflector.go:138] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:stopped-upgrade-017000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'stopped-upgrade-017000' and this object
	I0327 16:52:22.480503    8959 logs.go:123] Gathering logs for coredns [e91004f12c96] ...
	I0327 16:52:22.480509    8959 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e91004f12c96"
	I0327 16:52:22.492660    8959 logs.go:123] Gathering logs for kube-proxy [7ceb3f2f4d36] ...
	I0327 16:52:22.492674    8959 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7ceb3f2f4d36"
	I0327 16:52:22.504778    8959 logs.go:123] Gathering logs for storage-provisioner [24d43651f94a] ...
	I0327 16:52:22.504789    8959 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 24d43651f94a"
	I0327 16:52:22.516384    8959 logs.go:123] Gathering logs for Docker ...
	I0327 16:52:22.516395    8959 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0327 16:52:22.540536    8959 out.go:304] Setting ErrFile to fd 2...
	I0327 16:52:22.540546    8959 out.go:338] TERM=,COLORTERM=, which probably does not support color
	W0327 16:52:22.540568    8959 out.go:239] X Problems detected in kubelet:
	W0327 16:52:22.540572    8959 out.go:239]   Mar 27 23:45:09 stopped-upgrade-017000 kubelet[1676]: W0327 23:45:09.218698    1676 reflector.go:324] object-"kube-system"/"kube-proxy": failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:stopped-upgrade-017000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'stopped-upgrade-017000' and this object
	W0327 16:52:22.540575    8959 out.go:239]   Mar 27 23:45:09 stopped-upgrade-017000 kubelet[1676]: E0327 23:45:09.218752    1676 reflector.go:138] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:stopped-upgrade-017000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'stopped-upgrade-017000' and this object
	I0327 16:52:22.540581    8959 out.go:304] Setting ErrFile to fd 2...
	I0327 16:52:22.540584    8959 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0327 16:52:32.544399    8959 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0327 16:52:37.546680    8959 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0327 16:52:37.547087    8959 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0327 16:52:37.580979    8959 logs.go:276] 1 containers: [468c26aa74b2]
	I0327 16:52:37.581101    8959 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0327 16:52:37.601277    8959 logs.go:276] 1 containers: [3dc8a850726c]
	I0327 16:52:37.601359    8959 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0327 16:52:37.616977    8959 logs.go:276] 4 containers: [4d5fe04a2de1 c5312da391dc e91004f12c96 e4e006d1c1aa]
	I0327 16:52:37.617061    8959 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0327 16:52:37.629495    8959 logs.go:276] 1 containers: [9bf0505a569f]
	I0327 16:52:37.629555    8959 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0327 16:52:37.643570    8959 logs.go:276] 1 containers: [7ceb3f2f4d36]
	I0327 16:52:37.643636    8959 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0327 16:52:37.655246    8959 logs.go:276] 1 containers: [938134fd49c1]
	I0327 16:52:37.655315    8959 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0327 16:52:37.666374    8959 logs.go:276] 0 containers: []
	W0327 16:52:37.666387    8959 logs.go:278] No container was found matching "kindnet"
	I0327 16:52:37.666441    8959 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0327 16:52:37.678110    8959 logs.go:276] 1 containers: [24d43651f94a]
	I0327 16:52:37.678129    8959 logs.go:123] Gathering logs for etcd [3dc8a850726c] ...
	I0327 16:52:37.678134    8959 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3dc8a850726c"
	I0327 16:52:37.692996    8959 logs.go:123] Gathering logs for coredns [4d5fe04a2de1] ...
	I0327 16:52:37.693010    8959 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4d5fe04a2de1"
	I0327 16:52:37.705518    8959 logs.go:123] Gathering logs for coredns [c5312da391dc] ...
	I0327 16:52:37.705529    8959 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c5312da391dc"
	I0327 16:52:37.718396    8959 logs.go:123] Gathering logs for storage-provisioner [24d43651f94a] ...
	I0327 16:52:37.718405    8959 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 24d43651f94a"
	I0327 16:52:37.731120    8959 logs.go:123] Gathering logs for container status ...
	I0327 16:52:37.731134    8959 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0327 16:52:37.743551    8959 logs.go:123] Gathering logs for kubelet ...
	I0327 16:52:37.743560    8959 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0327 16:52:37.760890    8959 logs.go:138] Found kubelet problem: Mar 27 23:45:09 stopped-upgrade-017000 kubelet[1676]: W0327 23:45:09.218698    1676 reflector.go:324] object-"kube-system"/"kube-proxy": failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:stopped-upgrade-017000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'stopped-upgrade-017000' and this object
	W0327 16:52:37.760983    8959 logs.go:138] Found kubelet problem: Mar 27 23:45:09 stopped-upgrade-017000 kubelet[1676]: E0327 23:45:09.218752    1676 reflector.go:138] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:stopped-upgrade-017000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'stopped-upgrade-017000' and this object
	I0327 16:52:37.781316    8959 logs.go:123] Gathering logs for kube-apiserver [468c26aa74b2] ...
	I0327 16:52:37.781326    8959 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 468c26aa74b2"
	I0327 16:52:37.796431    8959 logs.go:123] Gathering logs for kube-scheduler [9bf0505a569f] ...
	I0327 16:52:37.796442    8959 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9bf0505a569f"
	I0327 16:52:37.811995    8959 logs.go:123] Gathering logs for kube-proxy [7ceb3f2f4d36] ...
	I0327 16:52:37.812007    8959 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7ceb3f2f4d36"
	I0327 16:52:37.828574    8959 logs.go:123] Gathering logs for describe nodes ...
	I0327 16:52:37.828584    8959 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0327 16:52:37.867492    8959 logs.go:123] Gathering logs for Docker ...
	I0327 16:52:37.867503    8959 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0327 16:52:37.891484    8959 logs.go:123] Gathering logs for dmesg ...
	I0327 16:52:37.891494    8959 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0327 16:52:37.896366    8959 logs.go:123] Gathering logs for coredns [e91004f12c96] ...
	I0327 16:52:37.896373    8959 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e91004f12c96"
	I0327 16:52:37.908650    8959 logs.go:123] Gathering logs for coredns [e4e006d1c1aa] ...
	I0327 16:52:37.908660    8959 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e4e006d1c1aa"
	I0327 16:52:37.921505    8959 logs.go:123] Gathering logs for kube-controller-manager [938134fd49c1] ...
	I0327 16:52:37.921517    8959 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 938134fd49c1"
	I0327 16:52:37.940166    8959 out.go:304] Setting ErrFile to fd 2...
	I0327 16:52:37.940175    8959 out.go:338] TERM=,COLORTERM=, which probably does not support color
	W0327 16:52:37.940199    8959 out.go:239] X Problems detected in kubelet:
	W0327 16:52:37.940204    8959 out.go:239]   Mar 27 23:45:09 stopped-upgrade-017000 kubelet[1676]: W0327 23:45:09.218698    1676 reflector.go:324] object-"kube-system"/"kube-proxy": failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:stopped-upgrade-017000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'stopped-upgrade-017000' and this object
	W0327 16:52:37.940208    8959 out.go:239]   Mar 27 23:45:09 stopped-upgrade-017000 kubelet[1676]: E0327 23:45:09.218752    1676 reflector.go:138] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:stopped-upgrade-017000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'stopped-upgrade-017000' and this object
	I0327 16:52:37.940212    8959 out.go:304] Setting ErrFile to fd 2...
	I0327 16:52:37.940214    8959 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0327 16:52:47.944098    8959 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0327 16:52:52.946397    8959 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0327 16:52:52.946867    8959 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0327 16:52:52.991118    8959 logs.go:276] 1 containers: [468c26aa74b2]
	I0327 16:52:52.991250    8959 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0327 16:52:53.013422    8959 logs.go:276] 1 containers: [3dc8a850726c]
	I0327 16:52:53.013509    8959 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0327 16:52:53.029173    8959 logs.go:276] 4 containers: [4d5fe04a2de1 c5312da391dc e91004f12c96 e4e006d1c1aa]
	I0327 16:52:53.029251    8959 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0327 16:52:53.042990    8959 logs.go:276] 1 containers: [9bf0505a569f]
	I0327 16:52:53.043063    8959 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0327 16:52:53.054270    8959 logs.go:276] 1 containers: [7ceb3f2f4d36]
	I0327 16:52:53.054334    8959 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0327 16:52:53.065657    8959 logs.go:276] 1 containers: [938134fd49c1]
	I0327 16:52:53.065725    8959 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0327 16:52:53.077104    8959 logs.go:276] 0 containers: []
	W0327 16:52:53.077118    8959 logs.go:278] No container was found matching "kindnet"
	I0327 16:52:53.077175    8959 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0327 16:52:53.088152    8959 logs.go:276] 1 containers: [24d43651f94a]
	I0327 16:52:53.088171    8959 logs.go:123] Gathering logs for kubelet ...
	I0327 16:52:53.088176    8959 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0327 16:52:53.105235    8959 logs.go:138] Found kubelet problem: Mar 27 23:45:09 stopped-upgrade-017000 kubelet[1676]: W0327 23:45:09.218698    1676 reflector.go:324] object-"kube-system"/"kube-proxy": failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:stopped-upgrade-017000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'stopped-upgrade-017000' and this object
	W0327 16:52:53.105329    8959 logs.go:138] Found kubelet problem: Mar 27 23:45:09 stopped-upgrade-017000 kubelet[1676]: E0327 23:45:09.218752    1676 reflector.go:138] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:stopped-upgrade-017000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'stopped-upgrade-017000' and this object
	I0327 16:52:53.124963    8959 logs.go:123] Gathering logs for dmesg ...
	I0327 16:52:53.124967    8959 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0327 16:52:53.129034    8959 logs.go:123] Gathering logs for kube-proxy [7ceb3f2f4d36] ...
	I0327 16:52:53.129042    8959 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7ceb3f2f4d36"
	I0327 16:52:53.142060    8959 logs.go:123] Gathering logs for kube-controller-manager [938134fd49c1] ...
	I0327 16:52:53.142069    8959 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 938134fd49c1"
	I0327 16:52:53.162903    8959 logs.go:123] Gathering logs for storage-provisioner [24d43651f94a] ...
	I0327 16:52:53.162915    8959 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 24d43651f94a"
	I0327 16:52:53.175154    8959 logs.go:123] Gathering logs for Docker ...
	I0327 16:52:53.175167    8959 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0327 16:52:53.198161    8959 logs.go:123] Gathering logs for etcd [3dc8a850726c] ...
	I0327 16:52:53.198168    8959 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3dc8a850726c"
	I0327 16:52:53.212604    8959 logs.go:123] Gathering logs for coredns [e91004f12c96] ...
	I0327 16:52:53.212615    8959 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e91004f12c96"
	I0327 16:52:53.224492    8959 logs.go:123] Gathering logs for kube-scheduler [9bf0505a569f] ...
	I0327 16:52:53.224504    8959 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9bf0505a569f"
	I0327 16:52:53.240056    8959 logs.go:123] Gathering logs for describe nodes ...
	I0327 16:52:53.240068    8959 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0327 16:52:53.297463    8959 logs.go:123] Gathering logs for kube-apiserver [468c26aa74b2] ...
	I0327 16:52:53.297476    8959 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 468c26aa74b2"
	I0327 16:52:53.312685    8959 logs.go:123] Gathering logs for coredns [4d5fe04a2de1] ...
	I0327 16:52:53.312697    8959 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4d5fe04a2de1"
	I0327 16:52:53.325344    8959 logs.go:123] Gathering logs for coredns [c5312da391dc] ...
	I0327 16:52:53.325357    8959 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c5312da391dc"
	I0327 16:52:53.337634    8959 logs.go:123] Gathering logs for coredns [e4e006d1c1aa] ...
	I0327 16:52:53.337648    8959 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e4e006d1c1aa"
	I0327 16:52:53.349709    8959 logs.go:123] Gathering logs for container status ...
	I0327 16:52:53.349722    8959 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0327 16:52:53.364800    8959 out.go:304] Setting ErrFile to fd 2...
	I0327 16:52:53.364814    8959 out.go:338] TERM=,COLORTERM=, which probably does not support color
	W0327 16:52:53.364843    8959 out.go:239] X Problems detected in kubelet:
	W0327 16:52:53.364847    8959 out.go:239]   Mar 27 23:45:09 stopped-upgrade-017000 kubelet[1676]: W0327 23:45:09.218698    1676 reflector.go:324] object-"kube-system"/"kube-proxy": failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:stopped-upgrade-017000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'stopped-upgrade-017000' and this object
	W0327 16:52:53.364850    8959 out.go:239]   Mar 27 23:45:09 stopped-upgrade-017000 kubelet[1676]: E0327 23:45:09.218752    1676 reflector.go:138] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:stopped-upgrade-017000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'stopped-upgrade-017000' and this object
	I0327 16:52:53.364854    8959 out.go:304] Setting ErrFile to fd 2...
	I0327 16:52:53.364857    8959 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0327 16:53:03.367843    8959 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0327 16:53:08.370338    8959 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0327 16:53:08.374515    8959 out.go:177] 
	W0327 16:53:08.382543    8959 out.go:239] X Exiting due to GUEST_START: failed to start node: wait 6m0s for node: wait for healthy API server: apiserver healthz never reported healthy: context deadline exceeded
	W0327 16:53:08.382551    8959 out.go:239] * 
	W0327 16:53:08.382997    8959 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0327 16:53:08.391482    8959 out.go:177] 

** /stderr **
version_upgrade_test.go:200: upgrade from v1.26.0 to HEAD failed: out/minikube-darwin-arm64 start -p stopped-upgrade-017000 --memory=2200 --alsologtostderr -v=1 --driver=qemu2 : exit status 80
--- FAIL: TestStoppedBinaryUpgrade/Upgrade (586.77s)
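The failure above ultimately reduces to the apiserver never answering its health check: the start loop polls https://10.0.2.15:8443/healthz and every probe times out until the 6m0s node wait expires. When triaging a report like this, the same endpoint can be probed by hand; a minimal sketch, assuming the guest is still running and reachable at the address from the log (illustrative commands, not part of the test run):

	# Probe the apiserver health endpoint directly from the host.
	curl -k --max-time 5 https://10.0.2.15:8443/healthz
	# Or from inside the guest, reusing the kubectl binary and kubeconfig
	# that the log-gathering steps above already use:
	sudo /var/lib/minikube/binaries/v1.24.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig get --raw /healthz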

TestPause/serial/Start (9.83s)

=== RUN   TestPause/serial/Start
pause_test.go:80: (dbg) Run:  out/minikube-darwin-arm64 start -p pause-296000 --memory=2048 --install-addons=false --wait=all --driver=qemu2 
pause_test.go:80: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p pause-296000 --memory=2048 --install-addons=false --wait=all --driver=qemu2 : exit status 80 (9.777693375s)

-- stdout --
	* [pause-296000] minikube v1.33.0-beta.0 on Darwin 14.3.1 (arm64)
	  - MINIKUBE_LOCATION=18485
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18485-6511/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18485-6511/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "pause-296000" primary control-plane node in "pause-296000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "pause-296000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p pause-296000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
pause_test.go:82: failed to start minikube with args: "out/minikube-darwin-arm64 start -p pause-296000 --memory=2048 --install-addons=false --wait=all --driver=qemu2 " : exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p pause-296000 -n pause-296000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p pause-296000 -n pause-296000: exit status 7 (53.5485ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "pause-296000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestPause/serial/Start (9.83s)
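From this point on, every start failure in the report has the same shape: QEMU is launched through socket_vmnet_client, and the client cannot reach the socket_vmnet daemon at /var/run/socket_vmnet, so provisioning aborts with "Connection refused" before the VM ever boots. A quick host-side check, sketched under the assumption that socket_vmnet was installed via Homebrew (the /opt/socket_vmnet paths later in this report suggest it was; the exact service commands may differ per install):

	# Does the daemon's unix socket exist, and is anything listening on it?
	ls -l /var/run/socket_vmnet
	sudo lsof -U 2>/dev/null | grep socket_vmnet
	# If the daemon is simply not running, restarting it should clear the
	# "Connection refused" errors (assumes a Homebrew service install):
	sudo brew services restart socket_vmnet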

TestNoKubernetes/serial/StartWithK8s (9.86s)

=== RUN   TestNoKubernetes/serial/StartWithK8s
no_kubernetes_test.go:95: (dbg) Run:  out/minikube-darwin-arm64 start -p NoKubernetes-222000 --driver=qemu2 
no_kubernetes_test.go:95: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p NoKubernetes-222000 --driver=qemu2 : exit status 80 (9.793148167s)

-- stdout --
	* [NoKubernetes-222000] minikube v1.33.0-beta.0 on Darwin 14.3.1 (arm64)
	  - MINIKUBE_LOCATION=18485
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18485-6511/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18485-6511/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "NoKubernetes-222000" primary control-plane node in "NoKubernetes-222000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "NoKubernetes-222000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p NoKubernetes-222000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
no_kubernetes_test.go:97: failed to start minikube with args: "out/minikube-darwin-arm64 start -p NoKubernetes-222000 --driver=qemu2 " : exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p NoKubernetes-222000 -n NoKubernetes-222000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p NoKubernetes-222000 -n NoKubernetes-222000: exit status 7 (67.4605ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "NoKubernetes-222000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestNoKubernetes/serial/StartWithK8s (9.86s)

TestNoKubernetes/serial/StartWithStopK8s (5.88s)

=== RUN   TestNoKubernetes/serial/StartWithStopK8s
no_kubernetes_test.go:112: (dbg) Run:  out/minikube-darwin-arm64 start -p NoKubernetes-222000 --no-kubernetes --driver=qemu2 
no_kubernetes_test.go:112: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p NoKubernetes-222000 --no-kubernetes --driver=qemu2 : exit status 80 (5.825039041s)

-- stdout --
	* [NoKubernetes-222000] minikube v1.33.0-beta.0 on Darwin 14.3.1 (arm64)
	  - MINIKUBE_LOCATION=18485
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18485-6511/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18485-6511/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting minikube without Kubernetes in cluster NoKubernetes-222000
	* Restarting existing qemu2 VM for "NoKubernetes-222000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "NoKubernetes-222000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p NoKubernetes-222000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
no_kubernetes_test.go:114: failed to start minikube with args: "out/minikube-darwin-arm64 start -p NoKubernetes-222000 --no-kubernetes --driver=qemu2 " : exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p NoKubernetes-222000 -n NoKubernetes-222000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p NoKubernetes-222000 -n NoKubernetes-222000: exit status 7 (57.346042ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "NoKubernetes-222000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestNoKubernetes/serial/StartWithStopK8s (5.88s)

TestNoKubernetes/serial/Start (5.9s)

=== RUN   TestNoKubernetes/serial/Start
no_kubernetes_test.go:136: (dbg) Run:  out/minikube-darwin-arm64 start -p NoKubernetes-222000 --no-kubernetes --driver=qemu2 
no_kubernetes_test.go:136: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p NoKubernetes-222000 --no-kubernetes --driver=qemu2 : exit status 80 (5.852179875s)

-- stdout --
	* [NoKubernetes-222000] minikube v1.33.0-beta.0 on Darwin 14.3.1 (arm64)
	  - MINIKUBE_LOCATION=18485
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18485-6511/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18485-6511/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting minikube without Kubernetes in cluster NoKubernetes-222000
	* Restarting existing qemu2 VM for "NoKubernetes-222000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "NoKubernetes-222000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p NoKubernetes-222000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
no_kubernetes_test.go:138: failed to start minikube with args: "out/minikube-darwin-arm64 start -p NoKubernetes-222000 --no-kubernetes --driver=qemu2 " : exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p NoKubernetes-222000 -n NoKubernetes-222000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p NoKubernetes-222000 -n NoKubernetes-222000: exit status 7 (51.913791ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "NoKubernetes-222000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestNoKubernetes/serial/Start (5.90s)

TestNoKubernetes/serial/StartNoArgs (5.93s)

=== RUN   TestNoKubernetes/serial/StartNoArgs
no_kubernetes_test.go:191: (dbg) Run:  out/minikube-darwin-arm64 start -p NoKubernetes-222000 --driver=qemu2 
no_kubernetes_test.go:191: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p NoKubernetes-222000 --driver=qemu2 : exit status 80 (5.861824875s)

-- stdout --
	* [NoKubernetes-222000] minikube v1.33.0-beta.0 on Darwin 14.3.1 (arm64)
	  - MINIKUBE_LOCATION=18485
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18485-6511/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18485-6511/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting minikube without Kubernetes in cluster NoKubernetes-222000
	* Restarting existing qemu2 VM for "NoKubernetes-222000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "NoKubernetes-222000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p NoKubernetes-222000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
no_kubernetes_test.go:193: failed to start minikube with args: "out/minikube-darwin-arm64 start -p NoKubernetes-222000 --driver=qemu2 " : exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p NoKubernetes-222000 -n NoKubernetes-222000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p NoKubernetes-222000 -n NoKubernetes-222000: exit status 7 (65.588916ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "NoKubernetes-222000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestNoKubernetes/serial/StartNoArgs (5.93s)
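Note that the three follow-up subtests above fail in under six seconds because the NoKubernetes-222000 profile left behind by StartWithK8s is reused ("Using the qemu2 driver based on existing profile"); each run only retries restarting the same broken VM, so none of them can pass until the profile is removed. The cleanup the log itself recommends:

	out/minikube-darwin-arm64 delete -p NoKubernetes-222000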

TestNetworkPlugins/group/auto/Start (10.07s)

=== RUN   TestNetworkPlugins/group/auto/Start
net_test.go:112: (dbg) Run:  out/minikube-darwin-arm64 start -p auto-244000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=qemu2 
net_test.go:112: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p auto-244000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=qemu2 : exit status 80 (10.072831s)

-- stdout --
	* [auto-244000] minikube v1.33.0-beta.0 on Darwin 14.3.1 (arm64)
	  - MINIKUBE_LOCATION=18485
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18485-6511/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18485-6511/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "auto-244000" primary control-plane node in "auto-244000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "auto-244000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0327 16:51:30.831552    9240 out.go:291] Setting OutFile to fd 1 ...
	I0327 16:51:30.831698    9240 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0327 16:51:30.831704    9240 out.go:304] Setting ErrFile to fd 2...
	I0327 16:51:30.831706    9240 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0327 16:51:30.832030    9240 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18485-6511/.minikube/bin
	I0327 16:51:30.833290    9240 out.go:298] Setting JSON to false
	I0327 16:51:30.850364    9240 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":6661,"bootTime":1711576829,"procs":479,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.3.1","kernelVersion":"23.3.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0327 16:51:30.850435    9240 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0327 16:51:30.855627    9240 out.go:177] * [auto-244000] minikube v1.33.0-beta.0 on Darwin 14.3.1 (arm64)
	I0327 16:51:30.862743    9240 out.go:177]   - MINIKUBE_LOCATION=18485
	I0327 16:51:30.862783    9240 notify.go:220] Checking for updates...
	I0327 16:51:30.869666    9240 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/18485-6511/kubeconfig
	I0327 16:51:30.872736    9240 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0327 16:51:30.874100    9240 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0327 16:51:30.876667    9240 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18485-6511/.minikube
	I0327 16:51:30.879764    9240 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0327 16:51:30.883100    9240 config.go:182] Loaded profile config "multinode-266000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.29.3
	I0327 16:51:30.883172    9240 config.go:182] Loaded profile config "stopped-upgrade-017000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0327 16:51:30.883231    9240 driver.go:392] Setting default libvirt URI to qemu:///system
	I0327 16:51:30.887668    9240 out.go:177] * Using the qemu2 driver based on user configuration
	I0327 16:51:30.894736    9240 start.go:297] selected driver: qemu2
	I0327 16:51:30.894743    9240 start.go:901] validating driver "qemu2" against <nil>
	I0327 16:51:30.894749    9240 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0327 16:51:30.897030    9240 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0327 16:51:30.899736    9240 out.go:177] * Automatically selected the socket_vmnet network
	I0327 16:51:30.902799    9240 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0327 16:51:30.902847    9240 cni.go:84] Creating CNI manager for ""
	I0327 16:51:30.902854    9240 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0327 16:51:30.902858    9240 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0327 16:51:30.902886    9240 start.go:340] cluster config:
	{Name:auto-244000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1711559786-18485@sha256:2dcab64da240d825290a528fa79ad3c32db45fe5f8be5150468234a7114eff82 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.3 ClusterName:auto-244000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0327 16:51:30.907352    9240 iso.go:125] acquiring lock: {Name:mk26a7f41e004242919e6d5076ec0f9645f820ad Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0327 16:51:30.914680    9240 out.go:177] * Starting "auto-244000" primary control-plane node in "auto-244000" cluster
	I0327 16:51:30.918717    9240 preload.go:132] Checking if preload exists for k8s version v1.29.3 and runtime docker
	I0327 16:51:30.918733    9240 preload.go:147] Found local preload: /Users/jenkins/minikube-integration/18485-6511/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.29.3-docker-overlay2-arm64.tar.lz4
	I0327 16:51:30.918737    9240 cache.go:56] Caching tarball of preloaded images
	I0327 16:51:30.918809    9240 preload.go:173] Found /Users/jenkins/minikube-integration/18485-6511/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.29.3-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0327 16:51:30.918816    9240 cache.go:59] Finished verifying existence of preloaded tar for v1.29.3 on docker
	I0327 16:51:30.918895    9240 profile.go:142] Saving config to /Users/jenkins/minikube-integration/18485-6511/.minikube/profiles/auto-244000/config.json ...
	I0327 16:51:30.918907    9240 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/18485-6511/.minikube/profiles/auto-244000/config.json: {Name:mk81208d578e3498b0dc5c91c2412ba29e3da76b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0327 16:51:30.919122    9240 start.go:360] acquireMachinesLock for auto-244000: {Name:mk6b67ca98ff36f470ae98389325f11bd4950dc8 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0327 16:51:30.919152    9240 start.go:364] duration metric: took 24.834µs to acquireMachinesLock for "auto-244000"
	I0327 16:51:30.919165    9240 start.go:93] Provisioning new machine with config: &{Name:auto-244000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18485/minikube-v1.33.0-1711559712-18485-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1711559786-18485@sha256:2dcab64da240d825290a528fa79ad3c32db45fe5f8be5150468234a7114eff82 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.3 ClusterName:auto-244000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0327 16:51:30.919193    9240 start.go:125] createHost starting for "" (driver="qemu2")
	I0327 16:51:30.927731    9240 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0327 16:51:30.944642    9240 start.go:159] libmachine.API.Create for "auto-244000" (driver="qemu2")
	I0327 16:51:30.944678    9240 client.go:168] LocalClient.Create starting
	I0327 16:51:30.944738    9240 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18485-6511/.minikube/certs/ca.pem
	I0327 16:51:30.944768    9240 main.go:141] libmachine: Decoding PEM data...
	I0327 16:51:30.944781    9240 main.go:141] libmachine: Parsing certificate...
	I0327 16:51:30.944829    9240 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18485-6511/.minikube/certs/cert.pem
	I0327 16:51:30.944851    9240 main.go:141] libmachine: Decoding PEM data...
	I0327 16:51:30.944861    9240 main.go:141] libmachine: Parsing certificate...
	I0327 16:51:30.945229    9240 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/18485-6511/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/18485-6511/.minikube/cache/iso/arm64/minikube-v1.33.0-1711559712-18485-arm64.iso...
	I0327 16:51:31.085000    9240 main.go:141] libmachine: Creating SSH key...
	I0327 16:51:31.400176    9240 main.go:141] libmachine: Creating Disk image...
	I0327 16:51:31.400188    9240 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0327 16:51:31.400379    9240 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/18485-6511/.minikube/machines/auto-244000/disk.qcow2.raw /Users/jenkins/minikube-integration/18485-6511/.minikube/machines/auto-244000/disk.qcow2
	I0327 16:51:31.413175    9240 main.go:141] libmachine: STDOUT: 
	I0327 16:51:31.413198    9240 main.go:141] libmachine: STDERR: 
	I0327 16:51:31.413262    9240 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/18485-6511/.minikube/machines/auto-244000/disk.qcow2 +20000M
	I0327 16:51:31.424306    9240 main.go:141] libmachine: STDOUT: Image resized.
	
	I0327 16:51:31.424329    9240 main.go:141] libmachine: STDERR: 
	I0327 16:51:31.424347    9240 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/18485-6511/.minikube/machines/auto-244000/disk.qcow2.raw and /Users/jenkins/minikube-integration/18485-6511/.minikube/machines/auto-244000/disk.qcow2
	I0327 16:51:31.424352    9240 main.go:141] libmachine: Starting QEMU VM...
	I0327 16:51:31.424385    9240 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18485-6511/.minikube/machines/auto-244000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18485-6511/.minikube/machines/auto-244000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18485-6511/.minikube/machines/auto-244000/qemu.pid -device virtio-net-pci,netdev=net0,mac=46:7a:a3:be:06:78 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18485-6511/.minikube/machines/auto-244000/disk.qcow2
	I0327 16:51:31.426149    9240 main.go:141] libmachine: STDOUT: 
	I0327 16:51:31.426166    9240 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0327 16:51:31.426190    9240 client.go:171] duration metric: took 481.522666ms to LocalClient.Create
	I0327 16:51:33.428352    9240 start.go:128] duration metric: took 2.509213541s to createHost
	I0327 16:51:33.428425    9240 start.go:83] releasing machines lock for "auto-244000", held for 2.509345625s
	W0327 16:51:33.428508    9240 start.go:713] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0327 16:51:33.439699    9240 out.go:177] * Deleting "auto-244000" in qemu2 ...
	W0327 16:51:33.467729    9240 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0327 16:51:33.467760    9240 start.go:728] Will try again in 5 seconds ...
	I0327 16:51:38.469882    9240 start.go:360] acquireMachinesLock for auto-244000: {Name:mk6b67ca98ff36f470ae98389325f11bd4950dc8 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0327 16:51:38.470335    9240 start.go:364] duration metric: took 319.292µs to acquireMachinesLock for "auto-244000"
	I0327 16:51:38.470475    9240 start.go:93] Provisioning new machine with config: &{Name:auto-244000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18485/minikube-v1.33.0-1711559712-18485-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1711559786-18485@sha256:2dcab64da240d825290a528fa79ad3c32db45fe5f8be5150468234a7114eff82 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.3 ClusterName:auto-244000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0327 16:51:38.470732    9240 start.go:125] createHost starting for "" (driver="qemu2")
	I0327 16:51:38.478992    9240 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0327 16:51:38.526113    9240 start.go:159] libmachine.API.Create for "auto-244000" (driver="qemu2")
	I0327 16:51:38.526158    9240 client.go:168] LocalClient.Create starting
	I0327 16:51:38.526297    9240 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18485-6511/.minikube/certs/ca.pem
	I0327 16:51:38.526368    9240 main.go:141] libmachine: Decoding PEM data...
	I0327 16:51:38.526387    9240 main.go:141] libmachine: Parsing certificate...
	I0327 16:51:38.526450    9240 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18485-6511/.minikube/certs/cert.pem
	I0327 16:51:38.526494    9240 main.go:141] libmachine: Decoding PEM data...
	I0327 16:51:38.526507    9240 main.go:141] libmachine: Parsing certificate...
	I0327 16:51:38.526998    9240 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/18485-6511/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/18485-6511/.minikube/cache/iso/arm64/minikube-v1.33.0-1711559712-18485-arm64.iso...
	I0327 16:51:38.693974    9240 main.go:141] libmachine: Creating SSH key...
	I0327 16:51:38.807595    9240 main.go:141] libmachine: Creating Disk image...
	I0327 16:51:38.807608    9240 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0327 16:51:38.807826    9240 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/18485-6511/.minikube/machines/auto-244000/disk.qcow2.raw /Users/jenkins/minikube-integration/18485-6511/.minikube/machines/auto-244000/disk.qcow2
	I0327 16:51:38.820859    9240 main.go:141] libmachine: STDOUT: 
	I0327 16:51:38.820886    9240 main.go:141] libmachine: STDERR: 
	I0327 16:51:38.820949    9240 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/18485-6511/.minikube/machines/auto-244000/disk.qcow2 +20000M
	I0327 16:51:38.832640    9240 main.go:141] libmachine: STDOUT: Image resized.
	
	I0327 16:51:38.832663    9240 main.go:141] libmachine: STDERR: 
	I0327 16:51:38.832674    9240 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/18485-6511/.minikube/machines/auto-244000/disk.qcow2.raw and /Users/jenkins/minikube-integration/18485-6511/.minikube/machines/auto-244000/disk.qcow2
	I0327 16:51:38.832678    9240 main.go:141] libmachine: Starting QEMU VM...
	I0327 16:51:38.832729    9240 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18485-6511/.minikube/machines/auto-244000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18485-6511/.minikube/machines/auto-244000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18485-6511/.minikube/machines/auto-244000/qemu.pid -device virtio-net-pci,netdev=net0,mac=f6:d7:9f:6c:2b:ef -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18485-6511/.minikube/machines/auto-244000/disk.qcow2
	I0327 16:51:38.834649    9240 main.go:141] libmachine: STDOUT: 
	I0327 16:51:38.834666    9240 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0327 16:51:38.834679    9240 client.go:171] duration metric: took 308.524958ms to LocalClient.Create
	I0327 16:51:40.834999    9240 start.go:128] duration metric: took 2.364300792s to createHost
	I0327 16:51:40.838333    9240 start.go:83] releasing machines lock for "auto-244000", held for 2.368036667s
	W0327 16:51:40.838459    9240 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p auto-244000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0327 16:51:40.848214    9240 out.go:177] 
	W0327 16:51:40.853298    9240 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0327 16:51:40.853308    9240 out.go:239] * 
	* 
	W0327 16:51:40.854074    9240 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0327 16:51:40.869251    9240 out.go:177] 

** /stderr **
net_test.go:114: failed start: exit status 80
--- FAIL: TestNetworkPlugins/group/auto/Start (10.07s)
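
Every start in this group dies at the same point: socket_vmnet_client cannot reach /var/run/socket_vmnet, so QEMU is never handed a network file descriptor and host creation aborts. A minimal sketch for checking the daemon from the agent, assuming only the socket path recorded in the transcripts (this probe is hypothetical, not part of minikube or the test suite):

	package main

	import (
		"fmt"
		"net"
		"os"
		"time"
	)

	func main() {
		const sock = "/var/run/socket_vmnet" // path taken from the transcripts above

		// Distinguish "socket file missing" (socket_vmnet not installed or
		// never started) from "nothing listening" (stale socket, daemon down).
		if _, err := os.Stat(sock); err != nil {
			fmt.Printf("socket file problem: %v\n", err)
			os.Exit(1)
		}
		conn, err := net.DialTimeout("unix", sock, 2*time.Second)
		if err != nil {
			// A "connection refused" here matches the failure mode in this
			// report: the path exists but no daemon is accepting connections.
			fmt.Printf("dial failed: %v\n", err)
			os.Exit(1)
		}
		conn.Close()
		fmt.Println("socket_vmnet is accepting connections")
	}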

TestNetworkPlugins/group/kindnet/Start (9.93s)

=== RUN   TestNetworkPlugins/group/kindnet/Start
net_test.go:112: (dbg) Run:  out/minikube-darwin-arm64 start -p kindnet-244000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=qemu2 
net_test.go:112: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p kindnet-244000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=qemu2 : exit status 80 (9.933037s)

-- stdout --
	* [kindnet-244000] minikube v1.33.0-beta.0 on Darwin 14.3.1 (arm64)
	  - MINIKUBE_LOCATION=18485
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18485-6511/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18485-6511/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "kindnet-244000" primary control-plane node in "kindnet-244000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "kindnet-244000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0327 16:51:43.257629    9354 out.go:291] Setting OutFile to fd 1 ...
	I0327 16:51:43.257764    9354 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0327 16:51:43.257767    9354 out.go:304] Setting ErrFile to fd 2...
	I0327 16:51:43.257770    9354 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0327 16:51:43.257882    9354 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18485-6511/.minikube/bin
	I0327 16:51:43.258922    9354 out.go:298] Setting JSON to false
	I0327 16:51:43.276367    9354 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":6674,"bootTime":1711576829,"procs":479,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.3.1","kernelVersion":"23.3.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0327 16:51:43.276438    9354 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0327 16:51:43.280580    9354 out.go:177] * [kindnet-244000] minikube v1.33.0-beta.0 on Darwin 14.3.1 (arm64)
	I0327 16:51:43.286519    9354 out.go:177]   - MINIKUBE_LOCATION=18485
	I0327 16:51:43.290547    9354 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/18485-6511/kubeconfig
	I0327 16:51:43.286614    9354 notify.go:220] Checking for updates...
	I0327 16:51:43.296567    9354 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0327 16:51:43.299567    9354 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0327 16:51:43.300812    9354 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18485-6511/.minikube
	I0327 16:51:43.303511    9354 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0327 16:51:43.306875    9354 config.go:182] Loaded profile config "multinode-266000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.29.3
	I0327 16:51:43.306936    9354 config.go:182] Loaded profile config "stopped-upgrade-017000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0327 16:51:43.306977    9354 driver.go:392] Setting default libvirt URI to qemu:///system
	I0327 16:51:43.311382    9354 out.go:177] * Using the qemu2 driver based on user configuration
	I0327 16:51:43.318506    9354 start.go:297] selected driver: qemu2
	I0327 16:51:43.318511    9354 start.go:901] validating driver "qemu2" against <nil>
	I0327 16:51:43.318516    9354 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0327 16:51:43.320746    9354 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0327 16:51:43.324569    9354 out.go:177] * Automatically selected the socket_vmnet network
	I0327 16:51:43.327589    9354 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0327 16:51:43.327625    9354 cni.go:84] Creating CNI manager for "kindnet"
	I0327 16:51:43.327631    9354 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I0327 16:51:43.327665    9354 start.go:340] cluster config:
	{Name:kindnet-244000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1711559786-18485@sha256:2dcab64da240d825290a528fa79ad3c32db45fe5f8be5150468234a7114eff82 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.3 ClusterName:kindnet-244000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:kindnet} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0327 16:51:43.331811    9354 iso.go:125] acquiring lock: {Name:mk26a7f41e004242919e6d5076ec0f9645f820ad Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0327 16:51:43.338460    9354 out.go:177] * Starting "kindnet-244000" primary control-plane node in "kindnet-244000" cluster
	I0327 16:51:43.342421    9354 preload.go:132] Checking if preload exists for k8s version v1.29.3 and runtime docker
	I0327 16:51:43.342435    9354 preload.go:147] Found local preload: /Users/jenkins/minikube-integration/18485-6511/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.29.3-docker-overlay2-arm64.tar.lz4
	I0327 16:51:43.342440    9354 cache.go:56] Caching tarball of preloaded images
	I0327 16:51:43.342492    9354 preload.go:173] Found /Users/jenkins/minikube-integration/18485-6511/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.29.3-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0327 16:51:43.342498    9354 cache.go:59] Finished verifying existence of preloaded tar for v1.29.3 on docker
	I0327 16:51:43.342554    9354 profile.go:142] Saving config to /Users/jenkins/minikube-integration/18485-6511/.minikube/profiles/kindnet-244000/config.json ...
	I0327 16:51:43.342564    9354 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/18485-6511/.minikube/profiles/kindnet-244000/config.json: {Name:mk7dcb20a8f92b0ce3b4b38f1592cc3166102501 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0327 16:51:43.342809    9354 start.go:360] acquireMachinesLock for kindnet-244000: {Name:mk6b67ca98ff36f470ae98389325f11bd4950dc8 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0327 16:51:43.342838    9354 start.go:364] duration metric: took 23.958µs to acquireMachinesLock for "kindnet-244000"
	I0327 16:51:43.342850    9354 start.go:93] Provisioning new machine with config: &{Name:kindnet-244000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18485/minikube-v1.33.0-1711559712-18485-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1711559786-18485@sha256:2dcab64da240d825290a528fa79ad3c32db45fe5f8be5150468234a7114eff82 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.3 ClusterName:kindnet-244000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:kindnet} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0327 16:51:43.342887    9354 start.go:125] createHost starting for "" (driver="qemu2")
	I0327 16:51:43.350448    9354 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0327 16:51:43.366689    9354 start.go:159] libmachine.API.Create for "kindnet-244000" (driver="qemu2")
	I0327 16:51:43.366714    9354 client.go:168] LocalClient.Create starting
	I0327 16:51:43.366771    9354 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18485-6511/.minikube/certs/ca.pem
	I0327 16:51:43.366798    9354 main.go:141] libmachine: Decoding PEM data...
	I0327 16:51:43.366811    9354 main.go:141] libmachine: Parsing certificate...
	I0327 16:51:43.366856    9354 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18485-6511/.minikube/certs/cert.pem
	I0327 16:51:43.366877    9354 main.go:141] libmachine: Decoding PEM data...
	I0327 16:51:43.366885    9354 main.go:141] libmachine: Parsing certificate...
	I0327 16:51:43.367260    9354 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/18485-6511/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/18485-6511/.minikube/cache/iso/arm64/minikube-v1.33.0-1711559712-18485-arm64.iso...
	I0327 16:51:43.550624    9354 main.go:141] libmachine: Creating SSH key...
	I0327 16:51:43.607474    9354 main.go:141] libmachine: Creating Disk image...
	I0327 16:51:43.607483    9354 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0327 16:51:43.607656    9354 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/18485-6511/.minikube/machines/kindnet-244000/disk.qcow2.raw /Users/jenkins/minikube-integration/18485-6511/.minikube/machines/kindnet-244000/disk.qcow2
	I0327 16:51:43.620051    9354 main.go:141] libmachine: STDOUT: 
	I0327 16:51:43.620068    9354 main.go:141] libmachine: STDERR: 
	I0327 16:51:43.620127    9354 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/18485-6511/.minikube/machines/kindnet-244000/disk.qcow2 +20000M
	I0327 16:51:43.631740    9354 main.go:141] libmachine: STDOUT: Image resized.
	
	I0327 16:51:43.631757    9354 main.go:141] libmachine: STDERR: 
	I0327 16:51:43.631779    9354 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/18485-6511/.minikube/machines/kindnet-244000/disk.qcow2.raw and /Users/jenkins/minikube-integration/18485-6511/.minikube/machines/kindnet-244000/disk.qcow2
	I0327 16:51:43.631783    9354 main.go:141] libmachine: Starting QEMU VM...
	I0327 16:51:43.631812    9354 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18485-6511/.minikube/machines/kindnet-244000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18485-6511/.minikube/machines/kindnet-244000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18485-6511/.minikube/machines/kindnet-244000/qemu.pid -device virtio-net-pci,netdev=net0,mac=aa:0a:6f:b0:db:22 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18485-6511/.minikube/machines/kindnet-244000/disk.qcow2
	I0327 16:51:43.633668    9354 main.go:141] libmachine: STDOUT: 
	I0327 16:51:43.633681    9354 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0327 16:51:43.633700    9354 client.go:171] duration metric: took 266.990084ms to LocalClient.Create
	I0327 16:51:45.635908    9354 start.go:128] duration metric: took 2.293069791s to createHost
	I0327 16:51:45.636009    9354 start.go:83] releasing machines lock for "kindnet-244000", held for 2.293236208s
	W0327 16:51:45.636067    9354 start.go:713] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0327 16:51:45.648161    9354 out.go:177] * Deleting "kindnet-244000" in qemu2 ...
	W0327 16:51:45.673354    9354 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0327 16:51:45.673402    9354 start.go:728] Will try again in 5 seconds ...
	I0327 16:51:50.675421    9354 start.go:360] acquireMachinesLock for kindnet-244000: {Name:mk6b67ca98ff36f470ae98389325f11bd4950dc8 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0327 16:51:50.675807    9354 start.go:364] duration metric: took 281.084µs to acquireMachinesLock for "kindnet-244000"
	I0327 16:51:50.675907    9354 start.go:93] Provisioning new machine with config: &{Name:kindnet-244000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18485/minikube-v1.33.0-1711559712-18485-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1711559786-18485@sha256:2dcab64da240d825290a528fa79ad3c32db45fe5f8be5150468234a7114eff82 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.3 ClusterName:kindnet-244000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:kindnet} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0327 16:51:50.676124    9354 start.go:125] createHost starting for "" (driver="qemu2")
	I0327 16:51:50.685169    9354 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0327 16:51:50.734872    9354 start.go:159] libmachine.API.Create for "kindnet-244000" (driver="qemu2")
	I0327 16:51:50.734913    9354 client.go:168] LocalClient.Create starting
	I0327 16:51:50.735017    9354 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18485-6511/.minikube/certs/ca.pem
	I0327 16:51:50.735093    9354 main.go:141] libmachine: Decoding PEM data...
	I0327 16:51:50.735114    9354 main.go:141] libmachine: Parsing certificate...
	I0327 16:51:50.735176    9354 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18485-6511/.minikube/certs/cert.pem
	I0327 16:51:50.735218    9354 main.go:141] libmachine: Decoding PEM data...
	I0327 16:51:50.735234    9354 main.go:141] libmachine: Parsing certificate...
	I0327 16:51:50.735787    9354 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/18485-6511/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/18485-6511/.minikube/cache/iso/arm64/minikube-v1.33.0-1711559712-18485-arm64.iso...
	I0327 16:51:50.907686    9354 main.go:141] libmachine: Creating SSH key...
	I0327 16:51:51.089636    9354 main.go:141] libmachine: Creating Disk image...
	I0327 16:51:51.089644    9354 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0327 16:51:51.089819    9354 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/18485-6511/.minikube/machines/kindnet-244000/disk.qcow2.raw /Users/jenkins/minikube-integration/18485-6511/.minikube/machines/kindnet-244000/disk.qcow2
	I0327 16:51:51.102841    9354 main.go:141] libmachine: STDOUT: 
	I0327 16:51:51.102875    9354 main.go:141] libmachine: STDERR: 
	I0327 16:51:51.102940    9354 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/18485-6511/.minikube/machines/kindnet-244000/disk.qcow2 +20000M
	I0327 16:51:51.114035    9354 main.go:141] libmachine: STDOUT: Image resized.
	
	I0327 16:51:51.114075    9354 main.go:141] libmachine: STDERR: 
	I0327 16:51:51.114087    9354 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/18485-6511/.minikube/machines/kindnet-244000/disk.qcow2.raw and /Users/jenkins/minikube-integration/18485-6511/.minikube/machines/kindnet-244000/disk.qcow2
	I0327 16:51:51.114092    9354 main.go:141] libmachine: Starting QEMU VM...
	I0327 16:51:51.114125    9354 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18485-6511/.minikube/machines/kindnet-244000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18485-6511/.minikube/machines/kindnet-244000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18485-6511/.minikube/machines/kindnet-244000/qemu.pid -device virtio-net-pci,netdev=net0,mac=4a:de:19:0c:65:27 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18485-6511/.minikube/machines/kindnet-244000/disk.qcow2
	I0327 16:51:51.115889    9354 main.go:141] libmachine: STDOUT: 
	I0327 16:51:51.115911    9354 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0327 16:51:51.115924    9354 client.go:171] duration metric: took 381.019125ms to LocalClient.Create
	I0327 16:51:53.118154    9354 start.go:128] duration metric: took 2.442070583s to createHost
	I0327 16:51:53.118240    9354 start.go:83] releasing machines lock for "kindnet-244000", held for 2.442490625s
	W0327 16:51:53.118629    9354 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p kindnet-244000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p kindnet-244000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0327 16:51:53.128322    9354 out.go:177] 
	W0327 16:51:53.132520    9354 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0327 16:51:53.132590    9354 out.go:239] * 
	* 
	W0327 16:51:53.135245    9354 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0327 16:51:53.146408    9354 out.go:177] 

** /stderr **
net_test.go:114: failed start: exit status 80
--- FAIL: TestNetworkPlugins/group/kindnet/Start (9.93s)
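
The transcripts also record minikube's recovery path: the first create fails, the half-built profile is deleted, start.go waits five seconds ("Will try again in 5 seconds ..."), retries once, and only then exits with status 80 (GUEST_PROVISION). A simplified, hypothetical model of that single-retry flow, not minikube's actual implementation:

	package main

	import (
		"errors"
		"fmt"
		"time"
	)

	// startHost stands in for the host-creation step; in these runs it always
	// fails with the socket_vmnet connection error recorded above.
	func startHost(profile string) error {
		return errors.New(`Failed to connect to "/var/run/socket_vmnet": Connection refused`)
	}

	func main() {
		const profile = "kindnet-244000"

		err := startHost(profile)
		if err != nil {
			// First attempt failed: tear down the half-created profile and
			// retry once after a fixed delay, as the log lines show.
			fmt.Printf("! StartHost failed, but will try again: %v\n", err)
			fmt.Printf("* Deleting %q in qemu2 ...\n", profile)
			time.Sleep(5 * time.Second)
			err = startHost(profile)
		}
		if err != nil {
			// Second attempt also failed: give up with GUEST_PROVISION,
			// which the test harness sees as exit status 80.
			fmt.Printf("X Exiting due to GUEST_PROVISION: %v\n", err)
		}
	}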

TestNetworkPlugins/group/flannel/Start (9.92s)

=== RUN   TestNetworkPlugins/group/flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-darwin-arm64 start -p flannel-244000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=qemu2 
net_test.go:112: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p flannel-244000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=qemu2 : exit status 80 (9.916539083s)

-- stdout --
	* [flannel-244000] minikube v1.33.0-beta.0 on Darwin 14.3.1 (arm64)
	  - MINIKUBE_LOCATION=18485
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18485-6511/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18485-6511/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "flannel-244000" primary control-plane node in "flannel-244000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "flannel-244000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0327 16:51:55.568836    9468 out.go:291] Setting OutFile to fd 1 ...
	I0327 16:51:55.568964    9468 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0327 16:51:55.568968    9468 out.go:304] Setting ErrFile to fd 2...
	I0327 16:51:55.568970    9468 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0327 16:51:55.569106    9468 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18485-6511/.minikube/bin
	I0327 16:51:55.570265    9468 out.go:298] Setting JSON to false
	I0327 16:51:55.589708    9468 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":6686,"bootTime":1711576829,"procs":479,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.3.1","kernelVersion":"23.3.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0327 16:51:55.589782    9468 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0327 16:51:55.595278    9468 out.go:177] * [flannel-244000] minikube v1.33.0-beta.0 on Darwin 14.3.1 (arm64)
	I0327 16:51:55.603118    9468 out.go:177]   - MINIKUBE_LOCATION=18485
	I0327 16:51:55.603156    9468 notify.go:220] Checking for updates...
	I0327 16:51:55.610100    9468 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/18485-6511/kubeconfig
	I0327 16:51:55.613090    9468 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0327 16:51:55.616104    9468 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0327 16:51:55.619141    9468 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18485-6511/.minikube
	I0327 16:51:55.622073    9468 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0327 16:51:55.625458    9468 config.go:182] Loaded profile config "multinode-266000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.29.3
	I0327 16:51:55.625528    9468 config.go:182] Loaded profile config "stopped-upgrade-017000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0327 16:51:55.625593    9468 driver.go:392] Setting default libvirt URI to qemu:///system
	I0327 16:51:55.630086    9468 out.go:177] * Using the qemu2 driver based on user configuration
	I0327 16:51:55.637081    9468 start.go:297] selected driver: qemu2
	I0327 16:51:55.637087    9468 start.go:901] validating driver "qemu2" against <nil>
	I0327 16:51:55.637093    9468 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0327 16:51:55.639630    9468 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0327 16:51:55.643095    9468 out.go:177] * Automatically selected the socket_vmnet network
	I0327 16:51:55.644642    9468 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0327 16:51:55.644680    9468 cni.go:84] Creating CNI manager for "flannel"
	I0327 16:51:55.644685    9468 start_flags.go:319] Found "Flannel" CNI - setting NetworkPlugin=cni
	I0327 16:51:55.644719    9468 start.go:340] cluster config:
	{Name:flannel-244000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1711559786-18485@sha256:2dcab64da240d825290a528fa79ad3c32db45fe5f8be5150468234a7114eff82 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.3 ClusterName:flannel-244000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:flannel} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0327 16:51:55.650008    9468 iso.go:125] acquiring lock: {Name:mk26a7f41e004242919e6d5076ec0f9645f820ad Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0327 16:51:55.657099    9468 out.go:177] * Starting "flannel-244000" primary control-plane node in "flannel-244000" cluster
	I0327 16:51:55.661048    9468 preload.go:132] Checking if preload exists for k8s version v1.29.3 and runtime docker
	I0327 16:51:55.661061    9468 preload.go:147] Found local preload: /Users/jenkins/minikube-integration/18485-6511/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.29.3-docker-overlay2-arm64.tar.lz4
	I0327 16:51:55.661067    9468 cache.go:56] Caching tarball of preloaded images
	I0327 16:51:55.661116    9468 preload.go:173] Found /Users/jenkins/minikube-integration/18485-6511/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.29.3-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0327 16:51:55.661121    9468 cache.go:59] Finished verifying existence of preloaded tar for v1.29.3 on docker
	I0327 16:51:55.661176    9468 profile.go:142] Saving config to /Users/jenkins/minikube-integration/18485-6511/.minikube/profiles/flannel-244000/config.json ...
	I0327 16:51:55.661187    9468 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/18485-6511/.minikube/profiles/flannel-244000/config.json: {Name:mk0d76835bdea883b2e3492f1f66e262a6eeeccf Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0327 16:51:55.661452    9468 start.go:360] acquireMachinesLock for flannel-244000: {Name:mk6b67ca98ff36f470ae98389325f11bd4950dc8 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0327 16:51:55.661479    9468 start.go:364] duration metric: took 22.583µs to acquireMachinesLock for "flannel-244000"
	I0327 16:51:55.661491    9468 start.go:93] Provisioning new machine with config: &{Name:flannel-244000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18485/minikube-v1.33.0-1711559712-18485-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1711559786-18485@sha256:2dcab64da240d825290a528fa79ad3c32db45fe5f8be5150468234a7114eff82 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.3 ClusterName:flannel-244000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:flannel} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0327 16:51:55.661517    9468 start.go:125] createHost starting for "" (driver="qemu2")
	I0327 16:51:55.668986    9468 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0327 16:51:55.683603    9468 start.go:159] libmachine.API.Create for "flannel-244000" (driver="qemu2")
	I0327 16:51:55.683629    9468 client.go:168] LocalClient.Create starting
	I0327 16:51:55.683716    9468 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18485-6511/.minikube/certs/ca.pem
	I0327 16:51:55.683745    9468 main.go:141] libmachine: Decoding PEM data...
	I0327 16:51:55.683756    9468 main.go:141] libmachine: Parsing certificate...
	I0327 16:51:55.683802    9468 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18485-6511/.minikube/certs/cert.pem
	I0327 16:51:55.683822    9468 main.go:141] libmachine: Decoding PEM data...
	I0327 16:51:55.683830    9468 main.go:141] libmachine: Parsing certificate...
	I0327 16:51:55.684185    9468 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/18485-6511/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/18485-6511/.minikube/cache/iso/arm64/minikube-v1.33.0-1711559712-18485-arm64.iso...
	I0327 16:51:55.824800    9468 main.go:141] libmachine: Creating SSH key...
	I0327 16:51:56.079822    9468 main.go:141] libmachine: Creating Disk image...
	I0327 16:51:56.079835    9468 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0327 16:51:56.080027    9468 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/18485-6511/.minikube/machines/flannel-244000/disk.qcow2.raw /Users/jenkins/minikube-integration/18485-6511/.minikube/machines/flannel-244000/disk.qcow2
	I0327 16:51:56.093083    9468 main.go:141] libmachine: STDOUT: 
	I0327 16:51:56.093108    9468 main.go:141] libmachine: STDERR: 
	I0327 16:51:56.093166    9468 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/18485-6511/.minikube/machines/flannel-244000/disk.qcow2 +20000M
	I0327 16:51:56.104155    9468 main.go:141] libmachine: STDOUT: Image resized.
	
	I0327 16:51:56.104172    9468 main.go:141] libmachine: STDERR: 
	I0327 16:51:56.104193    9468 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/18485-6511/.minikube/machines/flannel-244000/disk.qcow2.raw and /Users/jenkins/minikube-integration/18485-6511/.minikube/machines/flannel-244000/disk.qcow2
	I0327 16:51:56.104196    9468 main.go:141] libmachine: Starting QEMU VM...
	I0327 16:51:56.104226    9468 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18485-6511/.minikube/machines/flannel-244000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18485-6511/.minikube/machines/flannel-244000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18485-6511/.minikube/machines/flannel-244000/qemu.pid -device virtio-net-pci,netdev=net0,mac=ca:68:fd:db:b2:5a -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18485-6511/.minikube/machines/flannel-244000/disk.qcow2
	I0327 16:51:56.106023    9468 main.go:141] libmachine: STDOUT: 
	I0327 16:51:56.106039    9468 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0327 16:51:56.106057    9468 client.go:171] duration metric: took 422.436916ms to LocalClient.Create
	I0327 16:51:58.108119    9468 start.go:128] duration metric: took 2.446669625s to createHost
	I0327 16:51:58.108166    9468 start.go:83] releasing machines lock for "flannel-244000", held for 2.44676075s
	W0327 16:51:58.108209    9468 start.go:713] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0327 16:51:58.117532    9468 out.go:177] * Deleting "flannel-244000" in qemu2 ...
	W0327 16:51:58.135124    9468 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0327 16:51:58.135133    9468 start.go:728] Will try again in 5 seconds ...
	I0327 16:52:03.137270    9468 start.go:360] acquireMachinesLock for flannel-244000: {Name:mk6b67ca98ff36f470ae98389325f11bd4950dc8 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0327 16:52:03.137791    9468 start.go:364] duration metric: took 399.042µs to acquireMachinesLock for "flannel-244000"
	I0327 16:52:03.137950    9468 start.go:93] Provisioning new machine with config: &{Name:flannel-244000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18485/minikube-v1.33.0-1711559712-18485-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1711559786-18485@sha256:2dcab64da240d825290a528fa79ad3c32db45fe5f8be5150468234a7114eff82 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.3 ClusterName:flannel-244000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:flannel} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0327 16:52:03.138256    9468 start.go:125] createHost starting for "" (driver="qemu2")
	I0327 16:52:03.149042    9468 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0327 16:52:03.195059    9468 start.go:159] libmachine.API.Create for "flannel-244000" (driver="qemu2")
	I0327 16:52:03.195111    9468 client.go:168] LocalClient.Create starting
	I0327 16:52:03.195228    9468 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18485-6511/.minikube/certs/ca.pem
	I0327 16:52:03.195295    9468 main.go:141] libmachine: Decoding PEM data...
	I0327 16:52:03.195315    9468 main.go:141] libmachine: Parsing certificate...
	I0327 16:52:03.195396    9468 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18485-6511/.minikube/certs/cert.pem
	I0327 16:52:03.195437    9468 main.go:141] libmachine: Decoding PEM data...
	I0327 16:52:03.195449    9468 main.go:141] libmachine: Parsing certificate...
	I0327 16:52:03.196053    9468 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/18485-6511/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/18485-6511/.minikube/cache/iso/arm64/minikube-v1.33.0-1711559712-18485-arm64.iso...
	I0327 16:52:03.346459    9468 main.go:141] libmachine: Creating SSH key...
	I0327 16:52:03.386593    9468 main.go:141] libmachine: Creating Disk image...
	I0327 16:52:03.386598    9468 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0327 16:52:03.386761    9468 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/18485-6511/.minikube/machines/flannel-244000/disk.qcow2.raw /Users/jenkins/minikube-integration/18485-6511/.minikube/machines/flannel-244000/disk.qcow2
	I0327 16:52:03.399472    9468 main.go:141] libmachine: STDOUT: 
	I0327 16:52:03.399497    9468 main.go:141] libmachine: STDERR: 
	I0327 16:52:03.399562    9468 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/18485-6511/.minikube/machines/flannel-244000/disk.qcow2 +20000M
	I0327 16:52:03.410452    9468 main.go:141] libmachine: STDOUT: Image resized.
	
	I0327 16:52:03.410470    9468 main.go:141] libmachine: STDERR: 
	I0327 16:52:03.410483    9468 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/18485-6511/.minikube/machines/flannel-244000/disk.qcow2.raw and /Users/jenkins/minikube-integration/18485-6511/.minikube/machines/flannel-244000/disk.qcow2
	I0327 16:52:03.410495    9468 main.go:141] libmachine: Starting QEMU VM...
	I0327 16:52:03.410526    9468 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18485-6511/.minikube/machines/flannel-244000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18485-6511/.minikube/machines/flannel-244000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18485-6511/.minikube/machines/flannel-244000/qemu.pid -device virtio-net-pci,netdev=net0,mac=4a:b8:44:e9:05:66 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18485-6511/.minikube/machines/flannel-244000/disk.qcow2
	I0327 16:52:03.412316    9468 main.go:141] libmachine: STDOUT: 
	I0327 16:52:03.412332    9468 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0327 16:52:03.412345    9468 client.go:171] duration metric: took 217.234208ms to LocalClient.Create
	I0327 16:52:05.414491    9468 start.go:128] duration metric: took 2.27625625s to createHost
	I0327 16:52:05.414564    9468 start.go:83] releasing machines lock for "flannel-244000", held for 2.276822834s
	W0327 16:52:05.414896    9468 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p flannel-244000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p flannel-244000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0327 16:52:05.423636    9468 out.go:177] 
	W0327 16:52:05.427702    9468 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0327 16:52:05.427743    9468 out.go:239] * 
	* 
	W0327 16:52:05.429030    9468 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0327 16:52:05.441563    9468 out.go:177] 

** /stderr **
net_test.go:114: failed start: exit status 80
--- FAIL: TestNetworkPlugins/group/flannel/Start (9.92s)
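
Note that disk-image preparation succeeds in every transcript: both qemu-img invocations return cleanly, and the failure appears only once the QEMU launch is wrapped by socket_vmnet_client. A sketch reconstructing the two logged qemu-img steps with os/exec, using placeholder paths in place of the per-profile paths under .minikube/machines/ shown above:

	package main

	import (
		"fmt"
		"os/exec"
	)

	func main() {
		// Placeholder paths; the log uses
		// .minikube/machines/<profile>/disk.qcow2(.raw).
		raw := "disk.qcow2.raw"
		qcow2 := "disk.qcow2"

		// Step 1 (from the log): qemu-img convert -f raw -O qcow2 <raw> <qcow2>
		if out, err := exec.Command("qemu-img", "convert", "-f", "raw", "-O", "qcow2", raw, qcow2).CombinedOutput(); err != nil {
			fmt.Printf("convert failed: %v\n%s", err, out)
			return
		}

		// Step 2 (from the log): qemu-img resize <qcow2> +20000M
		// The "+20000M" form grows the image by 20000 MB rather than
		// setting an absolute size.
		if out, err := exec.Command("qemu-img", "resize", qcow2, "+20000M").CombinedOutput(); err != nil {
			fmt.Printf("resize failed: %v\n%s", err, out)
			return
		}
		fmt.Println("Image resized.") // matches the STDOUT the driver logs
	}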

TestNetworkPlugins/group/enable-default-cni/Start (9.77s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/Start
net_test.go:112: (dbg) Run:  out/minikube-darwin-arm64 start -p enable-default-cni-244000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=qemu2 
net_test.go:112: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p enable-default-cni-244000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=qemu2 : exit status 80 (9.773745958s)

-- stdout --
	* [enable-default-cni-244000] minikube v1.33.0-beta.0 on Darwin 14.3.1 (arm64)
	  - MINIKUBE_LOCATION=18485
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18485-6511/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18485-6511/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "enable-default-cni-244000" primary control-plane node in "enable-default-cni-244000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "enable-default-cni-244000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0327 16:52:07.965153    9586 out.go:291] Setting OutFile to fd 1 ...
	I0327 16:52:07.965282    9586 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0327 16:52:07.965285    9586 out.go:304] Setting ErrFile to fd 2...
	I0327 16:52:07.965288    9586 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0327 16:52:07.965407    9586 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18485-6511/.minikube/bin
	I0327 16:52:07.966476    9586 out.go:298] Setting JSON to false
	I0327 16:52:07.983184    9586 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":6698,"bootTime":1711576829,"procs":476,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.3.1","kernelVersion":"23.3.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0327 16:52:07.983287    9586 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0327 16:52:07.988240    9586 out.go:177] * [enable-default-cni-244000] minikube v1.33.0-beta.0 on Darwin 14.3.1 (arm64)
	I0327 16:52:07.994283    9586 out.go:177]   - MINIKUBE_LOCATION=18485
	I0327 16:52:07.998166    9586 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/18485-6511/kubeconfig
	I0327 16:52:07.994355    9586 notify.go:220] Checking for updates...
	I0327 16:52:08.004241    9586 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0327 16:52:08.007210    9586 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0327 16:52:08.010182    9586 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18485-6511/.minikube
	I0327 16:52:08.013204    9586 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0327 16:52:08.016485    9586 config.go:182] Loaded profile config "multinode-266000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.29.3
	I0327 16:52:08.016570    9586 config.go:182] Loaded profile config "stopped-upgrade-017000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0327 16:52:08.016618    9586 driver.go:392] Setting default libvirt URI to qemu:///system
	I0327 16:52:08.021195    9586 out.go:177] * Using the qemu2 driver based on user configuration
	I0327 16:52:08.027170    9586 start.go:297] selected driver: qemu2
	I0327 16:52:08.027176    9586 start.go:901] validating driver "qemu2" against <nil>
	I0327 16:52:08.027182    9586 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0327 16:52:08.029454    9586 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0327 16:52:08.033208    9586 out.go:177] * Automatically selected the socket_vmnet network
	E0327 16:52:08.036281    9586 start_flags.go:464] Found deprecated --enable-default-cni flag, setting --cni=bridge
	I0327 16:52:08.036293    9586 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0327 16:52:08.036336    9586 cni.go:84] Creating CNI manager for "bridge"
	I0327 16:52:08.036340    9586 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0327 16:52:08.036385    9586 start.go:340] cluster config:
	{Name:enable-default-cni-244000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1711559786-18485@sha256:2dcab64da240d825290a528fa79ad3c32db45fe5f8be5150468234a7114eff82 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.3 ClusterName:enable-default-cni-244000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0327 16:52:08.040837    9586 iso.go:125] acquiring lock: {Name:mk26a7f41e004242919e6d5076ec0f9645f820ad Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0327 16:52:08.048195    9586 out.go:177] * Starting "enable-default-cni-244000" primary control-plane node in "enable-default-cni-244000" cluster
	I0327 16:52:08.052218    9586 preload.go:132] Checking if preload exists for k8s version v1.29.3 and runtime docker
	I0327 16:52:08.052234    9586 preload.go:147] Found local preload: /Users/jenkins/minikube-integration/18485-6511/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.29.3-docker-overlay2-arm64.tar.lz4
	I0327 16:52:08.052240    9586 cache.go:56] Caching tarball of preloaded images
	I0327 16:52:08.052309    9586 preload.go:173] Found /Users/jenkins/minikube-integration/18485-6511/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.29.3-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0327 16:52:08.052316    9586 cache.go:59] Finished verifying existence of preloaded tar for v1.29.3 on docker
	I0327 16:52:08.052375    9586 profile.go:142] Saving config to /Users/jenkins/minikube-integration/18485-6511/.minikube/profiles/enable-default-cni-244000/config.json ...
	I0327 16:52:08.052387    9586 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/18485-6511/.minikube/profiles/enable-default-cni-244000/config.json: {Name:mk6d818901dde5a69c8f32223f3ed3c79921abdc Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0327 16:52:08.052689    9586 start.go:360] acquireMachinesLock for enable-default-cni-244000: {Name:mk6b67ca98ff36f470ae98389325f11bd4950dc8 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0327 16:52:08.052725    9586 start.go:364] duration metric: took 28.333µs to acquireMachinesLock for "enable-default-cni-244000"
	I0327 16:52:08.052738    9586 start.go:93] Provisioning new machine with config: &{Name:enable-default-cni-244000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18485/minikube-v1.33.0-1711559712-18485-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1711559786-18485@sha256:2dcab64da240d825290a528fa79ad3c32db45fe5f8be5150468234a7114eff82 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.3 ClusterName:enable-default-cni-244000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0327 16:52:08.052768    9586 start.go:125] createHost starting for "" (driver="qemu2")
	I0327 16:52:08.061207    9586 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0327 16:52:08.076889    9586 start.go:159] libmachine.API.Create for "enable-default-cni-244000" (driver="qemu2")
	I0327 16:52:08.076919    9586 client.go:168] LocalClient.Create starting
	I0327 16:52:08.076970    9586 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18485-6511/.minikube/certs/ca.pem
	I0327 16:52:08.077002    9586 main.go:141] libmachine: Decoding PEM data...
	I0327 16:52:08.077012    9586 main.go:141] libmachine: Parsing certificate...
	I0327 16:52:08.077059    9586 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18485-6511/.minikube/certs/cert.pem
	I0327 16:52:08.077080    9586 main.go:141] libmachine: Decoding PEM data...
	I0327 16:52:08.077089    9586 main.go:141] libmachine: Parsing certificate...
	I0327 16:52:08.077488    9586 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/18485-6511/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/18485-6511/.minikube/cache/iso/arm64/minikube-v1.33.0-1711559712-18485-arm64.iso...
	I0327 16:52:08.215446    9586 main.go:141] libmachine: Creating SSH key...
	I0327 16:52:08.313031    9586 main.go:141] libmachine: Creating Disk image...
	I0327 16:52:08.313039    9586 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0327 16:52:08.313210    9586 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/18485-6511/.minikube/machines/enable-default-cni-244000/disk.qcow2.raw /Users/jenkins/minikube-integration/18485-6511/.minikube/machines/enable-default-cni-244000/disk.qcow2
	I0327 16:52:08.325138    9586 main.go:141] libmachine: STDOUT: 
	I0327 16:52:08.325168    9586 main.go:141] libmachine: STDERR: 
	I0327 16:52:08.325218    9586 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/18485-6511/.minikube/machines/enable-default-cni-244000/disk.qcow2 +20000M
	I0327 16:52:08.336438    9586 main.go:141] libmachine: STDOUT: Image resized.
	
	I0327 16:52:08.336459    9586 main.go:141] libmachine: STDERR: 
	I0327 16:52:08.336481    9586 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/18485-6511/.minikube/machines/enable-default-cni-244000/disk.qcow2.raw and /Users/jenkins/minikube-integration/18485-6511/.minikube/machines/enable-default-cni-244000/disk.qcow2
	I0327 16:52:08.336485    9586 main.go:141] libmachine: Starting QEMU VM...
	I0327 16:52:08.336515    9586 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18485-6511/.minikube/machines/enable-default-cni-244000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18485-6511/.minikube/machines/enable-default-cni-244000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18485-6511/.minikube/machines/enable-default-cni-244000/qemu.pid -device virtio-net-pci,netdev=net0,mac=46:c2:5d:84:61:65 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18485-6511/.minikube/machines/enable-default-cni-244000/disk.qcow2
	I0327 16:52:08.338348    9586 main.go:141] libmachine: STDOUT: 
	I0327 16:52:08.338364    9586 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0327 16:52:08.338380    9586 client.go:171] duration metric: took 261.464334ms to LocalClient.Create
	I0327 16:52:10.340603    9586 start.go:128] duration metric: took 2.28786325s to createHost
	I0327 16:52:10.340696    9586 start.go:83] releasing machines lock for "enable-default-cni-244000", held for 2.288035583s
	W0327 16:52:10.340815    9586 start.go:713] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0327 16:52:10.350958    9586 out.go:177] * Deleting "enable-default-cni-244000" in qemu2 ...
	W0327 16:52:10.377450    9586 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0327 16:52:10.377481    9586 start.go:728] Will try again in 5 seconds ...
	I0327 16:52:15.379539    9586 start.go:360] acquireMachinesLock for enable-default-cni-244000: {Name:mk6b67ca98ff36f470ae98389325f11bd4950dc8 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0327 16:52:15.380162    9586 start.go:364] duration metric: took 517.5µs to acquireMachinesLock for "enable-default-cni-244000"
	I0327 16:52:15.380368    9586 start.go:93] Provisioning new machine with config: &{Name:enable-default-cni-244000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18485/minikube-v1.33.0-1711559712-18485-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1711559786-18485@sha256:2dcab64da240d825290a528fa79ad3c32db45fe5f8be5150468234a7114eff82 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.3 ClusterName:enable-default-cni-244000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0327 16:52:15.380696    9586 start.go:125] createHost starting for "" (driver="qemu2")
	I0327 16:52:15.390193    9586 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0327 16:52:15.440684    9586 start.go:159] libmachine.API.Create for "enable-default-cni-244000" (driver="qemu2")
	I0327 16:52:15.440735    9586 client.go:168] LocalClient.Create starting
	I0327 16:52:15.440848    9586 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18485-6511/.minikube/certs/ca.pem
	I0327 16:52:15.440936    9586 main.go:141] libmachine: Decoding PEM data...
	I0327 16:52:15.440956    9586 main.go:141] libmachine: Parsing certificate...
	I0327 16:52:15.441023    9586 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18485-6511/.minikube/certs/cert.pem
	I0327 16:52:15.441064    9586 main.go:141] libmachine: Decoding PEM data...
	I0327 16:52:15.441082    9586 main.go:141] libmachine: Parsing certificate...
	I0327 16:52:15.441608    9586 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/18485-6511/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/18485-6511/.minikube/cache/iso/arm64/minikube-v1.33.0-1711559712-18485-arm64.iso...
	I0327 16:52:15.590190    9586 main.go:141] libmachine: Creating SSH key...
	I0327 16:52:15.643005    9586 main.go:141] libmachine: Creating Disk image...
	I0327 16:52:15.643018    9586 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0327 16:52:15.643234    9586 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/18485-6511/.minikube/machines/enable-default-cni-244000/disk.qcow2.raw /Users/jenkins/minikube-integration/18485-6511/.minikube/machines/enable-default-cni-244000/disk.qcow2
	I0327 16:52:15.655777    9586 main.go:141] libmachine: STDOUT: 
	I0327 16:52:15.655797    9586 main.go:141] libmachine: STDERR: 
	I0327 16:52:15.655848    9586 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/18485-6511/.minikube/machines/enable-default-cni-244000/disk.qcow2 +20000M
	I0327 16:52:15.667108    9586 main.go:141] libmachine: STDOUT: Image resized.
	
	I0327 16:52:15.667132    9586 main.go:141] libmachine: STDERR: 
	I0327 16:52:15.667148    9586 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/18485-6511/.minikube/machines/enable-default-cni-244000/disk.qcow2.raw and /Users/jenkins/minikube-integration/18485-6511/.minikube/machines/enable-default-cni-244000/disk.qcow2
	I0327 16:52:15.667156    9586 main.go:141] libmachine: Starting QEMU VM...
	I0327 16:52:15.667188    9586 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18485-6511/.minikube/machines/enable-default-cni-244000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18485-6511/.minikube/machines/enable-default-cni-244000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18485-6511/.minikube/machines/enable-default-cni-244000/qemu.pid -device virtio-net-pci,netdev=net0,mac=16:b7:ce:cc:34:b8 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18485-6511/.minikube/machines/enable-default-cni-244000/disk.qcow2
	I0327 16:52:15.668955    9586 main.go:141] libmachine: STDOUT: 
	I0327 16:52:15.668970    9586 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0327 16:52:15.668989    9586 client.go:171] duration metric: took 228.256625ms to LocalClient.Create
	I0327 16:52:17.671024    9586 start.go:128] duration metric: took 2.290349292s to createHost
	I0327 16:52:17.671059    9586 start.go:83] releasing machines lock for "enable-default-cni-244000", held for 2.290925333s
	W0327 16:52:17.671251    9586 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p enable-default-cni-244000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p enable-default-cni-244000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0327 16:52:17.681652    9586 out.go:177] 
	W0327 16:52:17.685575    9586 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0327 16:52:17.685583    9586 out.go:239] * 
	* 
	W0327 16:52:17.686470    9586 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0327 16:52:17.694458    9586 out.go:177] 
** /stderr **
net_test.go:114: failed start: exit status 80
--- FAIL: TestNetworkPlugins/group/enable-default-cni/Start (9.77s)
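Note: every Start failure in this group exits at the same point: socket_vmnet_client cannot reach the unix socket at /var/run/socket_vmnet ("Connection refused"), so the qemu2 VM is never created and no CNI-specific code runs. A minimal host-side check, sketched under the assumption that socket_vmnet is installed at the /opt/socket_vmnet prefix these logs show (the gateway address below is an illustrative value, not one taken from this report):

	# Is the socket present, and is any daemon serving it?
	ls -l /var/run/socket_vmnet
	pgrep -fl socket_vmnet

	# If nothing is listening, restarting the daemon as root should clear
	# the "Connection refused" (gateway address is an assumed example):
	sudo /opt/socket_vmnet/bin/socket_vmnet --vmnet-gateway=192.168.105.1 /var/run/socket_vmnet &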

TestNetworkPlugins/group/bridge/Start (9.79s)

=== RUN   TestNetworkPlugins/group/bridge/Start
net_test.go:112: (dbg) Run:  out/minikube-darwin-arm64 start -p bridge-244000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=qemu2 
net_test.go:112: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p bridge-244000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=qemu2 : exit status 80 (9.790807542s)
-- stdout --
	* [bridge-244000] minikube v1.33.0-beta.0 on Darwin 14.3.1 (arm64)
	  - MINIKUBE_LOCATION=18485
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18485-6511/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18485-6511/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "bridge-244000" primary control-plane node in "bridge-244000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "bridge-244000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	
-- /stdout --
** stderr ** 
	I0327 16:52:19.997816    9696 out.go:291] Setting OutFile to fd 1 ...
	I0327 16:52:19.997946    9696 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0327 16:52:19.997949    9696 out.go:304] Setting ErrFile to fd 2...
	I0327 16:52:19.997951    9696 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0327 16:52:19.998066    9696 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18485-6511/.minikube/bin
	I0327 16:52:19.999090    9696 out.go:298] Setting JSON to false
	I0327 16:52:20.015346    9696 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":6711,"bootTime":1711576829,"procs":476,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.3.1","kernelVersion":"23.3.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0327 16:52:20.015413    9696 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0327 16:52:20.021190    9696 out.go:177] * [bridge-244000] minikube v1.33.0-beta.0 on Darwin 14.3.1 (arm64)
	I0327 16:52:20.027318    9696 out.go:177]   - MINIKUBE_LOCATION=18485
	I0327 16:52:20.027358    9696 notify.go:220] Checking for updates...
	I0327 16:52:20.032237    9696 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/18485-6511/kubeconfig
	I0327 16:52:20.035289    9696 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0327 16:52:20.038341    9696 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0327 16:52:20.041225    9696 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18485-6511/.minikube
	I0327 16:52:20.044284    9696 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0327 16:52:20.047679    9696 config.go:182] Loaded profile config "multinode-266000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.29.3
	I0327 16:52:20.047753    9696 config.go:182] Loaded profile config "stopped-upgrade-017000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0327 16:52:20.047795    9696 driver.go:392] Setting default libvirt URI to qemu:///system
	I0327 16:52:20.052272    9696 out.go:177] * Using the qemu2 driver based on user configuration
	I0327 16:52:20.058274    9696 start.go:297] selected driver: qemu2
	I0327 16:52:20.058282    9696 start.go:901] validating driver "qemu2" against <nil>
	I0327 16:52:20.058288    9696 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0327 16:52:20.060610    9696 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0327 16:52:20.063165    9696 out.go:177] * Automatically selected the socket_vmnet network
	I0327 16:52:20.066344    9696 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0327 16:52:20.066377    9696 cni.go:84] Creating CNI manager for "bridge"
	I0327 16:52:20.066383    9696 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0327 16:52:20.066418    9696 start.go:340] cluster config:
	{Name:bridge-244000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1711559786-18485@sha256:2dcab64da240d825290a528fa79ad3c32db45fe5f8be5150468234a7114eff82 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.3 ClusterName:bridge-244000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0327 16:52:20.070658    9696 iso.go:125] acquiring lock: {Name:mk26a7f41e004242919e6d5076ec0f9645f820ad Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0327 16:52:20.078269    9696 out.go:177] * Starting "bridge-244000" primary control-plane node in "bridge-244000" cluster
	I0327 16:52:20.082268    9696 preload.go:132] Checking if preload exists for k8s version v1.29.3 and runtime docker
	I0327 16:52:20.082281    9696 preload.go:147] Found local preload: /Users/jenkins/minikube-integration/18485-6511/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.29.3-docker-overlay2-arm64.tar.lz4
	I0327 16:52:20.082290    9696 cache.go:56] Caching tarball of preloaded images
	I0327 16:52:20.082338    9696 preload.go:173] Found /Users/jenkins/minikube-integration/18485-6511/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.29.3-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0327 16:52:20.082343    9696 cache.go:59] Finished verifying existence of preloaded tar for v1.29.3 on docker
	I0327 16:52:20.082394    9696 profile.go:142] Saving config to /Users/jenkins/minikube-integration/18485-6511/.minikube/profiles/bridge-244000/config.json ...
	I0327 16:52:20.082405    9696 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/18485-6511/.minikube/profiles/bridge-244000/config.json: {Name:mk5bcfc0130c96f3d7e1b04a765690e36e3263fb Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0327 16:52:20.082681    9696 start.go:360] acquireMachinesLock for bridge-244000: {Name:mk6b67ca98ff36f470ae98389325f11bd4950dc8 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0327 16:52:20.082710    9696 start.go:364] duration metric: took 23.084µs to acquireMachinesLock for "bridge-244000"
	I0327 16:52:20.082721    9696 start.go:93] Provisioning new machine with config: &{Name:bridge-244000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18485/minikube-v1.33.0-1711559712-18485-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1711559786-18485@sha256:2dcab64da240d825290a528fa79ad3c32db45fe5f8be5150468234a7114eff82 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.3 ClusterName:bridge-244000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0327 16:52:20.082744    9696 start.go:125] createHost starting for "" (driver="qemu2")
	I0327 16:52:20.086283    9696 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0327 16:52:20.101020    9696 start.go:159] libmachine.API.Create for "bridge-244000" (driver="qemu2")
	I0327 16:52:20.101046    9696 client.go:168] LocalClient.Create starting
	I0327 16:52:20.101105    9696 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18485-6511/.minikube/certs/ca.pem
	I0327 16:52:20.101138    9696 main.go:141] libmachine: Decoding PEM data...
	I0327 16:52:20.101148    9696 main.go:141] libmachine: Parsing certificate...
	I0327 16:52:20.101191    9696 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18485-6511/.minikube/certs/cert.pem
	I0327 16:52:20.101212    9696 main.go:141] libmachine: Decoding PEM data...
	I0327 16:52:20.101220    9696 main.go:141] libmachine: Parsing certificate...
	I0327 16:52:20.101631    9696 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/18485-6511/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/18485-6511/.minikube/cache/iso/arm64/minikube-v1.33.0-1711559712-18485-arm64.iso...
	I0327 16:52:20.239373    9696 main.go:141] libmachine: Creating SSH key...
	I0327 16:52:20.280622    9696 main.go:141] libmachine: Creating Disk image...
	I0327 16:52:20.280630    9696 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0327 16:52:20.280806    9696 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/18485-6511/.minikube/machines/bridge-244000/disk.qcow2.raw /Users/jenkins/minikube-integration/18485-6511/.minikube/machines/bridge-244000/disk.qcow2
	I0327 16:52:20.293453    9696 main.go:141] libmachine: STDOUT: 
	I0327 16:52:20.293480    9696 main.go:141] libmachine: STDERR: 
	I0327 16:52:20.293537    9696 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/18485-6511/.minikube/machines/bridge-244000/disk.qcow2 +20000M
	I0327 16:52:20.304420    9696 main.go:141] libmachine: STDOUT: Image resized.
	
	I0327 16:52:20.304438    9696 main.go:141] libmachine: STDERR: 
	I0327 16:52:20.304461    9696 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/18485-6511/.minikube/machines/bridge-244000/disk.qcow2.raw and /Users/jenkins/minikube-integration/18485-6511/.minikube/machines/bridge-244000/disk.qcow2
	I0327 16:52:20.304466    9696 main.go:141] libmachine: Starting QEMU VM...
	I0327 16:52:20.304496    9696 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18485-6511/.minikube/machines/bridge-244000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18485-6511/.minikube/machines/bridge-244000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18485-6511/.minikube/machines/bridge-244000/qemu.pid -device virtio-net-pci,netdev=net0,mac=3e:df:02:77:5d:75 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18485-6511/.minikube/machines/bridge-244000/disk.qcow2
	I0327 16:52:20.306226    9696 main.go:141] libmachine: STDOUT: 
	I0327 16:52:20.306239    9696 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0327 16:52:20.306257    9696 client.go:171] duration metric: took 205.2125ms to LocalClient.Create
	I0327 16:52:22.308317    9696 start.go:128] duration metric: took 2.22563875s to createHost
	I0327 16:52:22.308336    9696 start.go:83] releasing machines lock for "bridge-244000", held for 2.2256945s
	W0327 16:52:22.308362    9696 start.go:713] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0327 16:52:22.320829    9696 out.go:177] * Deleting "bridge-244000" in qemu2 ...
	W0327 16:52:22.329804    9696 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0327 16:52:22.329812    9696 start.go:728] Will try again in 5 seconds ...
	I0327 16:52:27.330692    9696 start.go:360] acquireMachinesLock for bridge-244000: {Name:mk6b67ca98ff36f470ae98389325f11bd4950dc8 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0327 16:52:27.330975    9696 start.go:364] duration metric: took 228µs to acquireMachinesLock for "bridge-244000"
	I0327 16:52:27.331052    9696 start.go:93] Provisioning new machine with config: &{Name:bridge-244000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18485/minikube-v1.33.0-1711559712-18485-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1711559786-18485@sha256:2dcab64da240d825290a528fa79ad3c32db45fe5f8be5150468234a7114eff82 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.3 ClusterName:bridge-244000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0327 16:52:27.331171    9696 start.go:125] createHost starting for "" (driver="qemu2")
	I0327 16:52:27.336589    9696 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0327 16:52:27.370746    9696 start.go:159] libmachine.API.Create for "bridge-244000" (driver="qemu2")
	I0327 16:52:27.370857    9696 client.go:168] LocalClient.Create starting
	I0327 16:52:27.370956    9696 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18485-6511/.minikube/certs/ca.pem
	I0327 16:52:27.371008    9696 main.go:141] libmachine: Decoding PEM data...
	I0327 16:52:27.371023    9696 main.go:141] libmachine: Parsing certificate...
	I0327 16:52:27.371076    9696 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18485-6511/.minikube/certs/cert.pem
	I0327 16:52:27.371124    9696 main.go:141] libmachine: Decoding PEM data...
	I0327 16:52:27.371133    9696 main.go:141] libmachine: Parsing certificate...
	I0327 16:52:27.371876    9696 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/18485-6511/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/18485-6511/.minikube/cache/iso/arm64/minikube-v1.33.0-1711559712-18485-arm64.iso...
	I0327 16:52:27.518079    9696 main.go:141] libmachine: Creating SSH key...
	I0327 16:52:27.686571    9696 main.go:141] libmachine: Creating Disk image...
	I0327 16:52:27.686580    9696 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0327 16:52:27.687211    9696 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/18485-6511/.minikube/machines/bridge-244000/disk.qcow2.raw /Users/jenkins/minikube-integration/18485-6511/.minikube/machines/bridge-244000/disk.qcow2
	I0327 16:52:27.702567    9696 main.go:141] libmachine: STDOUT: 
	I0327 16:52:27.702593    9696 main.go:141] libmachine: STDERR: 
	I0327 16:52:27.702648    9696 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/18485-6511/.minikube/machines/bridge-244000/disk.qcow2 +20000M
	I0327 16:52:27.713468    9696 main.go:141] libmachine: STDOUT: Image resized.
	
	I0327 16:52:27.713491    9696 main.go:141] libmachine: STDERR: 
	I0327 16:52:27.713503    9696 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/18485-6511/.minikube/machines/bridge-244000/disk.qcow2.raw and /Users/jenkins/minikube-integration/18485-6511/.minikube/machines/bridge-244000/disk.qcow2
	I0327 16:52:27.713508    9696 main.go:141] libmachine: Starting QEMU VM...
	I0327 16:52:27.713534    9696 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18485-6511/.minikube/machines/bridge-244000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18485-6511/.minikube/machines/bridge-244000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18485-6511/.minikube/machines/bridge-244000/qemu.pid -device virtio-net-pci,netdev=net0,mac=da:8d:54:fa:d0:37 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18485-6511/.minikube/machines/bridge-244000/disk.qcow2
	I0327 16:52:27.715298    9696 main.go:141] libmachine: STDOUT: 
	I0327 16:52:27.715312    9696 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0327 16:52:27.715325    9696 client.go:171] duration metric: took 344.473ms to LocalClient.Create
	I0327 16:52:29.717479    9696 start.go:128] duration metric: took 2.386350541s to createHost
	I0327 16:52:29.717550    9696 start.go:83] releasing machines lock for "bridge-244000", held for 2.3866355s
	W0327 16:52:29.717898    9696 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p bridge-244000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p bridge-244000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0327 16:52:29.727628    9696 out.go:177] 
	W0327 16:52:29.730693    9696 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0327 16:52:29.730725    9696 out.go:239] * 
	* 
	W0327 16:52:29.733333    9696 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0327 16:52:29.746609    9696 out.go:177] 
** /stderr **
net_test.go:114: failed start: exit status 80
--- FAIL: TestNetworkPlugins/group/bridge/Start (9.79s)
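Note: bridge, enable-default-cni, and kubenet all fail on the identical socket dial, which points at the host daemon rather than at the network-plugin flags under test. Assuming socket_vmnet_client simply connects to the socket and then execs the command that follows it, the handshake can be probed in isolation by substituting a trivial command for qemu-system-aarch64:

	# Fails with the same "Connection refused" while the daemon is down;
	# exits 0 once /var/run/socket_vmnet is being served again.
	/opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet true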

TestNetworkPlugins/group/kubenet/Start (9.77s)

=== RUN   TestNetworkPlugins/group/kubenet/Start
net_test.go:112: (dbg) Run:  out/minikube-darwin-arm64 start -p kubenet-244000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --network-plugin=kubenet --driver=qemu2 
net_test.go:112: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p kubenet-244000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --network-plugin=kubenet --driver=qemu2 : exit status 80 (9.771102667s)
-- stdout --
	* [kubenet-244000] minikube v1.33.0-beta.0 on Darwin 14.3.1 (arm64)
	  - MINIKUBE_LOCATION=18485
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18485-6511/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18485-6511/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "kubenet-244000" primary control-plane node in "kubenet-244000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "kubenet-244000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	
-- /stdout --
** stderr ** 
	I0327 16:52:31.985101    9811 out.go:291] Setting OutFile to fd 1 ...
	I0327 16:52:31.985252    9811 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0327 16:52:31.985255    9811 out.go:304] Setting ErrFile to fd 2...
	I0327 16:52:31.985257    9811 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0327 16:52:31.985386    9811 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18485-6511/.minikube/bin
	I0327 16:52:31.986486    9811 out.go:298] Setting JSON to false
	I0327 16:52:32.002903    9811 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":6722,"bootTime":1711576829,"procs":480,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.3.1","kernelVersion":"23.3.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0327 16:52:32.002978    9811 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0327 16:52:32.008914    9811 out.go:177] * [kubenet-244000] minikube v1.33.0-beta.0 on Darwin 14.3.1 (arm64)
	I0327 16:52:32.015791    9811 out.go:177]   - MINIKUBE_LOCATION=18485
	I0327 16:52:32.019790    9811 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/18485-6511/kubeconfig
	I0327 16:52:32.015824    9811 notify.go:220] Checking for updates...
	I0327 16:52:32.024776    9811 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0327 16:52:32.027846    9811 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0327 16:52:32.030844    9811 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18485-6511/.minikube
	I0327 16:52:32.033771    9811 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0327 16:52:32.037164    9811 config.go:182] Loaded profile config "multinode-266000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.29.3
	I0327 16:52:32.037232    9811 config.go:182] Loaded profile config "stopped-upgrade-017000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0327 16:52:32.037282    9811 driver.go:392] Setting default libvirt URI to qemu:///system
	I0327 16:52:32.041828    9811 out.go:177] * Using the qemu2 driver based on user configuration
	I0327 16:52:32.048792    9811 start.go:297] selected driver: qemu2
	I0327 16:52:32.048798    9811 start.go:901] validating driver "qemu2" against <nil>
	I0327 16:52:32.048811    9811 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0327 16:52:32.051136    9811 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0327 16:52:32.054800    9811 out.go:177] * Automatically selected the socket_vmnet network
	I0327 16:52:32.057836    9811 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0327 16:52:32.057895    9811 cni.go:80] network plugin configured as "kubenet", returning disabled
	I0327 16:52:32.057940    9811 start.go:340] cluster config:
	{Name:kubenet-244000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1711559786-18485@sha256:2dcab64da240d825290a528fa79ad3c32db45fe5f8be5150468234a7114eff82 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.3 ClusterName:kubenet-244000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:kubenet FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0327 16:52:32.062715    9811 iso.go:125] acquiring lock: {Name:mk26a7f41e004242919e6d5076ec0f9645f820ad Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0327 16:52:32.069608    9811 out.go:177] * Starting "kubenet-244000" primary control-plane node in "kubenet-244000" cluster
	I0327 16:52:32.073751    9811 preload.go:132] Checking if preload exists for k8s version v1.29.3 and runtime docker
	I0327 16:52:32.073765    9811 preload.go:147] Found local preload: /Users/jenkins/minikube-integration/18485-6511/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.29.3-docker-overlay2-arm64.tar.lz4
	I0327 16:52:32.073774    9811 cache.go:56] Caching tarball of preloaded images
	I0327 16:52:32.073833    9811 preload.go:173] Found /Users/jenkins/minikube-integration/18485-6511/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.29.3-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0327 16:52:32.073839    9811 cache.go:59] Finished verifying existence of preloaded tar for v1.29.3 on docker
	I0327 16:52:32.073901    9811 profile.go:142] Saving config to /Users/jenkins/minikube-integration/18485-6511/.minikube/profiles/kubenet-244000/config.json ...
	I0327 16:52:32.073912    9811 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/18485-6511/.minikube/profiles/kubenet-244000/config.json: {Name:mkd9689ff7f8181fd211f72b3bdd21c3cce6ec13 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0327 16:52:32.074129    9811 start.go:360] acquireMachinesLock for kubenet-244000: {Name:mk6b67ca98ff36f470ae98389325f11bd4950dc8 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0327 16:52:32.074160    9811 start.go:364] duration metric: took 24.625µs to acquireMachinesLock for "kubenet-244000"
	I0327 16:52:32.074172    9811 start.go:93] Provisioning new machine with config: &{Name:kubenet-244000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18485/minikube-v1.33.0-1711559712-18485-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1711559786-18485@sha256:2dcab64da240d825290a528fa79ad3c32db45fe5f8be5150468234a7114eff82 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.3 ClusterName:kubenet-244000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:kubenet FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0327 16:52:32.074198    9811 start.go:125] createHost starting for "" (driver="qemu2")
	I0327 16:52:32.081802    9811 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0327 16:52:32.098925    9811 start.go:159] libmachine.API.Create for "kubenet-244000" (driver="qemu2")
	I0327 16:52:32.098954    9811 client.go:168] LocalClient.Create starting
	I0327 16:52:32.099035    9811 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18485-6511/.minikube/certs/ca.pem
	I0327 16:52:32.099067    9811 main.go:141] libmachine: Decoding PEM data...
	I0327 16:52:32.099076    9811 main.go:141] libmachine: Parsing certificate...
	I0327 16:52:32.099128    9811 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18485-6511/.minikube/certs/cert.pem
	I0327 16:52:32.099150    9811 main.go:141] libmachine: Decoding PEM data...
	I0327 16:52:32.099159    9811 main.go:141] libmachine: Parsing certificate...
	I0327 16:52:32.099546    9811 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/18485-6511/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/18485-6511/.minikube/cache/iso/arm64/minikube-v1.33.0-1711559712-18485-arm64.iso...
	I0327 16:52:32.239699    9811 main.go:141] libmachine: Creating SSH key...
	I0327 16:52:32.330228    9811 main.go:141] libmachine: Creating Disk image...
	I0327 16:52:32.330236    9811 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0327 16:52:32.330415    9811 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/18485-6511/.minikube/machines/kubenet-244000/disk.qcow2.raw /Users/jenkins/minikube-integration/18485-6511/.minikube/machines/kubenet-244000/disk.qcow2
	I0327 16:52:32.343454    9811 main.go:141] libmachine: STDOUT: 
	I0327 16:52:32.343481    9811 main.go:141] libmachine: STDERR: 
	I0327 16:52:32.343539    9811 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/18485-6511/.minikube/machines/kubenet-244000/disk.qcow2 +20000M
	I0327 16:52:32.354621    9811 main.go:141] libmachine: STDOUT: Image resized.
	
	I0327 16:52:32.354643    9811 main.go:141] libmachine: STDERR: 
	I0327 16:52:32.354661    9811 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/18485-6511/.minikube/machines/kubenet-244000/disk.qcow2.raw and /Users/jenkins/minikube-integration/18485-6511/.minikube/machines/kubenet-244000/disk.qcow2
	I0327 16:52:32.354666    9811 main.go:141] libmachine: Starting QEMU VM...
	I0327 16:52:32.354695    9811 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18485-6511/.minikube/machines/kubenet-244000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18485-6511/.minikube/machines/kubenet-244000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18485-6511/.minikube/machines/kubenet-244000/qemu.pid -device virtio-net-pci,netdev=net0,mac=96:22:d1:68:1f:66 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18485-6511/.minikube/machines/kubenet-244000/disk.qcow2
	I0327 16:52:32.356532    9811 main.go:141] libmachine: STDOUT: 
	I0327 16:52:32.356547    9811 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0327 16:52:32.356565    9811 client.go:171] duration metric: took 257.615042ms to LocalClient.Create
	I0327 16:52:34.358768    9811 start.go:128] duration metric: took 2.284615375s to createHost
	I0327 16:52:34.358845    9811 start.go:83] releasing machines lock for "kubenet-244000", held for 2.284752s
	W0327 16:52:34.358945    9811 start.go:713] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0327 16:52:34.367681    9811 out.go:177] * Deleting "kubenet-244000" in qemu2 ...
	W0327 16:52:34.390818    9811 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0327 16:52:34.390856    9811 start.go:728] Will try again in 5 seconds ...
	I0327 16:52:39.392907    9811 start.go:360] acquireMachinesLock for kubenet-244000: {Name:mk6b67ca98ff36f470ae98389325f11bd4950dc8 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0327 16:52:39.393483    9811 start.go:364] duration metric: took 405.958µs to acquireMachinesLock for "kubenet-244000"
	I0327 16:52:39.393711    9811 start.go:93] Provisioning new machine with config: &{Name:kubenet-244000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18485/minikube-v1.33.0-1711559712-18485-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1711559786-18485@sha256:2dcab64da240d825290a528fa79ad3c32db45fe5f8be5150468234a7114eff82 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.3 ClusterName:kubenet-244000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:kubenet FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0327 16:52:39.394005    9811 start.go:125] createHost starting for "" (driver="qemu2")
	I0327 16:52:39.399817    9811 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0327 16:52:39.443932    9811 start.go:159] libmachine.API.Create for "kubenet-244000" (driver="qemu2")
	I0327 16:52:39.443980    9811 client.go:168] LocalClient.Create starting
	I0327 16:52:39.444130    9811 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18485-6511/.minikube/certs/ca.pem
	I0327 16:52:39.444205    9811 main.go:141] libmachine: Decoding PEM data...
	I0327 16:52:39.444221    9811 main.go:141] libmachine: Parsing certificate...
	I0327 16:52:39.444285    9811 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18485-6511/.minikube/certs/cert.pem
	I0327 16:52:39.444326    9811 main.go:141] libmachine: Decoding PEM data...
	I0327 16:52:39.444339    9811 main.go:141] libmachine: Parsing certificate...
	I0327 16:52:39.444902    9811 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/18485-6511/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/18485-6511/.minikube/cache/iso/arm64/minikube-v1.33.0-1711559712-18485-arm64.iso...
	I0327 16:52:39.596733    9811 main.go:141] libmachine: Creating SSH key...
	I0327 16:52:39.653380    9811 main.go:141] libmachine: Creating Disk image...
	I0327 16:52:39.653388    9811 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0327 16:52:39.653565    9811 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/18485-6511/.minikube/machines/kubenet-244000/disk.qcow2.raw /Users/jenkins/minikube-integration/18485-6511/.minikube/machines/kubenet-244000/disk.qcow2
	I0327 16:52:39.666199    9811 main.go:141] libmachine: STDOUT: 
	I0327 16:52:39.666222    9811 main.go:141] libmachine: STDERR: 
	I0327 16:52:39.666300    9811 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/18485-6511/.minikube/machines/kubenet-244000/disk.qcow2 +20000M
	I0327 16:52:39.677670    9811 main.go:141] libmachine: STDOUT: Image resized.
	
	I0327 16:52:39.677688    9811 main.go:141] libmachine: STDERR: 
	I0327 16:52:39.677702    9811 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/18485-6511/.minikube/machines/kubenet-244000/disk.qcow2.raw and /Users/jenkins/minikube-integration/18485-6511/.minikube/machines/kubenet-244000/disk.qcow2
	I0327 16:52:39.677706    9811 main.go:141] libmachine: Starting QEMU VM...
	I0327 16:52:39.677744    9811 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18485-6511/.minikube/machines/kubenet-244000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18485-6511/.minikube/machines/kubenet-244000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18485-6511/.minikube/machines/kubenet-244000/qemu.pid -device virtio-net-pci,netdev=net0,mac=0e:14:53:d0:25:a0 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18485-6511/.minikube/machines/kubenet-244000/disk.qcow2
	I0327 16:52:39.679556    9811 main.go:141] libmachine: STDOUT: 
	I0327 16:52:39.679572    9811 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0327 16:52:39.679584    9811 client.go:171] duration metric: took 235.607542ms to LocalClient.Create
	I0327 16:52:41.681692    9811 start.go:128] duration metric: took 2.287709917s to createHost
	I0327 16:52:41.681843    9811 start.go:83] releasing machines lock for "kubenet-244000", held for 2.288279834s
	W0327 16:52:41.682129    9811 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p kubenet-244000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p kubenet-244000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0327 16:52:41.693694    9811 out.go:177] 
	W0327 16:52:41.697786    9811 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0327 16:52:41.697805    9811 out.go:239] * 
	* 
	W0327 16:52:41.699832    9811 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0327 16:52:41.715664    9811 out.go:177] 

** /stderr **
net_test.go:114: failed start: exit status 80
--- FAIL: TestNetworkPlugins/group/kubenet/Start (9.77s)
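
Note: this failure, and the other NetworkPlugins failures below, share one proximate cause visible in the stderr above: socket_vmnet_client could not reach the socket_vmnet daemon, so every qemu-system-aarch64 launch died with `Failed to connect to "/var/run/socket_vmnet": Connection refused` before the VM booted. A quick triage sketch for the test host (assumes a Homebrew-managed socket_vmnet install; paths and service name may differ):

	# Does the unix socket exist, and is the daemon alive?
	ls -l /var/run/socket_vmnet
	pgrep -fl socket_vmnet || echo "socket_vmnet daemon is not running"
	# For a Homebrew-managed install, restart the service before re-running the test:
	sudo brew services restart socket_vmnet

When the daemon is healthy, socket_vmnet_client hands the connected socket to QEMU as file descriptor 3, which is what the `-netdev socket,id=net0,fd=3` argument in the logged command expects.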

TestNetworkPlugins/group/custom-flannel/Start (9.73s)

=== RUN   TestNetworkPlugins/group/custom-flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-darwin-arm64 start -p custom-flannel-244000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=qemu2 
net_test.go:112: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p custom-flannel-244000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=qemu2 : exit status 80 (9.727177625s)

-- stdout --
	* [custom-flannel-244000] minikube v1.33.0-beta.0 on Darwin 14.3.1 (arm64)
	  - MINIKUBE_LOCATION=18485
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18485-6511/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18485-6511/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "custom-flannel-244000" primary control-plane node in "custom-flannel-244000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "custom-flannel-244000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0327 16:52:43.960514    9926 out.go:291] Setting OutFile to fd 1 ...
	I0327 16:52:43.960654    9926 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0327 16:52:43.960657    9926 out.go:304] Setting ErrFile to fd 2...
	I0327 16:52:43.960660    9926 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0327 16:52:43.960806    9926 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18485-6511/.minikube/bin
	I0327 16:52:43.961912    9926 out.go:298] Setting JSON to false
	I0327 16:52:43.978384    9926 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":6734,"bootTime":1711576829,"procs":479,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.3.1","kernelVersion":"23.3.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0327 16:52:43.978460    9926 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0327 16:52:43.983369    9926 out.go:177] * [custom-flannel-244000] minikube v1.33.0-beta.0 on Darwin 14.3.1 (arm64)
	I0327 16:52:43.992372    9926 out.go:177]   - MINIKUBE_LOCATION=18485
	I0327 16:52:43.992420    9926 notify.go:220] Checking for updates...
	I0327 16:52:43.995312    9926 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/18485-6511/kubeconfig
	I0327 16:52:43.998356    9926 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0327 16:52:44.001283    9926 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0327 16:52:44.004208    9926 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18485-6511/.minikube
	I0327 16:52:44.007321    9926 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0327 16:52:44.010717    9926 config.go:182] Loaded profile config "multinode-266000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.29.3
	I0327 16:52:44.010787    9926 config.go:182] Loaded profile config "stopped-upgrade-017000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0327 16:52:44.010836    9926 driver.go:392] Setting default libvirt URI to qemu:///system
	I0327 16:52:44.014331    9926 out.go:177] * Using the qemu2 driver based on user configuration
	I0327 16:52:44.021294    9926 start.go:297] selected driver: qemu2
	I0327 16:52:44.021298    9926 start.go:901] validating driver "qemu2" against <nil>
	I0327 16:52:44.021303    9926 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0327 16:52:44.023445    9926 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0327 16:52:44.025064    9926 out.go:177] * Automatically selected the socket_vmnet network
	I0327 16:52:44.028402    9926 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0327 16:52:44.028449    9926 cni.go:84] Creating CNI manager for "testdata/kube-flannel.yaml"
	I0327 16:52:44.028457    9926 start_flags.go:319] Found "testdata/kube-flannel.yaml" CNI - setting NetworkPlugin=cni
	I0327 16:52:44.028489    9926 start.go:340] cluster config:
	{Name:custom-flannel-244000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1711559786-18485@sha256:2dcab64da240d825290a528fa79ad3c32db45fe5f8be5150468234a7114eff82 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.3 ClusterName:custom-flannel-244000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:testdata/kube-flannel.yaml} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0327 16:52:44.032626    9926 iso.go:125] acquiring lock: {Name:mk26a7f41e004242919e6d5076ec0f9645f820ad Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0327 16:52:44.039216    9926 out.go:177] * Starting "custom-flannel-244000" primary control-plane node in "custom-flannel-244000" cluster
	I0327 16:52:44.043275    9926 preload.go:132] Checking if preload exists for k8s version v1.29.3 and runtime docker
	I0327 16:52:44.043290    9926 preload.go:147] Found local preload: /Users/jenkins/minikube-integration/18485-6511/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.29.3-docker-overlay2-arm64.tar.lz4
	I0327 16:52:44.043301    9926 cache.go:56] Caching tarball of preloaded images
	I0327 16:52:44.043361    9926 preload.go:173] Found /Users/jenkins/minikube-integration/18485-6511/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.29.3-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0327 16:52:44.043368    9926 cache.go:59] Finished verifying existence of preloaded tar for v1.29.3 on docker
	I0327 16:52:44.043446    9926 profile.go:142] Saving config to /Users/jenkins/minikube-integration/18485-6511/.minikube/profiles/custom-flannel-244000/config.json ...
	I0327 16:52:44.043461    9926 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/18485-6511/.minikube/profiles/custom-flannel-244000/config.json: {Name:mkd024c46ddb2f5f2ea9cc927b572649a3fc9ff7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0327 16:52:44.043849    9926 start.go:360] acquireMachinesLock for custom-flannel-244000: {Name:mk6b67ca98ff36f470ae98389325f11bd4950dc8 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0327 16:52:44.043885    9926 start.go:364] duration metric: took 25.209µs to acquireMachinesLock for "custom-flannel-244000"
	I0327 16:52:44.043898    9926 start.go:93] Provisioning new machine with config: &{Name:custom-flannel-244000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18485/minikube-v1.33.0-1711559712-18485-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1711559786-18485@sha256:2dcab64da240d825290a528fa79ad3c32db45fe5f8be5150468234a7114eff82 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.3 ClusterName:custom-flannel-244000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:testdata/kube-flannel.yaml} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0327 16:52:44.043924    9926 start.go:125] createHost starting for "" (driver="qemu2")
	I0327 16:52:44.051319    9926 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0327 16:52:44.067474    9926 start.go:159] libmachine.API.Create for "custom-flannel-244000" (driver="qemu2")
	I0327 16:52:44.067498    9926 client.go:168] LocalClient.Create starting
	I0327 16:52:44.067546    9926 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18485-6511/.minikube/certs/ca.pem
	I0327 16:52:44.067578    9926 main.go:141] libmachine: Decoding PEM data...
	I0327 16:52:44.067585    9926 main.go:141] libmachine: Parsing certificate...
	I0327 16:52:44.067635    9926 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18485-6511/.minikube/certs/cert.pem
	I0327 16:52:44.067655    9926 main.go:141] libmachine: Decoding PEM data...
	I0327 16:52:44.067664    9926 main.go:141] libmachine: Parsing certificate...
	I0327 16:52:44.068004    9926 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/18485-6511/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/18485-6511/.minikube/cache/iso/arm64/minikube-v1.33.0-1711559712-18485-arm64.iso...
	I0327 16:52:44.206086    9926 main.go:141] libmachine: Creating SSH key...
	I0327 16:52:44.262644    9926 main.go:141] libmachine: Creating Disk image...
	I0327 16:52:44.262650    9926 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0327 16:52:44.262839    9926 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/18485-6511/.minikube/machines/custom-flannel-244000/disk.qcow2.raw /Users/jenkins/minikube-integration/18485-6511/.minikube/machines/custom-flannel-244000/disk.qcow2
	I0327 16:52:44.275988    9926 main.go:141] libmachine: STDOUT: 
	I0327 16:52:44.276011    9926 main.go:141] libmachine: STDERR: 
	I0327 16:52:44.276069    9926 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/18485-6511/.minikube/machines/custom-flannel-244000/disk.qcow2 +20000M
	I0327 16:52:44.287799    9926 main.go:141] libmachine: STDOUT: Image resized.
	
	I0327 16:52:44.287815    9926 main.go:141] libmachine: STDERR: 
	I0327 16:52:44.287842    9926 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/18485-6511/.minikube/machines/custom-flannel-244000/disk.qcow2.raw and /Users/jenkins/minikube-integration/18485-6511/.minikube/machines/custom-flannel-244000/disk.qcow2
	I0327 16:52:44.287846    9926 main.go:141] libmachine: Starting QEMU VM...
	I0327 16:52:44.287876    9926 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18485-6511/.minikube/machines/custom-flannel-244000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18485-6511/.minikube/machines/custom-flannel-244000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18485-6511/.minikube/machines/custom-flannel-244000/qemu.pid -device virtio-net-pci,netdev=net0,mac=ba:bc:95:93:ce:68 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18485-6511/.minikube/machines/custom-flannel-244000/disk.qcow2
	I0327 16:52:44.289735    9926 main.go:141] libmachine: STDOUT: 
	I0327 16:52:44.289750    9926 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0327 16:52:44.289770    9926 client.go:171] duration metric: took 222.274584ms to LocalClient.Create
	I0327 16:52:46.291909    9926 start.go:128] duration metric: took 2.248027916s to createHost
	I0327 16:52:46.291986    9926 start.go:83] releasing machines lock for "custom-flannel-244000", held for 2.248165792s
	W0327 16:52:46.292083    9926 start.go:713] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0327 16:52:46.303660    9926 out.go:177] * Deleting "custom-flannel-244000" in qemu2 ...
	W0327 16:52:46.323013    9926 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0327 16:52:46.323032    9926 start.go:728] Will try again in 5 seconds ...
	I0327 16:52:51.325153    9926 start.go:360] acquireMachinesLock for custom-flannel-244000: {Name:mk6b67ca98ff36f470ae98389325f11bd4950dc8 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0327 16:52:51.325770    9926 start.go:364] duration metric: took 436.208µs to acquireMachinesLock for "custom-flannel-244000"
	I0327 16:52:51.325950    9926 start.go:93] Provisioning new machine with config: &{Name:custom-flannel-244000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18485/minikube-v1.33.0-1711559712-18485-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1711559786-18485@sha256:2dcab64da240d825290a528fa79ad3c32db45fe5f8be5150468234a7114eff82 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.3 ClusterName:custom-flannel-244000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:testdata/kube-flannel.yaml} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0327 16:52:51.326233    9926 start.go:125] createHost starting for "" (driver="qemu2")
	I0327 16:52:51.333830    9926 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0327 16:52:51.381338    9926 start.go:159] libmachine.API.Create for "custom-flannel-244000" (driver="qemu2")
	I0327 16:52:51.381405    9926 client.go:168] LocalClient.Create starting
	I0327 16:52:51.381528    9926 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18485-6511/.minikube/certs/ca.pem
	I0327 16:52:51.381590    9926 main.go:141] libmachine: Decoding PEM data...
	I0327 16:52:51.381606    9926 main.go:141] libmachine: Parsing certificate...
	I0327 16:52:51.381683    9926 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18485-6511/.minikube/certs/cert.pem
	I0327 16:52:51.381724    9926 main.go:141] libmachine: Decoding PEM data...
	I0327 16:52:51.381741    9926 main.go:141] libmachine: Parsing certificate...
	I0327 16:52:51.382305    9926 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/18485-6511/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/18485-6511/.minikube/cache/iso/arm64/minikube-v1.33.0-1711559712-18485-arm64.iso...
	I0327 16:52:51.533718    9926 main.go:141] libmachine: Creating SSH key...
	I0327 16:52:51.595409    9926 main.go:141] libmachine: Creating Disk image...
	I0327 16:52:51.595415    9926 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0327 16:52:51.595588    9926 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/18485-6511/.minikube/machines/custom-flannel-244000/disk.qcow2.raw /Users/jenkins/minikube-integration/18485-6511/.minikube/machines/custom-flannel-244000/disk.qcow2
	I0327 16:52:51.608305    9926 main.go:141] libmachine: STDOUT: 
	I0327 16:52:51.608329    9926 main.go:141] libmachine: STDERR: 
	I0327 16:52:51.608397    9926 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/18485-6511/.minikube/machines/custom-flannel-244000/disk.qcow2 +20000M
	I0327 16:52:51.619114    9926 main.go:141] libmachine: STDOUT: Image resized.
	
	I0327 16:52:51.619133    9926 main.go:141] libmachine: STDERR: 
	I0327 16:52:51.619148    9926 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/18485-6511/.minikube/machines/custom-flannel-244000/disk.qcow2.raw and /Users/jenkins/minikube-integration/18485-6511/.minikube/machines/custom-flannel-244000/disk.qcow2
	I0327 16:52:51.619164    9926 main.go:141] libmachine: Starting QEMU VM...
	I0327 16:52:51.619195    9926 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18485-6511/.minikube/machines/custom-flannel-244000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18485-6511/.minikube/machines/custom-flannel-244000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18485-6511/.minikube/machines/custom-flannel-244000/qemu.pid -device virtio-net-pci,netdev=net0,mac=fa:08:96:1e:d0:d8 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18485-6511/.minikube/machines/custom-flannel-244000/disk.qcow2
	I0327 16:52:51.620964    9926 main.go:141] libmachine: STDOUT: 
	I0327 16:52:51.620980    9926 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0327 16:52:51.620992    9926 client.go:171] duration metric: took 239.589791ms to LocalClient.Create
	I0327 16:52:53.623104    9926 start.go:128] duration metric: took 2.296909917s to createHost
	I0327 16:52:53.623142    9926 start.go:83] releasing machines lock for "custom-flannel-244000", held for 2.297389541s
	W0327 16:52:53.623395    9926 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p custom-flannel-244000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p custom-flannel-244000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0327 16:52:53.631807    9926 out.go:177] 
	W0327 16:52:53.635926    9926 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0327 16:52:53.635971    9926 out.go:239] * 
	* 
	W0327 16:52:53.637508    9926 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0327 16:52:53.643826    9926 out.go:177] 

** /stderr **
net_test.go:114: failed start: exit status 80
--- FAIL: TestNetworkPlugins/group/custom-flannel/Start (9.73s)
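
The refusal can also be confirmed at the socket level, independent of minikube: connecting to a unix socket nobody is listening on fails immediately. A minimal check, plus one way to bring the daemon up in the foreground for debugging (a sketch; the --vmnet-gateway value is an assumption, and the binary path is inferred from the SocketVMnetClientPath/SocketVMnetPath values in the config dumps above):

	# BSD nc on macOS supports -U for unix-domain sockets; this exits at once
	# with "Connection refused" while the daemon is down:
	nc -U /var/run/socket_vmnet < /dev/null
	# Run the daemon in the foreground (root is required for vmnet access):
	sudo /opt/socket_vmnet/bin/socket_vmnet --vmnet-gateway=192.168.105.1 /var/run/socket_vmnet

Once the socket accepts connections, re-running a single failed start, e.g. `out/minikube-darwin-arm64 start -p custom-flannel-244000 --cni=testdata/kube-flannel.yaml --driver=qemu2`, should get past host creation.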

TestNetworkPlugins/group/calico/Start (9.82s)

=== RUN   TestNetworkPlugins/group/calico/Start
net_test.go:112: (dbg) Run:  out/minikube-darwin-arm64 start -p calico-244000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=qemu2 
net_test.go:112: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p calico-244000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=qemu2 : exit status 80 (9.813607042s)

-- stdout --
	* [calico-244000] minikube v1.33.0-beta.0 on Darwin 14.3.1 (arm64)
	  - MINIKUBE_LOCATION=18485
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18485-6511/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18485-6511/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "calico-244000" primary control-plane node in "calico-244000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "calico-244000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0327 16:52:56.099436   10044 out.go:291] Setting OutFile to fd 1 ...
	I0327 16:52:56.099577   10044 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0327 16:52:56.099581   10044 out.go:304] Setting ErrFile to fd 2...
	I0327 16:52:56.099582   10044 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0327 16:52:56.099714   10044 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18485-6511/.minikube/bin
	I0327 16:52:56.100833   10044 out.go:298] Setting JSON to false
	I0327 16:52:56.117020   10044 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":6747,"bootTime":1711576829,"procs":479,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.3.1","kernelVersion":"23.3.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0327 16:52:56.117093   10044 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0327 16:52:56.122861   10044 out.go:177] * [calico-244000] minikube v1.33.0-beta.0 on Darwin 14.3.1 (arm64)
	I0327 16:52:56.129775   10044 out.go:177]   - MINIKUBE_LOCATION=18485
	I0327 16:52:56.129855   10044 notify.go:220] Checking for updates...
	I0327 16:52:56.135827   10044 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/18485-6511/kubeconfig
	I0327 16:52:56.138820   10044 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0327 16:52:56.141874   10044 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0327 16:52:56.144972   10044 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18485-6511/.minikube
	I0327 16:52:56.147895   10044 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0327 16:52:56.151207   10044 config.go:182] Loaded profile config "multinode-266000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.29.3
	I0327 16:52:56.151270   10044 config.go:182] Loaded profile config "stopped-upgrade-017000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0327 16:52:56.151319   10044 driver.go:392] Setting default libvirt URI to qemu:///system
	I0327 16:52:56.155866   10044 out.go:177] * Using the qemu2 driver based on user configuration
	I0327 16:52:56.162800   10044 start.go:297] selected driver: qemu2
	I0327 16:52:56.162805   10044 start.go:901] validating driver "qemu2" against <nil>
	I0327 16:52:56.162810   10044 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0327 16:52:56.164899   10044 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0327 16:52:56.167838   10044 out.go:177] * Automatically selected the socket_vmnet network
	I0327 16:52:56.170783   10044 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0327 16:52:56.170800   10044 cni.go:84] Creating CNI manager for "calico"
	I0327 16:52:56.170803   10044 start_flags.go:319] Found "Calico" CNI - setting NetworkPlugin=cni
	I0327 16:52:56.170835   10044 start.go:340] cluster config:
	{Name:calico-244000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1711559786-18485@sha256:2dcab64da240d825290a528fa79ad3c32db45fe5f8be5150468234a7114eff82 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.3 ClusterName:calico-244000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:calico} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0327 16:52:56.175089   10044 iso.go:125] acquiring lock: {Name:mk26a7f41e004242919e6d5076ec0f9645f820ad Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0327 16:52:56.181872   10044 out.go:177] * Starting "calico-244000" primary control-plane node in "calico-244000" cluster
	I0327 16:52:56.185836   10044 preload.go:132] Checking if preload exists for k8s version v1.29.3 and runtime docker
	I0327 16:52:56.185848   10044 preload.go:147] Found local preload: /Users/jenkins/minikube-integration/18485-6511/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.29.3-docker-overlay2-arm64.tar.lz4
	I0327 16:52:56.185857   10044 cache.go:56] Caching tarball of preloaded images
	I0327 16:52:56.185915   10044 preload.go:173] Found /Users/jenkins/minikube-integration/18485-6511/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.29.3-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0327 16:52:56.185923   10044 cache.go:59] Finished verifying existence of preloaded tar for v1.29.3 on docker
	I0327 16:52:56.185970   10044 profile.go:142] Saving config to /Users/jenkins/minikube-integration/18485-6511/.minikube/profiles/calico-244000/config.json ...
	I0327 16:52:56.185979   10044 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/18485-6511/.minikube/profiles/calico-244000/config.json: {Name:mk7ac6fcb5390f559f8e961d157afceb1f80aed5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0327 16:52:56.186179   10044 start.go:360] acquireMachinesLock for calico-244000: {Name:mk6b67ca98ff36f470ae98389325f11bd4950dc8 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0327 16:52:56.186207   10044 start.go:364] duration metric: took 22.708µs to acquireMachinesLock for "calico-244000"
	I0327 16:52:56.186218   10044 start.go:93] Provisioning new machine with config: &{Name:calico-244000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18485/minikube-v1.33.0-1711559712-18485-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1711559786-18485@sha256:2dcab64da240d825290a528fa79ad3c32db45fe5f8be5150468234a7114eff82 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.3 ClusterName:calico-244000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:calico} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0327 16:52:56.186243   10044 start.go:125] createHost starting for "" (driver="qemu2")
	I0327 16:52:56.194796   10044 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0327 16:52:56.209552   10044 start.go:159] libmachine.API.Create for "calico-244000" (driver="qemu2")
	I0327 16:52:56.209575   10044 client.go:168] LocalClient.Create starting
	I0327 16:52:56.209631   10044 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18485-6511/.minikube/certs/ca.pem
	I0327 16:52:56.209662   10044 main.go:141] libmachine: Decoding PEM data...
	I0327 16:52:56.209673   10044 main.go:141] libmachine: Parsing certificate...
	I0327 16:52:56.209717   10044 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18485-6511/.minikube/certs/cert.pem
	I0327 16:52:56.209742   10044 main.go:141] libmachine: Decoding PEM data...
	I0327 16:52:56.209749   10044 main.go:141] libmachine: Parsing certificate...
	I0327 16:52:56.210104   10044 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/18485-6511/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/18485-6511/.minikube/cache/iso/arm64/minikube-v1.33.0-1711559712-18485-arm64.iso...
	I0327 16:52:56.349643   10044 main.go:141] libmachine: Creating SSH key...
	I0327 16:52:56.489941   10044 main.go:141] libmachine: Creating Disk image...
	I0327 16:52:56.489949   10044 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0327 16:52:56.490123   10044 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/18485-6511/.minikube/machines/calico-244000/disk.qcow2.raw /Users/jenkins/minikube-integration/18485-6511/.minikube/machines/calico-244000/disk.qcow2
	I0327 16:52:56.502739   10044 main.go:141] libmachine: STDOUT: 
	I0327 16:52:56.502772   10044 main.go:141] libmachine: STDERR: 
	I0327 16:52:56.502833   10044 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/18485-6511/.minikube/machines/calico-244000/disk.qcow2 +20000M
	I0327 16:52:56.513858   10044 main.go:141] libmachine: STDOUT: Image resized.
	
	I0327 16:52:56.513886   10044 main.go:141] libmachine: STDERR: 
	I0327 16:52:56.513902   10044 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/18485-6511/.minikube/machines/calico-244000/disk.qcow2.raw and /Users/jenkins/minikube-integration/18485-6511/.minikube/machines/calico-244000/disk.qcow2
	I0327 16:52:56.513908   10044 main.go:141] libmachine: Starting QEMU VM...
	I0327 16:52:56.513947   10044 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18485-6511/.minikube/machines/calico-244000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18485-6511/.minikube/machines/calico-244000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18485-6511/.minikube/machines/calico-244000/qemu.pid -device virtio-net-pci,netdev=net0,mac=9e:76:e8:2a:af:ed -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18485-6511/.minikube/machines/calico-244000/disk.qcow2
	I0327 16:52:56.515751   10044 main.go:141] libmachine: STDOUT: 
	I0327 16:52:56.515771   10044 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0327 16:52:56.515790   10044 client.go:171] duration metric: took 306.220125ms to LocalClient.Create
	I0327 16:52:58.516401   10044 start.go:128] duration metric: took 2.330185833s to createHost
	I0327 16:52:58.516500   10044 start.go:83] releasing machines lock for "calico-244000", held for 2.330359292s
	W0327 16:52:58.516596   10044 start.go:713] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0327 16:52:58.528531   10044 out.go:177] * Deleting "calico-244000" in qemu2 ...
	W0327 16:52:58.552879   10044 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0327 16:52:58.552919   10044 start.go:728] Will try again in 5 seconds ...
	I0327 16:53:03.554881   10044 start.go:360] acquireMachinesLock for calico-244000: {Name:mk6b67ca98ff36f470ae98389325f11bd4950dc8 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0327 16:53:03.555244   10044 start.go:364] duration metric: took 287.459µs to acquireMachinesLock for "calico-244000"
	I0327 16:53:03.555302   10044 start.go:93] Provisioning new machine with config: &{Name:calico-244000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18485/minikube-v1.33.0-1711559712-18485-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1711559786-18485@sha256:2dcab64da240d825290a528fa79ad3c32db45fe5f8be5150468234a7114eff82 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.3 ClusterName:calico-244000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:calico} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0327 16:53:03.555445   10044 start.go:125] createHost starting for "" (driver="qemu2")
	I0327 16:53:03.565796   10044 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0327 16:53:03.602102   10044 start.go:159] libmachine.API.Create for "calico-244000" (driver="qemu2")
	I0327 16:53:03.602145   10044 client.go:168] LocalClient.Create starting
	I0327 16:53:03.602255   10044 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18485-6511/.minikube/certs/ca.pem
	I0327 16:53:03.602320   10044 main.go:141] libmachine: Decoding PEM data...
	I0327 16:53:03.602336   10044 main.go:141] libmachine: Parsing certificate...
	I0327 16:53:03.602397   10044 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18485-6511/.minikube/certs/cert.pem
	I0327 16:53:03.602434   10044 main.go:141] libmachine: Decoding PEM data...
	I0327 16:53:03.602443   10044 main.go:141] libmachine: Parsing certificate...
	I0327 16:53:03.602874   10044 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/18485-6511/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/18485-6511/.minikube/cache/iso/arm64/minikube-v1.33.0-1711559712-18485-arm64.iso...
	I0327 16:53:03.749623   10044 main.go:141] libmachine: Creating SSH key...
	I0327 16:53:03.808726   10044 main.go:141] libmachine: Creating Disk image...
	I0327 16:53:03.808735   10044 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0327 16:53:03.808923   10044 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/18485-6511/.minikube/machines/calico-244000/disk.qcow2.raw /Users/jenkins/minikube-integration/18485-6511/.minikube/machines/calico-244000/disk.qcow2
	I0327 16:53:03.821523   10044 main.go:141] libmachine: STDOUT: 
	I0327 16:53:03.821546   10044 main.go:141] libmachine: STDERR: 
	I0327 16:53:03.821621   10044 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/18485-6511/.minikube/machines/calico-244000/disk.qcow2 +20000M
	I0327 16:53:03.832349   10044 main.go:141] libmachine: STDOUT: Image resized.
	
	I0327 16:53:03.832367   10044 main.go:141] libmachine: STDERR: 
	I0327 16:53:03.832383   10044 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/18485-6511/.minikube/machines/calico-244000/disk.qcow2.raw and /Users/jenkins/minikube-integration/18485-6511/.minikube/machines/calico-244000/disk.qcow2
	I0327 16:53:03.832390   10044 main.go:141] libmachine: Starting QEMU VM...
	I0327 16:53:03.832440   10044 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18485-6511/.minikube/machines/calico-244000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18485-6511/.minikube/machines/calico-244000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18485-6511/.minikube/machines/calico-244000/qemu.pid -device virtio-net-pci,netdev=net0,mac=76:cb:6a:01:aa:94 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18485-6511/.minikube/machines/calico-244000/disk.qcow2
	I0327 16:53:03.834245   10044 main.go:141] libmachine: STDOUT: 
	I0327 16:53:03.834262   10044 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0327 16:53:03.834280   10044 client.go:171] duration metric: took 232.138625ms to LocalClient.Create
	I0327 16:53:05.836428   10044 start.go:128] duration metric: took 2.281021959s to createHost
	I0327 16:53:05.836515   10044 start.go:83] releasing machines lock for "calico-244000", held for 2.281297166s
	W0327 16:53:05.837033   10044 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p calico-244000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p calico-244000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0327 16:53:05.850683   10044 out.go:177] 
	W0327 16:53:05.854724   10044 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0327 16:53:05.854752   10044 out.go:239] * 
	* 
	W0327 16:53:05.857557   10044 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0327 16:53:05.868683   10044 out.go:177] 

** /stderr **
net_test.go:114: failed start: exit status 80
--- FAIL: TestNetworkPlugins/group/calico/Start (9.82s)
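
Every attempt in this test dies at the same step: socket_vmnet_client cannot reach the daemon socket at /var/run/socket_vmnet ("Connection refused"), so the qemu-system-aarch64 process is never attached to the vmnet network, and minikube exits with status 80 (the GUEST_PROVISION error class named in the "Exiting due to GUEST_PROVISION" line above). A minimal triage sketch on the affected host, assuming the socket and binary paths logged above; the --vmnet-gateway value is an illustrative choice, not taken from this report:

	# Is anything listening on the daemon socket?
	ls -l /var/run/socket_vmnet
	# Is the socket_vmnet daemon running at all?
	pgrep -fl socket_vmnet
	# If not, start it by hand (vmnet.framework requires root):
	sudo /opt/socket_vmnet/bin/socket_vmnet --vmnet-gateway=192.168.105.1 /var/run/socket_vmnet

With the daemon back, re-running the start command above should get past host creation; every remaining failure in this group shows the same signature.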

TestNetworkPlugins/group/false/Start (10.05s)

=== RUN   TestNetworkPlugins/group/false/Start
net_test.go:112: (dbg) Run:  out/minikube-darwin-arm64 start -p false-244000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=false --driver=qemu2 
net_test.go:112: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p false-244000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=false --driver=qemu2 : exit status 80 (10.051948167s)

-- stdout --
	* [false-244000] minikube v1.33.0-beta.0 on Darwin 14.3.1 (arm64)
	  - MINIKUBE_LOCATION=18485
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18485-6511/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18485-6511/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "false-244000" primary control-plane node in "false-244000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "false-244000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0327 16:53:08.463641   10162 out.go:291] Setting OutFile to fd 1 ...
	I0327 16:53:08.463778   10162 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0327 16:53:08.463781   10162 out.go:304] Setting ErrFile to fd 2...
	I0327 16:53:08.463783   10162 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0327 16:53:08.463928   10162 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18485-6511/.minikube/bin
	I0327 16:53:08.465888   10162 out.go:298] Setting JSON to false
	I0327 16:53:08.484705   10162 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":6759,"bootTime":1711576829,"procs":478,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.3.1","kernelVersion":"23.3.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0327 16:53:08.484794   10162 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0327 16:53:08.489528   10162 out.go:177] * [false-244000] minikube v1.33.0-beta.0 on Darwin 14.3.1 (arm64)
	I0327 16:53:08.500441   10162 out.go:177]   - MINIKUBE_LOCATION=18485
	I0327 16:53:08.496574   10162 notify.go:220] Checking for updates...
	I0327 16:53:08.514519   10162 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/18485-6511/kubeconfig
	I0327 16:53:08.522475   10162 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0327 16:53:08.529435   10162 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0327 16:53:08.541466   10162 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18485-6511/.minikube
	I0327 16:53:08.552464   10162 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0327 16:53:08.560106   10162 config.go:182] Loaded profile config "multinode-266000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.29.3
	I0327 16:53:08.560178   10162 config.go:182] Loaded profile config "stopped-upgrade-017000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0327 16:53:08.560230   10162 driver.go:392] Setting default libvirt URI to qemu:///system
	I0327 16:53:08.566452   10162 out.go:177] * Using the qemu2 driver based on user configuration
	I0327 16:53:08.573470   10162 start.go:297] selected driver: qemu2
	I0327 16:53:08.573477   10162 start.go:901] validating driver "qemu2" against <nil>
	I0327 16:53:08.573483   10162 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0327 16:53:08.575870   10162 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0327 16:53:08.578479   10162 out.go:177] * Automatically selected the socket_vmnet network
	I0327 16:53:08.581590   10162 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0327 16:53:08.581628   10162 cni.go:84] Creating CNI manager for "false"
	I0327 16:53:08.581666   10162 start.go:340] cluster config:
	{Name:false-244000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1711559786-18485@sha256:2dcab64da240d825290a528fa79ad3c32db45fe5f8be5150468234a7114eff82 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.3 ClusterName:false-244000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:false} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0327 16:53:08.585997   10162 iso.go:125] acquiring lock: {Name:mk26a7f41e004242919e6d5076ec0f9645f820ad Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0327 16:53:08.593461   10162 out.go:177] * Starting "false-244000" primary control-plane node in "false-244000" cluster
	I0327 16:53:08.597367   10162 preload.go:132] Checking if preload exists for k8s version v1.29.3 and runtime docker
	I0327 16:53:08.597385   10162 preload.go:147] Found local preload: /Users/jenkins/minikube-integration/18485-6511/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.29.3-docker-overlay2-arm64.tar.lz4
	I0327 16:53:08.597392   10162 cache.go:56] Caching tarball of preloaded images
	I0327 16:53:08.597459   10162 preload.go:173] Found /Users/jenkins/minikube-integration/18485-6511/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.29.3-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0327 16:53:08.597465   10162 cache.go:59] Finished verifying existence of preloaded tar for v1.29.3 on docker
	I0327 16:53:08.597532   10162 profile.go:142] Saving config to /Users/jenkins/minikube-integration/18485-6511/.minikube/profiles/false-244000/config.json ...
	I0327 16:53:08.597543   10162 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/18485-6511/.minikube/profiles/false-244000/config.json: {Name:mkf6923f082d0d0b9e45e7e8cdc4677f8a86f2ec Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0327 16:53:08.597844   10162 start.go:360] acquireMachinesLock for false-244000: {Name:mk6b67ca98ff36f470ae98389325f11bd4950dc8 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0327 16:53:08.597872   10162 start.go:364] duration metric: took 23µs to acquireMachinesLock for "false-244000"
	I0327 16:53:08.597884   10162 start.go:93] Provisioning new machine with config: &{Name:false-244000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18485/minikube-v1.33.0-1711559712-18485-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1711559786-18485@sha256:2dcab64da240d825290a528fa79ad3c32db45fe5f8be5150468234a7114eff82 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.3 ClusterName:false-244000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:false} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0327 16:53:08.597917   10162 start.go:125] createHost starting for "" (driver="qemu2")
	I0327 16:53:08.606304   10162 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0327 16:53:08.622208   10162 start.go:159] libmachine.API.Create for "false-244000" (driver="qemu2")
	I0327 16:53:08.622241   10162 client.go:168] LocalClient.Create starting
	I0327 16:53:08.622313   10162 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18485-6511/.minikube/certs/ca.pem
	I0327 16:53:08.622343   10162 main.go:141] libmachine: Decoding PEM data...
	I0327 16:53:08.622356   10162 main.go:141] libmachine: Parsing certificate...
	I0327 16:53:08.622405   10162 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18485-6511/.minikube/certs/cert.pem
	I0327 16:53:08.622426   10162 main.go:141] libmachine: Decoding PEM data...
	I0327 16:53:08.622433   10162 main.go:141] libmachine: Parsing certificate...
	I0327 16:53:08.622859   10162 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/18485-6511/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/18485-6511/.minikube/cache/iso/arm64/minikube-v1.33.0-1711559712-18485-arm64.iso...
	I0327 16:53:08.867872   10162 main.go:141] libmachine: Creating SSH key...
	I0327 16:53:08.973782   10162 main.go:141] libmachine: Creating Disk image...
	I0327 16:53:08.973790   10162 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0327 16:53:08.973975   10162 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/18485-6511/.minikube/machines/false-244000/disk.qcow2.raw /Users/jenkins/minikube-integration/18485-6511/.minikube/machines/false-244000/disk.qcow2
	I0327 16:53:08.998365   10162 main.go:141] libmachine: STDOUT: 
	I0327 16:53:08.998387   10162 main.go:141] libmachine: STDERR: 
	I0327 16:53:08.998458   10162 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/18485-6511/.minikube/machines/false-244000/disk.qcow2 +20000M
	I0327 16:53:09.011326   10162 main.go:141] libmachine: STDOUT: Image resized.
	
	I0327 16:53:09.011349   10162 main.go:141] libmachine: STDERR: 
	I0327 16:53:09.011378   10162 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/18485-6511/.minikube/machines/false-244000/disk.qcow2.raw and /Users/jenkins/minikube-integration/18485-6511/.minikube/machines/false-244000/disk.qcow2
	I0327 16:53:09.011383   10162 main.go:141] libmachine: Starting QEMU VM...
	I0327 16:53:09.011418   10162 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18485-6511/.minikube/machines/false-244000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18485-6511/.minikube/machines/false-244000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18485-6511/.minikube/machines/false-244000/qemu.pid -device virtio-net-pci,netdev=net0,mac=fa:52:88:f6:86:78 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18485-6511/.minikube/machines/false-244000/disk.qcow2
	I0327 16:53:09.013552   10162 main.go:141] libmachine: STDOUT: 
	I0327 16:53:09.013570   10162 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0327 16:53:09.013590   10162 client.go:171] duration metric: took 391.356542ms to LocalClient.Create
	I0327 16:53:11.015774   10162 start.go:128] duration metric: took 2.417901417s to createHost
	I0327 16:53:11.015856   10162 start.go:83] releasing machines lock for "false-244000", held for 2.418053125s
	W0327 16:53:11.015974   10162 start.go:713] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0327 16:53:11.024240   10162 out.go:177] * Deleting "false-244000" in qemu2 ...
	W0327 16:53:11.056303   10162 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0327 16:53:11.056349   10162 start.go:728] Will try again in 5 seconds ...
	I0327 16:53:16.057143   10162 start.go:360] acquireMachinesLock for false-244000: {Name:mk6b67ca98ff36f470ae98389325f11bd4950dc8 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0327 16:53:16.057723   10162 start.go:364] duration metric: took 456.5µs to acquireMachinesLock for "false-244000"
	I0327 16:53:16.057868   10162 start.go:93] Provisioning new machine with config: &{Name:false-244000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18485/minikube-v1.33.0-1711559712-18485-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1711559786-18485@sha256:2dcab64da240d825290a528fa79ad3c32db45fe5f8be5150468234a7114eff82 Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.3 ClusterName:false-244000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:false} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0327 16:53:16.058258   10162 start.go:125] createHost starting for "" (driver="qemu2")
	I0327 16:53:16.067914   10162 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0327 16:53:16.118012   10162 start.go:159] libmachine.API.Create for "false-244000" (driver="qemu2")
	I0327 16:53:16.118065   10162 client.go:168] LocalClient.Create starting
	I0327 16:53:16.118165   10162 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18485-6511/.minikube/certs/ca.pem
	I0327 16:53:16.118227   10162 main.go:141] libmachine: Decoding PEM data...
	I0327 16:53:16.118245   10162 main.go:141] libmachine: Parsing certificate...
	I0327 16:53:16.118303   10162 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18485-6511/.minikube/certs/cert.pem
	I0327 16:53:16.118343   10162 main.go:141] libmachine: Decoding PEM data...
	I0327 16:53:16.118355   10162 main.go:141] libmachine: Parsing certificate...
	I0327 16:53:16.118949   10162 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/18485-6511/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/18485-6511/.minikube/cache/iso/arm64/minikube-v1.33.0-1711559712-18485-arm64.iso...
	I0327 16:53:16.276210   10162 main.go:141] libmachine: Creating SSH key...
	I0327 16:53:16.408251   10162 main.go:141] libmachine: Creating Disk image...
	I0327 16:53:16.408259   10162 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0327 16:53:16.408458   10162 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/18485-6511/.minikube/machines/false-244000/disk.qcow2.raw /Users/jenkins/minikube-integration/18485-6511/.minikube/machines/false-244000/disk.qcow2
	I0327 16:53:16.421228   10162 main.go:141] libmachine: STDOUT: 
	I0327 16:53:16.421250   10162 main.go:141] libmachine: STDERR: 
	I0327 16:53:16.421304   10162 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/18485-6511/.minikube/machines/false-244000/disk.qcow2 +20000M
	I0327 16:53:16.432324   10162 main.go:141] libmachine: STDOUT: Image resized.
	
	I0327 16:53:16.432345   10162 main.go:141] libmachine: STDERR: 
	I0327 16:53:16.432359   10162 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/18485-6511/.minikube/machines/false-244000/disk.qcow2.raw and /Users/jenkins/minikube-integration/18485-6511/.minikube/machines/false-244000/disk.qcow2
	I0327 16:53:16.432364   10162 main.go:141] libmachine: Starting QEMU VM...
	I0327 16:53:16.432396   10162 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18485-6511/.minikube/machines/false-244000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18485-6511/.minikube/machines/false-244000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18485-6511/.minikube/machines/false-244000/qemu.pid -device virtio-net-pci,netdev=net0,mac=2a:9a:62:94:af:b6 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18485-6511/.minikube/machines/false-244000/disk.qcow2
	I0327 16:53:16.434253   10162 main.go:141] libmachine: STDOUT: 
	I0327 16:53:16.434270   10162 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0327 16:53:16.434283   10162 client.go:171] duration metric: took 316.222875ms to LocalClient.Create
	I0327 16:53:18.436436   10162 start.go:128] duration metric: took 2.378210792s to createHost
	I0327 16:53:18.436511   10162 start.go:83] releasing machines lock for "false-244000", held for 2.37884025s
	W0327 16:53:18.436947   10162 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p false-244000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p false-244000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0327 16:53:18.445497   10162 out.go:177] 
	W0327 16:53:18.452709   10162 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0327 16:53:18.452744   10162 out.go:239] * 
	* 
	W0327 16:53:18.455476   10162 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0327 16:53:18.464929   10162 out.go:177] 

** /stderr **
net_test.go:114: failed start: exit status 80
--- FAIL: TestNetworkPlugins/group/false/Start (10.05s)
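
Note that the disk-preparation half of each attempt (qemu-img convert, qemu-img resize) succeeds every time; only the socket_vmnet connection fails. One way to confirm the QEMU side in isolation is to replay the logged invocation with user-mode networking substituted for socket_vmnet. A sketch under that assumption (paths copied from the log above; the -netdev user backend is a stand-in for illustration, not what the test actually uses):

	qemu-system-aarch64 -M virt,highmem=off -cpu host -accel hvf \
	  -m 3072 -smp 2 -display none \
	  -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash \
	  -boot d -cdrom /Users/jenkins/minikube-integration/18485-6511/.minikube/machines/false-244000/boot2docker.iso \
	  -device virtio-net-pci,netdev=net0 \
	  -netdev user,id=net0 \
	  /Users/jenkins/minikube-integration/18485-6511/.minikube/machines/false-244000/disk.qcow2

If this boots the ISO, the failure is confined to the socket_vmnet daemon rather than to QEMU, HVF, or the image.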

TestStartStop/group/old-k8s-version/serial/FirstStart (9.85s)

=== RUN   TestStartStop/group/old-k8s-version/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-darwin-arm64 start -p old-k8s-version-386000 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=qemu2  --kubernetes-version=v1.20.0
start_stop_delete_test.go:186: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p old-k8s-version-386000 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=qemu2  --kubernetes-version=v1.20.0: exit status 80 (9.783482792s)

-- stdout --
	* [old-k8s-version-386000] minikube v1.33.0-beta.0 on Darwin 14.3.1 (arm64)
	  - MINIKUBE_LOCATION=18485
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18485-6511/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18485-6511/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "old-k8s-version-386000" primary control-plane node in "old-k8s-version-386000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "old-k8s-version-386000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0327 16:53:20.757239   10278 out.go:291] Setting OutFile to fd 1 ...
	I0327 16:53:20.757349   10278 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0327 16:53:20.757353   10278 out.go:304] Setting ErrFile to fd 2...
	I0327 16:53:20.757355   10278 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0327 16:53:20.757492   10278 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18485-6511/.minikube/bin
	I0327 16:53:20.758570   10278 out.go:298] Setting JSON to false
	I0327 16:53:20.775077   10278 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":6771,"bootTime":1711576829,"procs":478,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.3.1","kernelVersion":"23.3.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0327 16:53:20.775147   10278 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0327 16:53:20.780357   10278 out.go:177] * [old-k8s-version-386000] minikube v1.33.0-beta.0 on Darwin 14.3.1 (arm64)
	I0327 16:53:20.787353   10278 out.go:177]   - MINIKUBE_LOCATION=18485
	I0327 16:53:20.791299   10278 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/18485-6511/kubeconfig
	I0327 16:53:20.787437   10278 notify.go:220] Checking for updates...
	I0327 16:53:20.797307   10278 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0327 16:53:20.803279   10278 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0327 16:53:20.806322   10278 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18485-6511/.minikube
	I0327 16:53:20.809342   10278 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0327 16:53:20.811218   10278 config.go:182] Loaded profile config "multinode-266000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.29.3
	I0327 16:53:20.811300   10278 config.go:182] Loaded profile config "stopped-upgrade-017000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0327 16:53:20.811344   10278 driver.go:392] Setting default libvirt URI to qemu:///system
	I0327 16:53:20.815305   10278 out.go:177] * Using the qemu2 driver based on user configuration
	I0327 16:53:20.822129   10278 start.go:297] selected driver: qemu2
	I0327 16:53:20.822135   10278 start.go:901] validating driver "qemu2" against <nil>
	I0327 16:53:20.822141   10278 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0327 16:53:20.824424   10278 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0327 16:53:20.828269   10278 out.go:177] * Automatically selected the socket_vmnet network
	I0327 16:53:20.831454   10278 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0327 16:53:20.831505   10278 cni.go:84] Creating CNI manager for ""
	I0327 16:53:20.831512   10278 cni.go:162] CNI unnecessary in this configuration, recommending no CNI
	I0327 16:53:20.831539   10278 start.go:340] cluster config:
	{Name:old-k8s-version-386000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1711559786-18485@sha256:2dcab64da240d825290a528fa79ad3c32db45fe5f8be5150468234a7114eff82 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-386000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0327 16:53:20.836335   10278 iso.go:125] acquiring lock: {Name:mk26a7f41e004242919e6d5076ec0f9645f820ad Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0327 16:53:20.843294   10278 out.go:177] * Starting "old-k8s-version-386000" primary control-plane node in "old-k8s-version-386000" cluster
	I0327 16:53:20.847241   10278 preload.go:132] Checking if preload exists for k8s version v1.20.0 and runtime docker
	I0327 16:53:20.847254   10278 preload.go:147] Found local preload: /Users/jenkins/minikube-integration/18485-6511/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4
	I0327 16:53:20.847262   10278 cache.go:56] Caching tarball of preloaded images
	I0327 16:53:20.847312   10278 preload.go:173] Found /Users/jenkins/minikube-integration/18485-6511/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0327 16:53:20.847318   10278 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on docker
	I0327 16:53:20.847382   10278 profile.go:142] Saving config to /Users/jenkins/minikube-integration/18485-6511/.minikube/profiles/old-k8s-version-386000/config.json ...
	I0327 16:53:20.847393   10278 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/18485-6511/.minikube/profiles/old-k8s-version-386000/config.json: {Name:mke60b6422684fa5b88daa7b2cca3169a4dba36e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0327 16:53:20.847620   10278 start.go:360] acquireMachinesLock for old-k8s-version-386000: {Name:mk6b67ca98ff36f470ae98389325f11bd4950dc8 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0327 16:53:20.847654   10278 start.go:364] duration metric: took 25.958µs to acquireMachinesLock for "old-k8s-version-386000"
	I0327 16:53:20.847667   10278 start.go:93] Provisioning new machine with config: &{Name:old-k8s-version-386000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18485/minikube-v1.33.0-1711559712-18485-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1711559786-18485@sha256:2dcab64da240d825290a528fa79ad3c32db45fe5f8be5150468234a7114eff82 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-386000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0327 16:53:20.847699   10278 start.go:125] createHost starting for "" (driver="qemu2")
	I0327 16:53:20.856282   10278 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0327 16:53:20.871961   10278 start.go:159] libmachine.API.Create for "old-k8s-version-386000" (driver="qemu2")
	I0327 16:53:20.871995   10278 client.go:168] LocalClient.Create starting
	I0327 16:53:20.872060   10278 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18485-6511/.minikube/certs/ca.pem
	I0327 16:53:20.872095   10278 main.go:141] libmachine: Decoding PEM data...
	I0327 16:53:20.872109   10278 main.go:141] libmachine: Parsing certificate...
	I0327 16:53:20.872153   10278 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18485-6511/.minikube/certs/cert.pem
	I0327 16:53:20.872178   10278 main.go:141] libmachine: Decoding PEM data...
	I0327 16:53:20.872183   10278 main.go:141] libmachine: Parsing certificate...
	I0327 16:53:20.872620   10278 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/18485-6511/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/18485-6511/.minikube/cache/iso/arm64/minikube-v1.33.0-1711559712-18485-arm64.iso...
	I0327 16:53:21.012800   10278 main.go:141] libmachine: Creating SSH key...
	I0327 16:53:21.095882   10278 main.go:141] libmachine: Creating Disk image...
	I0327 16:53:21.095889   10278 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0327 16:53:21.096061   10278 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/18485-6511/.minikube/machines/old-k8s-version-386000/disk.qcow2.raw /Users/jenkins/minikube-integration/18485-6511/.minikube/machines/old-k8s-version-386000/disk.qcow2
	I0327 16:53:21.108562   10278 main.go:141] libmachine: STDOUT: 
	I0327 16:53:21.108580   10278 main.go:141] libmachine: STDERR: 
	I0327 16:53:21.108635   10278 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/18485-6511/.minikube/machines/old-k8s-version-386000/disk.qcow2 +20000M
	I0327 16:53:21.119530   10278 main.go:141] libmachine: STDOUT: Image resized.
	
	I0327 16:53:21.119558   10278 main.go:141] libmachine: STDERR: 
	I0327 16:53:21.119576   10278 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/18485-6511/.minikube/machines/old-k8s-version-386000/disk.qcow2.raw and /Users/jenkins/minikube-integration/18485-6511/.minikube/machines/old-k8s-version-386000/disk.qcow2
	I0327 16:53:21.119582   10278 main.go:141] libmachine: Starting QEMU VM...
	I0327 16:53:21.119608   10278 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18485-6511/.minikube/machines/old-k8s-version-386000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18485-6511/.minikube/machines/old-k8s-version-386000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18485-6511/.minikube/machines/old-k8s-version-386000/qemu.pid -device virtio-net-pci,netdev=net0,mac=4e:0f:b0:36:60:57 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18485-6511/.minikube/machines/old-k8s-version-386000/disk.qcow2
	I0327 16:53:21.121392   10278 main.go:141] libmachine: STDOUT: 
	I0327 16:53:21.121406   10278 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0327 16:53:21.121424   10278 client.go:171] duration metric: took 249.431459ms to LocalClient.Create
	I0327 16:53:23.123627   10278 start.go:128] duration metric: took 2.275956958s to createHost
	I0327 16:53:23.123726   10278 start.go:83] releasing machines lock for "old-k8s-version-386000", held for 2.27613775s
	W0327 16:53:23.123822   10278 start.go:713] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0327 16:53:23.133671   10278 out.go:177] * Deleting "old-k8s-version-386000" in qemu2 ...
	W0327 16:53:23.155756   10278 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0327 16:53:23.155783   10278 start.go:728] Will try again in 5 seconds ...
	I0327 16:53:28.157794   10278 start.go:360] acquireMachinesLock for old-k8s-version-386000: {Name:mk6b67ca98ff36f470ae98389325f11bd4950dc8 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0327 16:53:28.158099   10278 start.go:364] duration metric: took 252.959µs to acquireMachinesLock for "old-k8s-version-386000"
	I0327 16:53:28.158180   10278 start.go:93] Provisioning new machine with config: &{Name:old-k8s-version-386000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18485/minikube-v1.33.0-1711559712-18485-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1711559786-18485@sha256:2dcab64da240d825290a528fa79ad3c32db45fe5f8be5150468234a7114eff82 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-386000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0327 16:53:28.158319   10278 start.go:125] createHost starting for "" (driver="qemu2")
	I0327 16:53:28.161872   10278 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0327 16:53:28.194514   10278 start.go:159] libmachine.API.Create for "old-k8s-version-386000" (driver="qemu2")
	I0327 16:53:28.194557   10278 client.go:168] LocalClient.Create starting
	I0327 16:53:28.194647   10278 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18485-6511/.minikube/certs/ca.pem
	I0327 16:53:28.194696   10278 main.go:141] libmachine: Decoding PEM data...
	I0327 16:53:28.194708   10278 main.go:141] libmachine: Parsing certificate...
	I0327 16:53:28.194764   10278 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18485-6511/.minikube/certs/cert.pem
	I0327 16:53:28.194801   10278 main.go:141] libmachine: Decoding PEM data...
	I0327 16:53:28.194810   10278 main.go:141] libmachine: Parsing certificate...
	I0327 16:53:28.195398   10278 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/18485-6511/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/18485-6511/.minikube/cache/iso/arm64/minikube-v1.33.0-1711559712-18485-arm64.iso...
	I0327 16:53:28.339964   10278 main.go:141] libmachine: Creating SSH key...
	I0327 16:53:28.426506   10278 main.go:141] libmachine: Creating Disk image...
	I0327 16:53:28.426512   10278 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0327 16:53:28.426688   10278 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/18485-6511/.minikube/machines/old-k8s-version-386000/disk.qcow2.raw /Users/jenkins/minikube-integration/18485-6511/.minikube/machines/old-k8s-version-386000/disk.qcow2
	I0327 16:53:28.439342   10278 main.go:141] libmachine: STDOUT: 
	I0327 16:53:28.439360   10278 main.go:141] libmachine: STDERR: 
	I0327 16:53:28.439415   10278 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/18485-6511/.minikube/machines/old-k8s-version-386000/disk.qcow2 +20000M
	I0327 16:53:28.450289   10278 main.go:141] libmachine: STDOUT: Image resized.
	
	I0327 16:53:28.450310   10278 main.go:141] libmachine: STDERR: 
	I0327 16:53:28.450329   10278 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/18485-6511/.minikube/machines/old-k8s-version-386000/disk.qcow2.raw and /Users/jenkins/minikube-integration/18485-6511/.minikube/machines/old-k8s-version-386000/disk.qcow2
	I0327 16:53:28.450334   10278 main.go:141] libmachine: Starting QEMU VM...
	I0327 16:53:28.450371   10278 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18485-6511/.minikube/machines/old-k8s-version-386000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18485-6511/.minikube/machines/old-k8s-version-386000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18485-6511/.minikube/machines/old-k8s-version-386000/qemu.pid -device virtio-net-pci,netdev=net0,mac=66:dd:62:39:53:90 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18485-6511/.minikube/machines/old-k8s-version-386000/disk.qcow2
	I0327 16:53:28.452101   10278 main.go:141] libmachine: STDOUT: 
	I0327 16:53:28.452119   10278 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0327 16:53:28.452133   10278 client.go:171] duration metric: took 257.579708ms to LocalClient.Create
	I0327 16:53:30.454296   10278 start.go:128] duration metric: took 2.29601225s to createHost
	I0327 16:53:30.454373   10278 start.go:83] releasing machines lock for "old-k8s-version-386000", held for 2.296330875s
	W0327 16:53:30.454766   10278 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p old-k8s-version-386000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p old-k8s-version-386000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0327 16:53:30.479270   10278 out.go:177] 
	W0327 16:53:30.483512   10278 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0327 16:53:30.483539   10278 out.go:239] * 
	* 
	W0327 16:53:30.486175   10278 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0327 16:53:30.495354   10278 out.go:177] 

** /stderr **
start_stop_delete_test.go:188: failed starting minikube -first start-. args "out/minikube-darwin-arm64 start -p old-k8s-version-386000 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=qemu2  --kubernetes-version=v1.20.0": exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-386000 -n old-k8s-version-386000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-386000 -n old-k8s-version-386000: exit status 7 (66.724625ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "old-k8s-version-386000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/FirstStart (9.85s)
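Every failure in this group traces to one root cause, visible in the stderr above: the qemu2 driver launches qemu-system-aarch64 through /opt/socket_vmnet/bin/socket_vmnet_client, which must first dial the unix socket at /var/run/socket_vmnet, and no socket_vmnet daemon was listening, hence the "Connection refused" before the VM ever boots. The disk-image steps (qemu-img convert, qemu-img resize +20000M) succeed; only the network attach fails. A minimal standalone Go sketch (not part of the suite; the socket path is taken from the log) that reproduces the driver-side probe:

    package main

    import (
        "fmt"
        "net"
        "os"
        "time"
    )

    func main() {
        // Same unix socket that socket_vmnet_client is handed in the log.
        const sock = "/var/run/socket_vmnet"
        conn, err := net.DialTimeout("unix", sock, 2*time.Second)
        if err != nil {
            // A missing daemon yields "connect: connection refused",
            // matching the STDERR captured above.
            fmt.Fprintf(os.Stderr, "probe failed: %v\n", err)
            os.Exit(1)
        }
        conn.Close()
        fmt.Println("socket_vmnet is accepting connections")
    }

On a healthy host the dial succeeds and the same start command proceeds past "Starting QEMU VM...".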

TestStartStop/group/old-k8s-version/serial/DeployApp (0.09s)

=== RUN   TestStartStop/group/old-k8s-version/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context old-k8s-version-386000 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) Non-zero exit: kubectl --context old-k8s-version-386000 create -f testdata/busybox.yaml: exit status 1 (29.794375ms)

** stderr ** 
	error: context "old-k8s-version-386000" does not exist

** /stderr **
start_stop_delete_test.go:196: kubectl --context old-k8s-version-386000 create -f testdata/busybox.yaml failed: exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-386000 -n old-k8s-version-386000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-386000 -n old-k8s-version-386000: exit status 7 (31.909542ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "old-k8s-version-386000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-386000 -n old-k8s-version-386000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-386000 -n old-k8s-version-386000: exit status 7 (31.312917ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "old-k8s-version-386000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/DeployApp (0.09s)
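DeployApp is a cascade of the FirstStart failure: the VM was never provisioned, so minikube never wrote an "old-k8s-version-386000" context into the kubeconfig, and kubectl rejects the --context flag before contacting any cluster. A stdlib-only Go sketch (hypothetical triage helper, assuming kubectl is on PATH; not part of the suite) of the kind of pre-check that separates this cascade from a genuine deploy failure:

    package main

    import (
        "fmt"
        "os/exec"
        "strings"
    )

    // hasContext reports whether kubectl knows the named context.
    // Hypothetical helper for triage only.
    func hasContext(name string) (bool, error) {
        out, err := exec.Command("kubectl", "config", "get-contexts", "-o", "name").Output()
        if err != nil {
            return false, err
        }
        for _, c := range strings.Split(strings.TrimSpace(string(out)), "\n") {
            if c == name {
                return true, nil
            }
        }
        return false, nil
    }

    func main() {
        ok, err := hasContext("old-k8s-version-386000")
        fmt.Printf("context present: %v (err: %v)\n", ok, err)
    }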

TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (0.12s)

=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-darwin-arm64 addons enable metrics-server -p old-k8s-version-386000 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context old-k8s-version-386000 describe deploy/metrics-server -n kube-system
start_stop_delete_test.go:215: (dbg) Non-zero exit: kubectl --context old-k8s-version-386000 describe deploy/metrics-server -n kube-system: exit status 1 (27.225209ms)

** stderr ** 
	error: context "old-k8s-version-386000" does not exist

** /stderr **
start_stop_delete_test.go:217: failed to get info on auto-pause deployments. args "kubectl --context old-k8s-version-386000 describe deploy/metrics-server -n kube-system": exit status 1
start_stop_delete_test.go:221: addon did not load correct image. Expected to contain " fake.domain/registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-386000 -n old-k8s-version-386000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-386000 -n old-k8s-version-386000: exit status 7 (32.277208ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "old-k8s-version-386000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (0.12s)
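The assertion at start_stop_delete_test.go:221 composes its expectation from the flags passed above: the custom registry from --registries=MetricsServer=fake.domain is prefixed to the custom image from --images=MetricsServer=registry.k8s.io/echoserver:1.4. A worked one-liner (illustrative only) of that composition:

    package main

    import "fmt"

    func main() {
        registry := "fake.domain"                 // from --registries=MetricsServer=...
        image := "registry.k8s.io/echoserver:1.4" // from --images=MetricsServer=...
        // The substring the test expects in the deployment info.
        fmt.Println(registry + "/" + image) // fake.domain/registry.k8s.io/echoserver:1.4
    }

Here the deployment info is empty because the describe call itself failed on the missing context.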

TestStartStop/group/old-k8s-version/serial/SecondStart (5.23s)

=== RUN   TestStartStop/group/old-k8s-version/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-darwin-arm64 start -p old-k8s-version-386000 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=qemu2  --kubernetes-version=v1.20.0
start_stop_delete_test.go:256: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p old-k8s-version-386000 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=qemu2  --kubernetes-version=v1.20.0: exit status 80 (5.18017975s)

-- stdout --
	* [old-k8s-version-386000] minikube v1.33.0-beta.0 on Darwin 14.3.1 (arm64)
	  - MINIKUBE_LOCATION=18485
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18485-6511/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18485-6511/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Kubernetes 1.29.3 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.29.3
	* Using the qemu2 driver based on existing profile
	* Starting "old-k8s-version-386000" primary control-plane node in "old-k8s-version-386000" cluster
	* Restarting existing qemu2 VM for "old-k8s-version-386000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "old-k8s-version-386000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0327 16:53:34.454203   10333 out.go:291] Setting OutFile to fd 1 ...
	I0327 16:53:34.454332   10333 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0327 16:53:34.454336   10333 out.go:304] Setting ErrFile to fd 2...
	I0327 16:53:34.454338   10333 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0327 16:53:34.454450   10333 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18485-6511/.minikube/bin
	I0327 16:53:34.455500   10333 out.go:298] Setting JSON to false
	I0327 16:53:34.471766   10333 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":6785,"bootTime":1711576829,"procs":478,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.3.1","kernelVersion":"23.3.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0327 16:53:34.471828   10333 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0327 16:53:34.475884   10333 out.go:177] * [old-k8s-version-386000] minikube v1.33.0-beta.0 on Darwin 14.3.1 (arm64)
	I0327 16:53:34.479050   10333 out.go:177]   - MINIKUBE_LOCATION=18485
	I0327 16:53:34.482994   10333 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/18485-6511/kubeconfig
	I0327 16:53:34.479114   10333 notify.go:220] Checking for updates...
	I0327 16:53:34.489952   10333 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0327 16:53:34.493014   10333 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0327 16:53:34.495947   10333 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18485-6511/.minikube
	I0327 16:53:34.499006   10333 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0327 16:53:34.502319   10333 config.go:182] Loaded profile config "old-k8s-version-386000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.20.0
	I0327 16:53:34.504109   10333 out.go:177] * Kubernetes 1.29.3 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.29.3
	I0327 16:53:34.506951   10333 driver.go:392] Setting default libvirt URI to qemu:///system
	I0327 16:53:34.510981   10333 out.go:177] * Using the qemu2 driver based on existing profile
	I0327 16:53:34.515950   10333 start.go:297] selected driver: qemu2
	I0327 16:53:34.515957   10333 start.go:901] validating driver "qemu2" against &{Name:old-k8s-version-386000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18485/minikube-v1.33.0-1711559712-18485-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1711559786-18485@sha256:2dcab64da240d825290a528fa79ad3c32db45fe5f8be5150468234a7114eff82 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-386000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0327 16:53:34.516020   10333 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0327 16:53:34.518311   10333 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0327 16:53:34.518357   10333 cni.go:84] Creating CNI manager for ""
	I0327 16:53:34.518364   10333 cni.go:162] CNI unnecessary in this configuration, recommending no CNI
	I0327 16:53:34.518391   10333 start.go:340] cluster config:
	{Name:old-k8s-version-386000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18485/minikube-v1.33.0-1711559712-18485-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1711559786-18485@sha256:2dcab64da240d825290a528fa79ad3c32db45fe5f8be5150468234a7114eff82 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-386000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0327 16:53:34.522689   10333 iso.go:125] acquiring lock: {Name:mk26a7f41e004242919e6d5076ec0f9645f820ad Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0327 16:53:34.529898   10333 out.go:177] * Starting "old-k8s-version-386000" primary control-plane node in "old-k8s-version-386000" cluster
	I0327 16:53:34.533968   10333 preload.go:132] Checking if preload exists for k8s version v1.20.0 and runtime docker
	I0327 16:53:34.533984   10333 preload.go:147] Found local preload: /Users/jenkins/minikube-integration/18485-6511/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4
	I0327 16:53:34.533989   10333 cache.go:56] Caching tarball of preloaded images
	I0327 16:53:34.534049   10333 preload.go:173] Found /Users/jenkins/minikube-integration/18485-6511/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0327 16:53:34.534054   10333 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on docker
	I0327 16:53:34.534122   10333 profile.go:142] Saving config to /Users/jenkins/minikube-integration/18485-6511/.minikube/profiles/old-k8s-version-386000/config.json ...
	I0327 16:53:34.534612   10333 start.go:360] acquireMachinesLock for old-k8s-version-386000: {Name:mk6b67ca98ff36f470ae98389325f11bd4950dc8 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0327 16:53:34.534637   10333 start.go:364] duration metric: took 19.042µs to acquireMachinesLock for "old-k8s-version-386000"
	I0327 16:53:34.534645   10333 start.go:96] Skipping create...Using existing machine configuration
	I0327 16:53:34.534650   10333 fix.go:54] fixHost starting: 
	I0327 16:53:34.534765   10333 fix.go:112] recreateIfNeeded on old-k8s-version-386000: state=Stopped err=<nil>
	W0327 16:53:34.534773   10333 fix.go:138] unexpected machine state, will restart: <nil>
	I0327 16:53:34.537944   10333 out.go:177] * Restarting existing qemu2 VM for "old-k8s-version-386000" ...
	I0327 16:53:34.546068   10333 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18485-6511/.minikube/machines/old-k8s-version-386000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18485-6511/.minikube/machines/old-k8s-version-386000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18485-6511/.minikube/machines/old-k8s-version-386000/qemu.pid -device virtio-net-pci,netdev=net0,mac=66:dd:62:39:53:90 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18485-6511/.minikube/machines/old-k8s-version-386000/disk.qcow2
	I0327 16:53:34.548110   10333 main.go:141] libmachine: STDOUT: 
	I0327 16:53:34.548130   10333 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0327 16:53:34.548159   10333 fix.go:56] duration metric: took 13.509583ms for fixHost
	I0327 16:53:34.548163   10333 start.go:83] releasing machines lock for "old-k8s-version-386000", held for 13.522959ms
	W0327 16:53:34.548170   10333 start.go:713] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0327 16:53:34.548208   10333 out.go:239] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0327 16:53:34.548213   10333 start.go:728] Will try again in 5 seconds ...
	I0327 16:53:39.550262   10333 start.go:360] acquireMachinesLock for old-k8s-version-386000: {Name:mk6b67ca98ff36f470ae98389325f11bd4950dc8 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0327 16:53:39.550577   10333 start.go:364] duration metric: took 223.917µs to acquireMachinesLock for "old-k8s-version-386000"
	I0327 16:53:39.550661   10333 start.go:96] Skipping create...Using existing machine configuration
	I0327 16:53:39.550674   10333 fix.go:54] fixHost starting: 
	I0327 16:53:39.551122   10333 fix.go:112] recreateIfNeeded on old-k8s-version-386000: state=Stopped err=<nil>
	W0327 16:53:39.551140   10333 fix.go:138] unexpected machine state, will restart: <nil>
	I0327 16:53:39.559582   10333 out.go:177] * Restarting existing qemu2 VM for "old-k8s-version-386000" ...
	I0327 16:53:39.563731   10333 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18485-6511/.minikube/machines/old-k8s-version-386000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18485-6511/.minikube/machines/old-k8s-version-386000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18485-6511/.minikube/machines/old-k8s-version-386000/qemu.pid -device virtio-net-pci,netdev=net0,mac=66:dd:62:39:53:90 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18485-6511/.minikube/machines/old-k8s-version-386000/disk.qcow2
	I0327 16:53:39.571837   10333 main.go:141] libmachine: STDOUT: 
	I0327 16:53:39.571913   10333 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0327 16:53:39.571963   10333 fix.go:56] duration metric: took 21.290125ms for fixHost
	I0327 16:53:39.571974   10333 start.go:83] releasing machines lock for "old-k8s-version-386000", held for 21.381459ms
	W0327 16:53:39.572098   10333 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p old-k8s-version-386000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p old-k8s-version-386000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0327 16:53:39.579627   10333 out.go:177] 
	W0327 16:53:39.583649   10333 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0327 16:53:39.583666   10333 out.go:239] * 
	* 
	W0327 16:53:39.585529   10333 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0327 16:53:39.595522   10333 out.go:177] 

** /stderr **
start_stop_delete_test.go:259: failed to start minikube post-stop. args "out/minikube-darwin-arm64 start -p old-k8s-version-386000 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=qemu2  --kubernetes-version=v1.20.0": exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-386000 -n old-k8s-version-386000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-386000 -n old-k8s-version-386000: exit status 7 (50.896208ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "old-k8s-version-386000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/SecondStart (5.23s)
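SecondStart exercises minikube's recovery path rather than a fresh create: fixHost fails, the machines lock is released, and start.go waits five seconds before exactly one retry, which fails identically because the daemon is still absent. The control flow, as an illustrative sketch (not minikube's actual source):

    // Fixed-delay single retry, mirroring the two attempts in the log.
    package main

    import (
        "errors"
        "fmt"
        "time"
    )

    func startHost() error {
        // Stand-in for the driver start; always fails here, like the log.
        return errors.New(`Failed to connect to "/var/run/socket_vmnet": Connection refused`)
    }

    func main() {
        if err := startHost(); err != nil {
            fmt.Println("! StartHost failed, but will try again:", err)
            time.Sleep(5 * time.Second)
            if err := startHost(); err != nil {
                fmt.Println("X Exiting due to GUEST_PROVISION:", err)
            }
        }
    }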

TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (0.03s)

=== RUN   TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop
start_stop_delete_test.go:275: failed waiting for 'addon dashboard' pod post-stop-start: client config: context "old-k8s-version-386000" does not exist
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-386000 -n old-k8s-version-386000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-386000 -n old-k8s-version-386000: exit status 7 (32.379667ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "old-k8s-version-386000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (0.03s)

TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (0.06s)

=== RUN   TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop
start_stop_delete_test.go:288: failed waiting for 'addon dashboard' pod post-stop-start: client config: context "old-k8s-version-386000" does not exist
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context old-k8s-version-386000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
start_stop_delete_test.go:291: (dbg) Non-zero exit: kubectl --context old-k8s-version-386000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard: exit status 1 (27.695416ms)

** stderr ** 
	error: context "old-k8s-version-386000" does not exist

** /stderr **
start_stop_delete_test.go:293: failed to get info on kubernetes-dashboard deployments. args "kubectl --context old-k8s-version-386000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard": exit status 1
start_stop_delete_test.go:297: addon did not load correct image. Expected to contain " registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-386000 -n old-k8s-version-386000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-386000 -n old-k8s-version-386000: exit status 7 (31.609625ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "old-k8s-version-386000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (0.06s)

TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.07s)

=== RUN   TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-darwin-arm64 -p old-k8s-version-386000 image list --format=json
start_stop_delete_test.go:304: v1.20.0 images missing (-want +got):
  []string{
- 	"k8s.gcr.io/coredns:1.7.0",
- 	"k8s.gcr.io/etcd:3.4.13-0",
- 	"k8s.gcr.io/k8s-minikube/storage-provisioner:v5",
- 	"k8s.gcr.io/kube-apiserver:v1.20.0",
- 	"k8s.gcr.io/kube-controller-manager:v1.20.0",
- 	"k8s.gcr.io/kube-proxy:v1.20.0",
- 	"k8s.gcr.io/kube-scheduler:v1.20.0",
- 	"k8s.gcr.io/pause:3.2",
  }
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-386000 -n old-k8s-version-386000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-386000 -n old-k8s-version-386000: exit status 7 (31.796875ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "old-k8s-version-386000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.07s)
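The image diff above is go-cmp's "-want +got" notation: every expected v1.20.0 image sits on the minus side because "image list --format=json" returned an empty list from the never-started host. A sketch of how such a diff is produced (assumes the github.com/google/go-cmp module; not the test's exact code):

    package main

    import (
        "fmt"

        "github.com/google/go-cmp/cmp"
    )

    func main() {
        want := []string{
            "k8s.gcr.io/kube-apiserver:v1.20.0",
            "k8s.gcr.io/pause:3.2",
        }
        got := []string{} // empty: the host never ran, so nothing was listed
        if diff := cmp.Diff(want, got); diff != "" {
            // "-" lines are missing from got; "+" lines would be unexpected extras.
            fmt.Printf("images missing (-want +got):\n%s", diff)
        }
    }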

TestStartStop/group/old-k8s-version/serial/Pause (0.11s)

=== RUN   TestStartStop/group/old-k8s-version/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-arm64 pause -p old-k8s-version-386000 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-darwin-arm64 pause -p old-k8s-version-386000 --alsologtostderr -v=1: exit status 83 (43.889167ms)

-- stdout --
	* The control-plane node old-k8s-version-386000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p old-k8s-version-386000"

-- /stdout --
** stderr ** 
	I0327 16:53:39.853830   10357 out.go:291] Setting OutFile to fd 1 ...
	I0327 16:53:39.855035   10357 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0327 16:53:39.855038   10357 out.go:304] Setting ErrFile to fd 2...
	I0327 16:53:39.855041   10357 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0327 16:53:39.855185   10357 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18485-6511/.minikube/bin
	I0327 16:53:39.855420   10357 out.go:298] Setting JSON to false
	I0327 16:53:39.855430   10357 mustload.go:65] Loading cluster: old-k8s-version-386000
	I0327 16:53:39.855612   10357 config.go:182] Loaded profile config "old-k8s-version-386000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.20.0
	I0327 16:53:39.859937   10357 out.go:177] * The control-plane node old-k8s-version-386000 host is not running: state=Stopped
	I0327 16:53:39.864095   10357 out.go:177]   To start a cluster, run: "minikube start -p old-k8s-version-386000"

** /stderr **
start_stop_delete_test.go:311: out/minikube-darwin-arm64 pause -p old-k8s-version-386000 --alsologtostderr -v=1 failed: exit status 83
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-386000 -n old-k8s-version-386000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-386000 -n old-k8s-version-386000: exit status 7 (30.9785ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "old-k8s-version-386000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-386000 -n old-k8s-version-386000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-386000 -n old-k8s-version-386000: exit status 7 (31.196375ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "old-k8s-version-386000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/Pause (0.11s)

TestStartStop/group/no-preload/serial/FirstStart (9.92s)

=== RUN   TestStartStop/group/no-preload/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-darwin-arm64 start -p no-preload-646000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=qemu2  --kubernetes-version=v1.30.0-beta.0
start_stop_delete_test.go:186: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p no-preload-646000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=qemu2  --kubernetes-version=v1.30.0-beta.0: exit status 80 (9.854751s)

-- stdout --
	* [no-preload-646000] minikube v1.33.0-beta.0 on Darwin 14.3.1 (arm64)
	  - MINIKUBE_LOCATION=18485
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18485-6511/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18485-6511/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "no-preload-646000" primary control-plane node in "no-preload-646000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "no-preload-646000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0327 16:53:40.327875   10380 out.go:291] Setting OutFile to fd 1 ...
	I0327 16:53:40.327996   10380 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0327 16:53:40.328000   10380 out.go:304] Setting ErrFile to fd 2...
	I0327 16:53:40.328003   10380 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0327 16:53:40.328131   10380 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18485-6511/.minikube/bin
	I0327 16:53:40.329231   10380 out.go:298] Setting JSON to false
	I0327 16:53:40.345782   10380 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":6791,"bootTime":1711576829,"procs":480,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.3.1","kernelVersion":"23.3.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0327 16:53:40.345846   10380 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0327 16:53:40.349544   10380 out.go:177] * [no-preload-646000] minikube v1.33.0-beta.0 on Darwin 14.3.1 (arm64)
	I0327 16:53:40.356674   10380 out.go:177]   - MINIKUBE_LOCATION=18485
	I0327 16:53:40.360541   10380 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/18485-6511/kubeconfig
	I0327 16:53:40.356737   10380 notify.go:220] Checking for updates...
	I0327 16:53:40.366573   10380 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0327 16:53:40.369509   10380 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0327 16:53:40.372549   10380 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18485-6511/.minikube
	I0327 16:53:40.375554   10380 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0327 16:53:40.378814   10380 config.go:182] Loaded profile config "multinode-266000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.29.3
	I0327 16:53:40.378879   10380 config.go:182] Loaded profile config "stopped-upgrade-017000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0327 16:53:40.378931   10380 driver.go:392] Setting default libvirt URI to qemu:///system
	I0327 16:53:40.383535   10380 out.go:177] * Using the qemu2 driver based on user configuration
	I0327 16:53:40.389525   10380 start.go:297] selected driver: qemu2
	I0327 16:53:40.389532   10380 start.go:901] validating driver "qemu2" against <nil>
	I0327 16:53:40.389537   10380 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0327 16:53:40.391998   10380 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0327 16:53:40.395510   10380 out.go:177] * Automatically selected the socket_vmnet network
	I0327 16:53:40.398644   10380 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0327 16:53:40.398674   10380 cni.go:84] Creating CNI manager for ""
	I0327 16:53:40.398680   10380 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0327 16:53:40.398684   10380 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0327 16:53:40.398711   10380 start.go:340] cluster config:
	{Name:no-preload-646000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1711559786-18485@sha256:2dcab64da240d825290a528fa79ad3c32db45fe5f8be5150468234a7114eff82 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0-beta.0 ClusterName:no-preload-646000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.0-beta.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0327 16:53:40.403199   10380 iso.go:125] acquiring lock: {Name:mk26a7f41e004242919e6d5076ec0f9645f820ad Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0327 16:53:40.407550   10380 out.go:177] * Starting "no-preload-646000" primary control-plane node in "no-preload-646000" cluster
	I0327 16:53:40.415516   10380 preload.go:132] Checking if preload exists for k8s version v1.30.0-beta.0 and runtime docker
	I0327 16:53:40.415576   10380 profile.go:142] Saving config to /Users/jenkins/minikube-integration/18485-6511/.minikube/profiles/no-preload-646000/config.json ...
	I0327 16:53:40.415593   10380 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/18485-6511/.minikube/profiles/no-preload-646000/config.json: {Name:mke712263f710caf7646842a941377ac896843de Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0327 16:53:40.415628   10380 cache.go:107] acquiring lock: {Name:mk6a81e1e3dd88a2a0389ef0a64b9a2e49efa8b5 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0327 16:53:40.415639   10380 cache.go:107] acquiring lock: {Name:mk201fb92d0b2962e142f12c9ebf58826d55299c Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0327 16:53:40.415685   10380 cache.go:115] /Users/jenkins/minikube-integration/18485-6511/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 exists
	I0327 16:53:40.415692   10380 cache.go:96] cache image "gcr.io/k8s-minikube/storage-provisioner:v5" -> "/Users/jenkins/minikube-integration/18485-6511/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5" took 67.209µs
	I0327 16:53:40.415699   10380 cache.go:80] save to tar file gcr.io/k8s-minikube/storage-provisioner:v5 -> /Users/jenkins/minikube-integration/18485-6511/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 succeeded
	I0327 16:53:40.415704   10380 cache.go:107] acquiring lock: {Name:mkc00b93b3e2c4a1d551a708dbc31bbbabcebe65 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0327 16:53:40.415707   10380 cache.go:107] acquiring lock: {Name:mk0d72a4dcc87e9ba83cbe62ef7dec9d75dcf83e Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0327 16:53:40.415802   10380 cache.go:107] acquiring lock: {Name:mkce7d520389d5ae3dd8fe16aeb089e3f517557b Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0327 16:53:40.415798   10380 cache.go:107] acquiring lock: {Name:mk739a9dc85ae9465a9c9dcb5c673c82570aa62a Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0327 16:53:40.415817   10380 start.go:360] acquireMachinesLock for no-preload-646000: {Name:mk6b67ca98ff36f470ae98389325f11bd4950dc8 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0327 16:53:40.415961   10380 start.go:364] duration metric: took 25.75µs to acquireMachinesLock for "no-preload-646000"
	I0327 16:53:40.415968   10380 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.30.0-beta.0
	I0327 16:53:40.415853   10380 cache.go:107] acquiring lock: {Name:mk8ac76a2c02590722fb74ff656ba7f338769896 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0327 16:53:40.415991   10380 cache.go:107] acquiring lock: {Name:mk8519618f27dd3500bd2ceab659036c27a5c843 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0327 16:53:40.416035   10380 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.30.0-beta.0
	I0327 16:53:40.416061   10380 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.30.0-beta.0
	I0327 16:53:40.416067   10380 image.go:134] retrieving image: registry.k8s.io/pause:3.9
	I0327 16:53:40.415975   10380 start.go:93] Provisioning new machine with config: &{Name:no-preload-646000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18485/minikube-v1.33.0-1711559712-18485-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1711559786-18485@sha256:2dcab64da240d825290a528fa79ad3c32db45fe5f8be5150468234a7114eff82 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0-beta.0 ClusterName:no-preload-646000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.0-beta.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.0-beta.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0327 16:53:40.416085   10380 start.go:125] createHost starting for "" (driver="qemu2")
	I0327 16:53:40.416066   10380 image.go:134] retrieving image: registry.k8s.io/coredns/coredns:v1.11.1
	I0327 16:53:40.416089   10380 image.go:134] retrieving image: registry.k8s.io/etcd:3.5.12-0
	I0327 16:53:40.420384   10380 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0327 16:53:40.416244   10380 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.30.0-beta.0
	I0327 16:53:40.427508   10380 image.go:177] daemon lookup for registry.k8s.io/coredns/coredns:v1.11.1: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.11.1
	I0327 16:53:40.427575   10380 image.go:177] daemon lookup for registry.k8s.io/pause:3.9: Error response from daemon: No such image: registry.k8s.io/pause:3.9
	I0327 16:53:40.427687   10380 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.30.0-beta.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.30.0-beta.0
	I0327 16:53:40.427721   10380 image.go:177] daemon lookup for registry.k8s.io/etcd:3.5.12-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.12-0
	I0327 16:53:40.430493   10380 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.30.0-beta.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.30.0-beta.0
	I0327 16:53:40.430531   10380 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.30.0-beta.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.30.0-beta.0
	I0327 16:53:40.430589   10380 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.30.0-beta.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.30.0-beta.0
	I0327 16:53:40.435351   10380 start.go:159] libmachine.API.Create for "no-preload-646000" (driver="qemu2")
	I0327 16:53:40.435380   10380 client.go:168] LocalClient.Create starting
	I0327 16:53:40.435449   10380 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18485-6511/.minikube/certs/ca.pem
	I0327 16:53:40.435482   10380 main.go:141] libmachine: Decoding PEM data...
	I0327 16:53:40.435494   10380 main.go:141] libmachine: Parsing certificate...
	I0327 16:53:40.435550   10380 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18485-6511/.minikube/certs/cert.pem
	I0327 16:53:40.435571   10380 main.go:141] libmachine: Decoding PEM data...
	I0327 16:53:40.435577   10380 main.go:141] libmachine: Parsing certificate...
	I0327 16:53:40.435952   10380 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/18485-6511/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/18485-6511/.minikube/cache/iso/arm64/minikube-v1.33.0-1711559712-18485-arm64.iso...
	I0327 16:53:40.593960   10380 main.go:141] libmachine: Creating SSH key...
	I0327 16:53:40.744084   10380 main.go:141] libmachine: Creating Disk image...
	I0327 16:53:40.744102   10380 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0327 16:53:40.744287   10380 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/18485-6511/.minikube/machines/no-preload-646000/disk.qcow2.raw /Users/jenkins/minikube-integration/18485-6511/.minikube/machines/no-preload-646000/disk.qcow2
	I0327 16:53:40.756667   10380 main.go:141] libmachine: STDOUT: 
	I0327 16:53:40.756681   10380 main.go:141] libmachine: STDERR: 
	I0327 16:53:40.756735   10380 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/18485-6511/.minikube/machines/no-preload-646000/disk.qcow2 +20000M
	I0327 16:53:40.767675   10380 main.go:141] libmachine: STDOUT: Image resized.
	
	I0327 16:53:40.767701   10380 main.go:141] libmachine: STDERR: 
	I0327 16:53:40.767722   10380 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/18485-6511/.minikube/machines/no-preload-646000/disk.qcow2.raw and /Users/jenkins/minikube-integration/18485-6511/.minikube/machines/no-preload-646000/disk.qcow2
	I0327 16:53:40.767729   10380 main.go:141] libmachine: Starting QEMU VM...
	I0327 16:53:40.767772   10380 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18485-6511/.minikube/machines/no-preload-646000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18485-6511/.minikube/machines/no-preload-646000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18485-6511/.minikube/machines/no-preload-646000/qemu.pid -device virtio-net-pci,netdev=net0,mac=da:97:c2:4d:90:3e -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18485-6511/.minikube/machines/no-preload-646000/disk.qcow2
	I0327 16:53:40.769870   10380 main.go:141] libmachine: STDOUT: 
	I0327 16:53:40.769889   10380 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0327 16:53:40.769909   10380 client.go:171] duration metric: took 334.534875ms to LocalClient.Create
	I0327 16:53:42.770561   10380 start.go:128] duration metric: took 2.35452275s to createHost
	I0327 16:53:42.770623   10380 start.go:83] releasing machines lock for "no-preload-646000", held for 2.354728541s
	W0327 16:53:42.770716   10380 start.go:713] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0327 16:53:42.788675   10380 out.go:177] * Deleting "no-preload-646000" in qemu2 ...
	W0327 16:53:42.819926   10380 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0327 16:53:42.819949   10380 start.go:728] Will try again in 5 seconds ...
	I0327 16:53:42.959734   10380 cache.go:162] opening:  /Users/jenkins/minikube-integration/18485-6511/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.30.0-beta.0
	I0327 16:53:43.092615   10380 cache.go:162] opening:  /Users/jenkins/minikube-integration/18485-6511/.minikube/cache/images/arm64/registry.k8s.io/pause_3.9
	I0327 16:53:43.101116   10380 cache.go:162] opening:  /Users/jenkins/minikube-integration/18485-6511/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.11.1
	I0327 16:53:43.112252   10380 cache.go:162] opening:  /Users/jenkins/minikube-integration/18485-6511/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.30.0-beta.0
	I0327 16:53:43.123928   10380 cache.go:162] opening:  /Users/jenkins/minikube-integration/18485-6511/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.30.0-beta.0
	I0327 16:53:43.129301   10380 cache.go:162] opening:  /Users/jenkins/minikube-integration/18485-6511/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.30.0-beta.0
	I0327 16:53:43.133643   10380 cache.go:162] opening:  /Users/jenkins/minikube-integration/18485-6511/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.12-0
	I0327 16:53:43.327106   10380 cache.go:157] /Users/jenkins/minikube-integration/18485-6511/.minikube/cache/images/arm64/registry.k8s.io/pause_3.9 exists
	I0327 16:53:43.327117   10380 cache.go:96] cache image "registry.k8s.io/pause:3.9" -> "/Users/jenkins/minikube-integration/18485-6511/.minikube/cache/images/arm64/registry.k8s.io/pause_3.9" took 2.911426166s
	I0327 16:53:43.327123   10380 cache.go:80] save to tar file registry.k8s.io/pause:3.9 -> /Users/jenkins/minikube-integration/18485-6511/.minikube/cache/images/arm64/registry.k8s.io/pause_3.9 succeeded
	I0327 16:53:45.887100   10380 cache.go:157] /Users/jenkins/minikube-integration/18485-6511/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.30.0-beta.0 exists
	I0327 16:53:45.887164   10380 cache.go:96] cache image "registry.k8s.io/kube-scheduler:v1.30.0-beta.0" -> "/Users/jenkins/minikube-integration/18485-6511/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.30.0-beta.0" took 5.471487292s
	I0327 16:53:45.887201   10380 cache.go:80] save to tar file registry.k8s.io/kube-scheduler:v1.30.0-beta.0 -> /Users/jenkins/minikube-integration/18485-6511/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.30.0-beta.0 succeeded
	I0327 16:53:46.170970   10380 cache.go:157] /Users/jenkins/minikube-integration/18485-6511/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.11.1 exists
	I0327 16:53:46.171025   10380 cache.go:96] cache image "registry.k8s.io/coredns/coredns:v1.11.1" -> "/Users/jenkins/minikube-integration/18485-6511/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.11.1" took 5.755500041s
	I0327 16:53:46.171053   10380 cache.go:80] save to tar file registry.k8s.io/coredns/coredns:v1.11.1 -> /Users/jenkins/minikube-integration/18485-6511/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.11.1 succeeded
	I0327 16:53:46.509538   10380 cache.go:157] /Users/jenkins/minikube-integration/18485-6511/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.30.0-beta.0 exists
	I0327 16:53:46.509640   10380 cache.go:96] cache image "registry.k8s.io/kube-proxy:v1.30.0-beta.0" -> "/Users/jenkins/minikube-integration/18485-6511/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.30.0-beta.0" took 6.094123792s
	I0327 16:53:46.509673   10380 cache.go:80] save to tar file registry.k8s.io/kube-proxy:v1.30.0-beta.0 -> /Users/jenkins/minikube-integration/18485-6511/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.30.0-beta.0 succeeded
	I0327 16:53:46.625830   10380 cache.go:157] /Users/jenkins/minikube-integration/18485-6511/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.30.0-beta.0 exists
	I0327 16:53:46.625881   10380 cache.go:96] cache image "registry.k8s.io/kube-controller-manager:v1.30.0-beta.0" -> "/Users/jenkins/minikube-integration/18485-6511/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.30.0-beta.0" took 6.210345792s
	I0327 16:53:46.625930   10380 cache.go:80] save to tar file registry.k8s.io/kube-controller-manager:v1.30.0-beta.0 -> /Users/jenkins/minikube-integration/18485-6511/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.30.0-beta.0 succeeded
	I0327 16:53:46.997096   10380 cache.go:157] /Users/jenkins/minikube-integration/18485-6511/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.30.0-beta.0 exists
	I0327 16:53:46.997155   10380 cache.go:96] cache image "registry.k8s.io/kube-apiserver:v1.30.0-beta.0" -> "/Users/jenkins/minikube-integration/18485-6511/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.30.0-beta.0" took 6.581737625s
	I0327 16:53:46.997183   10380 cache.go:80] save to tar file registry.k8s.io/kube-apiserver:v1.30.0-beta.0 -> /Users/jenkins/minikube-integration/18485-6511/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.30.0-beta.0 succeeded
	I0327 16:53:47.820016   10380 start.go:360] acquireMachinesLock for no-preload-646000: {Name:mk6b67ca98ff36f470ae98389325f11bd4950dc8 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0327 16:53:47.820400   10380 start.go:364] duration metric: took 305.958µs to acquireMachinesLock for "no-preload-646000"
	I0327 16:53:47.820564   10380 start.go:93] Provisioning new machine with config: &{Name:no-preload-646000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18485/minikube-v1.33.0-1711559712-18485-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1711559786-18485@sha256:2dcab64da240d825290a528fa79ad3c32db45fe5f8be5150468234a7114eff82 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0-beta.0 ClusterName:no-preload-646000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.0-beta.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.0-beta.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0327 16:53:47.820869   10380 start.go:125] createHost starting for "" (driver="qemu2")
	I0327 16:53:47.831504   10380 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0327 16:53:47.881425   10380 start.go:159] libmachine.API.Create for "no-preload-646000" (driver="qemu2")
	I0327 16:53:47.881483   10380 client.go:168] LocalClient.Create starting
	I0327 16:53:47.881587   10380 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18485-6511/.minikube/certs/ca.pem
	I0327 16:53:47.881647   10380 main.go:141] libmachine: Decoding PEM data...
	I0327 16:53:47.881667   10380 main.go:141] libmachine: Parsing certificate...
	I0327 16:53:47.881738   10380 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18485-6511/.minikube/certs/cert.pem
	I0327 16:53:47.881779   10380 main.go:141] libmachine: Decoding PEM data...
	I0327 16:53:47.881794   10380 main.go:141] libmachine: Parsing certificate...
	I0327 16:53:47.882289   10380 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/18485-6511/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/18485-6511/.minikube/cache/iso/arm64/minikube-v1.33.0-1711559712-18485-arm64.iso...
	I0327 16:53:48.031700   10380 main.go:141] libmachine: Creating SSH key...
	I0327 16:53:48.074879   10380 main.go:141] libmachine: Creating Disk image...
	I0327 16:53:48.074884   10380 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0327 16:53:48.075047   10380 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/18485-6511/.minikube/machines/no-preload-646000/disk.qcow2.raw /Users/jenkins/minikube-integration/18485-6511/.minikube/machines/no-preload-646000/disk.qcow2
	I0327 16:53:48.087645   10380 main.go:141] libmachine: STDOUT: 
	I0327 16:53:48.087667   10380 main.go:141] libmachine: STDERR: 
	I0327 16:53:48.087734   10380 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/18485-6511/.minikube/machines/no-preload-646000/disk.qcow2 +20000M
	I0327 16:53:48.098724   10380 main.go:141] libmachine: STDOUT: Image resized.
	
	I0327 16:53:48.098741   10380 main.go:141] libmachine: STDERR: 
	I0327 16:53:48.098760   10380 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/18485-6511/.minikube/machines/no-preload-646000/disk.qcow2.raw and /Users/jenkins/minikube-integration/18485-6511/.minikube/machines/no-preload-646000/disk.qcow2
	I0327 16:53:48.098764   10380 main.go:141] libmachine: Starting QEMU VM...
	I0327 16:53:48.098801   10380 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18485-6511/.minikube/machines/no-preload-646000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18485-6511/.minikube/machines/no-preload-646000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18485-6511/.minikube/machines/no-preload-646000/qemu.pid -device virtio-net-pci,netdev=net0,mac=be:ab:ff:a8:d3:c3 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18485-6511/.minikube/machines/no-preload-646000/disk.qcow2
	I0327 16:53:48.100666   10380 main.go:141] libmachine: STDOUT: 
	I0327 16:53:48.100741   10380 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0327 16:53:48.100759   10380 client.go:171] duration metric: took 219.278917ms to LocalClient.Create
	I0327 16:53:50.101023   10380 start.go:128] duration metric: took 2.280166667s to createHost
	I0327 16:53:50.101096   10380 start.go:83] releasing machines lock for "no-preload-646000", held for 2.280745s
	W0327 16:53:50.101460   10380 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p no-preload-646000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p no-preload-646000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0327 16:53:50.115872   10380 out.go:177] 
	W0327 16:53:50.120009   10380 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0327 16:53:50.120039   10380 out.go:239] * 
	* 
	W0327 16:53:50.122848   10380 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0327 16:53:50.133818   10380 out.go:177] 

                                                
                                                
** /stderr **
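
Because this profile runs with --preload=false, the stderr log above shows each control-plane image being fetched individually and saved under .minikube/cache/images/arm64/, with the tag separator ":" becoming "_" in the file name (e.g. registry.k8s.io/pause:3.9 is saved as .../registry.k8s.io/pause_3.9). A minimal Go sketch of that mapping; the replacement rule is inferred from the paths in this log, not taken from minikube's source:

    // cachepath.go - illustrative sketch of the image-ref -> cache-file
    // mapping visible in the cache.go lines above.
    package main

    import (
        "fmt"
        "path/filepath"
        "strings"
    )

    func cacheFile(minikubeHome, arch, imageRef string) string {
        // "registry.k8s.io/pause:3.9" -> "registry.k8s.io/pause_3.9"
        name := strings.ReplaceAll(imageRef, ":", "_")
        return filepath.Join(minikubeHome, "cache", "images", arch, name)
    }

    func main() {
        fmt.Println(cacheFile("/Users/jenkins/minikube-integration/18485-6511/.minikube",
            "arm64", "registry.k8s.io/coredns/coredns:v1.11.1"))
        // .../.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.11.1
    }

Note that the caching succeeds even though the VM never starts; the image pulls and the host creation run concurrently, which is why cache.go lines are interleaved with the createHost failure above.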
start_stop_delete_test.go:188: failed starting minikube -first start-. args "out/minikube-darwin-arm64 start -p no-preload-646000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=qemu2  --kubernetes-version=v1.30.0-beta.0": exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-646000 -n no-preload-646000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-646000 -n no-preload-646000: exit status 7 (67.301833ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "no-preload-646000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/no-preload/serial/FirstStart (9.92s)
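
Every VM creation in this run dies at the same step: socket_vmnet_client cannot reach the unix socket at /var/run/socket_vmnet, so the failure is environmental (no socket_vmnet daemon accepting connections on the agent) rather than specific to the code under test. A minimal Go sketch that reproduces just that probe; the socket path matches the SocketVMnetPath in the config above, everything else is illustrative:

    // probe_socket.go - not part of the minikube test suite; distinguishes
    // "daemon not listening" from other VM-start failures by dialling the
    // unix socket that socket_vmnet_client needs.
    package main

    import (
        "fmt"
        "net"
        "os"
        "time"
    )

    func main() {
        conn, err := net.DialTimeout("unix", "/var/run/socket_vmnet", 2*time.Second)
        if err != nil {
            // "connection refused" here reproduces the exact failure in this
            // log: the socket file may exist, but nothing is accepting on it.
            fmt.Fprintf(os.Stderr, "socket_vmnet not reachable: %v\n", err)
            os.Exit(1)
        }
        conn.Close()
        fmt.Println("socket_vmnet is accepting connections")
    }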

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/FirstStart (11.55s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-darwin-arm64 start -p default-k8s-diff-port-415000 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=qemu2  --kubernetes-version=v1.29.3
start_stop_delete_test.go:186: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p default-k8s-diff-port-415000 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=qemu2  --kubernetes-version=v1.29.3: exit status 80 (11.474783542s)

                                                
                                                
-- stdout --
	* [default-k8s-diff-port-415000] minikube v1.33.0-beta.0 on Darwin 14.3.1 (arm64)
	  - MINIKUBE_LOCATION=18485
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18485-6511/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18485-6511/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "default-k8s-diff-port-415000" primary control-plane node in "default-k8s-diff-port-415000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "default-k8s-diff-port-415000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0327 16:53:41.335545   10422 out.go:291] Setting OutFile to fd 1 ...
	I0327 16:53:41.335664   10422 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0327 16:53:41.335667   10422 out.go:304] Setting ErrFile to fd 2...
	I0327 16:53:41.335669   10422 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0327 16:53:41.335803   10422 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18485-6511/.minikube/bin
	I0327 16:53:41.336838   10422 out.go:298] Setting JSON to false
	I0327 16:53:41.353221   10422 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":6792,"bootTime":1711576829,"procs":477,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.3.1","kernelVersion":"23.3.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0327 16:53:41.353292   10422 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0327 16:53:41.356979   10422 out.go:177] * [default-k8s-diff-port-415000] minikube v1.33.0-beta.0 on Darwin 14.3.1 (arm64)
	I0327 16:53:41.363908   10422 out.go:177]   - MINIKUBE_LOCATION=18485
	I0327 16:53:41.363940   10422 notify.go:220] Checking for updates...
	I0327 16:53:41.370961   10422 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/18485-6511/kubeconfig
	I0327 16:53:41.373980   10422 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0327 16:53:41.376985   10422 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0327 16:53:41.380011   10422 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18485-6511/.minikube
	I0327 16:53:41.382979   10422 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0327 16:53:41.386321   10422 config.go:182] Loaded profile config "multinode-266000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.29.3
	I0327 16:53:41.386389   10422 config.go:182] Loaded profile config "no-preload-646000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.0-beta.0
	I0327 16:53:41.386442   10422 driver.go:392] Setting default libvirt URI to qemu:///system
	I0327 16:53:41.390979   10422 out.go:177] * Using the qemu2 driver based on user configuration
	I0327 16:53:41.397918   10422 start.go:297] selected driver: qemu2
	I0327 16:53:41.397924   10422 start.go:901] validating driver "qemu2" against <nil>
	I0327 16:53:41.397932   10422 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0327 16:53:41.400247   10422 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0327 16:53:41.402970   10422 out.go:177] * Automatically selected the socket_vmnet network
	I0327 16:53:41.405940   10422 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0327 16:53:41.406001   10422 cni.go:84] Creating CNI manager for ""
	I0327 16:53:41.406008   10422 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0327 16:53:41.406012   10422 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0327 16:53:41.406047   10422 start.go:340] cluster config:
	{Name:default-k8s-diff-port-415000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1711559786-18485@sha256:2dcab64da240d825290a528fa79ad3c32db45fe5f8be5150468234a7114eff82 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.3 ClusterName:default-k8s-diff-port-415000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8444 KubernetesVersion:v1.29.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0327 16:53:41.410637   10422 iso.go:125] acquiring lock: {Name:mk26a7f41e004242919e6d5076ec0f9645f820ad Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0327 16:53:41.417948   10422 out.go:177] * Starting "default-k8s-diff-port-415000" primary control-plane node in "default-k8s-diff-port-415000" cluster
	I0327 16:53:41.421947   10422 preload.go:132] Checking if preload exists for k8s version v1.29.3 and runtime docker
	I0327 16:53:41.421964   10422 preload.go:147] Found local preload: /Users/jenkins/minikube-integration/18485-6511/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.29.3-docker-overlay2-arm64.tar.lz4
	I0327 16:53:41.421983   10422 cache.go:56] Caching tarball of preloaded images
	I0327 16:53:41.422042   10422 preload.go:173] Found /Users/jenkins/minikube-integration/18485-6511/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.29.3-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0327 16:53:41.422047   10422 cache.go:59] Finished verifying existence of preloaded tar for v1.29.3 on docker
	I0327 16:53:41.422131   10422 profile.go:142] Saving config to /Users/jenkins/minikube-integration/18485-6511/.minikube/profiles/default-k8s-diff-port-415000/config.json ...
	I0327 16:53:41.422142   10422 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/18485-6511/.minikube/profiles/default-k8s-diff-port-415000/config.json: {Name:mk0e23010b7ae9f2174e447664c91de2a3644e27 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0327 16:53:41.422362   10422 start.go:360] acquireMachinesLock for default-k8s-diff-port-415000: {Name:mk6b67ca98ff36f470ae98389325f11bd4950dc8 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0327 16:53:42.770775   10422 start.go:364] duration metric: took 1.348412875s to acquireMachinesLock for "default-k8s-diff-port-415000"
	I0327 16:53:42.770942   10422 start.go:93] Provisioning new machine with config: &{Name:default-k8s-diff-port-415000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18485/minikube-v1.33.0-1711559712-18485-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1711559786-18485@sha256:2dcab64da240d825290a528fa79ad3c32db45fe5f8be5150468234a7114eff82 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.3 ClusterName:default-k8s-diff-port-415000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8444 KubernetesVersion:v1.29.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8444 KubernetesVersion:v1.29.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0327 16:53:42.771249   10422 start.go:125] createHost starting for "" (driver="qemu2")
	I0327 16:53:42.780628   10422 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0327 16:53:42.830743   10422 start.go:159] libmachine.API.Create for "default-k8s-diff-port-415000" (driver="qemu2")
	I0327 16:53:42.830787   10422 client.go:168] LocalClient.Create starting
	I0327 16:53:42.830929   10422 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18485-6511/.minikube/certs/ca.pem
	I0327 16:53:42.830985   10422 main.go:141] libmachine: Decoding PEM data...
	I0327 16:53:42.831007   10422 main.go:141] libmachine: Parsing certificate...
	I0327 16:53:42.831087   10422 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18485-6511/.minikube/certs/cert.pem
	I0327 16:53:42.831129   10422 main.go:141] libmachine: Decoding PEM data...
	I0327 16:53:42.831144   10422 main.go:141] libmachine: Parsing certificate...
	I0327 16:53:42.831855   10422 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/18485-6511/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/18485-6511/.minikube/cache/iso/arm64/minikube-v1.33.0-1711559712-18485-arm64.iso...
	I0327 16:53:42.990267   10422 main.go:141] libmachine: Creating SSH key...
	I0327 16:53:43.316734   10422 main.go:141] libmachine: Creating Disk image...
	I0327 16:53:43.316745   10422 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0327 16:53:43.316934   10422 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/18485-6511/.minikube/machines/default-k8s-diff-port-415000/disk.qcow2.raw /Users/jenkins/minikube-integration/18485-6511/.minikube/machines/default-k8s-diff-port-415000/disk.qcow2
	I0327 16:53:43.329828   10422 main.go:141] libmachine: STDOUT: 
	I0327 16:53:43.329851   10422 main.go:141] libmachine: STDERR: 
	I0327 16:53:43.329920   10422 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/18485-6511/.minikube/machines/default-k8s-diff-port-415000/disk.qcow2 +20000M
	I0327 16:53:43.341071   10422 main.go:141] libmachine: STDOUT: Image resized.
	
	I0327 16:53:43.341088   10422 main.go:141] libmachine: STDERR: 
	I0327 16:53:43.341120   10422 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/18485-6511/.minikube/machines/default-k8s-diff-port-415000/disk.qcow2.raw and /Users/jenkins/minikube-integration/18485-6511/.minikube/machines/default-k8s-diff-port-415000/disk.qcow2
	I0327 16:53:43.341128   10422 main.go:141] libmachine: Starting QEMU VM...
	I0327 16:53:43.341160   10422 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18485-6511/.minikube/machines/default-k8s-diff-port-415000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18485-6511/.minikube/machines/default-k8s-diff-port-415000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18485-6511/.minikube/machines/default-k8s-diff-port-415000/qemu.pid -device virtio-net-pci,netdev=net0,mac=6e:41:59:6e:e3:f7 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18485-6511/.minikube/machines/default-k8s-diff-port-415000/disk.qcow2
	I0327 16:53:43.342963   10422 main.go:141] libmachine: STDOUT: 
	I0327 16:53:43.342978   10422 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0327 16:53:43.343003   10422 client.go:171] duration metric: took 512.225ms to LocalClient.Create
	I0327 16:53:45.345150   10422 start.go:128] duration metric: took 2.573951917s to createHost
	I0327 16:53:45.345205   10422 start.go:83] releasing machines lock for "default-k8s-diff-port-415000", held for 2.574415584s
	W0327 16:53:45.345299   10422 start.go:713] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0327 16:53:45.363025   10422 out.go:177] * Deleting "default-k8s-diff-port-415000" in qemu2 ...
	W0327 16:53:45.393706   10422 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0327 16:53:45.393736   10422 start.go:728] Will try again in 5 seconds ...
	I0327 16:53:50.395639   10422 start.go:360] acquireMachinesLock for default-k8s-diff-port-415000: {Name:mk6b67ca98ff36f470ae98389325f11bd4950dc8 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0327 16:53:50.395719   10422 start.go:364] duration metric: took 61.125µs to acquireMachinesLock for "default-k8s-diff-port-415000"
	I0327 16:53:50.395745   10422 start.go:93] Provisioning new machine with config: &{Name:default-k8s-diff-port-415000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18485/minikube-v1.33.0-1711559712-18485-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1711559786-18485@sha256:2dcab64da240d825290a528fa79ad3c32db45fe5f8be5150468234a7114eff82 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.3 ClusterName:default-k8s-diff-port-415000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8444 KubernetesVersion:v1.29.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8444 KubernetesVersion:v1.29.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0327 16:53:50.395791   10422 start.go:125] createHost starting for "" (driver="qemu2")
	I0327 16:53:50.403030   10422 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0327 16:53:50.418984   10422 start.go:159] libmachine.API.Create for "default-k8s-diff-port-415000" (driver="qemu2")
	I0327 16:53:50.419025   10422 client.go:168] LocalClient.Create starting
	I0327 16:53:50.419130   10422 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18485-6511/.minikube/certs/ca.pem
	I0327 16:53:50.419161   10422 main.go:141] libmachine: Decoding PEM data...
	I0327 16:53:50.419170   10422 main.go:141] libmachine: Parsing certificate...
	I0327 16:53:50.419211   10422 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18485-6511/.minikube/certs/cert.pem
	I0327 16:53:50.419225   10422 main.go:141] libmachine: Decoding PEM data...
	I0327 16:53:50.419234   10422 main.go:141] libmachine: Parsing certificate...
	I0327 16:53:50.419554   10422 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/18485-6511/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/18485-6511/.minikube/cache/iso/arm64/minikube-v1.33.0-1711559712-18485-arm64.iso...
	I0327 16:53:50.579005   10422 main.go:141] libmachine: Creating SSH key...
	I0327 16:53:50.709056   10422 main.go:141] libmachine: Creating Disk image...
	I0327 16:53:50.709061   10422 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0327 16:53:50.709230   10422 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/18485-6511/.minikube/machines/default-k8s-diff-port-415000/disk.qcow2.raw /Users/jenkins/minikube-integration/18485-6511/.minikube/machines/default-k8s-diff-port-415000/disk.qcow2
	I0327 16:53:50.721402   10422 main.go:141] libmachine: STDOUT: 
	I0327 16:53:50.721419   10422 main.go:141] libmachine: STDERR: 
	I0327 16:53:50.721482   10422 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/18485-6511/.minikube/machines/default-k8s-diff-port-415000/disk.qcow2 +20000M
	I0327 16:53:50.732809   10422 main.go:141] libmachine: STDOUT: Image resized.
	
	I0327 16:53:50.732847   10422 main.go:141] libmachine: STDERR: 
	I0327 16:53:50.732856   10422 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/18485-6511/.minikube/machines/default-k8s-diff-port-415000/disk.qcow2.raw and /Users/jenkins/minikube-integration/18485-6511/.minikube/machines/default-k8s-diff-port-415000/disk.qcow2
	I0327 16:53:50.732862   10422 main.go:141] libmachine: Starting QEMU VM...
	I0327 16:53:50.732899   10422 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18485-6511/.minikube/machines/default-k8s-diff-port-415000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18485-6511/.minikube/machines/default-k8s-diff-port-415000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18485-6511/.minikube/machines/default-k8s-diff-port-415000/qemu.pid -device virtio-net-pci,netdev=net0,mac=ea:37:02:12:59:06 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18485-6511/.minikube/machines/default-k8s-diff-port-415000/disk.qcow2
	I0327 16:53:50.734872   10422 main.go:141] libmachine: STDOUT: 
	I0327 16:53:50.734897   10422 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0327 16:53:50.734912   10422 client.go:171] duration metric: took 315.868083ms to LocalClient.Create
	I0327 16:53:52.737199   10422 start.go:128] duration metric: took 2.34142875s to createHost
	I0327 16:53:52.737265   10422 start.go:83] releasing machines lock for "default-k8s-diff-port-415000", held for 2.341612208s
	W0327 16:53:52.737680   10422 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p default-k8s-diff-port-415000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p default-k8s-diff-port-415000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0327 16:53:52.750157   10422 out.go:177] 
	W0327 16:53:52.754392   10422 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0327 16:53:52.754443   10422 out.go:239] * 
	* 
	W0327 16:53:52.757084   10422 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0327 16:53:52.764328   10422 out.go:177] 

                                                
                                                
** /stderr **
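
Before each start attempt the driver prepares the guest disk with the two qemu-img invocations logged above: a raw-to-qcow2 convert followed by a +20000M resize. A self-contained Go sketch of that sequence, assuming qemu-img is on PATH and using shortened paths; the flags are exactly those shown in the log:

    // mkdisk.go - illustrative sketch of the disk-preparation steps the
    // qemu2 driver logs before starting the VM.
    package main

    import (
        "fmt"
        "os/exec"
    )

    func main() {
        raw, qcow := "disk.qcow2.raw", "disk.qcow2"
        for _, args := range [][]string{
            {"convert", "-f", "raw", "-O", "qcow2", raw, qcow},
            {"resize", qcow, "+20000M"},
        } {
            out, err := exec.Command("qemu-img", args...).CombinedOutput()
            fmt.Printf("qemu-img %v: %s err=%v\n", args, out, err)
        }
    }

Both steps succeed in the log ("Image resized.", empty STDERR); the run only fails afterwards, when socket_vmnet_client is invoked to wrap qemu-system-aarch64.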
start_stop_delete_test.go:188: failed starting minikube -first start-. args "out/minikube-darwin-arm64 start -p default-k8s-diff-port-415000 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=qemu2  --kubernetes-version=v1.29.3": exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-415000 -n default-k8s-diff-port-415000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-415000 -n default-k8s-diff-port-415000: exit status 7 (68.463458ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-415000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/FirstStart (11.55s)
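
Both FirstStart failures above show the same shape: the create fails, the half-built profile is deleted, minikube waits five seconds ("Will try again in 5 seconds ..."), retries once, and then exits with GUEST_PROVISION (exit status 80). A Go sketch of that fixed-delay, single-retry pattern; function names are illustrative, not minikube's:

    // retry.go - sketch of the start/delete/wait/retry shape visible in the
    // logs above.
    package main

    import (
        "errors"
        "fmt"
        "time"
    )

    func createHost() error {
        // stands in for libmachine.API.Create; in this run it always fails
        return errors.New(`Failed to connect to "/var/run/socket_vmnet": Connection refused`)
    }

    func startWithRetry() error {
        if err := createHost(); err != nil {
            fmt.Printf("! StartHost failed, but will try again: %v\n", err)
            // the real flow also deletes the partially created VM here
            time.Sleep(5 * time.Second)
            if err := createHost(); err != nil {
                return fmt.Errorf("error provisioning guest: Failed to start host: %w", err)
            }
        }
        return nil
    }

    func main() {
        if err := startWithRetry(); err != nil {
            fmt.Println("X Exiting due to GUEST_PROVISION:", err)
        }
    }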

                                                
                                    
TestStartStop/group/no-preload/serial/DeployApp (0.09s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context no-preload-646000 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) Non-zero exit: kubectl --context no-preload-646000 create -f testdata/busybox.yaml: exit status 1 (29.091042ms)

                                                
                                                
** stderr ** 
	error: context "no-preload-646000" does not exist

                                                
                                                
** /stderr **
start_stop_delete_test.go:196: kubectl --context no-preload-646000 create -f testdata/busybox.yaml failed: exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-646000 -n no-preload-646000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-646000 -n no-preload-646000: exit status 7 (31.463875ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "no-preload-646000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-646000 -n no-preload-646000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-646000 -n no-preload-646000: exit status 7 (30.903958ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "no-preload-646000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/no-preload/serial/DeployApp (0.09s)
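
The DeployApp failure is purely consequential: FirstStart never created the cluster, so no "no-preload-646000" context was ever written to the kubeconfig and every kubectl --context call fails immediately. A Go sketch of the kind of guard that makes this explicit, checking the kubeconfig for the context before running anything against it; kubectl and the context name come from the log, the guard itself is illustrative:

    // ctxcheck.go - illustrative pre-flight check for a kubeconfig context.
    package main

    import (
        "fmt"
        "os/exec"
        "strings"
    )

    func contextExists(name string) (bool, error) {
        out, err := exec.Command("kubectl", "config", "get-contexts", "-o", "name").Output()
        if err != nil {
            return false, err
        }
        for _, ctx := range strings.Fields(string(out)) {
            if ctx == name {
                return true, nil
            }
        }
        return false, nil
    }

    func main() {
        ok, err := contextExists("no-preload-646000")
        fmt.Println(ok, err) // false, <nil> in this run: the context was never created
    }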

                                                
                                    
TestStartStop/group/no-preload/serial/EnableAddonWhileActive (0.12s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-darwin-arm64 addons enable metrics-server -p no-preload-646000 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context no-preload-646000 describe deploy/metrics-server -n kube-system
start_stop_delete_test.go:215: (dbg) Non-zero exit: kubectl --context no-preload-646000 describe deploy/metrics-server -n kube-system: exit status 1 (27.5635ms)

                                                
                                                
** stderr ** 
	error: context "no-preload-646000" does not exist

                                                
                                                
** /stderr **
start_stop_delete_test.go:217: failed to get info on auto-pause deployments. args "kubectl --context no-preload-646000 describe deploy/metrics-server -n kube-system": exit status 1
start_stop_delete_test.go:221: addon did not load correct image. Expected to contain " fake.domain/registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-646000 -n no-preload-646000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-646000 -n no-preload-646000: exit status 7 (33.261417ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "no-preload-646000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/no-preload/serial/EnableAddonWhileActive (0.12s)
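
EnableAddonWhileActive passes --images=MetricsServer=registry.k8s.io/echoserver:1.4 and --registries=MetricsServer=fake.domain, then expects the metrics-server deployment to reference "fake.domain/registry.k8s.io/echoserver:1.4", i.e. the custom registry prefixed onto the custom image. A tiny Go sketch of that composition as the assertion implies it; the joining rule is read off the expected string above, not taken from minikube's addon code:

    // override.go - hypothetical helper mirroring the expected image string.
    package main

    import "fmt"

    func overrideImage(registry, image string) string {
        if registry == "" {
            return image
        }
        return registry + "/" + image
    }

    func main() {
        fmt.Println(overrideImage("fake.domain", "registry.k8s.io/echoserver:1.4"))
        // fake.domain/registry.k8s.io/echoserver:1.4
    }

Here the assertion never gets that far: the describe call fails on the missing context, so the "Addon deployment info" in the message above is empty.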

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/DeployApp (0.09s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context default-k8s-diff-port-415000 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) Non-zero exit: kubectl --context default-k8s-diff-port-415000 create -f testdata/busybox.yaml: exit status 1 (29.730583ms)

                                                
                                                
** stderr ** 
	error: context "default-k8s-diff-port-415000" does not exist

                                                
                                                
** /stderr **
start_stop_delete_test.go:196: kubectl --context default-k8s-diff-port-415000 create -f testdata/busybox.yaml failed: exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-415000 -n default-k8s-diff-port-415000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-415000 -n default-k8s-diff-port-415000: exit status 7 (30.83025ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-415000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-415000 -n default-k8s-diff-port-415000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-415000 -n default-k8s-diff-port-415000: exit status 7 (30.518ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-415000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/DeployApp (0.09s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (0.11s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-darwin-arm64 addons enable metrics-server -p default-k8s-diff-port-415000 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context default-k8s-diff-port-415000 describe deploy/metrics-server -n kube-system
start_stop_delete_test.go:215: (dbg) Non-zero exit: kubectl --context default-k8s-diff-port-415000 describe deploy/metrics-server -n kube-system: exit status 1 (26.761791ms)

                                                
                                                
** stderr ** 
	error: context "default-k8s-diff-port-415000" does not exist

                                                
                                                
** /stderr **
start_stop_delete_test.go:217: failed to get info on auto-pause deployments. args "kubectl --context default-k8s-diff-port-415000 describe deploy/metrics-server -n kube-system": exit status 1
start_stop_delete_test.go:221: addon did not load correct image. Expected to contain " fake.domain/registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-415000 -n default-k8s-diff-port-415000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-415000 -n default-k8s-diff-port-415000: exit status 7 (29.639625ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-415000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (0.11s)
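
Each post-mortem above probes the host with status --format={{.Host}} and tolerates a non-zero exit ("may be ok") when the printed state is Stopped. A Go sketch of that probe; that a stopped host yields exit status 7 is read off this log rather than asserted from minikube's documentation, and the binary path is the one the report uses:

    // statuscheck.go - sketch of the post-mortem host probe repeated above.
    package main

    import (
        "fmt"
        "os/exec"
        "strings"
    )

    func main() {
        cmd := exec.Command("out/minikube-darwin-arm64",
            "status", "--format={{.Host}}", "-p", "default-k8s-diff-port-415000")
        out, err := cmd.Output()
        state := strings.TrimSpace(string(out)) // "Stopped" in this run
        code := 0
        if exitErr, ok := err.(*exec.ExitError); ok {
            code = exitErr.ExitCode()
        }
        fmt.Printf("host=%q exit=%d\n", state, code) // host="Stopped" exit=7
    }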

                                                
                                    
TestStartStop/group/no-preload/serial/SecondStart (5.27s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-darwin-arm64 start -p no-preload-646000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=qemu2  --kubernetes-version=v1.30.0-beta.0
start_stop_delete_test.go:256: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p no-preload-646000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=qemu2  --kubernetes-version=v1.30.0-beta.0: exit status 80 (5.197629083s)

                                                
                                                
-- stdout --
	* [no-preload-646000] minikube v1.33.0-beta.0 on Darwin 14.3.1 (arm64)
	  - MINIKUBE_LOCATION=18485
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18485-6511/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18485-6511/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting "no-preload-646000" primary control-plane node in "no-preload-646000" cluster
	* Restarting existing qemu2 VM for "no-preload-646000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "no-preload-646000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0327 16:53:54.036588   10493 out.go:291] Setting OutFile to fd 1 ...
	I0327 16:53:54.036724   10493 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0327 16:53:54.036728   10493 out.go:304] Setting ErrFile to fd 2...
	I0327 16:53:54.036730   10493 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0327 16:53:54.036859   10493 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18485-6511/.minikube/bin
	I0327 16:53:54.038095   10493 out.go:298] Setting JSON to false
	I0327 16:53:54.054383   10493 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":6805,"bootTime":1711576829,"procs":477,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.3.1","kernelVersion":"23.3.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0327 16:53:54.054447   10493 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0327 16:53:54.059240   10493 out.go:177] * [no-preload-646000] minikube v1.33.0-beta.0 on Darwin 14.3.1 (arm64)
	I0327 16:53:54.071118   10493 out.go:177]   - MINIKUBE_LOCATION=18485
	I0327 16:53:54.066318   10493 notify.go:220] Checking for updates...
	I0327 16:53:54.079220   10493 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/18485-6511/kubeconfig
	I0327 16:53:54.087186   10493 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0327 16:53:54.090269   10493 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0327 16:53:54.093247   10493 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18485-6511/.minikube
	I0327 16:53:54.096224   10493 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0327 16:53:54.099539   10493 config.go:182] Loaded profile config "no-preload-646000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.0-beta.0
	I0327 16:53:54.099813   10493 driver.go:392] Setting default libvirt URI to qemu:///system
	I0327 16:53:54.104225   10493 out.go:177] * Using the qemu2 driver based on existing profile
	I0327 16:53:54.111286   10493 start.go:297] selected driver: qemu2
	I0327 16:53:54.111292   10493 start.go:901] validating driver "qemu2" against &{Name:no-preload-646000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18485/minikube-v1.33.0-1711559712-18485-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1711559786-18485@sha256:2dcab64da240d825290a528fa79ad3c32db45fe5f8be5150468234a7114eff82 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0-beta.0 ClusterName:no-preload-646000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.0-beta.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0327 16:53:54.111363   10493 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0327 16:53:54.113940   10493 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0327 16:53:54.113983   10493 cni.go:84] Creating CNI manager for ""
	I0327 16:53:54.113992   10493 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0327 16:53:54.114018   10493 start.go:340] cluster config:
	{Name:no-preload-646000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18485/minikube-v1.33.0-1711559712-18485-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1711559786-18485@sha256:2dcab64da240d825290a528fa79ad3c32db45fe5f8be5150468234a7114eff82 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0-beta.0 ClusterName:no-preload-646000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.0-beta.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0327 16:53:54.118676   10493 iso.go:125] acquiring lock: {Name:mk26a7f41e004242919e6d5076ec0f9645f820ad Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0327 16:53:54.126231   10493 out.go:177] * Starting "no-preload-646000" primary control-plane node in "no-preload-646000" cluster
	I0327 16:53:54.130202   10493 preload.go:132] Checking if preload exists for k8s version v1.30.0-beta.0 and runtime docker
	I0327 16:53:54.130277   10493 profile.go:142] Saving config to /Users/jenkins/minikube-integration/18485-6511/.minikube/profiles/no-preload-646000/config.json ...
	I0327 16:53:54.130305   10493 cache.go:107] acquiring lock: {Name:mk6a81e1e3dd88a2a0389ef0a64b9a2e49efa8b5 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0327 16:53:54.130350   10493 cache.go:107] acquiring lock: {Name:mk8ac76a2c02590722fb74ff656ba7f338769896 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0327 16:53:54.130381   10493 cache.go:107] acquiring lock: {Name:mk201fb92d0b2962e142f12c9ebf58826d55299c Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0327 16:53:54.130395   10493 cache.go:107] acquiring lock: {Name:mk0d72a4dcc87e9ba83cbe62ef7dec9d75dcf83e Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0327 16:53:54.130344   10493 cache.go:107] acquiring lock: {Name:mkce7d520389d5ae3dd8fe16aeb089e3f517557b Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0327 16:53:54.130442   10493 cache.go:115] /Users/jenkins/minikube-integration/18485-6511/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.30.0-beta.0 exists
	I0327 16:53:54.130449   10493 cache.go:96] cache image "registry.k8s.io/kube-proxy:v1.30.0-beta.0" -> "/Users/jenkins/minikube-integration/18485-6511/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.30.0-beta.0" took 54.208µs
	I0327 16:53:54.130454   10493 cache.go:115] /Users/jenkins/minikube-integration/18485-6511/.minikube/cache/images/arm64/registry.k8s.io/pause_3.9 exists
	I0327 16:53:54.130456   10493 cache.go:80] save to tar file registry.k8s.io/kube-proxy:v1.30.0-beta.0 -> /Users/jenkins/minikube-integration/18485-6511/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.30.0-beta.0 succeeded
	I0327 16:53:54.130385   10493 cache.go:115] /Users/jenkins/minikube-integration/18485-6511/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 exists
	I0327 16:53:54.130459   10493 cache.go:96] cache image "registry.k8s.io/pause:3.9" -> "/Users/jenkins/minikube-integration/18485-6511/.minikube/cache/images/arm64/registry.k8s.io/pause_3.9" took 124.5µs
	I0327 16:53:54.130464   10493 cache.go:80] save to tar file registry.k8s.io/pause:3.9 -> /Users/jenkins/minikube-integration/18485-6511/.minikube/cache/images/arm64/registry.k8s.io/pause_3.9 succeeded
	I0327 16:53:54.130454   10493 cache.go:115] /Users/jenkins/minikube-integration/18485-6511/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.30.0-beta.0 exists
	I0327 16:53:54.130472   10493 cache.go:96] cache image "registry.k8s.io/kube-apiserver:v1.30.0-beta.0" -> "/Users/jenkins/minikube-integration/18485-6511/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.30.0-beta.0" took 99.625µs
	I0327 16:53:54.130476   10493 cache.go:80] save to tar file registry.k8s.io/kube-apiserver:v1.30.0-beta.0 -> /Users/jenkins/minikube-integration/18485-6511/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.30.0-beta.0 succeeded
	I0327 16:53:54.130463   10493 cache.go:96] cache image "gcr.io/k8s-minikube/storage-provisioner:v5" -> "/Users/jenkins/minikube-integration/18485-6511/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5" took 161.167µs
	I0327 16:53:54.130481   10493 cache.go:80] save to tar file gcr.io/k8s-minikube/storage-provisioner:v5 -> /Users/jenkins/minikube-integration/18485-6511/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 succeeded
	I0327 16:53:54.130513   10493 cache.go:107] acquiring lock: {Name:mkc00b93b3e2c4a1d551a708dbc31bbbabcebe65 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0327 16:53:54.130528   10493 cache.go:107] acquiring lock: {Name:mk8519618f27dd3500bd2ceab659036c27a5c843 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0327 16:53:54.130531   10493 cache.go:107] acquiring lock: {Name:mk739a9dc85ae9465a9c9dcb5c673c82570aa62a Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0327 16:53:54.130554   10493 cache.go:115] /Users/jenkins/minikube-integration/18485-6511/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.30.0-beta.0 exists
	I0327 16:53:54.130561   10493 cache.go:96] cache image "registry.k8s.io/kube-controller-manager:v1.30.0-beta.0" -> "/Users/jenkins/minikube-integration/18485-6511/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.30.0-beta.0" took 257.542µs
	I0327 16:53:54.130566   10493 cache.go:80] save to tar file registry.k8s.io/kube-controller-manager:v1.30.0-beta.0 -> /Users/jenkins/minikube-integration/18485-6511/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.30.0-beta.0 succeeded
	I0327 16:53:54.130583   10493 cache.go:115] /Users/jenkins/minikube-integration/18485-6511/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.30.0-beta.0 exists
	I0327 16:53:54.130588   10493 cache.go:96] cache image "registry.k8s.io/kube-scheduler:v1.30.0-beta.0" -> "/Users/jenkins/minikube-integration/18485-6511/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.30.0-beta.0" took 132.834µs
	I0327 16:53:54.130595   10493 cache.go:80] save to tar file registry.k8s.io/kube-scheduler:v1.30.0-beta.0 -> /Users/jenkins/minikube-integration/18485-6511/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.30.0-beta.0 succeeded
	I0327 16:53:54.130600   10493 cache.go:115] /Users/jenkins/minikube-integration/18485-6511/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.11.1 exists
	I0327 16:53:54.130605   10493 image.go:134] retrieving image: registry.k8s.io/etcd:3.5.12-0
	I0327 16:53:54.130607   10493 cache.go:96] cache image "registry.k8s.io/coredns/coredns:v1.11.1" -> "/Users/jenkins/minikube-integration/18485-6511/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.11.1" took 155.959µs
	I0327 16:53:54.130613   10493 cache.go:80] save to tar file registry.k8s.io/coredns/coredns:v1.11.1 -> /Users/jenkins/minikube-integration/18485-6511/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.11.1 succeeded
	I0327 16:53:54.130730   10493 start.go:360] acquireMachinesLock for no-preload-646000: {Name:mk6b67ca98ff36f470ae98389325f11bd4950dc8 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0327 16:53:54.130758   10493 start.go:364] duration metric: took 21.458µs to acquireMachinesLock for "no-preload-646000"
	I0327 16:53:54.130768   10493 start.go:96] Skipping create...Using existing machine configuration
	I0327 16:53:54.130774   10493 fix.go:54] fixHost starting: 
	I0327 16:53:54.130903   10493 fix.go:112] recreateIfNeeded on no-preload-646000: state=Stopped err=<nil>
	W0327 16:53:54.130912   10493 fix.go:138] unexpected machine state, will restart: <nil>
	I0327 16:53:54.138146   10493 out.go:177] * Restarting existing qemu2 VM for "no-preload-646000" ...
	I0327 16:53:54.138896   10493 image.go:177] daemon lookup for registry.k8s.io/etcd:3.5.12-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.12-0
	I0327 16:53:54.142334   10493 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18485-6511/.minikube/machines/no-preload-646000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18485-6511/.minikube/machines/no-preload-646000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18485-6511/.minikube/machines/no-preload-646000/qemu.pid -device virtio-net-pci,netdev=net0,mac=be:ab:ff:a8:d3:c3 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18485-6511/.minikube/machines/no-preload-646000/disk.qcow2
	I0327 16:53:54.144573   10493 main.go:141] libmachine: STDOUT: 
	I0327 16:53:54.144611   10493 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0327 16:53:54.144637   10493 fix.go:56] duration metric: took 13.863208ms for fixHost
	I0327 16:53:54.144642   10493 start.go:83] releasing machines lock for "no-preload-646000", held for 13.880209ms
	W0327 16:53:54.144648   10493 start.go:713] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0327 16:53:54.144682   10493 out.go:239] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0327 16:53:54.144687   10493 start.go:728] Will try again in 5 seconds ...
	I0327 16:53:56.082190   10493 cache.go:162] opening:  /Users/jenkins/minikube-integration/18485-6511/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.12-0
	I0327 16:53:59.144936   10493 start.go:360] acquireMachinesLock for no-preload-646000: {Name:mk6b67ca98ff36f470ae98389325f11bd4950dc8 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0327 16:53:59.145278   10493 start.go:364] duration metric: took 249.667µs to acquireMachinesLock for "no-preload-646000"
	I0327 16:53:59.145410   10493 start.go:96] Skipping create...Using existing machine configuration
	I0327 16:53:59.145443   10493 fix.go:54] fixHost starting: 
	I0327 16:53:59.146113   10493 fix.go:112] recreateIfNeeded on no-preload-646000: state=Stopped err=<nil>
	W0327 16:53:59.146140   10493 fix.go:138] unexpected machine state, will restart: <nil>
	I0327 16:53:59.149522   10493 out.go:177] * Restarting existing qemu2 VM for "no-preload-646000" ...
	I0327 16:53:59.153772   10493 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18485-6511/.minikube/machines/no-preload-646000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18485-6511/.minikube/machines/no-preload-646000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18485-6511/.minikube/machines/no-preload-646000/qemu.pid -device virtio-net-pci,netdev=net0,mac=be:ab:ff:a8:d3:c3 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18485-6511/.minikube/machines/no-preload-646000/disk.qcow2
	I0327 16:53:59.164379   10493 main.go:141] libmachine: STDOUT: 
	I0327 16:53:59.164517   10493 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0327 16:53:59.164595   10493 fix.go:56] duration metric: took 19.153ms for fixHost
	I0327 16:53:59.164615   10493 start.go:83] releasing machines lock for "no-preload-646000", held for 19.316333ms
	W0327 16:53:59.164839   10493 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p no-preload-646000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p no-preload-646000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0327 16:53:59.174604   10493 out.go:177] 
	W0327 16:53:59.178647   10493 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0327 16:53:59.178762   10493 out.go:239] * 
	* 
	W0327 16:53:59.181392   10493 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0327 16:53:59.189438   10493 out.go:177] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:259: failed to start minikube post-stop. args "out/minikube-darwin-arm64 start -p no-preload-646000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=qemu2  --kubernetes-version=v1.30.0-beta.0": exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-646000 -n no-preload-646000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-646000 -n no-preload-646000: exit status 7 (66.306625ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "no-preload-646000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/no-preload/serial/SecondStart (5.27s)
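
Both restart attempts above fail on the same unix socket before qemu ever boots. A minimal preflight sketch that probes the SocketVMnetPath from the profile config before any retry; this is assumed diagnostic tooling, not part of minikube:

    // vmnetcheck.go: dial the socket_vmnet control socket; "connection refused"
    // means the daemon is not listening, so no VM restart can succeed.
    package main

    import (
        "fmt"
        "net"
        "os"
        "time"
    )

    func main() {
        // Path taken from SocketVMnetPath in the cluster config logged above.
        conn, err := net.DialTimeout("unix", "/var/run/socket_vmnet", 2*time.Second)
        if err != nil {
            fmt.Fprintf(os.Stderr, "socket_vmnet unreachable: %v\n", err)
            os.Exit(1)
        }
        conn.Close()
        fmt.Println("socket_vmnet is accepting connections")
    }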

TestStartStop/group/default-k8s-diff-port/serial/SecondStart (7.27s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-darwin-arm64 start -p default-k8s-diff-port-415000 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=qemu2  --kubernetes-version=v1.29.3
start_stop_delete_test.go:256: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p default-k8s-diff-port-415000 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=qemu2  --kubernetes-version=v1.29.3: exit status 80 (7.202473292s)

                                                
                                                
-- stdout --
	* [default-k8s-diff-port-415000] minikube v1.33.0-beta.0 on Darwin 14.3.1 (arm64)
	  - MINIKUBE_LOCATION=18485
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18485-6511/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18485-6511/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting "default-k8s-diff-port-415000" primary control-plane node in "default-k8s-diff-port-415000" cluster
	* Restarting existing qemu2 VM for "default-k8s-diff-port-415000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "default-k8s-diff-port-415000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0327 16:53:55.201198   10512 out.go:291] Setting OutFile to fd 1 ...
	I0327 16:53:55.201332   10512 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0327 16:53:55.201335   10512 out.go:304] Setting ErrFile to fd 2...
	I0327 16:53:55.201337   10512 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0327 16:53:55.201463   10512 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18485-6511/.minikube/bin
	I0327 16:53:55.202462   10512 out.go:298] Setting JSON to false
	I0327 16:53:55.218485   10512 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":6806,"bootTime":1711576829,"procs":477,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.3.1","kernelVersion":"23.3.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0327 16:53:55.218547   10512 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0327 16:53:55.223030   10512 out.go:177] * [default-k8s-diff-port-415000] minikube v1.33.0-beta.0 on Darwin 14.3.1 (arm64)
	I0327 16:53:55.230960   10512 out.go:177]   - MINIKUBE_LOCATION=18485
	I0327 16:53:55.232300   10512 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/18485-6511/kubeconfig
	I0327 16:53:55.231013   10512 notify.go:220] Checking for updates...
	I0327 16:53:55.238983   10512 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0327 16:53:55.242022   10512 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0327 16:53:55.245027   10512 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18485-6511/.minikube
	I0327 16:53:55.247963   10512 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0327 16:53:55.251312   10512 config.go:182] Loaded profile config "default-k8s-diff-port-415000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.29.3
	I0327 16:53:55.251585   10512 driver.go:392] Setting default libvirt URI to qemu:///system
	I0327 16:53:55.255912   10512 out.go:177] * Using the qemu2 driver based on existing profile
	I0327 16:53:55.262988   10512 start.go:297] selected driver: qemu2
	I0327 16:53:55.262995   10512 start.go:901] validating driver "qemu2" against &{Name:default-k8s-diff-port-415000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18485/minikube-v1.33.0-1711559712-18485-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1711559786-18485@sha256:2dcab64da240d825290a528fa79ad3c32db45fe5f8be5150468234a7114eff82 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.3 ClusterName:default-k8s-diff-port-415000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8444 KubernetesVersion:v1.29.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0327 16:53:55.263074   10512 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0327 16:53:55.265346   10512 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0327 16:53:55.265391   10512 cni.go:84] Creating CNI manager for ""
	I0327 16:53:55.265398   10512 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0327 16:53:55.265427   10512 start.go:340] cluster config:
	{Name:default-k8s-diff-port-415000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18485/minikube-v1.33.0-1711559712-18485-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1711559786-18485@sha256:2dcab64da240d825290a528fa79ad3c32db45fe5f8be5150468234a7114eff82 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.3 ClusterName:default-k8s-diff-port-415000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8444 KubernetesVersion:v1.29.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0327 16:53:55.269714   10512 iso.go:125] acquiring lock: {Name:mk26a7f41e004242919e6d5076ec0f9645f820ad Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0327 16:53:55.277016   10512 out.go:177] * Starting "default-k8s-diff-port-415000" primary control-plane node in "default-k8s-diff-port-415000" cluster
	I0327 16:53:55.281009   10512 preload.go:132] Checking if preload exists for k8s version v1.29.3 and runtime docker
	I0327 16:53:55.281024   10512 preload.go:147] Found local preload: /Users/jenkins/minikube-integration/18485-6511/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.29.3-docker-overlay2-arm64.tar.lz4
	I0327 16:53:55.281032   10512 cache.go:56] Caching tarball of preloaded images
	I0327 16:53:55.281105   10512 preload.go:173] Found /Users/jenkins/minikube-integration/18485-6511/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.29.3-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0327 16:53:55.281111   10512 cache.go:59] Finished verifying existence of preloaded tar for v1.29.3 on docker
	I0327 16:53:55.281169   10512 profile.go:142] Saving config to /Users/jenkins/minikube-integration/18485-6511/.minikube/profiles/default-k8s-diff-port-415000/config.json ...
	I0327 16:53:55.281648   10512 start.go:360] acquireMachinesLock for default-k8s-diff-port-415000: {Name:mk6b67ca98ff36f470ae98389325f11bd4950dc8 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0327 16:53:55.281674   10512 start.go:364] duration metric: took 20.083µs to acquireMachinesLock for "default-k8s-diff-port-415000"
	I0327 16:53:55.281683   10512 start.go:96] Skipping create...Using existing machine configuration
	I0327 16:53:55.281688   10512 fix.go:54] fixHost starting: 
	I0327 16:53:55.281806   10512 fix.go:112] recreateIfNeeded on default-k8s-diff-port-415000: state=Stopped err=<nil>
	W0327 16:53:55.281815   10512 fix.go:138] unexpected machine state, will restart: <nil>
	I0327 16:53:55.285018   10512 out.go:177] * Restarting existing qemu2 VM for "default-k8s-diff-port-415000" ...
	I0327 16:53:55.293076   10512 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18485-6511/.minikube/machines/default-k8s-diff-port-415000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18485-6511/.minikube/machines/default-k8s-diff-port-415000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18485-6511/.minikube/machines/default-k8s-diff-port-415000/qemu.pid -device virtio-net-pci,netdev=net0,mac=ea:37:02:12:59:06 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18485-6511/.minikube/machines/default-k8s-diff-port-415000/disk.qcow2
	I0327 16:53:55.295109   10512 main.go:141] libmachine: STDOUT: 
	I0327 16:53:55.295132   10512 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0327 16:53:55.295161   10512 fix.go:56] duration metric: took 13.474458ms for fixHost
	I0327 16:53:55.295165   10512 start.go:83] releasing machines lock for "default-k8s-diff-port-415000", held for 13.488042ms
	W0327 16:53:55.295173   10512 start.go:713] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0327 16:53:55.295206   10512 out.go:239] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0327 16:53:55.295211   10512 start.go:728] Will try again in 5 seconds ...
	I0327 16:54:00.295830   10512 start.go:360] acquireMachinesLock for default-k8s-diff-port-415000: {Name:mk6b67ca98ff36f470ae98389325f11bd4950dc8 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0327 16:54:02.298947   10512 start.go:364] duration metric: took 2.003141375s to acquireMachinesLock for "default-k8s-diff-port-415000"
	I0327 16:54:02.299079   10512 start.go:96] Skipping create...Using existing machine configuration
	I0327 16:54:02.299137   10512 fix.go:54] fixHost starting: 
	I0327 16:54:02.299837   10512 fix.go:112] recreateIfNeeded on default-k8s-diff-port-415000: state=Stopped err=<nil>
	W0327 16:54:02.299862   10512 fix.go:138] unexpected machine state, will restart: <nil>
	I0327 16:54:02.305485   10512 out.go:177] * Restarting existing qemu2 VM for "default-k8s-diff-port-415000" ...
	I0327 16:54:02.319558   10512 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18485-6511/.minikube/machines/default-k8s-diff-port-415000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18485-6511/.minikube/machines/default-k8s-diff-port-415000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18485-6511/.minikube/machines/default-k8s-diff-port-415000/qemu.pid -device virtio-net-pci,netdev=net0,mac=ea:37:02:12:59:06 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18485-6511/.minikube/machines/default-k8s-diff-port-415000/disk.qcow2
	I0327 16:54:02.330691   10512 main.go:141] libmachine: STDOUT: 
	I0327 16:54:02.330774   10512 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0327 16:54:02.330857   10512 fix.go:56] duration metric: took 31.726833ms for fixHost
	I0327 16:54:02.330875   10512 start.go:83] releasing machines lock for "default-k8s-diff-port-415000", held for 31.89375ms
	W0327 16:54:02.331128   10512 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p default-k8s-diff-port-415000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p default-k8s-diff-port-415000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0327 16:54:02.338299   10512 out.go:177] 
	W0327 16:54:02.343508   10512 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0327 16:54:02.343529   10512 out.go:239] * 
	* 
	W0327 16:54:02.345581   10512 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0327 16:54:02.358340   10512 out.go:177] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:259: failed to start minikube post-stop. args "out/minikube-darwin-arm64 start -p default-k8s-diff-port-415000 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=qemu2  --kubernetes-version=v1.29.3": exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-415000 -n default-k8s-diff-port-415000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-415000 -n default-k8s-diff-port-415000: exit status 7 (61.629958ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-415000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/SecondStart (7.27s)
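
Both SecondStart logs show the same two-attempt shape: fixHost fails, minikube logs "Will try again in 5 seconds", retries once, then exits 80 with GUEST_PROVISION. A hedged sketch of that control flow; startHost here is a stand-in wired to fail the way the log does, not minikube's real driver code:

    // startretry.go: sketch of the one-retry start loop visible in the logs.
    package main

    import (
        "errors"
        "fmt"
        "os"
        "time"
    )

    // startHost stands in for the driver start; it always fails like the log.
    func startHost() error {
        return errors.New(`Failed to connect to "/var/run/socket_vmnet": Connection refused`)
    }

    func main() {
        err := startHost()
        if err == nil {
            return
        }
        fmt.Printf("! StartHost failed, but will try again: %v\n", err)
        time.Sleep(5 * time.Second)
        if err := startHost(); err != nil {
            fmt.Printf("* Failed to start qemu2 VM: %v\n", err)
            os.Exit(80) // matches the exit status 80 reported above
        }
    }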

TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (0.03s)

=== RUN   TestStartStop/group/no-preload/serial/UserAppExistsAfterStop
start_stop_delete_test.go:275: failed waiting for 'addon dashboard' pod post-stop-start: client config: context "no-preload-646000" does not exist
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-646000 -n no-preload-646000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-646000 -n no-preload-646000: exit status 7 (33.134208ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "no-preload-646000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (0.03s)
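
The test never gets as far as the dashboard pod: the failed SecondStart left no kubeconfig context for the profile, so every client call fails up front. A small sketch of that precondition, assuming only kubectl on PATH (not the suite's helper):

    // contextcheck.go: list kubeconfig contexts and look for the profile's.
    package main

    import (
        "bufio"
        "bytes"
        "fmt"
        "os"
        "os/exec"
    )

    func main() {
        out, err := exec.Command("kubectl", "config", "get-contexts", "-o", "name").Output()
        if err != nil {
            fmt.Fprintf(os.Stderr, "kubectl: %v\n", err)
            os.Exit(1)
        }
        sc := bufio.NewScanner(bytes.NewReader(out))
        for sc.Scan() {
            if sc.Text() == "no-preload-646000" {
                fmt.Println("context exists")
                return
            }
        }
        // Matches the failure above: minikube only writes the kubeconfig entry
        // once a start succeeds, and neither start attempt here did.
        fmt.Fprintln(os.Stderr, `context "no-preload-646000" does not exist`)
        os.Exit(1)
    }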

TestStartStop/group/no-preload/serial/AddonExistsAfterStop (0.06s)

=== RUN   TestStartStop/group/no-preload/serial/AddonExistsAfterStop
start_stop_delete_test.go:288: failed waiting for 'addon dashboard' pod post-stop-start: client config: context "no-preload-646000" does not exist
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context no-preload-646000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
start_stop_delete_test.go:291: (dbg) Non-zero exit: kubectl --context no-preload-646000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard: exit status 1 (26.465292ms)

                                                
                                                
** stderr ** 
	error: context "no-preload-646000" does not exist

                                                
                                                
** /stderr **
start_stop_delete_test.go:293: failed to get info on kubernetes-dashboard deployments. args "kubectl --context no-preload-646000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard": exit status 1
start_stop_delete_test.go:297: addon did not load correct image. Expected to contain " registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-646000 -n no-preload-646000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-646000 -n no-preload-646000: exit status 7 (30.729833ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "no-preload-646000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/no-preload/serial/AddonExistsAfterStop (0.06s)

TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.07s)

=== RUN   TestStartStop/group/no-preload/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-darwin-arm64 -p no-preload-646000 image list --format=json
start_stop_delete_test.go:304: v1.30.0-beta.0 images missing (-want +got):
  []string{
- 	"gcr.io/k8s-minikube/storage-provisioner:v5",
- 	"registry.k8s.io/coredns/coredns:v1.11.1",
- 	"registry.k8s.io/etcd:3.5.12-0",
- 	"registry.k8s.io/kube-apiserver:v1.30.0-beta.0",
- 	"registry.k8s.io/kube-controller-manager:v1.30.0-beta.0",
- 	"registry.k8s.io/kube-proxy:v1.30.0-beta.0",
- 	"registry.k8s.io/kube-scheduler:v1.30.0-beta.0",
- 	"registry.k8s.io/pause:3.9",
  }
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-646000 -n no-preload-646000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-646000 -n no-preload-646000: exit status 7 (30.795291ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "no-preload-646000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.07s)
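
The "(-want +got)" block is a want/got diff of image lists: against a stopped VM, "image list" returns nothing, so every expected image surfaces as a "-" line. A sketch of producing such a diff with github.com/google/go-cmp; the library choice is an assumption, and the suite's real comparison helper may differ:

    // imagediff.go: diff an expected image list against an empty result.
    package main

    import (
        "fmt"

        "github.com/google/go-cmp/cmp"
    )

    func main() {
        want := []string{
            "gcr.io/k8s-minikube/storage-provisioner:v5",
            "registry.k8s.io/coredns/coredns:v1.11.1",
            "registry.k8s.io/etcd:3.5.12-0",
            "registry.k8s.io/kube-apiserver:v1.30.0-beta.0",
            "registry.k8s.io/pause:3.9",
        }
        var got []string // "image list" produced no images for the stopped host
        if diff := cmp.Diff(want, got); diff != "" {
            fmt.Printf("images missing (-want +got):\n%s", diff)
        }
    }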

TestStartStop/group/no-preload/serial/Pause (0.1s)

=== RUN   TestStartStop/group/no-preload/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-arm64 pause -p no-preload-646000 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-darwin-arm64 pause -p no-preload-646000 --alsologtostderr -v=1: exit status 83 (41.675375ms)

                                                
                                                
-- stdout --
	* The control-plane node no-preload-646000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p no-preload-646000"

                                                
                                                
-- /stdout --
** stderr ** 
	I0327 16:53:59.465545   10531 out.go:291] Setting OutFile to fd 1 ...
	I0327 16:53:59.465691   10531 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0327 16:53:59.465694   10531 out.go:304] Setting ErrFile to fd 2...
	I0327 16:53:59.465697   10531 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0327 16:53:59.465828   10531 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18485-6511/.minikube/bin
	I0327 16:53:59.466047   10531 out.go:298] Setting JSON to false
	I0327 16:53:59.466056   10531 mustload.go:65] Loading cluster: no-preload-646000
	I0327 16:53:59.466287   10531 config.go:182] Loaded profile config "no-preload-646000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.0-beta.0
	I0327 16:53:59.470168   10531 out.go:177] * The control-plane node no-preload-646000 host is not running: state=Stopped
	I0327 16:53:59.473176   10531 out.go:177]   To start a cluster, run: "minikube start -p no-preload-646000"

                                                
                                                
** /stderr **
start_stop_delete_test.go:311: out/minikube-darwin-arm64 pause -p no-preload-646000 --alsologtostderr -v=1 failed: exit status 83
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-646000 -n no-preload-646000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-646000 -n no-preload-646000: exit status 7 (30.559167ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "no-preload-646000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-646000 -n no-preload-646000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-646000 -n no-preload-646000: exit status 7 (30.965375ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "no-preload-646000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/no-preload/serial/Pause (0.10s)
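
Pause exits 83 with the same "host is not running" advice the post-mortem then prints twice. A sketch of the status gate the helper effectively applies, with the binary path and profile name taken from the log (not the suite's actual code):

    // pausegate.go: check the host state before attempting pause.
    package main

    import (
        "fmt"
        "os"
        "os/exec"
        "strings"
    )

    func main() {
        // status exits non-zero for a stopped host (exit status 7 in the log)
        // but still prints the state, so the error is deliberately ignored.
        out, _ := exec.Command("out/minikube-darwin-arm64", "status",
            "--format={{.Host}}", "-p", "no-preload-646000").Output()
        state := strings.TrimSpace(string(out))
        if state != "Running" {
            fmt.Printf("host is not running, skipping pause (state=%q)\n", state)
            os.Exit(0)
        }
        if err := exec.Command("out/minikube-darwin-arm64", "pause",
            "-p", "no-preload-646000").Run(); err != nil {
            fmt.Fprintf(os.Stderr, "pause failed: %v\n", err)
            os.Exit(1)
        }
    }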

TestStartStop/group/newest-cni/serial/FirstStart (9.92s)

=== RUN   TestStartStop/group/newest-cni/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-darwin-arm64 start -p newest-cni-791000 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=qemu2  --kubernetes-version=v1.30.0-beta.0
start_stop_delete_test.go:186: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p newest-cni-791000 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=qemu2  --kubernetes-version=v1.30.0-beta.0: exit status 80 (9.846281709s)

                                                
                                                
-- stdout --
	* [newest-cni-791000] minikube v1.33.0-beta.0 on Darwin 14.3.1 (arm64)
	  - MINIKUBE_LOCATION=18485
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18485-6511/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18485-6511/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "newest-cni-791000" primary control-plane node in "newest-cni-791000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "newest-cni-791000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0327 16:53:59.937757   10554 out.go:291] Setting OutFile to fd 1 ...
	I0327 16:53:59.937891   10554 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0327 16:53:59.937894   10554 out.go:304] Setting ErrFile to fd 2...
	I0327 16:53:59.937897   10554 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0327 16:53:59.938039   10554 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18485-6511/.minikube/bin
	I0327 16:53:59.939116   10554 out.go:298] Setting JSON to false
	I0327 16:53:59.955150   10554 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":6810,"bootTime":1711576829,"procs":477,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.3.1","kernelVersion":"23.3.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0327 16:53:59.955217   10554 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0327 16:53:59.959666   10554 out.go:177] * [newest-cni-791000] minikube v1.33.0-beta.0 on Darwin 14.3.1 (arm64)
	I0327 16:53:59.966729   10554 out.go:177]   - MINIKUBE_LOCATION=18485
	I0327 16:53:59.966795   10554 notify.go:220] Checking for updates...
	I0327 16:53:59.970708   10554 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/18485-6511/kubeconfig
	I0327 16:53:59.973659   10554 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0327 16:53:59.976709   10554 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0327 16:53:59.979642   10554 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18485-6511/.minikube
	I0327 16:53:59.982674   10554 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0327 16:53:59.986072   10554 config.go:182] Loaded profile config "default-k8s-diff-port-415000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.29.3
	I0327 16:53:59.986131   10554 config.go:182] Loaded profile config "multinode-266000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.29.3
	I0327 16:53:59.986176   10554 driver.go:392] Setting default libvirt URI to qemu:///system
	I0327 16:53:59.990631   10554 out.go:177] * Using the qemu2 driver based on user configuration
	I0327 16:53:59.997731   10554 start.go:297] selected driver: qemu2
	I0327 16:53:59.997737   10554 start.go:901] validating driver "qemu2" against <nil>
	I0327 16:53:59.997743   10554 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0327 16:53:59.999866   10554 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	W0327 16:53:59.999898   10554 out.go:239] ! With --network-plugin=cni, you will need to provide your own CNI. See --cni flag as a user-friendly alternative
	! With --network-plugin=cni, you will need to provide your own CNI. See --cni flag as a user-friendly alternative
	I0327 16:54:00.008659   10554 out.go:177] * Automatically selected the socket_vmnet network
	I0327 16:54:00.011795   10554 start_flags.go:966] Waiting for components: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I0327 16:54:00.011823   10554 cni.go:84] Creating CNI manager for ""
	I0327 16:54:00.011830   10554 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0327 16:54:00.011834   10554 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0327 16:54:00.011869   10554 start.go:340] cluster config:
	{Name:newest-cni-791000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1711559786-18485@sha256:2dcab64da240d825290a528fa79ad3c32db45fe5f8be5150468234a7114eff82 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0-beta.0 ClusterName:newest-cni-791000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.0-beta.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0327 16:54:00.016382   10554 iso.go:125] acquiring lock: {Name:mk26a7f41e004242919e6d5076ec0f9645f820ad Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0327 16:54:00.023658   10554 out.go:177] * Starting "newest-cni-791000" primary control-plane node in "newest-cni-791000" cluster
	I0327 16:54:00.026635   10554 preload.go:132] Checking if preload exists for k8s version v1.30.0-beta.0 and runtime docker
	I0327 16:54:00.026648   10554 preload.go:147] Found local preload: /Users/jenkins/minikube-integration/18485-6511/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.0-beta.0-docker-overlay2-arm64.tar.lz4
	I0327 16:54:00.026663   10554 cache.go:56] Caching tarball of preloaded images
	I0327 16:54:00.026722   10554 preload.go:173] Found /Users/jenkins/minikube-integration/18485-6511/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.0-beta.0-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0327 16:54:00.026730   10554 cache.go:59] Finished verifying existence of preloaded tar for v1.30.0-beta.0 on docker
	I0327 16:54:00.026796   10554 profile.go:142] Saving config to /Users/jenkins/minikube-integration/18485-6511/.minikube/profiles/newest-cni-791000/config.json ...
	I0327 16:54:00.026807   10554 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/18485-6511/.minikube/profiles/newest-cni-791000/config.json: {Name:mk225e996db9a5140ae1601e9a84eaf6b7ce45bf Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0327 16:54:00.027033   10554 start.go:360] acquireMachinesLock for newest-cni-791000: {Name:mk6b67ca98ff36f470ae98389325f11bd4950dc8 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0327 16:54:00.027065   10554 start.go:364] duration metric: took 25.834µs to acquireMachinesLock for "newest-cni-791000"
	I0327 16:54:00.027079   10554 start.go:93] Provisioning new machine with config: &{Name:newest-cni-791000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18485/minikube-v1.33.0-1711559712-18485-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1711559786-18485@sha256:2dcab64da240d825290a528fa79ad3c32db45fe5f8be5150468234a7114eff82 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0-beta.0 ClusterName:newest-cni-791000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.0-beta.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.0-beta.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0327 16:54:00.027125   10554 start.go:125] createHost starting for "" (driver="qemu2")
	I0327 16:54:00.034581   10554 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0327 16:54:00.051301   10554 start.go:159] libmachine.API.Create for "newest-cni-791000" (driver="qemu2")
	I0327 16:54:00.051327   10554 client.go:168] LocalClient.Create starting
	I0327 16:54:00.051379   10554 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18485-6511/.minikube/certs/ca.pem
	I0327 16:54:00.051409   10554 main.go:141] libmachine: Decoding PEM data...
	I0327 16:54:00.051417   10554 main.go:141] libmachine: Parsing certificate...
	I0327 16:54:00.051462   10554 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18485-6511/.minikube/certs/cert.pem
	I0327 16:54:00.051483   10554 main.go:141] libmachine: Decoding PEM data...
	I0327 16:54:00.051494   10554 main.go:141] libmachine: Parsing certificate...
	I0327 16:54:00.051838   10554 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/18485-6511/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/18485-6511/.minikube/cache/iso/arm64/minikube-v1.33.0-1711559712-18485-arm64.iso...
	I0327 16:54:00.190627   10554 main.go:141] libmachine: Creating SSH key...
	I0327 16:54:00.270731   10554 main.go:141] libmachine: Creating Disk image...
	I0327 16:54:00.270737   10554 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0327 16:54:00.270905   10554 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/18485-6511/.minikube/machines/newest-cni-791000/disk.qcow2.raw /Users/jenkins/minikube-integration/18485-6511/.minikube/machines/newest-cni-791000/disk.qcow2
	I0327 16:54:00.283693   10554 main.go:141] libmachine: STDOUT: 
	I0327 16:54:00.283717   10554 main.go:141] libmachine: STDERR: 
	I0327 16:54:00.283772   10554 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/18485-6511/.minikube/machines/newest-cni-791000/disk.qcow2 +20000M
	I0327 16:54:00.294671   10554 main.go:141] libmachine: STDOUT: Image resized.
	
	I0327 16:54:00.294687   10554 main.go:141] libmachine: STDERR: 
	I0327 16:54:00.294704   10554 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/18485-6511/.minikube/machines/newest-cni-791000/disk.qcow2.raw and /Users/jenkins/minikube-integration/18485-6511/.minikube/machines/newest-cni-791000/disk.qcow2
	I0327 16:54:00.294708   10554 main.go:141] libmachine: Starting QEMU VM...
	I0327 16:54:00.294744   10554 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18485-6511/.minikube/machines/newest-cni-791000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18485-6511/.minikube/machines/newest-cni-791000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18485-6511/.minikube/machines/newest-cni-791000/qemu.pid -device virtio-net-pci,netdev=net0,mac=d2:c4:6b:77:c6:96 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18485-6511/.minikube/machines/newest-cni-791000/disk.qcow2
	I0327 16:54:00.296559   10554 main.go:141] libmachine: STDOUT: 
	I0327 16:54:00.296574   10554 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0327 16:54:00.296591   10554 client.go:171] duration metric: took 245.267375ms to LocalClient.Create
	I0327 16:54:02.298724   10554 start.go:128] duration metric: took 2.271650833s to createHost
	I0327 16:54:02.298811   10554 start.go:83] releasing machines lock for "newest-cni-791000", held for 2.27180975s
	W0327 16:54:02.298922   10554 start.go:713] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0327 16:54:02.315404   10554 out.go:177] * Deleting "newest-cni-791000" in qemu2 ...
	W0327 16:54:02.369591   10554 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0327 16:54:02.369632   10554 start.go:728] Will try again in 5 seconds ...
	I0327 16:54:07.371660   10554 start.go:360] acquireMachinesLock for newest-cni-791000: {Name:mk6b67ca98ff36f470ae98389325f11bd4950dc8 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0327 16:54:07.372053   10554 start.go:364] duration metric: took 288.416µs to acquireMachinesLock for "newest-cni-791000"
	I0327 16:54:07.372191   10554 start.go:93] Provisioning new machine with config: &{Name:newest-cni-791000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18485/minikube-v1.33.0-1711559712-18485-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1711559786-18485@sha256:2dcab64da240d825290a528fa79ad3c32db45fe5f8be5150468234a7114eff82 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0-beta.0 ClusterName:newest-cni-791000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.0-beta.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.0-beta.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0327 16:54:07.372500   10554 start.go:125] createHost starting for "" (driver="qemu2")
	I0327 16:54:07.381272   10554 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0327 16:54:07.430997   10554 start.go:159] libmachine.API.Create for "newest-cni-791000" (driver="qemu2")
	I0327 16:54:07.431069   10554 client.go:168] LocalClient.Create starting
	I0327 16:54:07.431172   10554 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18485-6511/.minikube/certs/ca.pem
	I0327 16:54:07.431241   10554 main.go:141] libmachine: Decoding PEM data...
	I0327 16:54:07.431254   10554 main.go:141] libmachine: Parsing certificate...
	I0327 16:54:07.431319   10554 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18485-6511/.minikube/certs/cert.pem
	I0327 16:54:07.431363   10554 main.go:141] libmachine: Decoding PEM data...
	I0327 16:54:07.431376   10554 main.go:141] libmachine: Parsing certificate...
	I0327 16:54:07.432570   10554 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/18485-6511/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/18485-6511/.minikube/cache/iso/arm64/minikube-v1.33.0-1711559712-18485-arm64.iso...
	I0327 16:54:07.588388   10554 main.go:141] libmachine: Creating SSH key...
	I0327 16:54:07.678194   10554 main.go:141] libmachine: Creating Disk image...
	I0327 16:54:07.678200   10554 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0327 16:54:07.678361   10554 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/18485-6511/.minikube/machines/newest-cni-791000/disk.qcow2.raw /Users/jenkins/minikube-integration/18485-6511/.minikube/machines/newest-cni-791000/disk.qcow2
	I0327 16:54:07.699015   10554 main.go:141] libmachine: STDOUT: 
	I0327 16:54:07.699044   10554 main.go:141] libmachine: STDERR: 
	I0327 16:54:07.699102   10554 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/18485-6511/.minikube/machines/newest-cni-791000/disk.qcow2 +20000M
	I0327 16:54:07.710217   10554 main.go:141] libmachine: STDOUT: Image resized.
	
	I0327 16:54:07.710235   10554 main.go:141] libmachine: STDERR: 
	I0327 16:54:07.710245   10554 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/18485-6511/.minikube/machines/newest-cni-791000/disk.qcow2.raw and /Users/jenkins/minikube-integration/18485-6511/.minikube/machines/newest-cni-791000/disk.qcow2
	I0327 16:54:07.710254   10554 main.go:141] libmachine: Starting QEMU VM...
	I0327 16:54:07.710292   10554 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18485-6511/.minikube/machines/newest-cni-791000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18485-6511/.minikube/machines/newest-cni-791000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18485-6511/.minikube/machines/newest-cni-791000/qemu.pid -device virtio-net-pci,netdev=net0,mac=52:af:70:63:c0:61 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18485-6511/.minikube/machines/newest-cni-791000/disk.qcow2
	I0327 16:54:07.711991   10554 main.go:141] libmachine: STDOUT: 
	I0327 16:54:07.712047   10554 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0327 16:54:07.712069   10554 client.go:171] duration metric: took 281.00425ms to LocalClient.Create
	I0327 16:54:09.714293   10554 start.go:128] duration metric: took 2.341816209s to createHost
	I0327 16:54:09.714367   10554 start.go:83] releasing machines lock for "newest-cni-791000", held for 2.342366333s
	W0327 16:54:09.714731   10554 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p newest-cni-791000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p newest-cni-791000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0327 16:54:09.724359   10554 out.go:177] 
	W0327 16:54:09.728372   10554 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0327 16:54:09.728408   10554 out.go:239] * 
	* 
	W0327 16:54:09.731028   10554 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0327 16:54:09.739331   10554 out.go:177] 

** /stderr **
start_stop_delete_test.go:188: failed starting minikube -first start-. args "out/minikube-darwin-arm64 start -p newest-cni-791000 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=qemu2  --kubernetes-version=v1.30.0-beta.0": exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-791000 -n newest-cni-791000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-791000 -n newest-cni-791000: exit status 7 (69.003541ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "newest-cni-791000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/newest-cni/serial/FirstStart (9.92s)
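Note: every qemu2 start failure in this report reduces to the same root cause: nothing is listening on /var/run/socket_vmnet, so the socket_vmnet_client invocation that libmachine wraps around qemu-system-aarch64 exits with "Connection refused" before the VM can boot. The standalone Go sketch below (illustrative only, not part of the minikube test suite; the file name and messages are ours) performs the same unix-socket dial and fails the same way when the socket_vmnet daemon is down.

    // probesocket.go — a minimal diagnostic sketch that dials the unix
    // socket the socket_vmnet daemon should be serving. A "connection
    // refused" here matches the error seen throughout this log.
    package main

    import (
    	"fmt"
    	"net"
    	"os"
    	"time"
    )

    func main() {
    	// SocketVMnetPath from the cluster config logged above.
    	const socketPath = "/var/run/socket_vmnet"
    	conn, err := net.DialTimeout("unix", socketPath, 2*time.Second)
    	if err != nil {
    		fmt.Fprintf(os.Stderr, "socket_vmnet not reachable at %s: %v\n", socketPath, err)
    		os.Exit(1)
    	}
    	defer conn.Close()
    	fmt.Printf("socket_vmnet is accepting connections at %s\n", socketPath)
    }

If the probe fails, restarting the socket_vmnet daemon on the CI host (it is typically installed and run as a root service) should clear this whole failure group.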

TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (0.03s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop
start_stop_delete_test.go:275: failed waiting for 'addon dashboard' pod post-stop-start: client config: context "default-k8s-diff-port-415000" does not exist
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-415000 -n default-k8s-diff-port-415000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-415000 -n default-k8s-diff-port-415000: exit status 7 (33.195125ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-415000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (0.03s)

TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (0.06s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop
start_stop_delete_test.go:288: failed waiting for 'addon dashboard' pod post-stop-start: client config: context "default-k8s-diff-port-415000" does not exist
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context default-k8s-diff-port-415000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
start_stop_delete_test.go:291: (dbg) Non-zero exit: kubectl --context default-k8s-diff-port-415000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard: exit status 1 (27.093625ms)

** stderr ** 
	error: context "default-k8s-diff-port-415000" does not exist

** /stderr **
start_stop_delete_test.go:293: failed to get info on kubernetes-dashboard deployments. args "kubectl --context default-k8s-diff-port-415000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard": exit status 1
start_stop_delete_test.go:297: addon did not load correct image. Expected to contain " registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-415000 -n default-k8s-diff-port-415000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-415000 -n default-k8s-diff-port-415000: exit status 7 (31.741292ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-415000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (0.06s)
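Note: the repeated `context "default-k8s-diff-port-415000" does not exist` errors are downstream of the earlier start failure: the VM never booted, so minikube never wrote the profile's context into the kubeconfig, and any client built from that context name fails immediately. A sketch of the kind of lookup that produces this error, using k8s.io/client-go (our illustration; the test's actual client construction may differ):

    // contextcheck.go — resolves a named kubeconfig context the way a
    // client would; with no such context present, ClientConfig() returns
    // an error of the form `context "NAME" does not exist`.
    package main

    import (
    	"fmt"
    	"os"

    	"k8s.io/client-go/tools/clientcmd"
    )

    func main() {
    	rules := clientcmd.NewDefaultClientConfigLoadingRules()
    	overrides := &clientcmd.ConfigOverrides{CurrentContext: "default-k8s-diff-port-415000"}
    	cfg := clientcmd.NewNonInteractiveDeferredLoadingClientConfig(rules, overrides)
    	if _, err := cfg.ClientConfig(); err != nil {
    		fmt.Fprintln(os.Stderr, "client config:", err)
    		os.Exit(1)
    	}
    	fmt.Println("context resolved")
    }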

TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.07s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-darwin-arm64 -p default-k8s-diff-port-415000 image list --format=json
start_stop_delete_test.go:304: v1.29.3 images missing (-want +got):
  []string{
- 	"gcr.io/k8s-minikube/storage-provisioner:v5",
- 	"registry.k8s.io/coredns/coredns:v1.11.1",
- 	"registry.k8s.io/etcd:3.5.12-0",
- 	"registry.k8s.io/kube-apiserver:v1.29.3",
- 	"registry.k8s.io/kube-controller-manager:v1.29.3",
- 	"registry.k8s.io/kube-proxy:v1.29.3",
- 	"registry.k8s.io/kube-scheduler:v1.29.3",
- 	"registry.k8s.io/pause:3.9",
  }
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-415000 -n default-k8s-diff-port-415000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-415000 -n default-k8s-diff-port-415000: exit status 7 (29.719125ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-415000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.07s)
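Note: the "(-want +got)" block above is a go-cmp style diff: the full expected v1.29.3 image set sits on the -want side and nothing on the +got side, because `image list --format=json` had no running VM to query. A minimal sketch reproducing the shape of that output (assuming github.com/google/go-cmp; whether the test uses exactly this helper is not shown in this log):

    package main

    import (
    	"fmt"

    	"github.com/google/go-cmp/cmp"
    )

    func main() {
    	// Abbreviated expected image set, taken from the diff above.
    	want := []string{
    		"gcr.io/k8s-minikube/storage-provisioner:v5",
    		"registry.k8s.io/kube-apiserver:v1.29.3",
    		"registry.k8s.io/pause:3.9",
    	}
    	var got []string // empty: `image list` found no images
    	if diff := cmp.Diff(want, got); diff != "" {
    		fmt.Printf("v1.29.3 images missing (-want +got):\n%s", diff)
    	}
    }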

TestStartStop/group/default-k8s-diff-port/serial/Pause (0.1s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-arm64 pause -p default-k8s-diff-port-415000 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-darwin-arm64 pause -p default-k8s-diff-port-415000 --alsologtostderr -v=1: exit status 83 (41.837833ms)

-- stdout --
	* The control-plane node default-k8s-diff-port-415000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p default-k8s-diff-port-415000"

-- /stdout --
** stderr ** 
	I0327 16:54:02.630297   10576 out.go:291] Setting OutFile to fd 1 ...
	I0327 16:54:02.630442   10576 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0327 16:54:02.630445   10576 out.go:304] Setting ErrFile to fd 2...
	I0327 16:54:02.630448   10576 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0327 16:54:02.630574   10576 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18485-6511/.minikube/bin
	I0327 16:54:02.630769   10576 out.go:298] Setting JSON to false
	I0327 16:54:02.630779   10576 mustload.go:65] Loading cluster: default-k8s-diff-port-415000
	I0327 16:54:02.630969   10576 config.go:182] Loaded profile config "default-k8s-diff-port-415000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.29.3
	I0327 16:54:02.635396   10576 out.go:177] * The control-plane node default-k8s-diff-port-415000 host is not running: state=Stopped
	I0327 16:54:02.639283   10576 out.go:177]   To start a cluster, run: "minikube start -p default-k8s-diff-port-415000"

** /stderr **
start_stop_delete_test.go:311: out/minikube-darwin-arm64 pause -p default-k8s-diff-port-415000 --alsologtostderr -v=1 failed: exit status 83
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-415000 -n default-k8s-diff-port-415000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-415000 -n default-k8s-diff-port-415000: exit status 7 (30.185917ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-415000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-415000 -n default-k8s-diff-port-415000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-415000 -n default-k8s-diff-port-415000: exit status 7 (30.4645ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-415000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/Pause (0.10s)

TestStartStop/group/embed-certs/serial/FirstStart (10.25s)

=== RUN   TestStartStop/group/embed-certs/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-darwin-arm64 start -p embed-certs-201000 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=qemu2  --kubernetes-version=v1.29.3
start_stop_delete_test.go:186: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p embed-certs-201000 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=qemu2  --kubernetes-version=v1.29.3: exit status 80 (10.185084958s)

-- stdout --
	* [embed-certs-201000] minikube v1.33.0-beta.0 on Darwin 14.3.1 (arm64)
	  - MINIKUBE_LOCATION=18485
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18485-6511/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18485-6511/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "embed-certs-201000" primary control-plane node in "embed-certs-201000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "embed-certs-201000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0327 16:54:03.345355   10611 out.go:291] Setting OutFile to fd 1 ...
	I0327 16:54:03.345509   10611 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0327 16:54:03.345512   10611 out.go:304] Setting ErrFile to fd 2...
	I0327 16:54:03.345514   10611 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0327 16:54:03.345636   10611 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18485-6511/.minikube/bin
	I0327 16:54:03.346683   10611 out.go:298] Setting JSON to false
	I0327 16:54:03.362861   10611 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":6814,"bootTime":1711576829,"procs":477,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.3.1","kernelVersion":"23.3.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0327 16:54:03.362920   10611 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0327 16:54:03.367071   10611 out.go:177] * [embed-certs-201000] minikube v1.33.0-beta.0 on Darwin 14.3.1 (arm64)
	I0327 16:54:03.381923   10611 out.go:177]   - MINIKUBE_LOCATION=18485
	I0327 16:54:03.378092   10611 notify.go:220] Checking for updates...
	I0327 16:54:03.390613   10611 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/18485-6511/kubeconfig
	I0327 16:54:03.392058   10611 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0327 16:54:03.395065   10611 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0327 16:54:03.398064   10611 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18485-6511/.minikube
	I0327 16:54:03.401043   10611 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0327 16:54:03.404406   10611 config.go:182] Loaded profile config "multinode-266000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.29.3
	I0327 16:54:03.404484   10611 config.go:182] Loaded profile config "newest-cni-791000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.0-beta.0
	I0327 16:54:03.404562   10611 driver.go:392] Setting default libvirt URI to qemu:///system
	I0327 16:54:03.409017   10611 out.go:177] * Using the qemu2 driver based on user configuration
	I0327 16:54:03.415965   10611 start.go:297] selected driver: qemu2
	I0327 16:54:03.415970   10611 start.go:901] validating driver "qemu2" against <nil>
	I0327 16:54:03.415976   10611 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0327 16:54:03.418285   10611 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0327 16:54:03.422031   10611 out.go:177] * Automatically selected the socket_vmnet network
	I0327 16:54:03.425161   10611 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0327 16:54:03.425201   10611 cni.go:84] Creating CNI manager for ""
	I0327 16:54:03.425210   10611 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0327 16:54:03.425217   10611 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0327 16:54:03.425242   10611 start.go:340] cluster config:
	{Name:embed-certs-201000 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1711559786-18485@sha256:2dcab64da240d825290a528fa79ad3c32db45fe5f8be5150468234a7114eff82 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.3 ClusterName:embed-certs-201000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0327 16:54:03.429848   10611 iso.go:125] acquiring lock: {Name:mk26a7f41e004242919e6d5076ec0f9645f820ad Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0327 16:54:03.437026   10611 out.go:177] * Starting "embed-certs-201000" primary control-plane node in "embed-certs-201000" cluster
	I0327 16:54:03.440015   10611 preload.go:132] Checking if preload exists for k8s version v1.29.3 and runtime docker
	I0327 16:54:03.440032   10611 preload.go:147] Found local preload: /Users/jenkins/minikube-integration/18485-6511/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.29.3-docker-overlay2-arm64.tar.lz4
	I0327 16:54:03.440045   10611 cache.go:56] Caching tarball of preloaded images
	I0327 16:54:03.440117   10611 preload.go:173] Found /Users/jenkins/minikube-integration/18485-6511/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.29.3-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0327 16:54:03.440123   10611 cache.go:59] Finished verifying existence of preloaded tar for v1.29.3 on docker
	I0327 16:54:03.440194   10611 profile.go:142] Saving config to /Users/jenkins/minikube-integration/18485-6511/.minikube/profiles/embed-certs-201000/config.json ...
	I0327 16:54:03.440206   10611 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/18485-6511/.minikube/profiles/embed-certs-201000/config.json: {Name:mk142e06481110af193c85d2c209c3ef6c1f3724 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0327 16:54:03.440452   10611 start.go:360] acquireMachinesLock for embed-certs-201000: {Name:mk6b67ca98ff36f470ae98389325f11bd4950dc8 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0327 16:54:03.440487   10611 start.go:364] duration metric: took 28.834µs to acquireMachinesLock for "embed-certs-201000"
	I0327 16:54:03.440502   10611 start.go:93] Provisioning new machine with config: &{Name:embed-certs-201000 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18485/minikube-v1.33.0-1711559712-18485-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1711559786-18485@sha256:2dcab64da240d825290a528fa79ad3c32db45fe5f8be5150468234a7114eff82 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.3 ClusterName:embed-certs-201000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0327 16:54:03.440532   10611 start.go:125] createHost starting for "" (driver="qemu2")
	I0327 16:54:03.447938   10611 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0327 16:54:03.466435   10611 start.go:159] libmachine.API.Create for "embed-certs-201000" (driver="qemu2")
	I0327 16:54:03.466470   10611 client.go:168] LocalClient.Create starting
	I0327 16:54:03.466535   10611 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18485-6511/.minikube/certs/ca.pem
	I0327 16:54:03.466565   10611 main.go:141] libmachine: Decoding PEM data...
	I0327 16:54:03.466575   10611 main.go:141] libmachine: Parsing certificate...
	I0327 16:54:03.466623   10611 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18485-6511/.minikube/certs/cert.pem
	I0327 16:54:03.466646   10611 main.go:141] libmachine: Decoding PEM data...
	I0327 16:54:03.466653   10611 main.go:141] libmachine: Parsing certificate...
	I0327 16:54:03.467040   10611 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/18485-6511/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/18485-6511/.minikube/cache/iso/arm64/minikube-v1.33.0-1711559712-18485-arm64.iso...
	I0327 16:54:03.608296   10611 main.go:141] libmachine: Creating SSH key...
	I0327 16:54:03.883711   10611 main.go:141] libmachine: Creating Disk image...
	I0327 16:54:03.883721   10611 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0327 16:54:03.883933   10611 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/18485-6511/.minikube/machines/embed-certs-201000/disk.qcow2.raw /Users/jenkins/minikube-integration/18485-6511/.minikube/machines/embed-certs-201000/disk.qcow2
	I0327 16:54:03.896829   10611 main.go:141] libmachine: STDOUT: 
	I0327 16:54:03.896848   10611 main.go:141] libmachine: STDERR: 
	I0327 16:54:03.896903   10611 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/18485-6511/.minikube/machines/embed-certs-201000/disk.qcow2 +20000M
	I0327 16:54:03.908992   10611 main.go:141] libmachine: STDOUT: Image resized.
	
	I0327 16:54:03.909009   10611 main.go:141] libmachine: STDERR: 
	I0327 16:54:03.909024   10611 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/18485-6511/.minikube/machines/embed-certs-201000/disk.qcow2.raw and /Users/jenkins/minikube-integration/18485-6511/.minikube/machines/embed-certs-201000/disk.qcow2
	I0327 16:54:03.909029   10611 main.go:141] libmachine: Starting QEMU VM...
	I0327 16:54:03.909082   10611 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18485-6511/.minikube/machines/embed-certs-201000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18485-6511/.minikube/machines/embed-certs-201000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18485-6511/.minikube/machines/embed-certs-201000/qemu.pid -device virtio-net-pci,netdev=net0,mac=ba:28:ff:7c:35:33 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18485-6511/.minikube/machines/embed-certs-201000/disk.qcow2
	I0327 16:54:03.910877   10611 main.go:141] libmachine: STDOUT: 
	I0327 16:54:03.910897   10611 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0327 16:54:03.910915   10611 client.go:171] duration metric: took 444.454166ms to LocalClient.Create
	I0327 16:54:05.913068   10611 start.go:128] duration metric: took 2.472592583s to createHost
	I0327 16:54:05.913166   10611 start.go:83] releasing machines lock for "embed-certs-201000", held for 2.472714125s
	W0327 16:54:05.913239   10611 start.go:713] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0327 16:54:05.922454   10611 out.go:177] * Deleting "embed-certs-201000" in qemu2 ...
	W0327 16:54:05.961247   10611 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0327 16:54:05.961280   10611 start.go:728] Will try again in 5 seconds ...
	I0327 16:54:10.963324   10611 start.go:360] acquireMachinesLock for embed-certs-201000: {Name:mk6b67ca98ff36f470ae98389325f11bd4950dc8 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0327 16:54:10.963681   10611 start.go:364] duration metric: took 259.625µs to acquireMachinesLock for "embed-certs-201000"
	I0327 16:54:10.963844   10611 start.go:93] Provisioning new machine with config: &{Name:embed-certs-201000 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18485/minikube-v1.33.0-1711559712-18485-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1711559786-18485@sha256:2dcab64da240d825290a528fa79ad3c32db45fe5f8be5150468234a7114eff82 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.3 ClusterName:embed-certs-201000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0327 16:54:10.964087   10611 start.go:125] createHost starting for "" (driver="qemu2")
	I0327 16:54:10.973867   10611 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0327 16:54:11.022629   10611 start.go:159] libmachine.API.Create for "embed-certs-201000" (driver="qemu2")
	I0327 16:54:11.022685   10611 client.go:168] LocalClient.Create starting
	I0327 16:54:11.022773   10611 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18485-6511/.minikube/certs/ca.pem
	I0327 16:54:11.022822   10611 main.go:141] libmachine: Decoding PEM data...
	I0327 16:54:11.022844   10611 main.go:141] libmachine: Parsing certificate...
	I0327 16:54:11.022908   10611 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18485-6511/.minikube/certs/cert.pem
	I0327 16:54:11.022935   10611 main.go:141] libmachine: Decoding PEM data...
	I0327 16:54:11.022947   10611 main.go:141] libmachine: Parsing certificate...
	I0327 16:54:11.023445   10611 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/18485-6511/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/18485-6511/.minikube/cache/iso/arm64/minikube-v1.33.0-1711559712-18485-arm64.iso...
	I0327 16:54:11.174005   10611 main.go:141] libmachine: Creating SSH key...
	I0327 16:54:11.426925   10611 main.go:141] libmachine: Creating Disk image...
	I0327 16:54:11.426933   10611 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0327 16:54:11.427123   10611 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/18485-6511/.minikube/machines/embed-certs-201000/disk.qcow2.raw /Users/jenkins/minikube-integration/18485-6511/.minikube/machines/embed-certs-201000/disk.qcow2
	I0327 16:54:11.439757   10611 main.go:141] libmachine: STDOUT: 
	I0327 16:54:11.439781   10611 main.go:141] libmachine: STDERR: 
	I0327 16:54:11.439833   10611 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/18485-6511/.minikube/machines/embed-certs-201000/disk.qcow2 +20000M
	I0327 16:54:11.450938   10611 main.go:141] libmachine: STDOUT: Image resized.
	
	I0327 16:54:11.450954   10611 main.go:141] libmachine: STDERR: 
	I0327 16:54:11.450965   10611 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/18485-6511/.minikube/machines/embed-certs-201000/disk.qcow2.raw and /Users/jenkins/minikube-integration/18485-6511/.minikube/machines/embed-certs-201000/disk.qcow2
	I0327 16:54:11.450977   10611 main.go:141] libmachine: Starting QEMU VM...
	I0327 16:54:11.451016   10611 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18485-6511/.minikube/machines/embed-certs-201000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18485-6511/.minikube/machines/embed-certs-201000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18485-6511/.minikube/machines/embed-certs-201000/qemu.pid -device virtio-net-pci,netdev=net0,mac=be:cb:76:a9:9c:a6 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18485-6511/.minikube/machines/embed-certs-201000/disk.qcow2
	I0327 16:54:11.452706   10611 main.go:141] libmachine: STDOUT: 
	I0327 16:54:11.452723   10611 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0327 16:54:11.452734   10611 client.go:171] duration metric: took 430.057041ms to LocalClient.Create
	I0327 16:54:13.454817   10611 start.go:128] duration metric: took 2.490777541s to createHost
	I0327 16:54:13.454867   10611 start.go:83] releasing machines lock for "embed-certs-201000", held for 2.491244542s
	W0327 16:54:13.455074   10611 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p embed-certs-201000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p embed-certs-201000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0327 16:54:13.467175   10611 out.go:177] 
	W0327 16:54:13.471135   10611 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0327 16:54:13.471159   10611 out.go:239] * 
	* 
	W0327 16:54:13.472604   10611 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0327 16:54:13.482077   10611 out.go:177] 

** /stderr **
start_stop_delete_test.go:188: failed starting minikube -first start-. args "out/minikube-darwin-arm64 start -p embed-certs-201000 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=qemu2  --kubernetes-version=v1.29.3": exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-201000 -n embed-certs-201000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-201000 -n embed-certs-201000: exit status 7 (60.794416ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "embed-certs-201000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/embed-certs/serial/FirstStart (10.25s)
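
Note: every failure in this group traces to the same STDERR line above: socket_vmnet_client cannot reach the unix socket at /var/run/socket_vmnet, so qemu is never launched and the host stays Stopped. A minimal Go sketch of that connectivity check (illustrative only, not minikube's actual code; the socket path is copied from the log):

package main

import (
	"fmt"
	"net"
)

func main() {
	// socket_vmnet_client hands qemu a file descriptor obtained by dialing
	// this unix socket; if no socket_vmnet daemon is listening, the dial
	// fails with "connection refused", matching the STDERR lines above.
	conn, err := net.Dial("unix", "/var/run/socket_vmnet")
	if err != nil {
		fmt.Println("socket_vmnet unreachable:", err)
		return
	}
	defer conn.Close()
	fmt.Println("socket_vmnet is accepting connections")
}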

TestStartStop/group/newest-cni/serial/SecondStart (5.38s)

=== RUN   TestStartStop/group/newest-cni/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-darwin-arm64 start -p newest-cni-791000 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=qemu2  --kubernetes-version=v1.30.0-beta.0
start_stop_delete_test.go:256: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p newest-cni-791000 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=qemu2  --kubernetes-version=v1.30.0-beta.0: exit status 80 (5.309755708s)

-- stdout --
	* [newest-cni-791000] minikube v1.33.0-beta.0 on Darwin 14.3.1 (arm64)
	  - MINIKUBE_LOCATION=18485
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18485-6511/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18485-6511/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting "newest-cni-791000" primary control-plane node in "newest-cni-791000" cluster
	* Restarting existing qemu2 VM for "newest-cni-791000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "newest-cni-791000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0327 16:54:13.252872   10664 out.go:291] Setting OutFile to fd 1 ...
	I0327 16:54:13.253001   10664 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0327 16:54:13.253004   10664 out.go:304] Setting ErrFile to fd 2...
	I0327 16:54:13.253006   10664 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0327 16:54:13.253135   10664 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18485-6511/.minikube/bin
	I0327 16:54:13.254235   10664 out.go:298] Setting JSON to false
	I0327 16:54:13.270162   10664 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":6824,"bootTime":1711576829,"procs":486,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.3.1","kernelVersion":"23.3.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0327 16:54:13.270232   10664 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0327 16:54:13.274864   10664 out.go:177] * [newest-cni-791000] minikube v1.33.0-beta.0 on Darwin 14.3.1 (arm64)
	I0327 16:54:13.283082   10664 out.go:177]   - MINIKUBE_LOCATION=18485
	I0327 16:54:13.286921   10664 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/18485-6511/kubeconfig
	I0327 16:54:13.283081   10664 notify.go:220] Checking for updates...
	I0327 16:54:13.290040   10664 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0327 16:54:13.293028   10664 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0327 16:54:13.295981   10664 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18485-6511/.minikube
	I0327 16:54:13.299033   10664 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0327 16:54:13.302332   10664 config.go:182] Loaded profile config "newest-cni-791000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.0-beta.0
	I0327 16:54:13.302578   10664 driver.go:392] Setting default libvirt URI to qemu:///system
	I0327 16:54:13.305995   10664 out.go:177] * Using the qemu2 driver based on existing profile
	I0327 16:54:13.312999   10664 start.go:297] selected driver: qemu2
	I0327 16:54:13.313004   10664 start.go:901] validating driver "qemu2" against &{Name:newest-cni-791000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18485/minikube-v1.33.0-1711559712-18485-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1711559786-18485@sha256:2dcab64da240d825290a528fa79ad3c32db45fe5f8be5150468234a7114eff82 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0-beta.0 ClusterName:newest-cni-791000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.0-beta.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0327 16:54:13.313055   10664 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0327 16:54:13.315352   10664 start_flags.go:966] Waiting for components: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I0327 16:54:13.315399   10664 cni.go:84] Creating CNI manager for ""
	I0327 16:54:13.315405   10664 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0327 16:54:13.315428   10664 start.go:340] cluster config:
	{Name:newest-cni-791000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18485/minikube-v1.33.0-1711559712-18485-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1711559786-18485@sha256:2dcab64da240d825290a528fa79ad3c32db45fe5f8be5150468234a7114eff82 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0-beta.0 ClusterName:newest-cni-791000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.0-beta.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0327 16:54:13.319749   10664 iso.go:125] acquiring lock: {Name:mk26a7f41e004242919e6d5076ec0f9645f820ad Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0327 16:54:13.328018   10664 out.go:177] * Starting "newest-cni-791000" primary control-plane node in "newest-cni-791000" cluster
	I0327 16:54:13.332047   10664 preload.go:132] Checking if preload exists for k8s version v1.30.0-beta.0 and runtime docker
	I0327 16:54:13.332062   10664 preload.go:147] Found local preload: /Users/jenkins/minikube-integration/18485-6511/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.0-beta.0-docker-overlay2-arm64.tar.lz4
	I0327 16:54:13.332075   10664 cache.go:56] Caching tarball of preloaded images
	I0327 16:54:13.332133   10664 preload.go:173] Found /Users/jenkins/minikube-integration/18485-6511/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.0-beta.0-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0327 16:54:13.332138   10664 cache.go:59] Finished verifying existence of preloaded tar for v1.30.0-beta.0 on docker
	I0327 16:54:13.332203   10664 profile.go:142] Saving config to /Users/jenkins/minikube-integration/18485-6511/.minikube/profiles/newest-cni-791000/config.json ...
	I0327 16:54:13.332714   10664 start.go:360] acquireMachinesLock for newest-cni-791000: {Name:mk6b67ca98ff36f470ae98389325f11bd4950dc8 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0327 16:54:13.454932   10664 start.go:364] duration metric: took 122.201875ms to acquireMachinesLock for "newest-cni-791000"
	I0327 16:54:13.454962   10664 start.go:96] Skipping create...Using existing machine configuration
	I0327 16:54:13.454977   10664 fix.go:54] fixHost starting: 
	I0327 16:54:13.455291   10664 fix.go:112] recreateIfNeeded on newest-cni-791000: state=Stopped err=<nil>
	W0327 16:54:13.455312   10664 fix.go:138] unexpected machine state, will restart: <nil>
	I0327 16:54:13.467164   10664 out.go:177] * Restarting existing qemu2 VM for "newest-cni-791000" ...
	I0327 16:54:13.471159   10664 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18485-6511/.minikube/machines/newest-cni-791000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18485-6511/.minikube/machines/newest-cni-791000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18485-6511/.minikube/machines/newest-cni-791000/qemu.pid -device virtio-net-pci,netdev=net0,mac=52:af:70:63:c0:61 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18485-6511/.minikube/machines/newest-cni-791000/disk.qcow2
	I0327 16:54:13.476874   10664 main.go:141] libmachine: STDOUT: 
	I0327 16:54:13.476929   10664 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0327 16:54:13.477008   10664 fix.go:56] duration metric: took 22.030375ms for fixHost
	I0327 16:54:13.477021   10664 start.go:83] releasing machines lock for "newest-cni-791000", held for 22.073333ms
	W0327 16:54:13.477048   10664 start.go:713] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0327 16:54:13.477151   10664 out.go:239] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0327 16:54:13.477164   10664 start.go:728] Will try again in 5 seconds ...
	I0327 16:54:18.479143   10664 start.go:360] acquireMachinesLock for newest-cni-791000: {Name:mk6b67ca98ff36f470ae98389325f11bd4950dc8 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0327 16:54:18.479502   10664 start.go:364] duration metric: took 270.916µs to acquireMachinesLock for "newest-cni-791000"
	I0327 16:54:18.479563   10664 start.go:96] Skipping create...Using existing machine configuration
	I0327 16:54:18.479578   10664 fix.go:54] fixHost starting: 
	I0327 16:54:18.480258   10664 fix.go:112] recreateIfNeeded on newest-cni-791000: state=Stopped err=<nil>
	W0327 16:54:18.480291   10664 fix.go:138] unexpected machine state, will restart: <nil>
	I0327 16:54:18.484692   10664 out.go:177] * Restarting existing qemu2 VM for "newest-cni-791000" ...
	I0327 16:54:18.488871   10664 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18485-6511/.minikube/machines/newest-cni-791000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18485-6511/.minikube/machines/newest-cni-791000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18485-6511/.minikube/machines/newest-cni-791000/qemu.pid -device virtio-net-pci,netdev=net0,mac=52:af:70:63:c0:61 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18485-6511/.minikube/machines/newest-cni-791000/disk.qcow2
	I0327 16:54:18.498920   10664 main.go:141] libmachine: STDOUT: 
	I0327 16:54:18.499003   10664 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0327 16:54:18.499095   10664 fix.go:56] duration metric: took 19.517792ms for fixHost
	I0327 16:54:18.499118   10664 start.go:83] releasing machines lock for "newest-cni-791000", held for 19.5905ms
	W0327 16:54:18.499331   10664 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p newest-cni-791000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p newest-cni-791000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0327 16:54:18.505671   10664 out.go:177] 
	W0327 16:54:18.509741   10664 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0327 16:54:18.509759   10664 out.go:239] * 
	* 
	W0327 16:54:18.511539   10664 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0327 16:54:18.518669   10664 out.go:177] 

** /stderr **
start_stop_delete_test.go:259: failed to start minikube post-stop. args "out/minikube-darwin-arm64 start -p newest-cni-791000 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=qemu2  --kubernetes-version=v1.30.0-beta.0": exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-791000 -n newest-cni-791000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-791000 -n newest-cni-791000: exit status 7 (66.95325ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "newest-cni-791000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/newest-cni/serial/SecondStart (5.38s)
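
Note: the stderr above shows minikube's single retry: the first driver start fails, start.go logs "Will try again in 5 seconds", and the second attempt fails the same way before exiting with GUEST_PROVISION. A minimal Go sketch of that retry shape (hypothetical, not minikube's actual start.go):

package main

import (
	"errors"
	"fmt"
	"time"
)

// startHost stands in for the qemu2 driver start that keeps failing above.
func startHost() error {
	return errors.New(`Failed to connect to "/var/run/socket_vmnet": Connection refused`)
}

func main() {
	if err := startHost(); err != nil {
		fmt.Println("! StartHost failed, but will try again:", err)
		time.Sleep(5 * time.Second) // the "Will try again in 5 seconds" pause in the log
		if err := startHost(); err != nil {
			fmt.Println("X Exiting due to GUEST_PROVISION:", err)
		}
	}
}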

TestStartStop/group/embed-certs/serial/DeployApp (0.09s)

=== RUN   TestStartStop/group/embed-certs/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context embed-certs-201000 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) Non-zero exit: kubectl --context embed-certs-201000 create -f testdata/busybox.yaml: exit status 1 (28.426042ms)

** stderr ** 
	error: context "embed-certs-201000" does not exist

** /stderr **
start_stop_delete_test.go:196: kubectl --context embed-certs-201000 create -f testdata/busybox.yaml failed: exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-201000 -n embed-certs-201000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-201000 -n embed-certs-201000: exit status 7 (30.784375ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "embed-certs-201000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-201000 -n embed-certs-201000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-201000 -n embed-certs-201000: exit status 7 (30.915042ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "embed-certs-201000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/embed-certs/serial/DeployApp (0.09s)
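
Note: because the VM never started, minikube never wrote an "embed-certs-201000" context into the kubeconfig, so every kubectl call in this group fails before reaching any API server. A minimal sketch of the lookup kubectl performs, using k8s.io/client-go (illustrative; the kubeconfig path is the one from the log):

package main

import (
	"fmt"

	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Load the kubeconfig the tests point at and look up the context by name.
	cfg, err := clientcmd.LoadFromFile("/Users/jenkins/minikube-integration/18485-6511/kubeconfig")
	if err != nil {
		fmt.Println("cannot load kubeconfig:", err)
		return
	}
	if _, ok := cfg.Contexts["embed-certs-201000"]; !ok {
		// Matches the kubectl error in the stderr block above.
		fmt.Println(`error: context "embed-certs-201000" does not exist`)
	}
}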

TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (0.11s)

=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-darwin-arm64 addons enable metrics-server -p embed-certs-201000 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context embed-certs-201000 describe deploy/metrics-server -n kube-system
start_stop_delete_test.go:215: (dbg) Non-zero exit: kubectl --context embed-certs-201000 describe deploy/metrics-server -n kube-system: exit status 1 (26.637042ms)

** stderr ** 
	error: context "embed-certs-201000" does not exist

** /stderr **
start_stop_delete_test.go:217: failed to get info on auto-pause deployments. args "kubectl --context embed-certs-201000 describe deploy/metrics-server -n kube-system": exit status 1
start_stop_delete_test.go:221: addon did not load correct image. Expected to contain " fake.domain/registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-201000 -n embed-certs-201000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-201000 -n embed-certs-201000: exit status 7 (31.080958ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "embed-certs-201000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (0.11s)

TestStartStop/group/embed-certs/serial/SecondStart (5.26s)

=== RUN   TestStartStop/group/embed-certs/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-darwin-arm64 start -p embed-certs-201000 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=qemu2  --kubernetes-version=v1.29.3
start_stop_delete_test.go:256: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p embed-certs-201000 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=qemu2  --kubernetes-version=v1.29.3: exit status 80 (5.183940667s)

-- stdout --
	* [embed-certs-201000] minikube v1.33.0-beta.0 on Darwin 14.3.1 (arm64)
	  - MINIKUBE_LOCATION=18485
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18485-6511/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18485-6511/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting "embed-certs-201000" primary control-plane node in "embed-certs-201000" cluster
	* Restarting existing qemu2 VM for "embed-certs-201000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "embed-certs-201000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0327 16:54:17.693584   10707 out.go:291] Setting OutFile to fd 1 ...
	I0327 16:54:17.693748   10707 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0327 16:54:17.693752   10707 out.go:304] Setting ErrFile to fd 2...
	I0327 16:54:17.693754   10707 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0327 16:54:17.693881   10707 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18485-6511/.minikube/bin
	I0327 16:54:17.694838   10707 out.go:298] Setting JSON to false
	I0327 16:54:17.711012   10707 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":6828,"bootTime":1711576829,"procs":486,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.3.1","kernelVersion":"23.3.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0327 16:54:17.711070   10707 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0327 16:54:17.716289   10707 out.go:177] * [embed-certs-201000] minikube v1.33.0-beta.0 on Darwin 14.3.1 (arm64)
	I0327 16:54:17.724210   10707 out.go:177]   - MINIKUBE_LOCATION=18485
	I0327 16:54:17.724255   10707 notify.go:220] Checking for updates...
	I0327 16:54:17.728527   10707 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/18485-6511/kubeconfig
	I0327 16:54:17.731217   10707 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0327 16:54:17.734228   10707 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0327 16:54:17.737240   10707 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18485-6511/.minikube
	I0327 16:54:17.740223   10707 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0327 16:54:17.743543   10707 config.go:182] Loaded profile config "embed-certs-201000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.29.3
	I0327 16:54:17.743815   10707 driver.go:392] Setting default libvirt URI to qemu:///system
	I0327 16:54:17.748181   10707 out.go:177] * Using the qemu2 driver based on existing profile
	I0327 16:54:17.755207   10707 start.go:297] selected driver: qemu2
	I0327 16:54:17.755215   10707 start.go:901] validating driver "qemu2" against &{Name:embed-certs-201000 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18485/minikube-v1.33.0-1711559712-18485-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1711559786-18485@sha256:2dcab64da240d825290a528fa79ad3c32db45fe5f8be5150468234a7114eff82 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.3 ClusterName:embed-certs-201000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0327 16:54:17.755280   10707 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0327 16:54:17.757610   10707 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0327 16:54:17.757654   10707 cni.go:84] Creating CNI manager for ""
	I0327 16:54:17.757670   10707 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0327 16:54:17.757693   10707 start.go:340] cluster config:
	{Name:embed-certs-201000 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18485/minikube-v1.33.0-1711559712-18485-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1711559786-18485@sha256:2dcab64da240d825290a528fa79ad3c32db45fe5f8be5150468234a7114eff82 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.3 ClusterName:embed-certs-201000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0327 16:54:17.762043   10707 iso.go:125] acquiring lock: {Name:mk26a7f41e004242919e6d5076ec0f9645f820ad Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0327 16:54:17.769213   10707 out.go:177] * Starting "embed-certs-201000" primary control-plane node in "embed-certs-201000" cluster
	I0327 16:54:17.773235   10707 preload.go:132] Checking if preload exists for k8s version v1.29.3 and runtime docker
	I0327 16:54:17.773249   10707 preload.go:147] Found local preload: /Users/jenkins/minikube-integration/18485-6511/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.29.3-docker-overlay2-arm64.tar.lz4
	I0327 16:54:17.773258   10707 cache.go:56] Caching tarball of preloaded images
	I0327 16:54:17.773321   10707 preload.go:173] Found /Users/jenkins/minikube-integration/18485-6511/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.29.3-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0327 16:54:17.773326   10707 cache.go:59] Finished verifying existence of preloaded tar for v1.29.3 on docker
	I0327 16:54:17.773399   10707 profile.go:142] Saving config to /Users/jenkins/minikube-integration/18485-6511/.minikube/profiles/embed-certs-201000/config.json ...
	I0327 16:54:17.773898   10707 start.go:360] acquireMachinesLock for embed-certs-201000: {Name:mk6b67ca98ff36f470ae98389325f11bd4950dc8 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0327 16:54:17.773924   10707 start.go:364] duration metric: took 19.25µs to acquireMachinesLock for "embed-certs-201000"
	I0327 16:54:17.773933   10707 start.go:96] Skipping create...Using existing machine configuration
	I0327 16:54:17.773939   10707 fix.go:54] fixHost starting: 
	I0327 16:54:17.774060   10707 fix.go:112] recreateIfNeeded on embed-certs-201000: state=Stopped err=<nil>
	W0327 16:54:17.774069   10707 fix.go:138] unexpected machine state, will restart: <nil>
	I0327 16:54:17.782204   10707 out.go:177] * Restarting existing qemu2 VM for "embed-certs-201000" ...
	I0327 16:54:17.785159   10707 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18485-6511/.minikube/machines/embed-certs-201000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18485-6511/.minikube/machines/embed-certs-201000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18485-6511/.minikube/machines/embed-certs-201000/qemu.pid -device virtio-net-pci,netdev=net0,mac=be:cb:76:a9:9c:a6 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18485-6511/.minikube/machines/embed-certs-201000/disk.qcow2
	I0327 16:54:17.787135   10707 main.go:141] libmachine: STDOUT: 
	I0327 16:54:17.787155   10707 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0327 16:54:17.787186   10707 fix.go:56] duration metric: took 13.2475ms for fixHost
	I0327 16:54:17.787192   10707 start.go:83] releasing machines lock for "embed-certs-201000", held for 13.264417ms
	W0327 16:54:17.787199   10707 start.go:713] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0327 16:54:17.787235   10707 out.go:239] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0327 16:54:17.787240   10707 start.go:728] Will try again in 5 seconds ...
	I0327 16:54:22.789266   10707 start.go:360] acquireMachinesLock for embed-certs-201000: {Name:mk6b67ca98ff36f470ae98389325f11bd4950dc8 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0327 16:54:22.789762   10707 start.go:364] duration metric: took 403µs to acquireMachinesLock for "embed-certs-201000"
	I0327 16:54:22.789913   10707 start.go:96] Skipping create...Using existing machine configuration
	I0327 16:54:22.789982   10707 fix.go:54] fixHost starting: 
	I0327 16:54:22.790749   10707 fix.go:112] recreateIfNeeded on embed-certs-201000: state=Stopped err=<nil>
	W0327 16:54:22.790778   10707 fix.go:138] unexpected machine state, will restart: <nil>
	I0327 16:54:22.795880   10707 out.go:177] * Restarting existing qemu2 VM for "embed-certs-201000" ...
	I0327 16:54:22.803979   10707 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18485-6511/.minikube/machines/embed-certs-201000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18485-6511/.minikube/machines/embed-certs-201000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18485-6511/.minikube/machines/embed-certs-201000/qemu.pid -device virtio-net-pci,netdev=net0,mac=be:cb:76:a9:9c:a6 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18485-6511/.minikube/machines/embed-certs-201000/disk.qcow2
	I0327 16:54:22.814847   10707 main.go:141] libmachine: STDOUT: 
	I0327 16:54:22.814937   10707 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0327 16:54:22.815058   10707 fix.go:56] duration metric: took 25.117917ms for fixHost
	I0327 16:54:22.815084   10707 start.go:83] releasing machines lock for "embed-certs-201000", held for 25.296125ms
	W0327 16:54:22.815350   10707 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p embed-certs-201000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p embed-certs-201000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0327 16:54:22.823773   10707 out.go:177] 
	W0327 16:54:22.826826   10707 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0327 16:54:22.826862   10707 out.go:239] * 
	* 
	W0327 16:54:22.829340   10707 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0327 16:54:22.837842   10707 out.go:177] 

** /stderr **
start_stop_delete_test.go:259: failed to start minikube post-stop. args "out/minikube-darwin-arm64 start -p embed-certs-201000 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=qemu2  --kubernetes-version=v1.29.3": exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-201000 -n embed-certs-201000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-201000 -n embed-certs-201000: exit status 7 (70.710583ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "embed-certs-201000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/embed-certs/serial/SecondStart (5.26s)

TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.07s)

=== RUN   TestStartStop/group/newest-cni/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-darwin-arm64 -p newest-cni-791000 image list --format=json
start_stop_delete_test.go:304: v1.30.0-beta.0 images missing (-want +got):
  []string{
- 	"gcr.io/k8s-minikube/storage-provisioner:v5",
- 	"registry.k8s.io/coredns/coredns:v1.11.1",
- 	"registry.k8s.io/etcd:3.5.12-0",
- 	"registry.k8s.io/kube-apiserver:v1.30.0-beta.0",
- 	"registry.k8s.io/kube-controller-manager:v1.30.0-beta.0",
- 	"registry.k8s.io/kube-proxy:v1.30.0-beta.0",
- 	"registry.k8s.io/kube-scheduler:v1.30.0-beta.0",
- 	"registry.k8s.io/pause:3.9",
  }
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-791000 -n newest-cni-791000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-791000 -n newest-cni-791000: exit status 7 (30.781ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "newest-cni-791000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.07s)
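
Note: the "(-want +got)" output above is the diff format of github.com/google/go-cmp. With the host stopped, "image list" returns nothing, so the entire expected image set shows up on the "-want" side. A minimal sketch reproducing the comparison (illustrative; the expected list is abbreviated):

package main

import (
	"fmt"

	"github.com/google/go-cmp/cmp"
)

func main() {
	want := []string{
		"registry.k8s.io/kube-apiserver:v1.30.0-beta.0",
		"registry.k8s.io/pause:3.9",
	}
	var got []string // "image list" against a stopped host yields nothing
	if diff := cmp.Diff(want, got); diff != "" {
		fmt.Printf("v1.30.0-beta.0 images missing (-want +got):\n%s", diff)
	}
}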

TestStartStop/group/newest-cni/serial/Pause (0.1s)

=== RUN   TestStartStop/group/newest-cni/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-arm64 pause -p newest-cni-791000 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-darwin-arm64 pause -p newest-cni-791000 --alsologtostderr -v=1: exit status 83 (41.516833ms)

-- stdout --
	* The control-plane node newest-cni-791000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p newest-cni-791000"

-- /stdout --
** stderr ** 
	I0327 16:54:18.705246   10721 out.go:291] Setting OutFile to fd 1 ...
	I0327 16:54:18.705394   10721 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0327 16:54:18.705398   10721 out.go:304] Setting ErrFile to fd 2...
	I0327 16:54:18.705400   10721 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0327 16:54:18.705513   10721 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18485-6511/.minikube/bin
	I0327 16:54:18.705738   10721 out.go:298] Setting JSON to false
	I0327 16:54:18.705747   10721 mustload.go:65] Loading cluster: newest-cni-791000
	I0327 16:54:18.705950   10721 config.go:182] Loaded profile config "newest-cni-791000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.0-beta.0
	I0327 16:54:18.709189   10721 out.go:177] * The control-plane node newest-cni-791000 host is not running: state=Stopped
	I0327 16:54:18.713167   10721 out.go:177]   To start a cluster, run: "minikube start -p newest-cni-791000"

** /stderr **
start_stop_delete_test.go:311: out/minikube-darwin-arm64 pause -p newest-cni-791000 --alsologtostderr -v=1 failed: exit status 83
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-791000 -n newest-cni-791000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-791000 -n newest-cni-791000: exit status 7 (30.882125ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "newest-cni-791000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-791000 -n newest-cni-791000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-791000 -n newest-cni-791000: exit status 7 (31.056292ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "newest-cni-791000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/newest-cni/serial/Pause (0.10s)
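
Note: exit status 83 here is a guard, not a pause failure: minikube pause first loads the profile and asks the driver for host state, and anything but Running short-circuits with the "To start a cluster" hint. A minimal Go sketch of that gate (hypothetical, not minikube's actual code; the exit code is taken from the observed behaviour above, not from minikube's source):

package main

import (
	"fmt"
	"os"
)

func pauseCluster(profile, state string) {
	if state != "Running" {
		// Mirrors the two stdout lines in the log above.
		fmt.Printf("* The control-plane node %s host is not running: state=%s\n", profile, state)
		fmt.Printf("  To start a cluster, run: \"minikube start -p %s\"\n", profile)
		os.Exit(83) // observed exit status; treating it as a constant is an assumption
	}
	// ... pausing kubelet and the container runtime would happen here ...
}

func main() {
	pauseCluster("newest-cni-791000", "Stopped")
}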

TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (0.03s)

=== RUN   TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop
start_stop_delete_test.go:275: failed waiting for 'addon dashboard' pod post-stop-start: client config: context "embed-certs-201000" does not exist
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-201000 -n embed-certs-201000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-201000 -n embed-certs-201000: exit status 7 (34.112834ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "embed-certs-201000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (0.03s)

TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (0.06s)

=== RUN   TestStartStop/group/embed-certs/serial/AddonExistsAfterStop
start_stop_delete_test.go:288: failed waiting for 'addon dashboard' pod post-stop-start: client config: context "embed-certs-201000" does not exist
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context embed-certs-201000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
start_stop_delete_test.go:291: (dbg) Non-zero exit: kubectl --context embed-certs-201000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard: exit status 1 (27.257625ms)

** stderr ** 
	error: context "embed-certs-201000" does not exist

** /stderr **
start_stop_delete_test.go:293: failed to get info on kubernetes-dashboard deployments. args "kubectl --context embed-certs-201000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard": exit status 1
start_stop_delete_test.go:297: addon did not load correct image. Expected to contain " registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-201000 -n embed-certs-201000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-201000 -n embed-certs-201000: exit status 7 (32.594042ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "embed-certs-201000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (0.06s)

TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.07s)

=== RUN   TestStartStop/group/embed-certs/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-darwin-arm64 -p embed-certs-201000 image list --format=json
start_stop_delete_test.go:304: v1.29.3 images missing (-want +got):
  []string{
- 	"gcr.io/k8s-minikube/storage-provisioner:v5",
- 	"registry.k8s.io/coredns/coredns:v1.11.1",
- 	"registry.k8s.io/etcd:3.5.12-0",
- 	"registry.k8s.io/kube-apiserver:v1.29.3",
- 	"registry.k8s.io/kube-controller-manager:v1.29.3",
- 	"registry.k8s.io/kube-proxy:v1.29.3",
- 	"registry.k8s.io/kube-scheduler:v1.29.3",
- 	"registry.k8s.io/pause:3.9",
  }
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-201000 -n embed-certs-201000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-201000 -n embed-certs-201000: exit status 7 (31.853917ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "embed-certs-201000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.07s)
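The (-want +got) diff above reports every expected v1.29.3 image as missing because "image list" had no running host to query. A spot-check sketch for a single image once the profile is up; the grep pattern is illustrative, and any of the listed tags would work:

	out/minikube-darwin-arm64 -p embed-certs-201000 image list --format=json \
	  | grep -c 'registry.k8s.io/kube-apiserver:v1.29.3'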

                                                
                                    
TestStartStop/group/embed-certs/serial/Pause (0.11s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-arm64 pause -p embed-certs-201000 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-darwin-arm64 pause -p embed-certs-201000 --alsologtostderr -v=1: exit status 83 (43.416542ms)

                                                
                                                
-- stdout --
	* The control-plane node embed-certs-201000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p embed-certs-201000"

                                                
                                                
-- /stdout --
** stderr ** 
	I0327 16:54:23.118968   10756 out.go:291] Setting OutFile to fd 1 ...
	I0327 16:54:23.119119   10756 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0327 16:54:23.119122   10756 out.go:304] Setting ErrFile to fd 2...
	I0327 16:54:23.119124   10756 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0327 16:54:23.119250   10756 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18485-6511/.minikube/bin
	I0327 16:54:23.119456   10756 out.go:298] Setting JSON to false
	I0327 16:54:23.119465   10756 mustload.go:65] Loading cluster: embed-certs-201000
	I0327 16:54:23.119639   10756 config.go:182] Loaded profile config "embed-certs-201000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.29.3
	I0327 16:54:23.123371   10756 out.go:177] * The control-plane node embed-certs-201000 host is not running: state=Stopped
	I0327 16:54:23.127360   10756 out.go:177]   To start a cluster, run: "minikube start -p embed-certs-201000"

                                                
                                                
** /stderr **
start_stop_delete_test.go:311: out/minikube-darwin-arm64 pause -p embed-certs-201000 --alsologtostderr -v=1 failed: exit status 83
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-201000 -n embed-certs-201000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-201000 -n embed-certs-201000: exit status 7 (32.104208ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "embed-certs-201000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-201000 -n embed-certs-201000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-201000 -n embed-certs-201000: exit status 7 (31.854625ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "embed-certs-201000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/embed-certs/serial/Pause (0.11s)
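Exit status 83 marks the early-exit path shown in the stdout: pause refuses to act on a stopped host and prints the start hint instead. A recovery sketch that simply follows that hint:

	out/minikube-darwin-arm64 start -p embed-certs-201000
	out/minikube-darwin-arm64 pause -p embed-certs-201000 --alsologtostderr -v=1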

                                                
                                    

Test pass (86/266)

Order passed test Duration
4 TestDownloadOnly/v1.20.0/preload-exists 0
8 TestDownloadOnly/v1.20.0/LogsDuration 0.1
9 TestDownloadOnly/v1.20.0/DeleteAll 0.24
10 TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds 0.23
12 TestDownloadOnly/v1.29.3/json-events 22.27
13 TestDownloadOnly/v1.29.3/preload-exists 0
16 TestDownloadOnly/v1.29.3/kubectl 0
17 TestDownloadOnly/v1.29.3/LogsDuration 0.11
18 TestDownloadOnly/v1.29.3/DeleteAll 0.24
19 TestDownloadOnly/v1.29.3/DeleteAlwaysSucceeds 0.23
21 TestDownloadOnly/v1.30.0-beta.0/json-events 23.9
22 TestDownloadOnly/v1.30.0-beta.0/preload-exists 0
25 TestDownloadOnly/v1.30.0-beta.0/kubectl 0
26 TestDownloadOnly/v1.30.0-beta.0/LogsDuration 0.08
27 TestDownloadOnly/v1.30.0-beta.0/DeleteAll 0.23
28 TestDownloadOnly/v1.30.0-beta.0/DeleteAlwaysSucceeds 0.22
30 TestBinaryMirror 0.35
34 TestAddons/PreSetup/EnablingAddonOnNonExistingCluster 0.06
35 TestAddons/PreSetup/DisablingAddonOnNonExistingCluster 0.06
44 TestHyperKitDriverInstallOrUpdate 9.57
48 TestErrorSpam/start 0.39
49 TestErrorSpam/status 0.1
50 TestErrorSpam/pause 0.12
51 TestErrorSpam/unpause 0.13
52 TestErrorSpam/stop 7.47
55 TestFunctional/serial/CopySyncFile 0
57 TestFunctional/serial/AuditLog 0
63 TestFunctional/serial/CacheCmd/cache/add_remote 6.22
64 TestFunctional/serial/CacheCmd/cache/add_local 1.19
65 TestFunctional/serial/CacheCmd/cache/CacheDelete 0.04
66 TestFunctional/serial/CacheCmd/cache/list 0.04
69 TestFunctional/serial/CacheCmd/cache/delete 0.07
78 TestFunctional/parallel/ConfigCmd 0.23
80 TestFunctional/parallel/DryRun 0.28
81 TestFunctional/parallel/InternationalLanguage 0.11
87 TestFunctional/parallel/AddonsCmd 0.12
102 TestFunctional/parallel/License 1.43
103 TestFunctional/parallel/Version/short 0.04
110 TestFunctional/parallel/ImageCommands/Setup 5.55
123 TestFunctional/parallel/TunnelCmd/serial/StartTunnel 0.01
131 TestFunctional/parallel/ImageCommands/ImageRemove 0.07
133 TestFunctional/parallel/ImageCommands/ImageSaveDaemon 0.13
134 TestFunctional/parallel/ProfileCmd/profile_not_create 0.14
135 TestFunctional/parallel/ProfileCmd/profile_list 0.11
136 TestFunctional/parallel/ProfileCmd/profile_json_output 0.11
141 TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil 10.04
143 TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel 0.16
144 TestFunctional/delete_addon-resizer_images 0.17
145 TestFunctional/delete_my-image_image 0.04
146 TestFunctional/delete_minikube_cached_images 0.04
175 TestJSONOutput/start/Audit 0
177 TestJSONOutput/start/parallel/DistinctCurrentSteps 0
178 TestJSONOutput/start/parallel/IncreasingCurrentSteps 0
181 TestJSONOutput/pause/Audit 0
183 TestJSONOutput/pause/parallel/DistinctCurrentSteps 0
184 TestJSONOutput/pause/parallel/IncreasingCurrentSteps 0
187 TestJSONOutput/unpause/Audit 0
189 TestJSONOutput/unpause/parallel/DistinctCurrentSteps 0
190 TestJSONOutput/unpause/parallel/IncreasingCurrentSteps 0
192 TestJSONOutput/stop/Command 3.49
193 TestJSONOutput/stop/Audit 0
195 TestJSONOutput/stop/parallel/DistinctCurrentSteps 0
196 TestJSONOutput/stop/parallel/IncreasingCurrentSteps 0
197 TestErrorJSONOutput 0.32
202 TestMainNoArgs 0.04
249 TestStoppedBinaryUpgrade/Setup 4.92
261 TestNoKubernetes/serial/StartNoK8sWithVersion 0.1
265 TestNoKubernetes/serial/VerifyK8sNotRunning 0.04
266 TestNoKubernetes/serial/ProfileList 31.48
267 TestNoKubernetes/serial/Stop 2.08
269 TestNoKubernetes/serial/VerifyK8sNotRunningSecond 0.05
279 TestStoppedBinaryUpgrade/MinikubeLogs 0.73
284 TestStartStop/group/old-k8s-version/serial/Stop 3.53
285 TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop 0.1
297 TestStartStop/group/no-preload/serial/Stop 3.44
300 TestStartStop/group/default-k8s-diff-port/serial/Stop 1.99
301 TestStartStop/group/no-preload/serial/EnableAddonAfterStop 0.13
303 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop 0.13
317 TestStartStop/group/newest-cni/serial/DeployApp 0
318 TestStartStop/group/newest-cni/serial/EnableAddonWhileActive 0.06
319 TestStartStop/group/newest-cni/serial/Stop 3.21
320 TestStartStop/group/newest-cni/serial/EnableAddonAfterStop 0.13
324 TestStartStop/group/embed-certs/serial/Stop 3.76
325 TestStartStop/group/embed-certs/serial/EnableAddonAfterStop 0.13
327 TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop 0
328 TestStartStop/group/newest-cni/serial/AddonExistsAfterStop 0
TestDownloadOnly/v1.20.0/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/preload-exists
--- PASS: TestDownloadOnly/v1.20.0/preload-exists (0.00s)

                                                
                                    
TestDownloadOnly/v1.20.0/LogsDuration (0.1s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-darwin-arm64 logs -p download-only-614000
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-darwin-arm64 logs -p download-only-614000: exit status 85 (101.341083ms)

                                                
                                                
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------|----------------------|---------|----------------|---------------------|----------|
	| Command |              Args              |       Profile        |  User   |    Version     |     Start Time      | End Time |
	|---------|--------------------------------|----------------------|---------|----------------|---------------------|----------|
	| start   | -o=json --download-only        | download-only-614000 | jenkins | v1.33.0-beta.0 | 27 Mar 24 16:26 PDT |          |
	|         | -p download-only-614000        |                      |         |                |                     |          |
	|         | --force --alsologtostderr      |                      |         |                |                     |          |
	|         | --kubernetes-version=v1.20.0   |                      |         |                |                     |          |
	|         | --container-runtime=docker     |                      |         |                |                     |          |
	|         | --driver=qemu2                 |                      |         |                |                     |          |
	|---------|--------------------------------|----------------------|---------|----------------|---------------------|----------|
	
	
	==> Last Start <==
	Log file created at: 2024/03/27 16:26:18
	Running on machine: MacOS-M1-Agent-2
	Binary: Built with gc go1.22.1 for darwin/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0327 16:26:18.627892    6928 out.go:291] Setting OutFile to fd 1 ...
	I0327 16:26:18.628039    6928 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0327 16:26:18.628042    6928 out.go:304] Setting ErrFile to fd 2...
	I0327 16:26:18.628045    6928 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0327 16:26:18.628166    6928 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18485-6511/.minikube/bin
	W0327 16:26:18.628250    6928 root.go:314] Error reading config file at /Users/jenkins/minikube-integration/18485-6511/.minikube/config/config.json: open /Users/jenkins/minikube-integration/18485-6511/.minikube/config/config.json: no such file or directory
	I0327 16:26:18.629493    6928 out.go:298] Setting JSON to true
	I0327 16:26:18.647943    6928 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":5149,"bootTime":1711576829,"procs":486,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.3.1","kernelVersion":"23.3.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0327 16:26:18.648010    6928 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0327 16:26:18.653262    6928 out.go:97] [download-only-614000] minikube v1.33.0-beta.0 on Darwin 14.3.1 (arm64)
	I0327 16:26:18.656422    6928 out.go:169] MINIKUBE_LOCATION=18485
	I0327 16:26:18.653417    6928 notify.go:220] Checking for updates...
	W0327 16:26:18.653469    6928 preload.go:294] Failed to list preload files: open /Users/jenkins/minikube-integration/18485-6511/.minikube/cache/preloaded-tarball: no such file or directory
	I0327 16:26:18.664289    6928 out.go:169] KUBECONFIG=/Users/jenkins/minikube-integration/18485-6511/kubeconfig
	I0327 16:26:18.667435    6928 out.go:169] MINIKUBE_BIN=out/minikube-darwin-arm64
	I0327 16:26:18.670452    6928 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0327 16:26:18.673467    6928 out.go:169] MINIKUBE_HOME=/Users/jenkins/minikube-integration/18485-6511/.minikube
	W0327 16:26:18.679407    6928 out.go:267] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0327 16:26:18.679589    6928 driver.go:392] Setting default libvirt URI to qemu:///system
	I0327 16:26:18.684424    6928 out.go:97] Using the qemu2 driver based on user configuration
	I0327 16:26:18.684449    6928 start.go:297] selected driver: qemu2
	I0327 16:26:18.684452    6928 start.go:901] validating driver "qemu2" against <nil>
	I0327 16:26:18.684512    6928 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0327 16:26:18.688397    6928 out.go:169] Automatically selected the socket_vmnet network
	I0327 16:26:18.693958    6928 start_flags.go:393] Using suggested 4000MB memory alloc based on sys=16384MB, container=0MB
	I0327 16:26:18.694055    6928 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0327 16:26:18.694118    6928 cni.go:84] Creating CNI manager for ""
	I0327 16:26:18.694133    6928 cni.go:162] CNI unnecessary in this configuration, recommending no CNI
	I0327 16:26:18.694177    6928 start.go:340] cluster config:
	{Name:download-only-614000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1711559786-18485@sha256:2dcab64da240d825290a528fa79ad3c32db45fe5f8be5150468234a7114eff82 Memory:4000 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:download-only-614000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0327 16:26:18.699722    6928 iso.go:125] acquiring lock: {Name:mk26a7f41e004242919e6d5076ec0f9645f820ad Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0327 16:26:18.702490    6928 out.go:97] Downloading VM boot image ...
	I0327 16:26:18.702526    6928 download.go:107] Downloading: https://storage.googleapis.com/minikube-builds/iso/18485/minikube-v1.33.0-1711559712-18485-arm64.iso?checksum=file:https://storage.googleapis.com/minikube-builds/iso/18485/minikube-v1.33.0-1711559712-18485-arm64.iso.sha256 -> /Users/jenkins/minikube-integration/18485-6511/.minikube/cache/iso/arm64/minikube-v1.33.0-1711559712-18485-arm64.iso
	I0327 16:26:36.232212    6928 out.go:97] Starting "download-only-614000" primary control-plane node in "download-only-614000" cluster
	I0327 16:26:36.232249    6928 preload.go:132] Checking if preload exists for k8s version v1.20.0 and runtime docker
	I0327 16:26:36.532427    6928 preload.go:119] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.0/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4
	I0327 16:26:36.532512    6928 cache.go:56] Caching tarball of preloaded images
	I0327 16:26:36.533315    6928 preload.go:132] Checking if preload exists for k8s version v1.20.0 and runtime docker
	I0327 16:26:36.538877    6928 out.go:97] Downloading Kubernetes v1.20.0 preload ...
	I0327 16:26:36.538903    6928 preload.go:237] getting checksum for preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4 ...
	I0327 16:26:37.138723    6928 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.0/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4?checksum=md5:1a3e8f9b29e6affec63d76d0d3000942 -> /Users/jenkins/minikube-integration/18485-6511/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4
	I0327 16:26:56.642228    6928 preload.go:248] saving checksum for preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4 ...
	I0327 16:26:56.642386    6928 preload.go:255] verifying checksum of /Users/jenkins/minikube-integration/18485-6511/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4 ...
	I0327 16:26:57.340177    6928 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on docker
	I0327 16:26:57.340380    6928 profile.go:142] Saving config to /Users/jenkins/minikube-integration/18485-6511/.minikube/profiles/download-only-614000/config.json ...
	I0327 16:26:57.340398    6928 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/18485-6511/.minikube/profiles/download-only-614000/config.json: {Name:mke2e2a697368fdeba8c536035210c569c1c16cb Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0327 16:26:57.340635    6928 preload.go:132] Checking if preload exists for k8s version v1.20.0 and runtime docker
	I0327 16:26:57.340822    6928 download.go:107] Downloading: https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl.sha256 -> /Users/jenkins/minikube-integration/18485-6511/.minikube/cache/darwin/arm64/v1.20.0/kubectl
	I0327 16:26:57.691866    6928 out.go:169] 
	W0327 16:26:57.696017    6928 out_reason.go:110] Failed to cache kubectl: download failed: https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl.sha256: getter: &{Ctx:context.Background Src:https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl.sha256 Dst:/Users/jenkins/minikube-integration/18485-6511/.minikube/cache/darwin/arm64/v1.20.0/kubectl.download Pwd: Mode:2 Umask:---------- Detectors:[0x108bf3220 0x108bf3220 0x108bf3220 0x108bf3220 0x108bf3220 0x108bf3220 0x108bf3220] Decompressors:map[bz2:0x140007cfb10 gz:0x140007cfb18 tar:0x140007cfac0 tar.bz2:0x140007cfad0 tar.gz:0x140007cfae0 tar.xz:0x140007cfaf0 tar.zst:0x140007cfb00 tbz2:0x140007cfad0 tgz:0x140007cfae0 txz:0x140007cfaf0 tzst:0x140007cfb00 xz:0x140007cfb20 zip:0x140007cfb30 zst:0x140007cfb28] Getters:map[file:0x14002188560 http:0x140008c2960 https:0x140008c29b0] Dir:false ProgressListener:<nil> Insecure:false DisableSymlinks:false Options:[]}: invalid checksum: Error downloading checksum file: bad response code: 404
	W0327 16:26:57.696044    6928 out_reason.go:110] 
	W0327 16:26:57.702936    6928 out.go:229] ╭───────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                           │
	│    If the above advice does not help, please let us know:                                 │
	│    https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                           │
	│    Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                           │
	╰───────────────────────────────────────────────────────────────────────────────────────────╯
	I0327 16:26:57.706900    6928 out.go:169] 
	
	
	* The control-plane node download-only-614000 host does not exist
	  To start a cluster, run: "minikube start -p download-only-614000"

                                                
                                                
-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.20.0/LogsDuration (0.10s)
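The Last Start log above pins down why the v1.20.0 json-events run failed: the darwin/arm64 kubectl checksum fetch returns 404, which suggests upstream never published darwin/arm64 binaries for v1.20.0. A manual probe sketch, assuming curl is available on the agent:

	curl -sI https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl.sha256 | head -n 1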

                                                
                                    
TestDownloadOnly/v1.20.0/DeleteAll (0.24s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-darwin-arm64 delete --all
--- PASS: TestDownloadOnly/v1.20.0/DeleteAll (0.24s)

                                                
                                    
TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds (0.23s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-darwin-arm64 delete -p download-only-614000
--- PASS: TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds (0.23s)

                                                
                                    
TestDownloadOnly/v1.29.3/json-events (22.27s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.29.3/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-darwin-arm64 start -o=json --download-only -p download-only-652000 --force --alsologtostderr --kubernetes-version=v1.29.3 --container-runtime=docker --driver=qemu2 
aaa_download_only_test.go:81: (dbg) Done: out/minikube-darwin-arm64 start -o=json --download-only -p download-only-652000 --force --alsologtostderr --kubernetes-version=v1.29.3 --container-runtime=docker --driver=qemu2 : (22.268814583s)
--- PASS: TestDownloadOnly/v1.29.3/json-events (22.27s)
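The -o=json run streams newline-delimited, CloudEvents-style JSON progress events on stdout. A coarse sketch for tallying event types from a comparable run; the grep pattern assumes each event carries a top-level "type" field:

	out/minikube-darwin-arm64 start -o=json --download-only -p download-only-652000 --kubernetes-version=v1.29.3 --driver=qemu2 \
	  | grep -o '"type":"[^"]*"' | sort | uniq -c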

                                                
                                    
TestDownloadOnly/v1.29.3/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.29.3/preload-exists
--- PASS: TestDownloadOnly/v1.29.3/preload-exists (0.00s)

                                                
                                    
TestDownloadOnly/v1.29.3/kubectl (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.29.3/kubectl
--- PASS: TestDownloadOnly/v1.29.3/kubectl (0.00s)

                                                
                                    
TestDownloadOnly/v1.29.3/LogsDuration (0.11s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.29.3/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-darwin-arm64 logs -p download-only-652000
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-darwin-arm64 logs -p download-only-652000: exit status 85 (114.103459ms)

                                                
                                                
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------|----------------------|---------|----------------|---------------------|---------------------|
	| Command |              Args              |       Profile        |  User   |    Version     |     Start Time      |      End Time       |
	|---------|--------------------------------|----------------------|---------|----------------|---------------------|---------------------|
	| start   | -o=json --download-only        | download-only-614000 | jenkins | v1.33.0-beta.0 | 27 Mar 24 16:26 PDT |                     |
	|         | -p download-only-614000        |                      |         |                |                     |                     |
	|         | --force --alsologtostderr      |                      |         |                |                     |                     |
	|         | --kubernetes-version=v1.20.0   |                      |         |                |                     |                     |
	|         | --container-runtime=docker     |                      |         |                |                     |                     |
	|         | --driver=qemu2                 |                      |         |                |                     |                     |
	| delete  | --all                          | minikube             | jenkins | v1.33.0-beta.0 | 27 Mar 24 16:26 PDT | 27 Mar 24 16:26 PDT |
	| delete  | -p download-only-614000        | download-only-614000 | jenkins | v1.33.0-beta.0 | 27 Mar 24 16:26 PDT | 27 Mar 24 16:26 PDT |
	| start   | -o=json --download-only        | download-only-652000 | jenkins | v1.33.0-beta.0 | 27 Mar 24 16:26 PDT |                     |
	|         | -p download-only-652000        |                      |         |                |                     |                     |
	|         | --force --alsologtostderr      |                      |         |                |                     |                     |
	|         | --kubernetes-version=v1.29.3   |                      |         |                |                     |                     |
	|         | --container-runtime=docker     |                      |         |                |                     |                     |
	|         | --driver=qemu2                 |                      |         |                |                     |                     |
	|---------|--------------------------------|----------------------|---------|----------------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/03/27 16:26:58
	Running on machine: MacOS-M1-Agent-2
	Binary: Built with gc go1.22.1 for darwin/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0327 16:26:58.389024    6965 out.go:291] Setting OutFile to fd 1 ...
	I0327 16:26:58.389153    6965 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0327 16:26:58.389162    6965 out.go:304] Setting ErrFile to fd 2...
	I0327 16:26:58.389166    6965 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0327 16:26:58.389290    6965 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18485-6511/.minikube/bin
	I0327 16:26:58.390310    6965 out.go:298] Setting JSON to true
	I0327 16:26:58.406457    6965 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":5189,"bootTime":1711576829,"procs":472,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.3.1","kernelVersion":"23.3.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0327 16:26:58.406517    6965 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0327 16:26:58.411615    6965 out.go:97] [download-only-652000] minikube v1.33.0-beta.0 on Darwin 14.3.1 (arm64)
	I0327 16:26:58.415533    6965 out.go:169] MINIKUBE_LOCATION=18485
	I0327 16:26:58.411732    6965 notify.go:220] Checking for updates...
	I0327 16:26:58.423496    6965 out.go:169] KUBECONFIG=/Users/jenkins/minikube-integration/18485-6511/kubeconfig
	I0327 16:26:58.426572    6965 out.go:169] MINIKUBE_BIN=out/minikube-darwin-arm64
	I0327 16:26:58.429600    6965 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0327 16:26:58.432547    6965 out.go:169] MINIKUBE_HOME=/Users/jenkins/minikube-integration/18485-6511/.minikube
	W0327 16:26:58.438599    6965 out.go:267] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0327 16:26:58.438783    6965 driver.go:392] Setting default libvirt URI to qemu:///system
	I0327 16:26:58.442550    6965 out.go:97] Using the qemu2 driver based on user configuration
	I0327 16:26:58.442559    6965 start.go:297] selected driver: qemu2
	I0327 16:26:58.442563    6965 start.go:901] validating driver "qemu2" against <nil>
	I0327 16:26:58.442649    6965 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0327 16:26:58.445543    6965 out.go:169] Automatically selected the socket_vmnet network
	I0327 16:26:58.450832    6965 start_flags.go:393] Using suggested 4000MB memory alloc based on sys=16384MB, container=0MB
	I0327 16:26:58.450930    6965 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0327 16:26:58.450973    6965 cni.go:84] Creating CNI manager for ""
	I0327 16:26:58.450982    6965 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0327 16:26:58.450987    6965 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0327 16:26:58.451032    6965 start.go:340] cluster config:
	{Name:download-only-652000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1711559786-18485@sha256:2dcab64da240d825290a528fa79ad3c32db45fe5f8be5150468234a7114eff82 Memory:4000 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.3 ClusterName:download-only-652000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0327 16:26:58.455408    6965 iso.go:125] acquiring lock: {Name:mk26a7f41e004242919e6d5076ec0f9645f820ad Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0327 16:26:58.458592    6965 out.go:97] Starting "download-only-652000" primary control-plane node in "download-only-652000" cluster
	I0327 16:26:58.458600    6965 preload.go:132] Checking if preload exists for k8s version v1.29.3 and runtime docker
	I0327 16:26:59.561806    6965 preload.go:119] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.29.3/preloaded-images-k8s-v18-v1.29.3-docker-overlay2-arm64.tar.lz4
	I0327 16:26:59.561850    6965 cache.go:56] Caching tarball of preloaded images
	I0327 16:26:59.562616    6965 preload.go:132] Checking if preload exists for k8s version v1.29.3 and runtime docker
	I0327 16:26:59.568245    6965 out.go:97] Downloading Kubernetes v1.29.3 preload ...
	I0327 16:26:59.568268    6965 preload.go:237] getting checksum for preloaded-images-k8s-v18-v1.29.3-docker-overlay2-arm64.tar.lz4 ...
	I0327 16:27:00.197248    6965 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.29.3/preloaded-images-k8s-v18-v1.29.3-docker-overlay2-arm64.tar.lz4?checksum=md5:c0bb0715201da444334d968c298f45eb -> /Users/jenkins/minikube-integration/18485-6511/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.29.3-docker-overlay2-arm64.tar.lz4
	I0327 16:27:16.391061    6965 preload.go:248] saving checksum for preloaded-images-k8s-v18-v1.29.3-docker-overlay2-arm64.tar.lz4 ...
	I0327 16:27:16.391235    6965 preload.go:255] verifying checksum of /Users/jenkins/minikube-integration/18485-6511/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.29.3-docker-overlay2-arm64.tar.lz4 ...
	I0327 16:27:16.947945    6965 cache.go:59] Finished verifying existence of preloaded tar for v1.29.3 on docker
	I0327 16:27:16.948137    6965 profile.go:142] Saving config to /Users/jenkins/minikube-integration/18485-6511/.minikube/profiles/download-only-652000/config.json ...
	I0327 16:27:16.948154    6965 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/18485-6511/.minikube/profiles/download-only-652000/config.json: {Name:mkf6281b73277d5076494a6308d8822779c949ec Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0327 16:27:16.948411    6965 preload.go:132] Checking if preload exists for k8s version v1.29.3 and runtime docker
	I0327 16:27:16.948532    6965 download.go:107] Downloading: https://dl.k8s.io/release/v1.29.3/bin/darwin/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.29.3/bin/darwin/arm64/kubectl.sha256 -> /Users/jenkins/minikube-integration/18485-6511/.minikube/cache/darwin/arm64/v1.29.3/kubectl
	
	
	* The control-plane node download-only-652000 host does not exist
	  To start a cluster, run: "minikube start -p download-only-652000"

                                                
                                                
-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.29.3/LogsDuration (0.11s)
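The preload URL above embeds its expected md5 as a checksum query parameter, and the log shows that checksum being saved and re-verified. A manual re-check sketch against the cached tarball (md5 is the stock macOS tool):

	md5 -q /Users/jenkins/minikube-integration/18485-6511/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.29.3-docker-overlay2-arm64.tar.lz4
	# expected: c0bb0715201da444334d968c298f45eb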

                                                
                                    
TestDownloadOnly/v1.29.3/DeleteAll (0.24s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.29.3/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-darwin-arm64 delete --all
--- PASS: TestDownloadOnly/v1.29.3/DeleteAll (0.24s)

                                                
                                    
TestDownloadOnly/v1.29.3/DeleteAlwaysSucceeds (0.23s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.29.3/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-darwin-arm64 delete -p download-only-652000
--- PASS: TestDownloadOnly/v1.29.3/DeleteAlwaysSucceeds (0.23s)

                                                
                                    
TestDownloadOnly/v1.30.0-beta.0/json-events (23.9s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.30.0-beta.0/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-darwin-arm64 start -o=json --download-only -p download-only-236000 --force --alsologtostderr --kubernetes-version=v1.30.0-beta.0 --container-runtime=docker --driver=qemu2 
aaa_download_only_test.go:81: (dbg) Done: out/minikube-darwin-arm64 start -o=json --download-only -p download-only-236000 --force --alsologtostderr --kubernetes-version=v1.30.0-beta.0 --container-runtime=docker --driver=qemu2 : (23.899917833s)
--- PASS: TestDownloadOnly/v1.30.0-beta.0/json-events (23.90s)

                                                
                                    
TestDownloadOnly/v1.30.0-beta.0/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.30.0-beta.0/preload-exists
--- PASS: TestDownloadOnly/v1.30.0-beta.0/preload-exists (0.00s)

                                                
                                    
TestDownloadOnly/v1.30.0-beta.0/kubectl (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.30.0-beta.0/kubectl
--- PASS: TestDownloadOnly/v1.30.0-beta.0/kubectl (0.00s)

                                                
                                    
TestDownloadOnly/v1.30.0-beta.0/LogsDuration (0.08s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.30.0-beta.0/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-darwin-arm64 logs -p download-only-236000
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-darwin-arm64 logs -p download-only-236000: exit status 85 (82.030917ms)

                                                
                                                
-- stdout --
	
	==> Audit <==
	|---------|-------------------------------------|----------------------|---------|----------------|---------------------|---------------------|
	| Command |                Args                 |       Profile        |  User   |    Version     |     Start Time      |      End Time       |
	|---------|-------------------------------------|----------------------|---------|----------------|---------------------|---------------------|
	| start   | -o=json --download-only             | download-only-614000 | jenkins | v1.33.0-beta.0 | 27 Mar 24 16:26 PDT |                     |
	|         | -p download-only-614000             |                      |         |                |                     |                     |
	|         | --force --alsologtostderr           |                      |         |                |                     |                     |
	|         | --kubernetes-version=v1.20.0        |                      |         |                |                     |                     |
	|         | --container-runtime=docker          |                      |         |                |                     |                     |
	|         | --driver=qemu2                      |                      |         |                |                     |                     |
	| delete  | --all                               | minikube             | jenkins | v1.33.0-beta.0 | 27 Mar 24 16:26 PDT | 27 Mar 24 16:26 PDT |
	| delete  | -p download-only-614000             | download-only-614000 | jenkins | v1.33.0-beta.0 | 27 Mar 24 16:26 PDT | 27 Mar 24 16:26 PDT |
	| start   | -o=json --download-only             | download-only-652000 | jenkins | v1.33.0-beta.0 | 27 Mar 24 16:26 PDT |                     |
	|         | -p download-only-652000             |                      |         |                |                     |                     |
	|         | --force --alsologtostderr           |                      |         |                |                     |                     |
	|         | --kubernetes-version=v1.29.3        |                      |         |                |                     |                     |
	|         | --container-runtime=docker          |                      |         |                |                     |                     |
	|         | --driver=qemu2                      |                      |         |                |                     |                     |
	| delete  | --all                               | minikube             | jenkins | v1.33.0-beta.0 | 27 Mar 24 16:27 PDT | 27 Mar 24 16:27 PDT |
	| delete  | -p download-only-652000             | download-only-652000 | jenkins | v1.33.0-beta.0 | 27 Mar 24 16:27 PDT | 27 Mar 24 16:27 PDT |
	| start   | -o=json --download-only             | download-only-236000 | jenkins | v1.33.0-beta.0 | 27 Mar 24 16:27 PDT |                     |
	|         | -p download-only-236000             |                      |         |                |                     |                     |
	|         | --force --alsologtostderr           |                      |         |                |                     |                     |
	|         | --kubernetes-version=v1.30.0-beta.0 |                      |         |                |                     |                     |
	|         | --container-runtime=docker          |                      |         |                |                     |                     |
	|         | --driver=qemu2                      |                      |         |                |                     |                     |
	|---------|-------------------------------------|----------------------|---------|----------------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/03/27 16:27:21
	Running on machine: MacOS-M1-Agent-2
	Binary: Built with gc go1.22.1 for darwin/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0327 16:27:21.238715    7000 out.go:291] Setting OutFile to fd 1 ...
	I0327 16:27:21.238828    7000 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0327 16:27:21.238831    7000 out.go:304] Setting ErrFile to fd 2...
	I0327 16:27:21.238833    7000 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0327 16:27:21.238980    7000 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18485-6511/.minikube/bin
	I0327 16:27:21.240041    7000 out.go:298] Setting JSON to true
	I0327 16:27:21.256147    7000 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":5212,"bootTime":1711576829,"procs":470,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.3.1","kernelVersion":"23.3.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0327 16:27:21.256231    7000 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0327 16:27:21.261323    7000 out.go:97] [download-only-236000] minikube v1.33.0-beta.0 on Darwin 14.3.1 (arm64)
	I0327 16:27:21.265289    7000 out.go:169] MINIKUBE_LOCATION=18485
	I0327 16:27:21.261398    7000 notify.go:220] Checking for updates...
	I0327 16:27:21.273068    7000 out.go:169] KUBECONFIG=/Users/jenkins/minikube-integration/18485-6511/kubeconfig
	I0327 16:27:21.276264    7000 out.go:169] MINIKUBE_BIN=out/minikube-darwin-arm64
	I0327 16:27:21.279342    7000 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0327 16:27:21.282324    7000 out.go:169] MINIKUBE_HOME=/Users/jenkins/minikube-integration/18485-6511/.minikube
	W0327 16:27:21.288256    7000 out.go:267] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0327 16:27:21.288393    7000 driver.go:392] Setting default libvirt URI to qemu:///system
	I0327 16:27:21.291308    7000 out.go:97] Using the qemu2 driver based on user configuration
	I0327 16:27:21.291319    7000 start.go:297] selected driver: qemu2
	I0327 16:27:21.291323    7000 start.go:901] validating driver "qemu2" against <nil>
	I0327 16:27:21.291387    7000 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0327 16:27:21.294268    7000 out.go:169] Automatically selected the socket_vmnet network
	I0327 16:27:21.299439    7000 start_flags.go:393] Using suggested 4000MB memory alloc based on sys=16384MB, container=0MB
	I0327 16:27:21.299539    7000 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0327 16:27:21.299582    7000 cni.go:84] Creating CNI manager for ""
	I0327 16:27:21.299589    7000 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0327 16:27:21.299603    7000 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0327 16:27:21.299639    7000 start.go:340] cluster config:
	{Name:download-only-236000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1711559786-18485@sha256:2dcab64da240d825290a528fa79ad3c32db45fe5f8be5150468234a7114eff82 Memory:4000 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0-beta.0 ClusterName:download-only-236000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.0-beta.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0327 16:27:21.304023    7000 iso.go:125] acquiring lock: {Name:mk26a7f41e004242919e6d5076ec0f9645f820ad Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0327 16:27:21.307353    7000 out.go:97] Starting "download-only-236000" primary control-plane node in "download-only-236000" cluster
	I0327 16:27:21.307363    7000 preload.go:132] Checking if preload exists for k8s version v1.30.0-beta.0 and runtime docker
	I0327 16:27:22.431083    7000 preload.go:119] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.30.0-beta.0/preloaded-images-k8s-v18-v1.30.0-beta.0-docker-overlay2-arm64.tar.lz4
	I0327 16:27:22.431188    7000 cache.go:56] Caching tarball of preloaded images
	I0327 16:27:22.432008    7000 preload.go:132] Checking if preload exists for k8s version v1.30.0-beta.0 and runtime docker
	I0327 16:27:22.436565    7000 out.go:97] Downloading Kubernetes v1.30.0-beta.0 preload ...
	I0327 16:27:22.436615    7000 preload.go:237] getting checksum for preloaded-images-k8s-v18-v1.30.0-beta.0-docker-overlay2-arm64.tar.lz4 ...
	I0327 16:27:23.013128    7000 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.30.0-beta.0/preloaded-images-k8s-v18-v1.30.0-beta.0-docker-overlay2-arm64.tar.lz4?checksum=md5:e2591d3d8d44bfdea8fdcdf9682f34bf -> /Users/jenkins/minikube-integration/18485-6511/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.0-beta.0-docker-overlay2-arm64.tar.lz4
	I0327 16:27:39.581099    7000 preload.go:248] saving checksum for preloaded-images-k8s-v18-v1.30.0-beta.0-docker-overlay2-arm64.tar.lz4 ...
	I0327 16:27:39.581261    7000 preload.go:255] verifying checksum of /Users/jenkins/minikube-integration/18485-6511/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.0-beta.0-docker-overlay2-arm64.tar.lz4 ...
	I0327 16:27:40.125110    7000 cache.go:59] Finished verifying existence of preloaded tar for v1.30.0-beta.0 on docker
	I0327 16:27:40.125320    7000 profile.go:142] Saving config to /Users/jenkins/minikube-integration/18485-6511/.minikube/profiles/download-only-236000/config.json ...
	I0327 16:27:40.125336    7000 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/18485-6511/.minikube/profiles/download-only-236000/config.json: {Name:mk915d360a7bcddcb9ef16fb0131b9afe96541c8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0327 16:27:40.125589    7000 preload.go:132] Checking if preload exists for k8s version v1.30.0-beta.0 and runtime docker
	I0327 16:27:40.125708    7000 download.go:107] Downloading: https://dl.k8s.io/release/v1.30.0-beta.0/bin/darwin/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.30.0-beta.0/bin/darwin/arm64/kubectl.sha256 -> /Users/jenkins/minikube-integration/18485-6511/.minikube/cache/darwin/arm64/v1.30.0-beta.0/kubectl
	
	
	* The control-plane node download-only-236000 host does not exist
	  To start a cluster, run: "minikube start -p download-only-236000"

                                                
                                                
-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.30.0-beta.0/LogsDuration (0.08s)

                                                
                                    
TestDownloadOnly/v1.30.0-beta.0/DeleteAll (0.23s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.30.0-beta.0/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-darwin-arm64 delete --all
--- PASS: TestDownloadOnly/v1.30.0-beta.0/DeleteAll (0.23s)

                                                
                                    
TestDownloadOnly/v1.30.0-beta.0/DeleteAlwaysSucceeds (0.22s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.30.0-beta.0/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-darwin-arm64 delete -p download-only-236000
--- PASS: TestDownloadOnly/v1.30.0-beta.0/DeleteAlwaysSucceeds (0.22s)

                                                
                                    
TestBinaryMirror (0.35s)

                                                
                                                
=== RUN   TestBinaryMirror
aaa_download_only_test.go:314: (dbg) Run:  out/minikube-darwin-arm64 start --download-only -p binary-mirror-029000 --alsologtostderr --binary-mirror http://127.0.0.1:50984 --driver=qemu2 
helpers_test.go:175: Cleaning up "binary-mirror-029000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p binary-mirror-029000
--- PASS: TestBinaryMirror (0.35s)
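--binary-mirror points the kubectl/kubelet/kubeadm downloads at the given base URL; here the test stands up a throwaway local server on port 50984. A sketch of hosting such a mirror by hand, assuming python3 on the agent and a directory that mirrors the assumed release layout (e.g. ./mirror/v1.29.3/bin/darwin/arm64/kubectl); the profile name below is hypothetical:

	python3 -m http.server 50984 --directory ./mirror &
	out/minikube-darwin-arm64 start --download-only -p binary-mirror-test --binary-mirror http://127.0.0.1:50984 --driver=qemu2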

                                                
                                    
TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.06s)

                                                
                                                
=== RUN   TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/EnablingAddonOnNonExistingCluster

                                                
                                                

                                                
                                                
=== CONT  TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
addons_test.go:928: (dbg) Run:  out/minikube-darwin-arm64 addons enable dashboard -p addons-295000
addons_test.go:928: (dbg) Non-zero exit: out/minikube-darwin-arm64 addons enable dashboard -p addons-295000: exit status 85 (56.368917ms)

                                                
                                                
-- stdout --
	* Profile "addons-295000" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-295000"

                                                
                                                
-- /stdout --
--- PASS: TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.06s)
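Exit status 85 is the expected outcome here: enabling an addon on a profile that does not exist fails fast. The output's own hints double as the remediation sequence:

	out/minikube-darwin-arm64 profile list
	out/minikube-darwin-arm64 start -p addons-295000
	out/minikube-darwin-arm64 addons enable dashboard -p addons-295000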

                                                
                                    
TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.06s)

                                                
                                                
=== RUN   TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/DisablingAddonOnNonExistingCluster

                                                
                                                

                                                
                                                
=== CONT  TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
addons_test.go:939: (dbg) Run:  out/minikube-darwin-arm64 addons disable dashboard -p addons-295000
addons_test.go:939: (dbg) Non-zero exit: out/minikube-darwin-arm64 addons disable dashboard -p addons-295000: exit status 85 (60.115958ms)

                                                
                                                
-- stdout --
	* Profile "addons-295000" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-295000"

                                                
                                                
-- /stdout --
--- PASS: TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.06s)

                                                
                                    
TestHyperKitDriverInstallOrUpdate (9.57s)

                                                
                                                
=== RUN   TestHyperKitDriverInstallOrUpdate
=== PAUSE TestHyperKitDriverInstallOrUpdate

                                                
                                                

                                                
                                                
=== CONT  TestHyperKitDriverInstallOrUpdate
--- PASS: TestHyperKitDriverInstallOrUpdate (9.57s)

                                                
                                    
TestErrorSpam/start (0.39s)

                                                
                                                
=== RUN   TestErrorSpam/start
error_spam_test.go:216: Cleaning up 1 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-432000 --log_dir /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-432000 start --dry-run
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-432000 --log_dir /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-432000 start --dry-run
error_spam_test.go:182: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-432000 --log_dir /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-432000 start --dry-run
--- PASS: TestErrorSpam/start (0.39s)

TestErrorSpam/status (0.1s)

=== RUN   TestErrorSpam/status
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-432000 --log_dir /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-432000 status
error_spam_test.go:159: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p nospam-432000 --log_dir /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-432000 status: exit status 7 (32.58875ms)

-- stdout --
	nospam-432000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
error_spam_test.go:161: "out/minikube-darwin-arm64 -p nospam-432000 --log_dir /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-432000 status" failed: exit status 7
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-432000 --log_dir /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-432000 status
error_spam_test.go:159: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p nospam-432000 --log_dir /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-432000 status: exit status 7 (31.390083ms)

-- stdout --
	nospam-432000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
error_spam_test.go:161: "out/minikube-darwin-arm64 -p nospam-432000 --log_dir /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-432000 status" failed: exit status 7
error_spam_test.go:182: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-432000 --log_dir /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-432000 status
error_spam_test.go:182: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p nospam-432000 --log_dir /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-432000 status: exit status 7 (31.039583ms)

-- stdout --
	nospam-432000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
error_spam_test.go:184: "out/minikube-darwin-arm64 -p nospam-432000 --log_dir /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-432000 status" failed: exit status 7
--- PASS: TestErrorSpam/status (0.10s)

TestErrorSpam/pause (0.12s)

=== RUN   TestErrorSpam/pause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-432000 --log_dir /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-432000 pause
error_spam_test.go:159: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p nospam-432000 --log_dir /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-432000 pause: exit status 83 (39.130917ms)

-- stdout --
	* The control-plane node nospam-432000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p nospam-432000"

-- /stdout --
error_spam_test.go:161: "out/minikube-darwin-arm64 -p nospam-432000 --log_dir /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-432000 pause" failed: exit status 83
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-432000 --log_dir /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-432000 pause
error_spam_test.go:159: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p nospam-432000 --log_dir /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-432000 pause: exit status 83 (41.830625ms)

-- stdout --
	* The control-plane node nospam-432000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p nospam-432000"

-- /stdout --
error_spam_test.go:161: "out/minikube-darwin-arm64 -p nospam-432000 --log_dir /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-432000 pause" failed: exit status 83
error_spam_test.go:182: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-432000 --log_dir /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-432000 pause
error_spam_test.go:182: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p nospam-432000 --log_dir /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-432000 pause: exit status 83 (40.827125ms)

-- stdout --
	* The control-plane node nospam-432000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p nospam-432000"

-- /stdout --
error_spam_test.go:184: "out/minikube-darwin-arm64 -p nospam-432000 --log_dir /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-432000 pause" failed: exit status 83
--- PASS: TestErrorSpam/pause (0.12s)

TestErrorSpam/unpause (0.13s)

=== RUN   TestErrorSpam/unpause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-432000 --log_dir /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-432000 unpause
error_spam_test.go:159: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p nospam-432000 --log_dir /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-432000 unpause: exit status 83 (42.705541ms)

-- stdout --
	* The control-plane node nospam-432000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p nospam-432000"

-- /stdout --
error_spam_test.go:161: "out/minikube-darwin-arm64 -p nospam-432000 --log_dir /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-432000 unpause" failed: exit status 83
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-432000 --log_dir /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-432000 unpause
error_spam_test.go:159: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p nospam-432000 --log_dir /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-432000 unpause: exit status 83 (41.929542ms)

-- stdout --
	* The control-plane node nospam-432000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p nospam-432000"

-- /stdout --
error_spam_test.go:161: "out/minikube-darwin-arm64 -p nospam-432000 --log_dir /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-432000 unpause" failed: exit status 83
error_spam_test.go:182: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-432000 --log_dir /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-432000 unpause
error_spam_test.go:182: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p nospam-432000 --log_dir /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-432000 unpause: exit status 83 (41.645625ms)

-- stdout --
	* The control-plane node nospam-432000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p nospam-432000"

-- /stdout --
error_spam_test.go:184: "out/minikube-darwin-arm64 -p nospam-432000 --log_dir /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-432000 unpause" failed: exit status 83
--- PASS: TestErrorSpam/unpause (0.13s)

TestErrorSpam/stop (7.47s)

=== RUN   TestErrorSpam/stop
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-432000 --log_dir /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-432000 stop
error_spam_test.go:159: (dbg) Done: out/minikube-darwin-arm64 -p nospam-432000 --log_dir /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-432000 stop: (2.036864417s)
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-432000 --log_dir /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-432000 stop
error_spam_test.go:159: (dbg) Done: out/minikube-darwin-arm64 -p nospam-432000 --log_dir /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-432000 stop: (1.932091625s)
error_spam_test.go:182: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-432000 --log_dir /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-432000 stop
error_spam_test.go:182: (dbg) Done: out/minikube-darwin-arm64 -p nospam-432000 --log_dir /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-432000 stop: (3.494066666s)
--- PASS: TestErrorSpam/stop (7.47s)

TestFunctional/serial/CopySyncFile (0s)

=== RUN   TestFunctional/serial/CopySyncFile
functional_test.go:1851: local sync path: /Users/jenkins/minikube-integration/18485-6511/.minikube/files/etc/test/nested/copy/6926/hosts
--- PASS: TestFunctional/serial/CopySyncFile (0.00s)

TestFunctional/serial/AuditLog (0s)

=== RUN   TestFunctional/serial/AuditLog
--- PASS: TestFunctional/serial/AuditLog (0.00s)

TestFunctional/serial/CacheCmd/cache/add_remote (6.22s)

=== RUN   TestFunctional/serial/CacheCmd/cache/add_remote
functional_test.go:1045: (dbg) Run:  out/minikube-darwin-arm64 -p functional-746000 cache add registry.k8s.io/pause:3.1
functional_test.go:1045: (dbg) Done: out/minikube-darwin-arm64 -p functional-746000 cache add registry.k8s.io/pause:3.1: (2.215778875s)
functional_test.go:1045: (dbg) Run:  out/minikube-darwin-arm64 -p functional-746000 cache add registry.k8s.io/pause:3.3
functional_test.go:1045: (dbg) Done: out/minikube-darwin-arm64 -p functional-746000 cache add registry.k8s.io/pause:3.3: (2.167548959s)
functional_test.go:1045: (dbg) Run:  out/minikube-darwin-arm64 -p functional-746000 cache add registry.k8s.io/pause:latest
functional_test.go:1045: (dbg) Done: out/minikube-darwin-arm64 -p functional-746000 cache add registry.k8s.io/pause:latest: (1.832896541s)
--- PASS: TestFunctional/serial/CacheCmd/cache/add_remote (6.22s)

TestFunctional/serial/CacheCmd/cache/add_local (1.19s)

=== RUN   TestFunctional/serial/CacheCmd/cache/add_local
functional_test.go:1073: (dbg) Run:  docker build -t minikube-local-cache-test:functional-746000 /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestFunctionalserialCacheCmdcacheadd_local3234900516/001
functional_test.go:1085: (dbg) Run:  out/minikube-darwin-arm64 -p functional-746000 cache add minikube-local-cache-test:functional-746000
functional_test.go:1090: (dbg) Run:  out/minikube-darwin-arm64 -p functional-746000 cache delete minikube-local-cache-test:functional-746000
functional_test.go:1079: (dbg) Run:  docker rmi minikube-local-cache-test:functional-746000
--- PASS: TestFunctional/serial/CacheCmd/cache/add_local (1.19s)

TestFunctional/serial/CacheCmd/cache/CacheDelete (0.04s)

=== RUN   TestFunctional/serial/CacheCmd/cache/CacheDelete
functional_test.go:1098: (dbg) Run:  out/minikube-darwin-arm64 cache delete registry.k8s.io/pause:3.3
--- PASS: TestFunctional/serial/CacheCmd/cache/CacheDelete (0.04s)

TestFunctional/serial/CacheCmd/cache/list (0.04s)

=== RUN   TestFunctional/serial/CacheCmd/cache/list
functional_test.go:1106: (dbg) Run:  out/minikube-darwin-arm64 cache list
--- PASS: TestFunctional/serial/CacheCmd/cache/list (0.04s)

TestFunctional/serial/CacheCmd/cache/delete (0.07s)

=== RUN   TestFunctional/serial/CacheCmd/cache/delete
functional_test.go:1168: (dbg) Run:  out/minikube-darwin-arm64 cache delete registry.k8s.io/pause:3.1
functional_test.go:1168: (dbg) Run:  out/minikube-darwin-arm64 cache delete registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/delete (0.07s)

TestFunctional/parallel/ConfigCmd (0.23s)

=== RUN   TestFunctional/parallel/ConfigCmd
=== PAUSE TestFunctional/parallel/ConfigCmd

=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:1195: (dbg) Run:  out/minikube-darwin-arm64 -p functional-746000 config unset cpus
functional_test.go:1195: (dbg) Run:  out/minikube-darwin-arm64 -p functional-746000 config get cpus
functional_test.go:1195: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-746000 config get cpus: exit status 14 (32.853083ms)

** stderr ** 
	Error: specified key could not be found in config

** /stderr **
functional_test.go:1195: (dbg) Run:  out/minikube-darwin-arm64 -p functional-746000 config set cpus 2
functional_test.go:1195: (dbg) Run:  out/minikube-darwin-arm64 -p functional-746000 config get cpus
functional_test.go:1195: (dbg) Run:  out/minikube-darwin-arm64 -p functional-746000 config unset cpus
functional_test.go:1195: (dbg) Run:  out/minikube-darwin-arm64 -p functional-746000 config get cpus
functional_test.go:1195: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-746000 config get cpus: exit status 14 (34.58275ms)

** stderr ** 
	Error: specified key could not be found in config

** /stderr **
--- PASS: TestFunctional/parallel/ConfigCmd (0.23s)

TestFunctional/parallel/DryRun (0.28s)

=== RUN   TestFunctional/parallel/DryRun
=== PAUSE TestFunctional/parallel/DryRun

=== CONT  TestFunctional/parallel/DryRun
functional_test.go:970: (dbg) Run:  out/minikube-darwin-arm64 start -p functional-746000 --dry-run --memory 250MB --alsologtostderr --driver=qemu2 
functional_test.go:970: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p functional-746000 --dry-run --memory 250MB --alsologtostderr --driver=qemu2 : exit status 23 (157.77275ms)

-- stdout --
	* [functional-746000] minikube v1.33.0-beta.0 on Darwin 14.3.1 (arm64)
	  - MINIKUBE_LOCATION=18485
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18485-6511/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18485-6511/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	
	

-- /stdout --
** stderr ** 
	I0327 16:29:40.090358    7613 out.go:291] Setting OutFile to fd 1 ...
	I0327 16:29:40.090491    7613 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0327 16:29:40.090494    7613 out.go:304] Setting ErrFile to fd 2...
	I0327 16:29:40.090498    7613 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0327 16:29:40.090643    7613 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18485-6511/.minikube/bin
	I0327 16:29:40.091826    7613 out.go:298] Setting JSON to false
	I0327 16:29:40.110643    7613 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":5351,"bootTime":1711576829,"procs":474,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.3.1","kernelVersion":"23.3.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0327 16:29:40.110712    7613 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0327 16:29:40.115848    7613 out.go:177] * [functional-746000] minikube v1.33.0-beta.0 on Darwin 14.3.1 (arm64)
	I0327 16:29:40.122821    7613 out.go:177]   - MINIKUBE_LOCATION=18485
	I0327 16:29:40.126757    7613 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/18485-6511/kubeconfig
	I0327 16:29:40.122851    7613 notify.go:220] Checking for updates...
	I0327 16:29:40.130811    7613 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0327 16:29:40.133784    7613 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0327 16:29:40.136806    7613 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18485-6511/.minikube
	I0327 16:29:40.139724    7613 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0327 16:29:40.143160    7613 config.go:182] Loaded profile config "functional-746000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.29.3
	I0327 16:29:40.143427    7613 driver.go:392] Setting default libvirt URI to qemu:///system
	I0327 16:29:40.147782    7613 out.go:177] * Using the qemu2 driver based on existing profile
	I0327 16:29:40.154746    7613 start.go:297] selected driver: qemu2
	I0327 16:29:40.154752    7613 start.go:901] validating driver "qemu2" against &{Name:functional-746000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18485/minikube-v1.33.0-1711559712-18485-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1711559786-18485@sha256:2dcab64da240d825290a528fa79ad3c32db45fe5f8be5150468234a7114eff82 Memory:4000 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.3 ClusterName:functional-746000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8441 KubernetesVersion:v1.29.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0327 16:29:40.154816    7613 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0327 16:29:40.160678    7613 out.go:177] 
	W0327 16:29:40.164724    7613 out.go:239] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I0327 16:29:40.168749    7613 out.go:177] 

** /stderr **
functional_test.go:987: (dbg) Run:  out/minikube-darwin-arm64 start -p functional-746000 --dry-run --alsologtostderr -v=1 --driver=qemu2 
--- PASS: TestFunctional/parallel/DryRun (0.28s)

TestFunctional/parallel/InternationalLanguage (0.11s)

=== RUN   TestFunctional/parallel/InternationalLanguage
=== PAUSE TestFunctional/parallel/InternationalLanguage

=== CONT  TestFunctional/parallel/InternationalLanguage
functional_test.go:1016: (dbg) Run:  out/minikube-darwin-arm64 start -p functional-746000 --dry-run --memory 250MB --alsologtostderr --driver=qemu2 
functional_test.go:1016: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p functional-746000 --dry-run --memory 250MB --alsologtostderr --driver=qemu2 : exit status 23 (112.254959ms)

-- stdout --
	* [functional-746000] minikube v1.33.0-beta.0 sur Darwin 14.3.1 (arm64)
	  - MINIKUBE_LOCATION=18485
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18485-6511/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18485-6511/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Utilisation du pilote qemu2 basé sur le profil existant
	
	

-- /stdout --
** stderr ** 
	I0327 16:29:40.323209    7624 out.go:291] Setting OutFile to fd 1 ...
	I0327 16:29:40.323316    7624 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0327 16:29:40.323319    7624 out.go:304] Setting ErrFile to fd 2...
	I0327 16:29:40.323322    7624 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0327 16:29:40.323451    7624 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18485-6511/.minikube/bin
	I0327 16:29:40.324844    7624 out.go:298] Setting JSON to false
	I0327 16:29:40.341415    7624 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":5351,"bootTime":1711576829,"procs":474,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.3.1","kernelVersion":"23.3.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0327 16:29:40.341499    7624 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0327 16:29:40.346781    7624 out.go:177] * [functional-746000] minikube v1.33.0-beta.0 sur Darwin 14.3.1 (arm64)
	I0327 16:29:40.353750    7624 out.go:177]   - MINIKUBE_LOCATION=18485
	I0327 16:29:40.353821    7624 notify.go:220] Checking for updates...
	I0327 16:29:40.357838    7624 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/18485-6511/kubeconfig
	I0327 16:29:40.360719    7624 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0327 16:29:40.363737    7624 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0327 16:29:40.366759    7624 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18485-6511/.minikube
	I0327 16:29:40.369683    7624 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0327 16:29:40.373050    7624 config.go:182] Loaded profile config "functional-746000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.29.3
	I0327 16:29:40.373308    7624 driver.go:392] Setting default libvirt URI to qemu:///system
	I0327 16:29:40.377725    7624 out.go:177] * Utilisation du pilote qemu2 basé sur le profil existant
	I0327 16:29:40.384788    7624 start.go:297] selected driver: qemu2
	I0327 16:29:40.384795    7624 start.go:901] validating driver "qemu2" against &{Name:functional-746000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18485/minikube-v1.33.0-1711559712-18485-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1711559786-18485@sha256:2dcab64da240d825290a528fa79ad3c32db45fe5f8be5150468234a7114eff82 Memory:4000 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.3 ClusterName:functional-746000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8441 KubernetesVersion:v1.29.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0327 16:29:40.384850    7624 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0327 16:29:40.390756    7624 out.go:177] 
	W0327 16:29:40.394730    7624 out.go:239] X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	I0327 16:29:40.398749    7624 out.go:177] 

** /stderr **
--- PASS: TestFunctional/parallel/InternationalLanguage (0.11s)

TestFunctional/parallel/AddonsCmd (0.12s)

=== RUN   TestFunctional/parallel/AddonsCmd
=== PAUSE TestFunctional/parallel/AddonsCmd

=== CONT  TestFunctional/parallel/AddonsCmd
functional_test.go:1686: (dbg) Run:  out/minikube-darwin-arm64 -p functional-746000 addons list
functional_test.go:1698: (dbg) Run:  out/minikube-darwin-arm64 -p functional-746000 addons list -o json
--- PASS: TestFunctional/parallel/AddonsCmd (0.12s)

TestFunctional/parallel/License (1.43s)

=== RUN   TestFunctional/parallel/License
=== PAUSE TestFunctional/parallel/License

=== CONT  TestFunctional/parallel/License
functional_test.go:2284: (dbg) Run:  out/minikube-darwin-arm64 license
functional_test.go:2284: (dbg) Done: out/minikube-darwin-arm64 license: (1.425615083s)
--- PASS: TestFunctional/parallel/License (1.43s)

TestFunctional/parallel/Version/short (0.04s)

=== RUN   TestFunctional/parallel/Version/short
=== PAUSE TestFunctional/parallel/Version/short

=== CONT  TestFunctional/parallel/Version/short
functional_test.go:2252: (dbg) Run:  out/minikube-darwin-arm64 -p functional-746000 version --short
--- PASS: TestFunctional/parallel/Version/short (0.04s)

TestFunctional/parallel/ImageCommands/Setup (5.55s)

=== RUN   TestFunctional/parallel/ImageCommands/Setup
functional_test.go:341: (dbg) Run:  docker pull gcr.io/google-containers/addon-resizer:1.8.8
functional_test.go:341: (dbg) Done: docker pull gcr.io/google-containers/addon-resizer:1.8.8: (5.507960833s)
functional_test.go:346: (dbg) Run:  docker tag gcr.io/google-containers/addon-resizer:1.8.8 gcr.io/google-containers/addon-resizer:functional-746000
--- PASS: TestFunctional/parallel/ImageCommands/Setup (5.55s)

TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.01s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/StartTunnel
functional_test_tunnel_test.go:129: (dbg) daemon: [out/minikube-darwin-arm64 -p functional-746000 tunnel --alsologtostderr]
--- PASS: TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.01s)

TestFunctional/parallel/ImageCommands/ImageRemove (0.07s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageRemove
functional_test.go:391: (dbg) Run:  out/minikube-darwin-arm64 -p functional-746000 image rm gcr.io/google-containers/addon-resizer:functional-746000 --alsologtostderr
functional_test.go:447: (dbg) Run:  out/minikube-darwin-arm64 -p functional-746000 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageRemove (0.07s)

TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.13s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveDaemon
functional_test.go:418: (dbg) Run:  docker rmi gcr.io/google-containers/addon-resizer:functional-746000
functional_test.go:423: (dbg) Run:  out/minikube-darwin-arm64 -p functional-746000 image save --daemon gcr.io/google-containers/addon-resizer:functional-746000 --alsologtostderr
functional_test.go:428: (dbg) Run:  docker image inspect gcr.io/google-containers/addon-resizer:functional-746000
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.13s)

TestFunctional/parallel/ProfileCmd/profile_not_create (0.14s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_not_create
functional_test.go:1266: (dbg) Run:  out/minikube-darwin-arm64 profile lis
functional_test.go:1271: (dbg) Run:  out/minikube-darwin-arm64 profile list --output json
--- PASS: TestFunctional/parallel/ProfileCmd/profile_not_create (0.14s)

TestFunctional/parallel/ProfileCmd/profile_list (0.11s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_list
functional_test.go:1306: (dbg) Run:  out/minikube-darwin-arm64 profile list
functional_test.go:1311: Took "74.058167ms" to run "out/minikube-darwin-arm64 profile list"
functional_test.go:1320: (dbg) Run:  out/minikube-darwin-arm64 profile list -l
functional_test.go:1325: Took "35.877208ms" to run "out/minikube-darwin-arm64 profile list -l"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_list (0.11s)

TestFunctional/parallel/ProfileCmd/profile_json_output (0.11s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_json_output
functional_test.go:1357: (dbg) Run:  out/minikube-darwin-arm64 profile list -o json
functional_test.go:1362: Took "72.889708ms" to run "out/minikube-darwin-arm64 profile list -o json"
functional_test.go:1370: (dbg) Run:  out/minikube-darwin-arm64 profile list -o json --light
functional_test.go:1375: Took "35.693708ms" to run "out/minikube-darwin-arm64 profile list -o json --light"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_json_output (0.11s)

TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (10.04s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil
functional_test_tunnel_test.go:351: (dbg) Run:  dscacheutil -q host -a name nginx-svc.default.svc.cluster.local.
functional_test_tunnel_test.go:351: (dbg) Done: dscacheutil -q host -a name nginx-svc.default.svc.cluster.local.: (10.014133916s)
functional_test_tunnel_test.go:359: DNS resolution by dscacheutil for nginx-svc.default.svc.cluster.local. is working!
--- PASS: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (10.04s)

TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.16s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel
functional_test_tunnel_test.go:434: (dbg) stopping [out/minikube-darwin-arm64 -p functional-746000 tunnel --alsologtostderr] ...
--- PASS: TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.16s)

TestFunctional/delete_addon-resizer_images (0.17s)

=== RUN   TestFunctional/delete_addon-resizer_images
functional_test.go:189: (dbg) Run:  docker rmi -f gcr.io/google-containers/addon-resizer:1.8.8
functional_test.go:189: (dbg) Run:  docker rmi -f gcr.io/google-containers/addon-resizer:functional-746000
--- PASS: TestFunctional/delete_addon-resizer_images (0.17s)

TestFunctional/delete_my-image_image (0.04s)

=== RUN   TestFunctional/delete_my-image_image
functional_test.go:197: (dbg) Run:  docker rmi -f localhost/my-image:functional-746000
--- PASS: TestFunctional/delete_my-image_image (0.04s)

TestFunctional/delete_minikube_cached_images (0.04s)

=== RUN   TestFunctional/delete_minikube_cached_images
functional_test.go:205: (dbg) Run:  docker rmi -f minikube-local-cache-test:functional-746000
--- PASS: TestFunctional/delete_minikube_cached_images (0.04s)

TestJSONOutput/start/Audit (0s)

=== RUN   TestJSONOutput/start/Audit
--- PASS: TestJSONOutput/start/Audit (0.00s)

TestJSONOutput/start/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/start/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/start/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/start/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/start/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/start/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/start/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/pause/Audit (0s)

=== RUN   TestJSONOutput/pause/Audit
--- PASS: TestJSONOutput/pause/Audit (0.00s)

TestJSONOutput/pause/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/pause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/pause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/unpause/Audit (0s)

=== RUN   TestJSONOutput/unpause/Audit
--- PASS: TestJSONOutput/unpause/Audit (0.00s)

TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/unpause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/stop/Command (3.49s)

=== RUN   TestJSONOutput/stop/Command
json_output_test.go:63: (dbg) Run:  out/minikube-darwin-arm64 stop -p json-output-483000 --output=json --user=testUser
json_output_test.go:63: (dbg) Done: out/minikube-darwin-arm64 stop -p json-output-483000 --output=json --user=testUser: (3.490611417s)
--- PASS: TestJSONOutput/stop/Command (3.49s)

TestJSONOutput/stop/Audit (0s)

=== RUN   TestJSONOutput/stop/Audit
--- PASS: TestJSONOutput/stop/Audit (0.00s)

TestJSONOutput/stop/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/stop/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/stop/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0.00s)

TestErrorJSONOutput (0.32s)

=== RUN   TestErrorJSONOutput
json_output_test.go:160: (dbg) Run:  out/minikube-darwin-arm64 start -p json-output-error-226000 --memory=2200 --output=json --wait=true --driver=fail
json_output_test.go:160: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p json-output-error-226000 --memory=2200 --output=json --wait=true --driver=fail: exit status 56 (95.930166ms)

-- stdout --
	{"specversion":"1.0","id":"47091219-781b-49aa-b063-ad62b1fe8638","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[json-output-error-226000] minikube v1.33.0-beta.0 on Darwin 14.3.1 (arm64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"7fec94cc-8405-43fe-b12f-b71e9cab6c4d","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=18485"}}
	{"specversion":"1.0","id":"08eba348-d841-47b5-922e-7a50da5e85a4","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/Users/jenkins/minikube-integration/18485-6511/kubeconfig"}}
	{"specversion":"1.0","id":"c47fd620-5e2a-43a2-a8e3-af3007ed2db7","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-darwin-arm64"}}
	{"specversion":"1.0","id":"dbedafe7-b1bd-4da1-9058-3e0b2e2151b5","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"3de50b6e-cf40-427e-9c25-db28ec8d0774","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/Users/jenkins/minikube-integration/18485-6511/.minikube"}}
	{"specversion":"1.0","id":"7d054084-3db3-4464-8147-fec8579fa0ad","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"86f66079-c3eb-4ec7-8e6c-db8269943877","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"56","issues":"","message":"The driver 'fail' is not supported on darwin/arm64","name":"DRV_UNSUPPORTED_OS","url":""}}

-- /stdout --
helpers_test.go:175: Cleaning up "json-output-error-226000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p json-output-error-226000
--- PASS: TestErrorJSONOutput (0.32s)

TestMainNoArgs (0.04s)

=== RUN   TestMainNoArgs
main_test.go:68: (dbg) Run:  out/minikube-darwin-arm64
--- PASS: TestMainNoArgs (0.04s)

TestStoppedBinaryUpgrade/Setup (4.92s)

=== RUN   TestStoppedBinaryUpgrade/Setup
--- PASS: TestStoppedBinaryUpgrade/Setup (4.92s)

TestNoKubernetes/serial/StartNoK8sWithVersion (0.1s)

=== RUN   TestNoKubernetes/serial/StartNoK8sWithVersion
no_kubernetes_test.go:83: (dbg) Run:  out/minikube-darwin-arm64 start -p NoKubernetes-222000 --no-kubernetes --kubernetes-version=1.20 --driver=qemu2 
no_kubernetes_test.go:83: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p NoKubernetes-222000 --no-kubernetes --kubernetes-version=1.20 --driver=qemu2 : exit status 14 (100.645666ms)

-- stdout --
	* [NoKubernetes-222000] minikube v1.33.0-beta.0 on Darwin 14.3.1 (arm64)
	  - MINIKUBE_LOCATION=18485
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18485-6511/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18485-6511/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

-- /stdout --
** stderr ** 
	X Exiting due to MK_USAGE: cannot specify --kubernetes-version with --no-kubernetes,
	to unset a global config run:
	
	$ minikube config unset kubernetes-version

** /stderr **
--- PASS: TestNoKubernetes/serial/StartNoK8sWithVersion (0.10s)

TestNoKubernetes/serial/VerifyK8sNotRunning (0.04s)

=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunning
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-darwin-arm64 ssh -p NoKubernetes-222000 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-darwin-arm64 ssh -p NoKubernetes-222000 "sudo systemctl is-active --quiet service kubelet": exit status 83 (44.8165ms)

-- stdout --
	* The control-plane node NoKubernetes-222000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p NoKubernetes-222000"

-- /stdout --
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunning (0.04s)

TestNoKubernetes/serial/ProfileList (31.48s)

=== RUN   TestNoKubernetes/serial/ProfileList
no_kubernetes_test.go:169: (dbg) Run:  out/minikube-darwin-arm64 profile list
no_kubernetes_test.go:169: (dbg) Done: out/minikube-darwin-arm64 profile list: (15.807369667s)
no_kubernetes_test.go:179: (dbg) Run:  out/minikube-darwin-arm64 profile list --output=json
no_kubernetes_test.go:179: (dbg) Done: out/minikube-darwin-arm64 profile list --output=json: (15.668204s)
--- PASS: TestNoKubernetes/serial/ProfileList (31.48s)

TestNoKubernetes/serial/Stop (2.08s)

=== RUN   TestNoKubernetes/serial/Stop
no_kubernetes_test.go:158: (dbg) Run:  out/minikube-darwin-arm64 stop -p NoKubernetes-222000
no_kubernetes_test.go:158: (dbg) Done: out/minikube-darwin-arm64 stop -p NoKubernetes-222000: (2.083660084s)
--- PASS: TestNoKubernetes/serial/Stop (2.08s)

TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.05s)

=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunningSecond
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-darwin-arm64 ssh -p NoKubernetes-222000 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-darwin-arm64 ssh -p NoKubernetes-222000 "sudo systemctl is-active --quiet service kubelet": exit status 83 (51.698959ms)

-- stdout --
	* The control-plane node NoKubernetes-222000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p NoKubernetes-222000"

-- /stdout --
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.05s)

=== RUN   TestStoppedBinaryUpgrade/MinikubeLogs
version_upgrade_test.go:206: (dbg) Run:  out/minikube-darwin-arm64 logs -p stopped-upgrade-017000
--- PASS: TestStoppedBinaryUpgrade/MinikubeLogs (0.73s)

=== RUN   TestStartStop/group/old-k8s-version/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-darwin-arm64 stop -p old-k8s-version-386000 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-darwin-arm64 stop -p old-k8s-version-386000 --alsologtostderr -v=3: (3.534573125s)
--- PASS: TestStartStop/group/old-k8s-version/serial/Stop (3.53s)

=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-386000 -n old-k8s-version-386000
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-386000 -n old-k8s-version-386000: exit status 7 (32.710958ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-darwin-arm64 addons enable dashboard -p old-k8s-version-386000 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.10s)
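
The "status error: exit status 7 (may be ok)" line is the expected shape after a stop: minikube status reports a stopped host via a non-zero exit code rather than an error message, and the test only needs Host=Stopped before enabling the dashboard addon offline. Reproduced by hand it would look roughly like this (exit-code meaning inferred from this log):

	$ out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-386000
	Stopped
	$ echo $?
	7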

=== RUN   TestStartStop/group/no-preload/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-darwin-arm64 stop -p no-preload-646000 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-darwin-arm64 stop -p no-preload-646000 --alsologtostderr -v=3: (3.438593125s)
--- PASS: TestStartStop/group/no-preload/serial/Stop (3.44s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-darwin-arm64 stop -p default-k8s-diff-port-415000 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-darwin-arm64 stop -p default-k8s-diff-port-415000 --alsologtostderr -v=3: (1.988991084s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Stop (1.99s)

=== RUN   TestStartStop/group/no-preload/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-646000 -n no-preload-646000
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-646000 -n no-preload-646000: exit status 7 (59.977792ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-darwin-arm64 addons enable dashboard -p no-preload-646000 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.13s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-415000 -n default-k8s-diff-port-415000
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-415000 -n default-k8s-diff-port-415000: exit status 7 (60.856083ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-darwin-arm64 addons enable dashboard -p default-k8s-diff-port-415000 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.13s)

=== RUN   TestStartStop/group/newest-cni/serial/DeployApp
--- PASS: TestStartStop/group/newest-cni/serial/DeployApp (0.00s)

=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-darwin-arm64 addons enable metrics-server -p newest-cni-791000 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:211: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (0.06s)
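
The --images/--registries pair in the command above overrides an addon's image by name, here pointing metrics-server's MetricsServer image at echoserver under a deliberately unreachable registry (fake.domain), so the enable path is exercised without pulling the real image. The general shape, with placeholder values substituted for this run's arguments:

	$ minikube addons enable metrics-server -p <profile> \
	    --images=MetricsServer=<name>:<tag> \
	    --registries=MetricsServer=<registry>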

=== RUN   TestStartStop/group/newest-cni/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-darwin-arm64 stop -p newest-cni-791000 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-darwin-arm64 stop -p newest-cni-791000 --alsologtostderr -v=3: (3.208179s)
--- PASS: TestStartStop/group/newest-cni/serial/Stop (3.21s)

=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-791000 -n newest-cni-791000
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-791000 -n newest-cni-791000: exit status 7 (57.850042ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-darwin-arm64 addons enable dashboard -p newest-cni-791000 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.13s)

=== RUN   TestStartStop/group/embed-certs/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-darwin-arm64 stop -p embed-certs-201000 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-darwin-arm64 stop -p embed-certs-201000 --alsologtostderr -v=3: (3.761779583s)
--- PASS: TestStartStop/group/embed-certs/serial/Stop (3.76s)

=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-201000 -n embed-certs-201000
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-201000 -n embed-certs-201000: exit status 7 (57.949959ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-darwin-arm64 addons enable dashboard -p embed-certs-201000 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.13s)

=== RUN   TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop
start_stop_delete_test.go:273: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0.00s)

=== RUN   TestStartStop/group/newest-cni/serial/AddonExistsAfterStop
start_stop_delete_test.go:284: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0.00s)

Test skip (24/266)

=== RUN   TestDownloadOnly/v1.20.0/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.20.0/cached-images (0.00s)

=== RUN   TestDownloadOnly/v1.20.0/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.20.0/binaries (0.00s)

=== RUN   TestDownloadOnly/v1.29.3/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.29.3/cached-images (0.00s)

=== RUN   TestDownloadOnly/v1.29.3/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.29.3/binaries (0.00s)

=== RUN   TestDownloadOnly/v1.30.0-beta.0/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.30.0-beta.0/cached-images (0.00s)

=== RUN   TestDownloadOnly/v1.30.0-beta.0/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.30.0-beta.0/binaries (0.00s)

=== RUN   TestDownloadOnlyKic
aaa_download_only_test.go:220: skipping, only for docker or podman driver
--- SKIP: TestDownloadOnlyKic (0.00s)

=== RUN   TestDockerEnvContainerd
docker_test.go:170: running with docker false darwin arm64
docker_test.go:172: skipping: TestDockerEnvContainerd can only be run with the containerd runtime on Docker driver
--- SKIP: TestDockerEnvContainerd (0.00s)

=== RUN   TestKVMDriverInstallOrUpdate
driver_install_or_update_test.go:41: Skip if not linux.
--- SKIP: TestKVMDriverInstallOrUpdate (0.00s)

=== RUN   TestFunctional/parallel/MySQL
=== PAUSE TestFunctional/parallel/MySQL

=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1783: arm64 is not supported by mysql. Skip the test. See https://github.com/kubernetes/minikube/issues/10144
--- SKIP: TestFunctional/parallel/MySQL (0.00s)

=== RUN   TestFunctional/parallel/PodmanEnv
=== PAUSE TestFunctional/parallel/PodmanEnv

=== CONT  TestFunctional/parallel/PodmanEnv
functional_test.go:546: only validate podman env with docker container runtime, currently testing docker
--- SKIP: TestFunctional/parallel/PodmanEnv (0.00s)

=== RUN   TestFunctional/parallel/MountCmd/any-port
functional_test_mount_test.go:73: (dbg) daemon: [out/minikube-darwin-arm64 mount -p functional-746000 /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestFunctionalparallelMountCmdany-port2202420337/001:/mount-9p --alsologtostderr -v=1]
functional_test_mount_test.go:107: wrote "test-1711582142783962000" to /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestFunctionalparallelMountCmdany-port2202420337/001/created-by-test
functional_test_mount_test.go:107: wrote "test-1711582142783962000" to /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestFunctionalparallelMountCmdany-port2202420337/001/created-by-test-removed-by-pod
functional_test_mount_test.go:107: wrote "test-1711582142783962000" to /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestFunctionalparallelMountCmdany-port2202420337/001/test-1711582142783962000
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-darwin-arm64 -p functional-746000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-746000 ssh "findmnt -T /mount-9p | grep 9p": exit status 83 (56.545375ms)

-- stdout --
	* The control-plane node functional-746000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-746000"

-- /stdout --
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-darwin-arm64 -p functional-746000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-746000 ssh "findmnt -T /mount-9p | grep 9p": exit status 83 (87.942458ms)

-- stdout --
	* The control-plane node functional-746000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-746000"

-- /stdout --
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-darwin-arm64 -p functional-746000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-746000 ssh "findmnt -T /mount-9p | grep 9p": exit status 83 (86.558916ms)

-- stdout --
	* The control-plane node functional-746000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-746000"

-- /stdout --
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-darwin-arm64 -p functional-746000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-746000 ssh "findmnt -T /mount-9p | grep 9p": exit status 83 (86.876167ms)

-- stdout --
	* The control-plane node functional-746000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-746000"

-- /stdout --
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-darwin-arm64 -p functional-746000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-746000 ssh "findmnt -T /mount-9p | grep 9p": exit status 83 (89.350708ms)

-- stdout --
	* The control-plane node functional-746000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-746000"

-- /stdout --
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-darwin-arm64 -p functional-746000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-746000 ssh "findmnt -T /mount-9p | grep 9p": exit status 83 (86.145541ms)

-- stdout --
	* The control-plane node functional-746000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-746000"

-- /stdout --
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-darwin-arm64 -p functional-746000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-746000 ssh "findmnt -T /mount-9p | grep 9p": exit status 83 (86.217625ms)

-- stdout --
	* The control-plane node functional-746000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-746000"

-- /stdout --
functional_test_mount_test.go:123: skipping: mount did not appear, likely because macOS requires prompt to allow non-code signed binaries to listen on non-localhost port
functional_test_mount_test.go:90: (dbg) Run:  out/minikube-darwin-arm64 -p functional-746000 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:90: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-746000 ssh "sudo umount -f /mount-9p": exit status 83 (47.698042ms)

-- stdout --
	* The control-plane node functional-746000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-746000"

-- /stdout --
functional_test_mount_test.go:92: "out/minikube-darwin-arm64 -p functional-746000 ssh \"sudo umount -f /mount-9p\"": exit status 83
functional_test_mount_test.go:94: (dbg) stopping [out/minikube-darwin-arm64 mount -p functional-746000 /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestFunctionalparallelMountCmdany-port2202420337/001:/mount-9p --alsologtostderr -v=1] ...
--- SKIP: TestFunctional/parallel/MountCmd/any-port (11.31s)
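
This skip, like the specific-port and VerifyCleanup skips below, is the harness giving up after polling findmnt: the 9p server runs on the macOS side, and a non-code-signed binary cannot accept the connection without an interactive prompt; on top of that the functional-746000 host is stopped, so every probe exits with status 83 regardless. The loop being automated is essentially the following (host path shortened here; the run used a temp dir under /var/folders):

	$ out/minikube-darwin-arm64 mount -p functional-746000 /tmp/mnt:/mount-9p &
	$ out/minikube-darwin-arm64 -p functional-746000 ssh "findmnt -T /mount-9p | grep 9p"   # retried until the mount appears
	$ out/minikube-darwin-arm64 -p functional-746000 ssh "sudo umount -f /mount-9p"         # cleanup, mounted or not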

=== RUN   TestFunctional/parallel/MountCmd/specific-port
functional_test_mount_test.go:213: (dbg) daemon: [out/minikube-darwin-arm64 mount -p functional-746000 /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestFunctionalparallelMountCmdspecific-port1247242043/001:/mount-9p --alsologtostderr -v=1 --port 46464]
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-darwin-arm64 -p functional-746000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-746000 ssh "findmnt -T /mount-9p | grep 9p": exit status 83 (63.7505ms)

-- stdout --
	* The control-plane node functional-746000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-746000"

-- /stdout --
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-darwin-arm64 -p functional-746000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-746000 ssh "findmnt -T /mount-9p | grep 9p": exit status 83 (87.629ms)

-- stdout --
	* The control-plane node functional-746000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-746000"

-- /stdout --
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-darwin-arm64 -p functional-746000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-746000 ssh "findmnt -T /mount-9p | grep 9p": exit status 83 (89.863833ms)

-- stdout --
	* The control-plane node functional-746000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-746000"

-- /stdout --
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-darwin-arm64 -p functional-746000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-746000 ssh "findmnt -T /mount-9p | grep 9p": exit status 83 (88.377333ms)

-- stdout --
	* The control-plane node functional-746000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-746000"

-- /stdout --
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-darwin-arm64 -p functional-746000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-746000 ssh "findmnt -T /mount-9p | grep 9p": exit status 83 (87.637084ms)

-- stdout --
	* The control-plane node functional-746000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-746000"

-- /stdout --
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-darwin-arm64 -p functional-746000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-746000 ssh "findmnt -T /mount-9p | grep 9p": exit status 83 (86.327916ms)

-- stdout --
	* The control-plane node functional-746000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-746000"

-- /stdout --
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-darwin-arm64 -p functional-746000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-746000 ssh "findmnt -T /mount-9p | grep 9p": exit status 83 (86.554ms)

-- stdout --
	* The control-plane node functional-746000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-746000"

-- /stdout --
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-darwin-arm64 -p functional-746000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-746000 ssh "findmnt -T /mount-9p | grep 9p": exit status 83 (85.684833ms)

-- stdout --
	* The control-plane node functional-746000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-746000"

-- /stdout --
functional_test_mount_test.go:251: skipping: mount did not appear, likely because macOS requires prompt to allow non-code signed binaries to listen on non-localhost port
functional_test_mount_test.go:230: (dbg) Run:  out/minikube-darwin-arm64 -p functional-746000 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:230: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-746000 ssh "sudo umount -f /mount-9p": exit status 83 (45.672083ms)

-- stdout --
	* The control-plane node functional-746000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-746000"

-- /stdout --
functional_test_mount_test.go:232: "out/minikube-darwin-arm64 -p functional-746000 ssh \"sudo umount -f /mount-9p\"": exit status 83
functional_test_mount_test.go:234: (dbg) stopping [out/minikube-darwin-arm64 mount -p functional-746000 /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestFunctionalparallelMountCmdspecific-port1247242043/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
--- SKIP: TestFunctional/parallel/MountCmd/specific-port (13.91s)

=== RUN   TestFunctional/parallel/MountCmd/VerifyCleanup
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-darwin-arm64 mount -p functional-746000 /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestFunctionalparallelMountCmdVerifyCleanup5085082/001:/mount1 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-darwin-arm64 mount -p functional-746000 /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestFunctionalparallelMountCmdVerifyCleanup5085082/001:/mount2 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-darwin-arm64 mount -p functional-746000 /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestFunctionalparallelMountCmdVerifyCleanup5085082/001:/mount3 --alsologtostderr -v=1]
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-darwin-arm64 -p functional-746000 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-746000 ssh "findmnt -T" /mount1: exit status 83 (86.075709ms)

-- stdout --
	* The control-plane node functional-746000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-746000"

-- /stdout --
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-darwin-arm64 -p functional-746000 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-746000 ssh "findmnt -T" /mount1: exit status 83 (86.494375ms)

-- stdout --
	* The control-plane node functional-746000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-746000"

-- /stdout --
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-darwin-arm64 -p functional-746000 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-746000 ssh "findmnt -T" /mount1: exit status 83 (88.239542ms)

-- stdout --
	* The control-plane node functional-746000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-746000"

-- /stdout --
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-darwin-arm64 -p functional-746000 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-746000 ssh "findmnt -T" /mount1: exit status 83 (88.977625ms)

-- stdout --
	* The control-plane node functional-746000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-746000"

-- /stdout --
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-darwin-arm64 -p functional-746000 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-746000 ssh "findmnt -T" /mount1: exit status 83 (90.486958ms)

-- stdout --
	* The control-plane node functional-746000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-746000"

-- /stdout --
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-darwin-arm64 -p functional-746000 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-746000 ssh "findmnt -T" /mount1: exit status 83 (87.842917ms)

-- stdout --
	* The control-plane node functional-746000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-746000"

-- /stdout --
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-darwin-arm64 -p functional-746000 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-746000 ssh "findmnt -T" /mount1: exit status 83 (91.286333ms)

-- stdout --
	* The control-plane node functional-746000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-746000"

-- /stdout --
functional_test_mount_test.go:340: skipping: mount did not appear, likely because macOS requires prompt to allow non-code signed binaries to listen on non-localhost port
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-darwin-arm64 mount -p functional-746000 /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestFunctionalparallelMountCmdVerifyCleanup5085082/001:/mount1 --alsologtostderr -v=1] ...
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-darwin-arm64 mount -p functional-746000 /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestFunctionalparallelMountCmdVerifyCleanup5085082/001:/mount2 --alsologtostderr -v=1] ...
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-darwin-arm64 mount -p functional-746000 /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestFunctionalparallelMountCmdVerifyCleanup5085082/001:/mount3 --alsologtostderr -v=1] ...
--- SKIP: TestFunctional/parallel/MountCmd/VerifyCleanup (12.03s)

=== RUN   TestGvisorAddon
gvisor_addon_test.go:34: skipping test because --gvisor=false
--- SKIP: TestGvisorAddon (0.00s)

=== RUN   TestKicCustomNetwork
kic_custom_network_test.go:34: only runs with docker driver
--- SKIP: TestKicCustomNetwork (0.00s)

=== RUN   TestKicExistingNetwork
kic_custom_network_test.go:73: only runs with docker driver
--- SKIP: TestKicExistingNetwork (0.00s)

=== RUN   TestKicCustomSubnet
kic_custom_network_test.go:102: only runs with docker/podman driver
--- SKIP: TestKicCustomSubnet (0.00s)

=== RUN   TestKicStaticIP
kic_custom_network_test.go:123: only run with docker/podman driver
--- SKIP: TestKicStaticIP (0.00s)

=== RUN   TestScheduledStopWindows
scheduled_stop_test.go:42: test only runs on windows
--- SKIP: TestScheduledStopWindows (0.00s)

=== RUN   TestInsufficientStorage
status_test.go:38: only runs with docker driver
--- SKIP: TestInsufficientStorage (0.00s)

=== RUN   TestMissingContainerUpgrade
version_upgrade_test.go:284: This test is only for Docker
--- SKIP: TestMissingContainerUpgrade (0.00s)

=== RUN   TestNetworkPlugins/group/cilium
net_test.go:102: Skipping the test as it's interfering with other tests and is outdated
panic.go:626: 
----------------------- debugLogs start: cilium-244000 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: cilium-244000

>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: cilium-244000

>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: cilium-244000

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: cilium-244000

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: cilium-244000

>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: cilium-244000

>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: cilium-244000

>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: cilium-244000

>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: cilium-244000

>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: cilium-244000

>>> host: /etc/nsswitch.conf:
* Profile "cilium-244000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-244000"

>>> host: /etc/hosts:
* Profile "cilium-244000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-244000"

>>> host: /etc/resolv.conf:
* Profile "cilium-244000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-244000"

>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: cilium-244000

>>> host: crictl pods:
* Profile "cilium-244000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-244000"

>>> host: crictl containers:
* Profile "cilium-244000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-244000"

>>> k8s: describe netcat deployment:
error: context "cilium-244000" does not exist

>>> k8s: describe netcat pod(s):
error: context "cilium-244000" does not exist

>>> k8s: netcat logs:
error: context "cilium-244000" does not exist

>>> k8s: describe coredns deployment:
error: context "cilium-244000" does not exist

>>> k8s: describe coredns pods:
error: context "cilium-244000" does not exist

>>> k8s: coredns logs:
error: context "cilium-244000" does not exist

>>> k8s: describe api server pod(s):
error: context "cilium-244000" does not exist

>>> k8s: api server logs:
error: context "cilium-244000" does not exist

>>> host: /etc/cni:
* Profile "cilium-244000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-244000"

>>> host: ip a s:
* Profile "cilium-244000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-244000"

>>> host: ip r s:
* Profile "cilium-244000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-244000"

>>> host: iptables-save:
* Profile "cilium-244000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-244000"

>>> host: iptables table nat:
* Profile "cilium-244000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-244000"

>>> k8s: describe cilium daemon set:
Error in configuration: context was not found for specified context: cilium-244000

>>> k8s: describe cilium daemon set pod(s):
Error in configuration: context was not found for specified context: cilium-244000

>>> k8s: cilium daemon set container(s) logs (current):
error: context "cilium-244000" does not exist

>>> k8s: cilium daemon set container(s) logs (previous):
error: context "cilium-244000" does not exist

>>> k8s: describe cilium deployment:
Error in configuration: context was not found for specified context: cilium-244000

>>> k8s: describe cilium deployment pod(s):
Error in configuration: context was not found for specified context: cilium-244000

>>> k8s: cilium deployment container(s) logs (current):
error: context "cilium-244000" does not exist

>>> k8s: cilium deployment container(s) logs (previous):
error: context "cilium-244000" does not exist

>>> k8s: describe kube-proxy daemon set:
error: context "cilium-244000" does not exist

>>> k8s: describe kube-proxy pod(s):
error: context "cilium-244000" does not exist

>>> k8s: kube-proxy logs:
error: context "cilium-244000" does not exist

>>> host: kubelet daemon status:
* Profile "cilium-244000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-244000"

>>> host: kubelet daemon config:
* Profile "cilium-244000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-244000"

>>> k8s: kubelet logs:
* Profile "cilium-244000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-244000"

>>> host: /etc/kubernetes/kubelet.conf:
* Profile "cilium-244000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-244000"

>>> host: /var/lib/kubelet/config.yaml:
* Profile "cilium-244000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-244000"

>>> k8s: kubectl config:
apiVersion: v1
clusters: null
contexts: null
current-context: ""
kind: Config
preferences: {}
users: null

>>> k8s: cms:
Error in configuration: context was not found for specified context: cilium-244000

>>> host: docker daemon status:
* Profile "cilium-244000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-244000"

>>> host: docker daemon config:
* Profile "cilium-244000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-244000"

>>> host: /etc/docker/daemon.json:
* Profile "cilium-244000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-244000"

>>> host: docker system info:
* Profile "cilium-244000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-244000"

>>> host: cri-docker daemon status:
* Profile "cilium-244000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-244000"

>>> host: cri-docker daemon config:
* Profile "cilium-244000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-244000"

>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "cilium-244000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-244000"

>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "cilium-244000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-244000"

>>> host: cri-dockerd version:
* Profile "cilium-244000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-244000"

>>> host: containerd daemon status:
* Profile "cilium-244000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-244000"

>>> host: containerd daemon config:
* Profile "cilium-244000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-244000"

>>> host: /lib/systemd/system/containerd.service:
* Profile "cilium-244000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-244000"

>>> host: /etc/containerd/config.toml:
* Profile "cilium-244000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-244000"

>>> host: containerd config dump:
* Profile "cilium-244000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-244000"

>>> host: crio daemon status:
* Profile "cilium-244000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-244000"

>>> host: crio daemon config:
* Profile "cilium-244000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-244000"

>>> host: /etc/crio:
* Profile "cilium-244000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-244000"

>>> host: crio config:
* Profile "cilium-244000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-244000"

----------------------- debugLogs end: cilium-244000 [took: 2.289554375s] --------------------------------
helpers_test.go:175: Cleaning up "cilium-244000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p cilium-244000
--- SKIP: TestNetworkPlugins/group/cilium (2.51s)

TestStartStop/group/disable-driver-mounts (0.25s)

=== RUN   TestStartStop/group/disable-driver-mounts
=== PAUSE TestStartStop/group/disable-driver-mounts

=== CONT  TestStartStop/group/disable-driver-mounts
start_stop_delete_test.go:103: skipping TestStartStop/group/disable-driver-mounts - only runs on virtualbox
helpers_test.go:175: Cleaning up "disable-driver-mounts-344000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p disable-driver-mounts-344000
--- SKIP: TestStartStop/group/disable-driver-mounts (0.25s)