Test Report: QEMU_macOS 18429

                    
ce47e36c27c610c668eed9e63157fcf5091ee2ba:2024-03-18:33630

Failed tests (156/266)

Order  Failed test  Duration (s)
3 TestDownloadOnly/v1.20.0/json-events 43.03
7 TestDownloadOnly/v1.20.0/kubectl 0
31 TestOffline 10.27
36 TestAddons/Setup 10.26
37 TestCertOptions 10.15
38 TestCertExpiration 195.32
39 TestDockerFlags 10.07
40 TestForceSystemdFlag 11.33
41 TestForceSystemdEnv 10.16
47 TestErrorSpam/setup 9.88
56 TestFunctional/serial/StartWithProxy 10.27
58 TestFunctional/serial/SoftStart 5.26
59 TestFunctional/serial/KubeContext 0.06
60 TestFunctional/serial/KubectlGetPods 0.06
67 TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node 0.05
68 TestFunctional/serial/CacheCmd/cache/cache_reload 0.17
70 TestFunctional/serial/MinikubeKubectlCmd 0.56
71 TestFunctional/serial/MinikubeKubectlCmdDirectly 0.72
72 TestFunctional/serial/ExtraConfig 5.27
73 TestFunctional/serial/ComponentHealth 0.06
74 TestFunctional/serial/LogsCmd 0.08
75 TestFunctional/serial/LogsFileCmd 0.07
76 TestFunctional/serial/InvalidService 0.03
79 TestFunctional/parallel/DashboardCmd 0.2
82 TestFunctional/parallel/StatusCmd 0.13
86 TestFunctional/parallel/ServiceCmdConnect 0.14
88 TestFunctional/parallel/PersistentVolumeClaim 0.03
90 TestFunctional/parallel/SSHCmd 0.12
91 TestFunctional/parallel/CpCmd 0.28
93 TestFunctional/parallel/FileSync 0.08
94 TestFunctional/parallel/CertSync 0.3
98 TestFunctional/parallel/NodeLabels 0.06
100 TestFunctional/parallel/NonActiveRuntimeDisabled 0.04
104 TestFunctional/parallel/Version/components 0.04
105 TestFunctional/parallel/ImageCommands/ImageListShort 0.04
106 TestFunctional/parallel/ImageCommands/ImageListTable 0.04
107 TestFunctional/parallel/ImageCommands/ImageListJson 0.04
108 TestFunctional/parallel/ImageCommands/ImageListYaml 0.04
109 TestFunctional/parallel/ImageCommands/ImageBuild 0.12
111 TestFunctional/parallel/DockerEnv/bash 0.05
112 TestFunctional/parallel/UpdateContextCmd/no_changes 0.04
113 TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster 0.04
114 TestFunctional/parallel/UpdateContextCmd/no_clusters 0.05
115 TestFunctional/parallel/ServiceCmd/DeployApp 0.03
116 TestFunctional/parallel/ServiceCmd/List 0.05
117 TestFunctional/parallel/ServiceCmd/JSONOutput 0.04
118 TestFunctional/parallel/ServiceCmd/HTTPS 0.04
119 TestFunctional/parallel/ServiceCmd/Format 0.04
120 TestFunctional/parallel/ServiceCmd/URL 0.04
122 TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel 0.08
125 TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup 0
126 TestFunctional/parallel/TunnelCmd/serial/AccessDirect 102.84
127 TestFunctional/parallel/ImageCommands/ImageLoadDaemon 1.34
128 TestFunctional/parallel/ImageCommands/ImageReloadDaemon 1.35
129 TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon 6.71
130 TestFunctional/parallel/ImageCommands/ImageSaveToFile 0.04
132 TestFunctional/parallel/ImageCommands/ImageLoadFromFile 0.07
140 TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig 15.07
142 TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS 29.28
150 TestMultiControlPlane/serial/StartCluster 9.99
151 TestMultiControlPlane/serial/DeployApp 107.04
152 TestMultiControlPlane/serial/PingHostFromPods 0.09
153 TestMultiControlPlane/serial/AddWorkerNode 0.08
154 TestMultiControlPlane/serial/NodeLabels 0.06
155 TestMultiControlPlane/serial/HAppyAfterClusterStart 0.11
156 TestMultiControlPlane/serial/CopyFile 0.07
157 TestMultiControlPlane/serial/StopSecondaryNode 0.11
158 TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop 0.1
159 TestMultiControlPlane/serial/RestartSecondaryNode 48.72
160 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart 0.11
161 TestMultiControlPlane/serial/RestartClusterKeepsNodes 7.16
162 TestMultiControlPlane/serial/DeleteSecondaryNode 0.11
163 TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete 0.1
164 TestMultiControlPlane/serial/StopCluster 2.22
165 TestMultiControlPlane/serial/RestartCluster 5.25
166 TestMultiControlPlane/serial/DegradedAfterClusterRestart 0.11
167 TestMultiControlPlane/serial/AddSecondaryNode 0.08
168 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd 0.1
171 TestImageBuild/serial/Setup 9.98
174 TestJSONOutput/start/Command 9.78
180 TestJSONOutput/pause/Command 0.08
186 TestJSONOutput/unpause/Command 0.05
203 TestMinikubeProfile 10.82
206 TestMountStart/serial/StartWithMountFirst 10.96
209 TestMultiNode/serial/FreshStart2Nodes 9.85
210 TestMultiNode/serial/DeployApp2Nodes 119.63
211 TestMultiNode/serial/PingHostFrom2Pods 0.09
212 TestMultiNode/serial/AddNode 0.08
213 TestMultiNode/serial/MultiNodeLabels 0.06
214 TestMultiNode/serial/ProfileList 0.11
215 TestMultiNode/serial/CopyFile 0.06
216 TestMultiNode/serial/StopNode 0.14
217 TestMultiNode/serial/StartAfterStop 57.22
218 TestMultiNode/serial/RestartKeepsNodes 8.27
219 TestMultiNode/serial/DeleteNode 0.11
220 TestMultiNode/serial/StopMultiNode 2.19
221 TestMultiNode/serial/RestartMultiNode 5.28
222 TestMultiNode/serial/ValidateNameConflict 20.22
226 TestPreload 10.07
228 TestScheduledStopUnix 10.02
229 TestSkaffold 16.51
232 TestRunningBinaryUpgrade 633.31
234 TestKubernetesUpgrade 18.77
247 TestHyperkitDriverSkipUpgrade/upgrade-v1.11.0-to-current 1.29
248 TestHyperkitDriverSkipUpgrade/upgrade-v1.2.0-to-current 1.28
250 TestStoppedBinaryUpgrade/Upgrade 581.57
252 TestPause/serial/Start 9.94
262 TestNoKubernetes/serial/StartWithK8s 10
263 TestNoKubernetes/serial/StartWithStopK8s 5.92
264 TestNoKubernetes/serial/Start 5.87
268 TestNoKubernetes/serial/StartNoArgs 5.92
270 TestNetworkPlugins/group/auto/Start 9.88
271 TestNetworkPlugins/group/kindnet/Start 9.9
272 TestNetworkPlugins/group/calico/Start 10
273 TestNetworkPlugins/group/custom-flannel/Start 9.85
274 TestNetworkPlugins/group/false/Start 9.81
275 TestNetworkPlugins/group/enable-default-cni/Start 9.83
276 TestNetworkPlugins/group/flannel/Start 9.97
278 TestNetworkPlugins/group/bridge/Start 9.72
279 TestNetworkPlugins/group/kubenet/Start 9.9
281 TestStartStop/group/old-k8s-version/serial/FirstStart 10.04
283 TestStartStop/group/no-preload/serial/FirstStart 9.89
284 TestStartStop/group/old-k8s-version/serial/DeployApp 0.09
285 TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive 0.11
287 TestStartStop/group/no-preload/serial/DeployApp 0.1
288 TestStartStop/group/no-preload/serial/EnableAddonWhileActive 0.12
290 TestStartStop/group/old-k8s-version/serial/SecondStart 5.27
293 TestStartStop/group/no-preload/serial/SecondStart 5.28
294 TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop 0.03
295 TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop 0.06
296 TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages 0.07
297 TestStartStop/group/old-k8s-version/serial/Pause 0.11
299 TestStartStop/group/embed-certs/serial/FirstStart 10.19
300 TestStartStop/group/no-preload/serial/UserAppExistsAfterStop 0.03
301 TestStartStop/group/no-preload/serial/AddonExistsAfterStop 0.06
302 TestStartStop/group/no-preload/serial/VerifyKubernetesImages 0.07
303 TestStartStop/group/no-preload/serial/Pause 0.1
305 TestStartStop/group/default-k8s-diff-port/serial/FirstStart 9.88
306 TestStartStop/group/embed-certs/serial/DeployApp 0.09
307 TestStartStop/group/embed-certs/serial/EnableAddonWhileActive 0.12
309 TestStartStop/group/default-k8s-diff-port/serial/DeployApp 0.1
311 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive 0.12
312 TestStartStop/group/embed-certs/serial/SecondStart 5.29
315 TestStartStop/group/default-k8s-diff-port/serial/SecondStart 5.5
316 TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop 0.03
317 TestStartStop/group/embed-certs/serial/AddonExistsAfterStop 0.06
318 TestStartStop/group/embed-certs/serial/VerifyKubernetesImages 0.07
319 TestStartStop/group/embed-certs/serial/Pause 0.11
321 TestStartStop/group/newest-cni/serial/FirstStart 10.1
322 TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop 0.03
323 TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop 0.06
324 TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages 0.07
325 TestStartStop/group/default-k8s-diff-port/serial/Pause 0.11
330 TestStartStop/group/newest-cni/serial/SecondStart 5.26
333 TestStartStop/group/newest-cni/serial/VerifyKubernetesImages 0.08
334 TestStartStop/group/newest-cni/serial/Pause 0.11
TestDownloadOnly/v1.20.0/json-events (43.03s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-darwin-arm64 start -o=json --download-only -p download-only-382000 --force --alsologtostderr --kubernetes-version=v1.20.0 --container-runtime=docker --driver=qemu2 
aaa_download_only_test.go:81: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -o=json --download-only -p download-only-382000 --force --alsologtostderr --kubernetes-version=v1.20.0 --container-runtime=docker --driver=qemu2 : exit status 40 (43.033131375s)

                                                
                                                
-- stdout --
	{"specversion":"1.0","id":"2ffd5636-d981-4418-8d17-4404db16a04c","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[download-only-382000] minikube v1.32.0 on Darwin 14.3.1 (arm64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"e0adafea-68ff-4285-aa54-b26a7149a3a2","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=18429"}}
	{"specversion":"1.0","id":"f207fd74-0451-4d90-a185-d14a53f7e1d8","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/Users/jenkins/minikube-integration/18429-15072/kubeconfig"}}
	{"specversion":"1.0","id":"c8b8dd10-b556-4b28-94af-e0ad0795c725","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-darwin-arm64"}}
	{"specversion":"1.0","id":"2ec6964c-f92c-4c6a-8d59-1386167bad1b","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"08ef10fd-b2de-4b7e-aa69-680a182a79ff","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/Users/jenkins/minikube-integration/18429-15072/.minikube"}}
	{"specversion":"1.0","id":"57d3c3b7-ec51-459c-b22d-73882cf8feca","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.warning","datacontenttype":"application/json","data":{"message":"minikube skips various validations when --force is supplied; this may lead to unexpected behavior"}}
	{"specversion":"1.0","id":"b48e31e4-3348-4f1d-b82b-7659aa4c28ee","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"1","message":"Using the qemu2 driver based on user configuration","name":"Selecting Driver","totalsteps":"19"}}
	{"specversion":"1.0","id":"282fbd57-e712-4ef2-8ef2-2b2635601693","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"Automatically selected the socket_vmnet network"}}
	{"specversion":"1.0","id":"16d6cb04-bd0c-4cc0-9d76-7b6f5cb76a49","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"1","message":"Downloading VM boot image ...","name":"Selecting Driver","totalsteps":"19"}}
	{"specversion":"1.0","id":"0f22a9dd-badf-4a70-abd1-dc0e8c728a7d","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"3","message":"Starting \"download-only-382000\" primary control-plane node in \"download-only-382000\" cluster","name":"Starting Node","totalsteps":"19"}}
	{"specversion":"1.0","id":"b902e2db-a7c2-4ea8-a949-0ea13d0132c1","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"3","message":"Downloading Kubernetes v1.20.0 preload ...","name":"Starting Node","totalsteps":"19"}}
	{"specversion":"1.0","id":"385ce2fe-4e13-40e1-b7a4-9c9dde6f9421","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"40","issues":"","message":"Failed to cache kubectl: download failed: https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl.sha256: getter: \u0026{Ctx:context.Background Src:https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl.sha256 Dst:/Users/jenkins/minikube-integration/18429-15072/.minikube/cache/darwin/arm64/v1.20.0/kubectl.download Pwd: Mode:2 Umask:---------- Detectors:[0x1087b7520 0x1087b7520 0x1087b7520 0x1087b7520 0x1087b7520 0x1087b7520 0x1087b7520] Decompressors:map[bz2:0x1400049e600 gz:0x1400049e608 tar:0x1400049e590 tar.bz2:0x1400049e5c0 tar.gz:0x1400049e5d0 tar.xz:0x1400049e5e0 tar.zst:0x1400049e5f0 tbz2:0x1400049e5c0 tgz:0x1
400049e5d0 txz:0x1400049e5e0 tzst:0x1400049e5f0 xz:0x1400049e610 zip:0x1400049e620 zst:0x1400049e618] Getters:map[file:0x14000994820 http:0x140000fe2d0 https:0x140000fe320] Dir:false ProgressListener:\u003cnil\u003e Insecure:false DisableSymlinks:false Options:[]}: invalid checksum: Error downloading checksum file: bad response code: 404","name":"INET_CACHE_KUBECTL","url":""}}
	{"specversion":"1.0","id":"aa72cf09-c3b4-4649-ad0f-228e640b1128","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"message":"╭───────────────────────────────────────────────────────────────────────────────────────────╮\n│                                                                                           │\n│    If the above advice does not help, please let us know:                                 │\n│    https://github.com/kubernetes/minikube/issues/new/choose                               │\n│                                                                                           │\n│    Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │\n│
│\n╰───────────────────────────────────────────────────────────────────────────────────────────╯"}}

                                                
                                                
-- /stdout --
** stderr ** 
	I0318 04:18:39.230977   15483 out.go:291] Setting OutFile to fd 1 ...
	I0318 04:18:39.231145   15483 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0318 04:18:39.231148   15483 out.go:304] Setting ErrFile to fd 2...
	I0318 04:18:39.231150   15483 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0318 04:18:39.231278   15483 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18429-15072/.minikube/bin
	W0318 04:18:39.231361   15483 root.go:314] Error reading config file at /Users/jenkins/minikube-integration/18429-15072/.minikube/config/config.json: open /Users/jenkins/minikube-integration/18429-15072/.minikube/config/config.json: no such file or directory
	I0318 04:18:39.232580   15483 out.go:298] Setting JSON to true
	I0318 04:18:39.250261   15483 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":8292,"bootTime":1710752427,"procs":482,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.3.1","kernelVersion":"23.3.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0318 04:18:39.250322   15483 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0318 04:18:39.254531   15483 out.go:97] [download-only-382000] minikube v1.32.0 on Darwin 14.3.1 (arm64)
	I0318 04:18:39.257549   15483 out.go:169] MINIKUBE_LOCATION=18429
	I0318 04:18:39.254641   15483 notify.go:220] Checking for updates...
	W0318 04:18:39.254671   15483 preload.go:294] Failed to list preload files: open /Users/jenkins/minikube-integration/18429-15072/.minikube/cache/preloaded-tarball: no such file or directory
	I0318 04:18:39.262579   15483 out.go:169] KUBECONFIG=/Users/jenkins/minikube-integration/18429-15072/kubeconfig
	I0318 04:18:39.265579   15483 out.go:169] MINIKUBE_BIN=out/minikube-darwin-arm64
	I0318 04:18:39.266816   15483 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0318 04:18:39.269573   15483 out.go:169] MINIKUBE_HOME=/Users/jenkins/minikube-integration/18429-15072/.minikube
	W0318 04:18:39.275496   15483 out.go:267] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0318 04:18:39.275697   15483 driver.go:392] Setting default libvirt URI to qemu:///system
	I0318 04:18:39.278453   15483 out.go:97] Using the qemu2 driver based on user configuration
	I0318 04:18:39.278469   15483 start.go:297] selected driver: qemu2
	I0318 04:18:39.278482   15483 start.go:901] validating driver "qemu2" against <nil>
	I0318 04:18:39.278552   15483 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0318 04:18:39.281474   15483 out.go:169] Automatically selected the socket_vmnet network
	I0318 04:18:39.286861   15483 start_flags.go:393] Using suggested 4000MB memory alloc based on sys=16384MB, container=0MB
	I0318 04:18:39.286955   15483 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0318 04:18:39.287053   15483 cni.go:84] Creating CNI manager for ""
	I0318 04:18:39.287071   15483 cni.go:162] CNI unnecessary in this configuration, recommending no CNI
	I0318 04:18:39.287125   15483 start.go:340] cluster config:
	{Name:download-only-382000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:4000 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:download-only-382000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local Con
tainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSo
ck: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0318 04:18:39.291876   15483 iso.go:125] acquiring lock: {Name:mkb8143674083e0c7a46a3ed751b3800392bcd24 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0318 04:18:39.295485   15483 out.go:97] Downloading VM boot image ...
	I0318 04:18:39.295502   15483 download.go:107] Downloading: https://storage.googleapis.com/minikube-builds/iso/17991/minikube-v1.32.1-1710520390-17991-arm64.iso?checksum=file:https://storage.googleapis.com/minikube-builds/iso/17991/minikube-v1.32.1-1710520390-17991-arm64.iso.sha256 -> /Users/jenkins/minikube-integration/18429-15072/.minikube/cache/iso/arm64/minikube-v1.32.1-1710520390-17991-arm64.iso
	I0318 04:18:57.584226   15483 out.go:97] Starting "download-only-382000" primary control-plane node in "download-only-382000" cluster
	I0318 04:18:57.584254   15483 preload.go:132] Checking if preload exists for k8s version v1.20.0 and runtime docker
	I0318 04:18:57.870373   15483 preload.go:119] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.0/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4
	I0318 04:18:57.870489   15483 cache.go:56] Caching tarball of preloaded images
	I0318 04:18:57.871230   15483 preload.go:132] Checking if preload exists for k8s version v1.20.0 and runtime docker
	I0318 04:18:57.876768   15483 out.go:97] Downloading Kubernetes v1.20.0 preload ...
	I0318 04:18:57.876794   15483 preload.go:237] getting checksum for preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4 ...
	I0318 04:18:58.504598   15483 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.0/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4?checksum=md5:1a3e8f9b29e6affec63d76d0d3000942 -> /Users/jenkins/minikube-integration/18429-15072/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4
	I0318 04:19:20.811718   15483 preload.go:248] saving checksum for preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4 ...
	I0318 04:19:20.811902   15483 preload.go:255] verifying checksum of /Users/jenkins/minikube-integration/18429-15072/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4 ...
	I0318 04:19:21.509798   15483 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on docker
	I0318 04:19:21.509998   15483 profile.go:142] Saving config to /Users/jenkins/minikube-integration/18429-15072/.minikube/profiles/download-only-382000/config.json ...
	I0318 04:19:21.510015   15483 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/18429-15072/.minikube/profiles/download-only-382000/config.json: {Name:mk22c27bdb892f0dc2ab4a43abb8a08bd0f554e4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0318 04:19:21.511110   15483 preload.go:132] Checking if preload exists for k8s version v1.20.0 and runtime docker
	I0318 04:19:21.511299   15483 download.go:107] Downloading: https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl.sha256 -> /Users/jenkins/minikube-integration/18429-15072/.minikube/cache/darwin/arm64/v1.20.0/kubectl
	I0318 04:19:22.182258   15483 out.go:169] 
	W0318 04:19:22.187192   15483 out_reason.go:110] Failed to cache kubectl: download failed: https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl.sha256: getter: &{Ctx:context.Background Src:https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl.sha256 Dst:/Users/jenkins/minikube-integration/18429-15072/.minikube/cache/darwin/arm64/v1.20.0/kubectl.download Pwd: Mode:2 Umask:---------- Detectors:[0x1087b7520 0x1087b7520 0x1087b7520 0x1087b7520 0x1087b7520 0x1087b7520 0x1087b7520] Decompressors:map[bz2:0x1400049e600 gz:0x1400049e608 tar:0x1400049e590 tar.bz2:0x1400049e5c0 tar.gz:0x1400049e5d0 tar.xz:0x1400049e5e0 tar.zst:0x1400049e5f0 tbz2:0x1400049e5c0 tgz:0x1400049e5d0 txz:0x1400049e5e0 tzst:0x1400049e5f0 xz:0x1400049e610 zip:0x1400049e620 zst:0x1400049e618] Getters:map[file:0x14000994820 http:0x140000fe2d0 https:0x140000fe320] Dir:false ProgressLis
tener:<nil> Insecure:false DisableSymlinks:false Options:[]}: invalid checksum: Error downloading checksum file: bad response code: 404
	W0318 04:19:22.187225   15483 out_reason.go:110] 
	W0318 04:19:22.195197   15483 out.go:229] ╭───────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                           │
	│    If the above advice does not help, please let us know:                                 │
	│    https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                           │
	│    Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                           │
	╰───────────────────────────────────────────────────────────────────────────────────────────╯
	I0318 04:19:22.199226   15483 out.go:169] 

                                                
                                                
** /stderr **
aaa_download_only_test.go:83: failed to download only. args: ["start" "-o=json" "--download-only" "-p" "download-only-382000" "--force" "--alsologtostderr" "--kubernetes-version=v1.20.0" "--container-runtime=docker" "--driver=qemu2" ""] exit status 40
--- FAIL: TestDownloadOnly/v1.20.0/json-events (43.03s)
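
The non-zero exit (status 40) comes from the kubectl caching step: the checksum file at https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl.sha256 returns HTTP 404, so the download is rejected as "invalid checksum". A minimal way to confirm the 404 from the CI host, assuming curl is available (the URL is copied verbatim from the error above):

curl -sIL -o /dev/null -w '%{http_code}\n' \
  "https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl.sha256"
# prints 404 when the checksum file is absent, matching the
# "bad response code: 404" in the error above; that would suggest no
# darwin/arm64 kubectl artifact is published for v1.20.0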

                                                
                                    
TestDownloadOnly/v1.20.0/kubectl (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/kubectl
aaa_download_only_test.go:175: expected the file for binary exist at "/Users/jenkins/minikube-integration/18429-15072/.minikube/cache/darwin/arm64/v1.20.0/kubectl" but got error stat /Users/jenkins/minikube-integration/18429-15072/.minikube/cache/darwin/arm64/v1.20.0/kubectl: no such file or directory
--- FAIL: TestDownloadOnly/v1.20.0/kubectl (0.00s)

                                                
                                    
TestOffline (10.27s)

                                                
                                                
=== RUN   TestOffline
=== PAUSE TestOffline

                                                
                                                

                                                
                                                
=== CONT  TestOffline
aab_offline_test.go:55: (dbg) Run:  out/minikube-darwin-arm64 start -p offline-docker-417000 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=qemu2 
aab_offline_test.go:55: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p offline-docker-417000 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=qemu2 : exit status 80 (10.106592541s)

                                                
                                                
-- stdout --
	* [offline-docker-417000] minikube v1.32.0 on Darwin 14.3.1 (arm64)
	  - MINIKUBE_LOCATION=18429
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18429-15072/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18429-15072/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "offline-docker-417000" primary control-plane node in "offline-docker-417000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "offline-docker-417000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0318 04:31:56.243137   17019 out.go:291] Setting OutFile to fd 1 ...
	I0318 04:31:56.243264   17019 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0318 04:31:56.243268   17019 out.go:304] Setting ErrFile to fd 2...
	I0318 04:31:56.243274   17019 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0318 04:31:56.243397   17019 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18429-15072/.minikube/bin
	I0318 04:31:56.244467   17019 out.go:298] Setting JSON to false
	I0318 04:31:56.261978   17019 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":9089,"bootTime":1710752427,"procs":479,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.3.1","kernelVersion":"23.3.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0318 04:31:56.262056   17019 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0318 04:31:56.267071   17019 out.go:177] * [offline-docker-417000] minikube v1.32.0 on Darwin 14.3.1 (arm64)
	I0318 04:31:56.275176   17019 out.go:177]   - MINIKUBE_LOCATION=18429
	I0318 04:31:56.279158   17019 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/18429-15072/kubeconfig
	I0318 04:31:56.275179   17019 notify.go:220] Checking for updates...
	I0318 04:31:56.282124   17019 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0318 04:31:56.285086   17019 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0318 04:31:56.288078   17019 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18429-15072/.minikube
	I0318 04:31:56.291074   17019 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0318 04:31:56.294479   17019 config.go:182] Loaded profile config "multinode-969000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I0318 04:31:56.294545   17019 driver.go:392] Setting default libvirt URI to qemu:///system
	I0318 04:31:56.298067   17019 out.go:177] * Using the qemu2 driver based on user configuration
	I0318 04:31:56.305122   17019 start.go:297] selected driver: qemu2
	I0318 04:31:56.305131   17019 start.go:901] validating driver "qemu2" against <nil>
	I0318 04:31:56.305138   17019 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0318 04:31:56.307273   17019 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0318 04:31:56.310070   17019 out.go:177] * Automatically selected the socket_vmnet network
	I0318 04:31:56.313109   17019 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0318 04:31:56.313146   17019 cni.go:84] Creating CNI manager for ""
	I0318 04:31:56.313153   17019 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0318 04:31:56.313156   17019 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0318 04:31:56.313194   17019 start.go:340] cluster config:
	{Name:offline-docker-417000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:2048 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:offline-docker-417000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local C
ontainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bi
n/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0318 04:31:56.317834   17019 iso.go:125] acquiring lock: {Name:mkb8143674083e0c7a46a3ed751b3800392bcd24 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0318 04:31:56.326065   17019 out.go:177] * Starting "offline-docker-417000" primary control-plane node in "offline-docker-417000" cluster
	I0318 04:31:56.330090   17019 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime docker
	I0318 04:31:56.330122   17019 preload.go:147] Found local preload: /Users/jenkins/minikube-integration/18429-15072/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-docker-overlay2-arm64.tar.lz4
	I0318 04:31:56.330141   17019 cache.go:56] Caching tarball of preloaded images
	I0318 04:31:56.330215   17019 preload.go:173] Found /Users/jenkins/minikube-integration/18429-15072/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0318 04:31:56.330221   17019 cache.go:59] Finished verifying existence of preloaded tar for v1.28.4 on docker
	I0318 04:31:56.330295   17019 profile.go:142] Saving config to /Users/jenkins/minikube-integration/18429-15072/.minikube/profiles/offline-docker-417000/config.json ...
	I0318 04:31:56.330305   17019 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/18429-15072/.minikube/profiles/offline-docker-417000/config.json: {Name:mk2337c30a173e5f910d75f4f7833aedde866fa7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0318 04:31:56.330607   17019 start.go:360] acquireMachinesLock for offline-docker-417000: {Name:mkdef65b5c2b3344d8453e477bf2f170fbff3359 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0318 04:31:56.330636   17019 start.go:364] duration metric: took 22.792µs to acquireMachinesLock for "offline-docker-417000"
	I0318 04:31:56.330649   17019 start.go:93] Provisioning new machine with config: &{Name:offline-docker-417000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17991/minikube-v1.32.1-1710520390-17991-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:2048 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConf
ig:{KubernetesVersion:v1.28.4 ClusterName:offline-docker-417000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 Mou
ntOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0318 04:31:56.330678   17019 start.go:125] createHost starting for "" (driver="qemu2")
	I0318 04:31:56.339097   17019 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	I0318 04:31:56.354595   17019 start.go:159] libmachine.API.Create for "offline-docker-417000" (driver="qemu2")
	I0318 04:31:56.354631   17019 client.go:168] LocalClient.Create starting
	I0318 04:31:56.354705   17019 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18429-15072/.minikube/certs/ca.pem
	I0318 04:31:56.354736   17019 main.go:141] libmachine: Decoding PEM data...
	I0318 04:31:56.354746   17019 main.go:141] libmachine: Parsing certificate...
	I0318 04:31:56.354794   17019 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18429-15072/.minikube/certs/cert.pem
	I0318 04:31:56.354816   17019 main.go:141] libmachine: Decoding PEM data...
	I0318 04:31:56.354822   17019 main.go:141] libmachine: Parsing certificate...
	I0318 04:31:56.355199   17019 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/18429-15072/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/18429-15072/.minikube/cache/iso/arm64/minikube-v1.32.1-1710520390-17991-arm64.iso...
	I0318 04:31:56.498328   17019 main.go:141] libmachine: Creating SSH key...
	I0318 04:31:56.668818   17019 main.go:141] libmachine: Creating Disk image...
	I0318 04:31:56.668829   17019 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0318 04:31:56.669126   17019 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/18429-15072/.minikube/machines/offline-docker-417000/disk.qcow2.raw /Users/jenkins/minikube-integration/18429-15072/.minikube/machines/offline-docker-417000/disk.qcow2
	I0318 04:31:56.682550   17019 main.go:141] libmachine: STDOUT: 
	I0318 04:31:56.682576   17019 main.go:141] libmachine: STDERR: 
	I0318 04:31:56.682665   17019 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/18429-15072/.minikube/machines/offline-docker-417000/disk.qcow2 +20000M
	I0318 04:31:56.695857   17019 main.go:141] libmachine: STDOUT: Image resized.
	
	I0318 04:31:56.695886   17019 main.go:141] libmachine: STDERR: 
	I0318 04:31:56.695904   17019 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/18429-15072/.minikube/machines/offline-docker-417000/disk.qcow2.raw and /Users/jenkins/minikube-integration/18429-15072/.minikube/machines/offline-docker-417000/disk.qcow2
	I0318 04:31:56.695908   17019 main.go:141] libmachine: Starting QEMU VM...
	I0318 04:31:56.695938   17019 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2048 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18429-15072/.minikube/machines/offline-docker-417000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18429-15072/.minikube/machines/offline-docker-417000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18429-15072/.minikube/machines/offline-docker-417000/qemu.pid -device virtio-net-pci,netdev=net0,mac=da:80:3a:f0:bc:a8 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18429-15072/.minikube/machines/offline-docker-417000/disk.qcow2
	I0318 04:31:56.697815   17019 main.go:141] libmachine: STDOUT: 
	I0318 04:31:56.697833   17019 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0318 04:31:56.697853   17019 client.go:171] duration metric: took 343.228666ms to LocalClient.Create
	I0318 04:31:58.698475   17019 start.go:128] duration metric: took 2.367870209s to createHost
	I0318 04:31:58.698499   17019 start.go:83] releasing machines lock for "offline-docker-417000", held for 2.367931584s
	W0318 04:31:58.698518   17019 start.go:713] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0318 04:31:58.706422   17019 out.go:177] * Deleting "offline-docker-417000" in qemu2 ...
	W0318 04:31:58.717482   17019 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0318 04:31:58.717495   17019 start.go:728] Will try again in 5 seconds ...
	I0318 04:32:03.719550   17019 start.go:360] acquireMachinesLock for offline-docker-417000: {Name:mkdef65b5c2b3344d8453e477bf2f170fbff3359 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0318 04:32:03.719977   17019 start.go:364] duration metric: took 326.667µs to acquireMachinesLock for "offline-docker-417000"
	I0318 04:32:03.720092   17019 start.go:93] Provisioning new machine with config: &{Name:offline-docker-417000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17991/minikube-v1.32.1-1710520390-17991-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:2048 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConf
ig:{KubernetesVersion:v1.28.4 ClusterName:offline-docker-417000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 Mou
ntOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0318 04:32:03.720357   17019 start.go:125] createHost starting for "" (driver="qemu2")
	I0318 04:32:03.729344   17019 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	I0318 04:32:03.776548   17019 start.go:159] libmachine.API.Create for "offline-docker-417000" (driver="qemu2")
	I0318 04:32:03.776597   17019 client.go:168] LocalClient.Create starting
	I0318 04:32:03.776731   17019 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18429-15072/.minikube/certs/ca.pem
	I0318 04:32:03.776787   17019 main.go:141] libmachine: Decoding PEM data...
	I0318 04:32:03.776808   17019 main.go:141] libmachine: Parsing certificate...
	I0318 04:32:03.776913   17019 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18429-15072/.minikube/certs/cert.pem
	I0318 04:32:03.776963   17019 main.go:141] libmachine: Decoding PEM data...
	I0318 04:32:03.776981   17019 main.go:141] libmachine: Parsing certificate...
	I0318 04:32:03.777498   17019 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/18429-15072/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/18429-15072/.minikube/cache/iso/arm64/minikube-v1.32.1-1710520390-17991-arm64.iso...
	I0318 04:32:03.927449   17019 main.go:141] libmachine: Creating SSH key...
	I0318 04:32:04.256929   17019 main.go:141] libmachine: Creating Disk image...
	I0318 04:32:04.256942   17019 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0318 04:32:04.257150   17019 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/18429-15072/.minikube/machines/offline-docker-417000/disk.qcow2.raw /Users/jenkins/minikube-integration/18429-15072/.minikube/machines/offline-docker-417000/disk.qcow2
	I0318 04:32:04.269768   17019 main.go:141] libmachine: STDOUT: 
	I0318 04:32:04.269792   17019 main.go:141] libmachine: STDERR: 
	I0318 04:32:04.269849   17019 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/18429-15072/.minikube/machines/offline-docker-417000/disk.qcow2 +20000M
	I0318 04:32:04.280383   17019 main.go:141] libmachine: STDOUT: Image resized.
	
	I0318 04:32:04.280399   17019 main.go:141] libmachine: STDERR: 
	I0318 04:32:04.280410   17019 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/18429-15072/.minikube/machines/offline-docker-417000/disk.qcow2.raw and /Users/jenkins/minikube-integration/18429-15072/.minikube/machines/offline-docker-417000/disk.qcow2
	I0318 04:32:04.280418   17019 main.go:141] libmachine: Starting QEMU VM...
	I0318 04:32:04.280499   17019 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2048 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18429-15072/.minikube/machines/offline-docker-417000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18429-15072/.minikube/machines/offline-docker-417000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18429-15072/.minikube/machines/offline-docker-417000/qemu.pid -device virtio-net-pci,netdev=net0,mac=fa:8c:83:e3:65:9e -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18429-15072/.minikube/machines/offline-docker-417000/disk.qcow2
	I0318 04:32:04.282231   17019 main.go:141] libmachine: STDOUT: 
	I0318 04:32:04.282246   17019 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0318 04:32:04.282259   17019 client.go:171] duration metric: took 505.673333ms to LocalClient.Create
	I0318 04:32:06.282686   17019 start.go:128] duration metric: took 2.5623585s to createHost
	I0318 04:32:06.282766   17019 start.go:83] releasing machines lock for "offline-docker-417000", held for 2.562851542s
	W0318 04:32:06.283006   17019 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p offline-docker-417000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p offline-docker-417000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0318 04:32:06.291143   17019 out.go:177] 
	W0318 04:32:06.294126   17019 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0318 04:32:06.294176   17019 out.go:239] * 
	* 
	W0318 04:32:06.296796   17019 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0318 04:32:06.305143   17019 out.go:177] 

                                                
                                                
** /stderr **
aab_offline_test.go:58: out/minikube-darwin-arm64 start -p offline-docker-417000 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=qemu2  failed: exit status 80
panic.go:626: *** TestOffline FAILED at 2024-03-18 04:32:06.318652 -0700 PDT m=+807.198719334
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p offline-docker-417000 -n offline-docker-417000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p offline-docker-417000 -n offline-docker-417000: exit status 7 (47.822208ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "offline-docker-417000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:175: Cleaning up "offline-docker-417000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p offline-docker-417000
--- FAIL: TestOffline (10.27s)
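
Both start attempts here fail before the guest boots: qemu is launched through /opt/socket_vmnet/bin/socket_vmnet_client, which cannot reach the host-side daemon ("Failed to connect to \"/var/run/socket_vmnet\": Connection refused"), so libmachine gives up after the retry and minikube exits with GUEST_PROVISION (exit status 80). The same connection-refused error appears again in TestAddons/Setup below. A minimal triage sketch for the CI host, assuming socket_vmnet was installed via Homebrew as the minikube qemu2 driver docs describe (paths and the service name are assumptions):

ls -l /var/run/socket_vmnet              # the listening socket should exist when the daemon is up
pgrep -fl socket_vmnet                   # check whether the daemon process is running at all
sudo brew services restart socket_vmnet  # restart the Homebrew-managed service (assumption)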

                                                
                                    
TestAddons/Setup (10.26s)

                                                
                                                
=== RUN   TestAddons/Setup
addons_test.go:109: (dbg) Run:  out/minikube-darwin-arm64 start -p addons-118000 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=storage-provisioner-rancher --addons=nvidia-device-plugin --addons=yakd --driver=qemu2  --addons=ingress --addons=ingress-dns
addons_test.go:109: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p addons-118000 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=storage-provisioner-rancher --addons=nvidia-device-plugin --addons=yakd --driver=qemu2  --addons=ingress --addons=ingress-dns: exit status 80 (10.253257417s)

                                                
                                                
-- stdout --
	* [addons-118000] minikube v1.32.0 on Darwin 14.3.1 (arm64)
	  - MINIKUBE_LOCATION=18429
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18429-15072/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18429-15072/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "addons-118000" primary control-plane node in "addons-118000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "addons-118000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0318 04:20:07.947365   15640 out.go:291] Setting OutFile to fd 1 ...
	I0318 04:20:07.947517   15640 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0318 04:20:07.947520   15640 out.go:304] Setting ErrFile to fd 2...
	I0318 04:20:07.947522   15640 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0318 04:20:07.947648   15640 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18429-15072/.minikube/bin
	I0318 04:20:07.948734   15640 out.go:298] Setting JSON to false
	I0318 04:20:07.964733   15640 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":8380,"bootTime":1710752427,"procs":469,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.3.1","kernelVersion":"23.3.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0318 04:20:07.964802   15640 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0318 04:20:07.969959   15640 out.go:177] * [addons-118000] minikube v1.32.0 on Darwin 14.3.1 (arm64)
	I0318 04:20:07.976923   15640 out.go:177]   - MINIKUBE_LOCATION=18429
	I0318 04:20:07.976981   15640 notify.go:220] Checking for updates...
	I0318 04:20:07.983873   15640 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/18429-15072/kubeconfig
	I0318 04:20:07.986867   15640 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0318 04:20:07.989902   15640 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0318 04:20:07.992955   15640 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18429-15072/.minikube
	I0318 04:20:07.995804   15640 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0318 04:20:07.998990   15640 driver.go:392] Setting default libvirt URI to qemu:///system
	I0318 04:20:08.002856   15640 out.go:177] * Using the qemu2 driver based on user configuration
	I0318 04:20:08.009891   15640 start.go:297] selected driver: qemu2
	I0318 04:20:08.009898   15640 start.go:901] validating driver "qemu2" against <nil>
	I0318 04:20:08.009904   15640 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0318 04:20:08.012188   15640 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0318 04:20:08.015875   15640 out.go:177] * Automatically selected the socket_vmnet network
	I0318 04:20:08.017489   15640 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0318 04:20:08.017538   15640 cni.go:84] Creating CNI manager for ""
	I0318 04:20:08.017545   15640 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0318 04:20:08.017550   15640 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0318 04:20:08.017573   15640 start.go:340] cluster config:
	{Name:addons-118000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:4000 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:addons-118000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:
docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_c
lient SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0318 04:20:08.022000   15640 iso.go:125] acquiring lock: {Name:mkb8143674083e0c7a46a3ed751b3800392bcd24 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0318 04:20:08.029922   15640 out.go:177] * Starting "addons-118000" primary control-plane node in "addons-118000" cluster
	I0318 04:20:08.033854   15640 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime docker
	I0318 04:20:08.033877   15640 preload.go:147] Found local preload: /Users/jenkins/minikube-integration/18429-15072/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-docker-overlay2-arm64.tar.lz4
	I0318 04:20:08.033889   15640 cache.go:56] Caching tarball of preloaded images
	I0318 04:20:08.033949   15640 preload.go:173] Found /Users/jenkins/minikube-integration/18429-15072/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0318 04:20:08.033957   15640 cache.go:59] Finished verifying existence of preloaded tar for v1.28.4 on docker
	I0318 04:20:08.034194   15640 profile.go:142] Saving config to /Users/jenkins/minikube-integration/18429-15072/.minikube/profiles/addons-118000/config.json ...
	I0318 04:20:08.034208   15640 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/18429-15072/.minikube/profiles/addons-118000/config.json: {Name:mke727af3687d214a8211de54057174cb340c734 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0318 04:20:08.034438   15640 start.go:360] acquireMachinesLock for addons-118000: {Name:mkdef65b5c2b3344d8453e477bf2f170fbff3359 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0318 04:20:08.034576   15640 start.go:364] duration metric: took 131.792µs to acquireMachinesLock for "addons-118000"
	I0318 04:20:08.034589   15640 start.go:93] Provisioning new machine with config: &{Name:addons-118000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17991/minikube-v1.32.1-1710520390-17991-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:4000 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kube
rnetesVersion:v1.28.4 ClusterName:addons-118000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] Mou
ntPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0318 04:20:08.034621   15640 start.go:125] createHost starting for "" (driver="qemu2")
	I0318 04:20:08.038915   15640 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	I0318 04:20:08.057300   15640 start.go:159] libmachine.API.Create for "addons-118000" (driver="qemu2")
	I0318 04:20:08.057326   15640 client.go:168] LocalClient.Create starting
	I0318 04:20:08.057465   15640 main.go:141] libmachine: Creating CA: /Users/jenkins/minikube-integration/18429-15072/.minikube/certs/ca.pem
	I0318 04:20:08.115429   15640 main.go:141] libmachine: Creating client certificate: /Users/jenkins/minikube-integration/18429-15072/.minikube/certs/cert.pem
	I0318 04:20:08.240690   15640 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/18429-15072/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/18429-15072/.minikube/cache/iso/arm64/minikube-v1.32.1-1710520390-17991-arm64.iso...
	I0318 04:20:08.485993   15640 main.go:141] libmachine: Creating SSH key...
	I0318 04:20:08.606162   15640 main.go:141] libmachine: Creating Disk image...
	I0318 04:20:08.606169   15640 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0318 04:20:08.606372   15640 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/18429-15072/.minikube/machines/addons-118000/disk.qcow2.raw /Users/jenkins/minikube-integration/18429-15072/.minikube/machines/addons-118000/disk.qcow2
	I0318 04:20:08.618744   15640 main.go:141] libmachine: STDOUT: 
	I0318 04:20:08.618768   15640 main.go:141] libmachine: STDERR: 
	I0318 04:20:08.618823   15640 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/18429-15072/.minikube/machines/addons-118000/disk.qcow2 +20000M
	I0318 04:20:08.629416   15640 main.go:141] libmachine: STDOUT: Image resized.
	
	I0318 04:20:08.629431   15640 main.go:141] libmachine: STDERR: 
	I0318 04:20:08.629443   15640 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/18429-15072/.minikube/machines/addons-118000/disk.qcow2.raw and /Users/jenkins/minikube-integration/18429-15072/.minikube/machines/addons-118000/disk.qcow2
	I0318 04:20:08.629448   15640 main.go:141] libmachine: Starting QEMU VM...
	I0318 04:20:08.629492   15640 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 4000 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18429-15072/.minikube/machines/addons-118000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18429-15072/.minikube/machines/addons-118000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18429-15072/.minikube/machines/addons-118000/qemu.pid -device virtio-net-pci,netdev=net0,mac=06:24:e2:96:8b:fc -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18429-15072/.minikube/machines/addons-118000/disk.qcow2
	I0318 04:20:08.631199   15640 main.go:141] libmachine: STDOUT: 
	I0318 04:20:08.631214   15640 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0318 04:20:08.631232   15640 client.go:171] duration metric: took 573.919875ms to LocalClient.Create
	I0318 04:20:10.633370   15640 start.go:128] duration metric: took 2.598812625s to createHost
	I0318 04:20:10.633452   15640 start.go:83] releasing machines lock for "addons-118000", held for 2.598952667s
	W0318 04:20:10.633548   15640 start.go:713] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0318 04:20:10.646520   15640 out.go:177] * Deleting "addons-118000" in qemu2 ...
	W0318 04:20:10.671400   15640 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0318 04:20:10.671434   15640 start.go:728] Will try again in 5 seconds ...
	I0318 04:20:15.673467   15640 start.go:360] acquireMachinesLock for addons-118000: {Name:mkdef65b5c2b3344d8453e477bf2f170fbff3359 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0318 04:20:15.673856   15640 start.go:364] duration metric: took 295.292µs to acquireMachinesLock for "addons-118000"
	I0318 04:20:15.673988   15640 start.go:93] Provisioning new machine with config: &{Name:addons-118000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17991/minikube-v1.32.1-1710520390-17991-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:4000 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kube
rnetesVersion:v1.28.4 ClusterName:addons-118000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] Mou
ntPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0318 04:20:15.674262   15640 start.go:125] createHost starting for "" (driver="qemu2")
	I0318 04:20:15.681010   15640 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	I0318 04:20:15.727563   15640 start.go:159] libmachine.API.Create for "addons-118000" (driver="qemu2")
	I0318 04:20:15.727605   15640 client.go:168] LocalClient.Create starting
	I0318 04:20:15.727711   15640 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18429-15072/.minikube/certs/ca.pem
	I0318 04:20:15.727763   15640 main.go:141] libmachine: Decoding PEM data...
	I0318 04:20:15.727780   15640 main.go:141] libmachine: Parsing certificate...
	I0318 04:20:15.727870   15640 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18429-15072/.minikube/certs/cert.pem
	I0318 04:20:15.727912   15640 main.go:141] libmachine: Decoding PEM data...
	I0318 04:20:15.727925   15640 main.go:141] libmachine: Parsing certificate...
	I0318 04:20:15.728448   15640 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/18429-15072/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/18429-15072/.minikube/cache/iso/arm64/minikube-v1.32.1-1710520390-17991-arm64.iso...
	I0318 04:20:15.879582   15640 main.go:141] libmachine: Creating SSH key...
	I0318 04:20:16.101558   15640 main.go:141] libmachine: Creating Disk image...
	I0318 04:20:16.101566   15640 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0318 04:20:16.101833   15640 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/18429-15072/.minikube/machines/addons-118000/disk.qcow2.raw /Users/jenkins/minikube-integration/18429-15072/.minikube/machines/addons-118000/disk.qcow2
	I0318 04:20:16.114814   15640 main.go:141] libmachine: STDOUT: 
	I0318 04:20:16.114834   15640 main.go:141] libmachine: STDERR: 
	I0318 04:20:16.114911   15640 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/18429-15072/.minikube/machines/addons-118000/disk.qcow2 +20000M
	I0318 04:20:16.125617   15640 main.go:141] libmachine: STDOUT: Image resized.
	
	I0318 04:20:16.125647   15640 main.go:141] libmachine: STDERR: 
	I0318 04:20:16.125662   15640 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/18429-15072/.minikube/machines/addons-118000/disk.qcow2.raw and /Users/jenkins/minikube-integration/18429-15072/.minikube/machines/addons-118000/disk.qcow2
	I0318 04:20:16.125673   15640 main.go:141] libmachine: Starting QEMU VM...
	I0318 04:20:16.125702   15640 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 4000 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18429-15072/.minikube/machines/addons-118000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18429-15072/.minikube/machines/addons-118000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18429-15072/.minikube/machines/addons-118000/qemu.pid -device virtio-net-pci,netdev=net0,mac=76:4d:70:e5:51:5a -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18429-15072/.minikube/machines/addons-118000/disk.qcow2
	I0318 04:20:16.127532   15640 main.go:141] libmachine: STDOUT: 
	I0318 04:20:16.127547   15640 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0318 04:20:16.127561   15640 client.go:171] duration metric: took 399.963208ms to LocalClient.Create
	I0318 04:20:18.128228   15640 start.go:128] duration metric: took 2.454001709s to createHost
	I0318 04:20:18.128314   15640 start.go:83] releasing machines lock for "addons-118000", held for 2.454516541s
	W0318 04:20:18.128752   15640 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p addons-118000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p addons-118000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0318 04:20:18.140319   15640 out.go:177] 
	W0318 04:20:18.144289   15640 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0318 04:20:18.144349   15640 out.go:239] * 
	* 
	W0318 04:20:18.147069   15640 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0318 04:20:18.153830   15640 out.go:177] 

                                                
                                                
** /stderr **
addons_test.go:111: out/minikube-darwin-arm64 start -p addons-118000 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=storage-provisioner-rancher --addons=nvidia-device-plugin --addons=yakd --driver=qemu2  --addons=ingress --addons=ingress-dns failed: exit status 80
--- FAIL: TestAddons/Setup (10.26s)

                                                
                                    
TestCertOptions (10.15s)

                                                
                                                
=== RUN   TestCertOptions
=== PAUSE TestCertOptions

=== CONT  TestCertOptions
cert_options_test.go:49: (dbg) Run:  out/minikube-darwin-arm64 start -p cert-options-834000 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=qemu2 
cert_options_test.go:49: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p cert-options-834000 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=qemu2 : exit status 80 (9.851889458s)

                                                
                                                
-- stdout --
	* [cert-options-834000] minikube v1.32.0 on Darwin 14.3.1 (arm64)
	  - MINIKUBE_LOCATION=18429
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18429-15072/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18429-15072/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "cert-options-834000" primary control-plane node in "cert-options-834000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "cert-options-834000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p cert-options-834000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
cert_options_test.go:51: failed to start minikube with args: "out/minikube-darwin-arm64 start -p cert-options-834000 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=qemu2 " : exit status 80
cert_options_test.go:60: (dbg) Run:  out/minikube-darwin-arm64 -p cert-options-834000 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt"
cert_options_test.go:60: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p cert-options-834000 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt": exit status 83 (83.433167ms)

                                                
                                                
-- stdout --
	* The control-plane node cert-options-834000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p cert-options-834000"

                                                
                                                
-- /stdout --
cert_options_test.go:62: failed to read apiserver cert inside minikube. args "out/minikube-darwin-arm64 -p cert-options-834000 ssh \"openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt\"": exit status 83
cert_options_test.go:69: apiserver cert does not include 127.0.0.1 in SAN.
cert_options_test.go:69: apiserver cert does not include 192.168.15.15 in SAN.
cert_options_test.go:69: apiserver cert does not include localhost in SAN.
cert_options_test.go:69: apiserver cert does not include www.google.com in SAN.
cert_options_test.go:88: (dbg) Run:  kubectl --context cert-options-834000 config view
cert_options_test.go:93: Kubeconfig apiserver server port incorrect. Output of 
'kubectl config view' = "\n-- stdout --\n\tapiVersion: v1\n\tclusters: null\n\tcontexts: null\n\tcurrent-context: \"\"\n\tkind: Config\n\tpreferences: {}\n\tusers: null\n\n-- /stdout --"
cert_options_test.go:100: (dbg) Run:  out/minikube-darwin-arm64 ssh -p cert-options-834000 -- "sudo cat /etc/kubernetes/admin.conf"
cert_options_test.go:100: (dbg) Non-zero exit: out/minikube-darwin-arm64 ssh -p cert-options-834000 -- "sudo cat /etc/kubernetes/admin.conf": exit status 83 (43.948208ms)

                                                
                                                
-- stdout --
	* The control-plane node cert-options-834000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p cert-options-834000"

                                                
                                                
-- /stdout --
cert_options_test.go:102: failed to SSH to minikube with args: "out/minikube-darwin-arm64 ssh -p cert-options-834000 -- \"sudo cat /etc/kubernetes/admin.conf\"" : exit status 83
cert_options_test.go:106: Internal minikube kubeconfig (admin.conf) does not contains the right api port. 
-- stdout --
	* The control-plane node cert-options-834000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p cert-options-834000"

                                                
                                                
-- /stdout --
cert_options_test.go:109: *** TestCertOptions FAILED at 2024-03-18 04:32:36.708626 -0700 PDT m=+837.589707417
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p cert-options-834000 -n cert-options-834000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p cert-options-834000 -n cert-options-834000: exit status 7 (32.204083ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "cert-options-834000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:175: Cleaning up "cert-options-834000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p cert-options-834000
--- FAIL: TestCertOptions (10.15s)

                                                
                                    
TestCertExpiration (195.32s)

                                                
                                                
=== RUN   TestCertExpiration
=== PAUSE TestCertExpiration

=== CONT  TestCertExpiration
cert_options_test.go:123: (dbg) Run:  out/minikube-darwin-arm64 start -p cert-expiration-548000 --memory=2048 --cert-expiration=3m --driver=qemu2 
cert_options_test.go:123: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p cert-expiration-548000 --memory=2048 --cert-expiration=3m --driver=qemu2 : exit status 80 (9.927467209s)

                                                
                                                
-- stdout --
	* [cert-expiration-548000] minikube v1.32.0 on Darwin 14.3.1 (arm64)
	  - MINIKUBE_LOCATION=18429
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18429-15072/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18429-15072/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "cert-expiration-548000" primary control-plane node in "cert-expiration-548000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "cert-expiration-548000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p cert-expiration-548000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
cert_options_test.go:125: failed to start minikube with args: "out/minikube-darwin-arm64 start -p cert-expiration-548000 --memory=2048 --cert-expiration=3m --driver=qemu2 " : exit status 80
cert_options_test.go:131: (dbg) Run:  out/minikube-darwin-arm64 start -p cert-expiration-548000 --memory=2048 --cert-expiration=8760h --driver=qemu2 
cert_options_test.go:131: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p cert-expiration-548000 --memory=2048 --cert-expiration=8760h --driver=qemu2 : exit status 80 (5.220421292s)

                                                
                                                
-- stdout --
	* [cert-expiration-548000] minikube v1.32.0 on Darwin 14.3.1 (arm64)
	  - MINIKUBE_LOCATION=18429
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18429-15072/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18429-15072/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting "cert-expiration-548000" primary control-plane node in "cert-expiration-548000" cluster
	* Restarting existing qemu2 VM for "cert-expiration-548000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "cert-expiration-548000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p cert-expiration-548000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
cert_options_test.go:133: failed to start minikube after cert expiration: "out/minikube-darwin-arm64 start -p cert-expiration-548000 --memory=2048 --cert-expiration=8760h --driver=qemu2 " : exit status 80
cert_options_test.go:136: minikube start output did not warn about expired certs: 
-- stdout --
	* [cert-expiration-548000] minikube v1.32.0 on Darwin 14.3.1 (arm64)
	  - MINIKUBE_LOCATION=18429
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18429-15072/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18429-15072/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting "cert-expiration-548000" primary control-plane node in "cert-expiration-548000" cluster
	* Restarting existing qemu2 VM for "cert-expiration-548000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "cert-expiration-548000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p cert-expiration-548000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
cert_options_test.go:138: *** TestCertExpiration FAILED at 2024-03-18 04:35:36.743609 -0700 PDT m=+1017.630697917
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p cert-expiration-548000 -n cert-expiration-548000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p cert-expiration-548000 -n cert-expiration-548000: exit status 7 (62.041666ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "cert-expiration-548000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:175: Cleaning up "cert-expiration-548000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p cert-expiration-548000
--- FAIL: TestCertExpiration (195.32s)

                                                
                                    
TestDockerFlags (10.07s)

                                                
                                                
=== RUN   TestDockerFlags
=== PAUSE TestDockerFlags

=== CONT  TestDockerFlags
docker_test.go:51: (dbg) Run:  out/minikube-darwin-arm64 start -p docker-flags-569000 --cache-images=false --memory=2048 --install-addons=false --wait=false --docker-env=FOO=BAR --docker-env=BAZ=BAT --docker-opt=debug --docker-opt=icc=true --alsologtostderr -v=5 --driver=qemu2 
docker_test.go:51: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p docker-flags-569000 --cache-images=false --memory=2048 --install-addons=false --wait=false --docker-env=FOO=BAR --docker-env=BAZ=BAT --docker-opt=debug --docker-opt=icc=true --alsologtostderr -v=5 --driver=qemu2 : exit status 80 (9.802409792s)

                                                
                                                
-- stdout --
	* [docker-flags-569000] minikube v1.32.0 on Darwin 14.3.1 (arm64)
	  - MINIKUBE_LOCATION=18429
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18429-15072/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18429-15072/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "docker-flags-569000" primary control-plane node in "docker-flags-569000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "docker-flags-569000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0318 04:32:16.659708   17217 out.go:291] Setting OutFile to fd 1 ...
	I0318 04:32:16.659838   17217 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0318 04:32:16.659841   17217 out.go:304] Setting ErrFile to fd 2...
	I0318 04:32:16.659843   17217 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0318 04:32:16.659965   17217 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18429-15072/.minikube/bin
	I0318 04:32:16.661045   17217 out.go:298] Setting JSON to false
	I0318 04:32:16.677151   17217 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":9109,"bootTime":1710752427,"procs":480,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.3.1","kernelVersion":"23.3.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0318 04:32:16.677211   17217 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0318 04:32:16.683505   17217 out.go:177] * [docker-flags-569000] minikube v1.32.0 on Darwin 14.3.1 (arm64)
	I0318 04:32:16.691447   17217 out.go:177]   - MINIKUBE_LOCATION=18429
	I0318 04:32:16.694483   17217 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/18429-15072/kubeconfig
	I0318 04:32:16.691511   17217 notify.go:220] Checking for updates...
	I0318 04:32:16.701447   17217 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0318 04:32:16.704520   17217 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0318 04:32:16.707438   17217 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18429-15072/.minikube
	I0318 04:32:16.710424   17217 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0318 04:32:16.713881   17217 config.go:182] Loaded profile config "force-systemd-flag-517000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I0318 04:32:16.713953   17217 config.go:182] Loaded profile config "multinode-969000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I0318 04:32:16.713998   17217 driver.go:392] Setting default libvirt URI to qemu:///system
	I0318 04:32:16.721473   17217 out.go:177] * Using the qemu2 driver based on user configuration
	I0318 04:32:16.728461   17217 start.go:297] selected driver: qemu2
	I0318 04:32:16.728468   17217 start.go:901] validating driver "qemu2" against <nil>
	I0318 04:32:16.728474   17217 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0318 04:32:16.730967   17217 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0318 04:32:16.735429   17217 out.go:177] * Automatically selected the socket_vmnet network
	I0318 04:32:16.738579   17217 start_flags.go:942] Waiting for no components: map[apiserver:false apps_running:false default_sa:false extra:false kubelet:false node_ready:false system_pods:false]
	I0318 04:32:16.738626   17217 cni.go:84] Creating CNI manager for ""
	I0318 04:32:16.738634   17217 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0318 04:32:16.738644   17217 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0318 04:32:16.738678   17217 start.go:340] cluster config:
	{Name:docker-flags-569000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:2048 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[FOO=BAR BAZ=BAT] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[debug icc=true] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:docker-flags-569000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[]
DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:false EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:false apps_running:false default_sa:false extra:false kubelet:false node_ready:false system_pods:false] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMn
etClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0318 04:32:16.743567   17217 iso.go:125] acquiring lock: {Name:mkb8143674083e0c7a46a3ed751b3800392bcd24 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0318 04:32:16.751441   17217 out.go:177] * Starting "docker-flags-569000" primary control-plane node in "docker-flags-569000" cluster
	I0318 04:32:16.755457   17217 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime docker
	I0318 04:32:16.755475   17217 preload.go:147] Found local preload: /Users/jenkins/minikube-integration/18429-15072/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-docker-overlay2-arm64.tar.lz4
	I0318 04:32:16.755491   17217 cache.go:56] Caching tarball of preloaded images
	I0318 04:32:16.755561   17217 preload.go:173] Found /Users/jenkins/minikube-integration/18429-15072/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0318 04:32:16.755568   17217 cache.go:59] Finished verifying existence of preloaded tar for v1.28.4 on docker
	I0318 04:32:16.755654   17217 profile.go:142] Saving config to /Users/jenkins/minikube-integration/18429-15072/.minikube/profiles/docker-flags-569000/config.json ...
	I0318 04:32:16.755666   17217 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/18429-15072/.minikube/profiles/docker-flags-569000/config.json: {Name:mka98b2f6b9ac4bddf8fd90a55d3438f7a5f06fd Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0318 04:32:16.755915   17217 start.go:360] acquireMachinesLock for docker-flags-569000: {Name:mkdef65b5c2b3344d8453e477bf2f170fbff3359 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0318 04:32:16.755953   17217 start.go:364] duration metric: took 30.167µs to acquireMachinesLock for "docker-flags-569000"
	I0318 04:32:16.755968   17217 start.go:93] Provisioning new machine with config: &{Name:docker-flags-569000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17991/minikube-v1.32.1-1710520390-17991-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:2048 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[FOO=BAR BAZ=BAT] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[debug icc=true] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey
: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:docker-flags-569000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:false EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:false apps_running:false default_sa:false extra:false kubelet:false node_ready:false system_pods:false] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:dock
er MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0318 04:32:16.755999   17217 start.go:125] createHost starting for "" (driver="qemu2")
	I0318 04:32:16.764457   17217 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	I0318 04:32:16.783343   17217 start.go:159] libmachine.API.Create for "docker-flags-569000" (driver="qemu2")
	I0318 04:32:16.783374   17217 client.go:168] LocalClient.Create starting
	I0318 04:32:16.783442   17217 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18429-15072/.minikube/certs/ca.pem
	I0318 04:32:16.783473   17217 main.go:141] libmachine: Decoding PEM data...
	I0318 04:32:16.783484   17217 main.go:141] libmachine: Parsing certificate...
	I0318 04:32:16.783538   17217 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18429-15072/.minikube/certs/cert.pem
	I0318 04:32:16.783561   17217 main.go:141] libmachine: Decoding PEM data...
	I0318 04:32:16.783570   17217 main.go:141] libmachine: Parsing certificate...
	I0318 04:32:16.783951   17217 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/18429-15072/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/18429-15072/.minikube/cache/iso/arm64/minikube-v1.32.1-1710520390-17991-arm64.iso...
	I0318 04:32:16.925460   17217 main.go:141] libmachine: Creating SSH key...
	I0318 04:32:17.035893   17217 main.go:141] libmachine: Creating Disk image...
	I0318 04:32:17.035899   17217 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0318 04:32:17.036083   17217 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/18429-15072/.minikube/machines/docker-flags-569000/disk.qcow2.raw /Users/jenkins/minikube-integration/18429-15072/.minikube/machines/docker-flags-569000/disk.qcow2
	I0318 04:32:17.048307   17217 main.go:141] libmachine: STDOUT: 
	I0318 04:32:17.048324   17217 main.go:141] libmachine: STDERR: 
	I0318 04:32:17.048387   17217 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/18429-15072/.minikube/machines/docker-flags-569000/disk.qcow2 +20000M
	I0318 04:32:17.059097   17217 main.go:141] libmachine: STDOUT: Image resized.
	
	I0318 04:32:17.059112   17217 main.go:141] libmachine: STDERR: 
	I0318 04:32:17.059128   17217 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/18429-15072/.minikube/machines/docker-flags-569000/disk.qcow2.raw and /Users/jenkins/minikube-integration/18429-15072/.minikube/machines/docker-flags-569000/disk.qcow2
	I0318 04:32:17.059133   17217 main.go:141] libmachine: Starting QEMU VM...
	I0318 04:32:17.059175   17217 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2048 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18429-15072/.minikube/machines/docker-flags-569000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18429-15072/.minikube/machines/docker-flags-569000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18429-15072/.minikube/machines/docker-flags-569000/qemu.pid -device virtio-net-pci,netdev=net0,mac=b2:ed:c4:7e:10:99 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18429-15072/.minikube/machines/docker-flags-569000/disk.qcow2
	I0318 04:32:17.060877   17217 main.go:141] libmachine: STDOUT: 
	I0318 04:32:17.060890   17217 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0318 04:32:17.060909   17217 client.go:171] duration metric: took 277.538875ms to LocalClient.Create
	I0318 04:32:19.063075   17217 start.go:128] duration metric: took 2.307122083s to createHost
	I0318 04:32:19.063189   17217 start.go:83] releasing machines lock for "docker-flags-569000", held for 2.30730175s
	W0318 04:32:19.063303   17217 start.go:713] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0318 04:32:19.080472   17217 out.go:177] * Deleting "docker-flags-569000" in qemu2 ...
	W0318 04:32:19.100847   17217 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0318 04:32:19.100868   17217 start.go:728] Will try again in 5 seconds ...
	I0318 04:32:24.102928   17217 start.go:360] acquireMachinesLock for docker-flags-569000: {Name:mkdef65b5c2b3344d8453e477bf2f170fbff3359 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0318 04:32:24.103307   17217 start.go:364] duration metric: took 271.542µs to acquireMachinesLock for "docker-flags-569000"
	I0318 04:32:24.103442   17217 start.go:93] Provisioning new machine with config: &{Name:docker-flags-569000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17991/minikube-v1.32.1-1710520390-17991-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:2048 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[FOO=BAR BAZ=BAT] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[debug icc=true] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey
: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:docker-flags-569000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:false EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:false apps_running:false default_sa:false extra:false kubelet:false node_ready:false system_pods:false] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:dock
er MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0318 04:32:24.103713   17217 start.go:125] createHost starting for "" (driver="qemu2")
	I0318 04:32:24.113092   17217 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	I0318 04:32:24.162045   17217 start.go:159] libmachine.API.Create for "docker-flags-569000" (driver="qemu2")
	I0318 04:32:24.162098   17217 client.go:168] LocalClient.Create starting
	I0318 04:32:24.162216   17217 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18429-15072/.minikube/certs/ca.pem
	I0318 04:32:24.162271   17217 main.go:141] libmachine: Decoding PEM data...
	I0318 04:32:24.162288   17217 main.go:141] libmachine: Parsing certificate...
	I0318 04:32:24.162348   17217 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18429-15072/.minikube/certs/cert.pem
	I0318 04:32:24.162390   17217 main.go:141] libmachine: Decoding PEM data...
	I0318 04:32:24.162412   17217 main.go:141] libmachine: Parsing certificate...
	I0318 04:32:24.163596   17217 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/18429-15072/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/18429-15072/.minikube/cache/iso/arm64/minikube-v1.32.1-1710520390-17991-arm64.iso...
	I0318 04:32:24.322610   17217 main.go:141] libmachine: Creating SSH key...
	I0318 04:32:24.358565   17217 main.go:141] libmachine: Creating Disk image...
	I0318 04:32:24.358570   17217 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0318 04:32:24.358747   17217 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/18429-15072/.minikube/machines/docker-flags-569000/disk.qcow2.raw /Users/jenkins/minikube-integration/18429-15072/.minikube/machines/docker-flags-569000/disk.qcow2
	I0318 04:32:24.371059   17217 main.go:141] libmachine: STDOUT: 
	I0318 04:32:24.371091   17217 main.go:141] libmachine: STDERR: 
	I0318 04:32:24.371150   17217 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/18429-15072/.minikube/machines/docker-flags-569000/disk.qcow2 +20000M
	I0318 04:32:24.382036   17217 main.go:141] libmachine: STDOUT: Image resized.
	
	I0318 04:32:24.382051   17217 main.go:141] libmachine: STDERR: 
	I0318 04:32:24.382065   17217 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/18429-15072/.minikube/machines/docker-flags-569000/disk.qcow2.raw and /Users/jenkins/minikube-integration/18429-15072/.minikube/machines/docker-flags-569000/disk.qcow2
	I0318 04:32:24.382069   17217 main.go:141] libmachine: Starting QEMU VM...
	I0318 04:32:24.382108   17217 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2048 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18429-15072/.minikube/machines/docker-flags-569000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18429-15072/.minikube/machines/docker-flags-569000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18429-15072/.minikube/machines/docker-flags-569000/qemu.pid -device virtio-net-pci,netdev=net0,mac=3e:91:73:6a:17:e4 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18429-15072/.minikube/machines/docker-flags-569000/disk.qcow2
	I0318 04:32:24.383839   17217 main.go:141] libmachine: STDOUT: 
	I0318 04:32:24.383854   17217 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0318 04:32:24.383866   17217 client.go:171] duration metric: took 221.771083ms to LocalClient.Create
	I0318 04:32:26.386078   17217 start.go:128] duration metric: took 2.282379584s to createHost
	I0318 04:32:26.386167   17217 start.go:83] releasing machines lock for "docker-flags-569000", held for 2.282910209s
	W0318 04:32:26.386655   17217 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p docker-flags-569000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p docker-flags-569000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0318 04:32:26.401232   17217 out.go:177] 
	W0318 04:32:26.404246   17217 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0318 04:32:26.404284   17217 out.go:239] * 
	* 
	W0318 04:32:26.407534   17217 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0318 04:32:26.418200   17217 out.go:177] 

                                                
                                                
** /stderr **
docker_test.go:53: failed to start minikube with args: "out/minikube-darwin-arm64 start -p docker-flags-569000 --cache-images=false --memory=2048 --install-addons=false --wait=false --docker-env=FOO=BAR --docker-env=BAZ=BAT --docker-opt=debug --docker-opt=icc=true --alsologtostderr -v=5 --driver=qemu2 " : exit status 80
docker_test.go:56: (dbg) Run:  out/minikube-darwin-arm64 -p docker-flags-569000 ssh "sudo systemctl show docker --property=Environment --no-pager"
docker_test.go:56: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p docker-flags-569000 ssh "sudo systemctl show docker --property=Environment --no-pager": exit status 83 (82.706333ms)

                                                
                                                
-- stdout --
	* The control-plane node docker-flags-569000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p docker-flags-569000"

                                                
                                                
-- /stdout --
docker_test.go:58: failed to 'systemctl show docker' inside minikube. args "out/minikube-darwin-arm64 -p docker-flags-569000 ssh \"sudo systemctl show docker --property=Environment --no-pager\"": exit status 83
docker_test.go:63: expected env key/value "FOO=BAR" to be passed to minikube's docker and be included in: *"* The control-plane node docker-flags-569000 host is not running: state=Stopped\n  To start a cluster, run: \"minikube start -p docker-flags-569000\"\n"*.
docker_test.go:63: expected env key/value "BAZ=BAT" to be passed to minikube's docker and be included in: *"* The control-plane node docker-flags-569000 host is not running: state=Stopped\n  To start a cluster, run: \"minikube start -p docker-flags-569000\"\n"*.
docker_test.go:67: (dbg) Run:  out/minikube-darwin-arm64 -p docker-flags-569000 ssh "sudo systemctl show docker --property=ExecStart --no-pager"
docker_test.go:67: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p docker-flags-569000 ssh "sudo systemctl show docker --property=ExecStart --no-pager": exit status 83 (48.833083ms)

                                                
                                                
-- stdout --
	* The control-plane node docker-flags-569000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p docker-flags-569000"

                                                
                                                
-- /stdout --
docker_test.go:69: failed on the second 'systemctl show docker' inside minikube. args "out/minikube-darwin-arm64 -p docker-flags-569000 ssh \"sudo systemctl show docker --property=ExecStart --no-pager\"": exit status 83
docker_test.go:73: expected "out/minikube-darwin-arm64 -p docker-flags-569000 ssh \"sudo systemctl show docker --property=ExecStart --no-pager\"" output to have include *--debug* . output: "* The control-plane node docker-flags-569000 host is not running: state=Stopped\n  To start a cluster, run: \"minikube start -p docker-flags-569000\"\n"
panic.go:626: *** TestDockerFlags FAILED at 2024-03-18 04:32:26.566786 -0700 PDT m=+827.447528959
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p docker-flags-569000 -n docker-flags-569000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p docker-flags-569000 -n docker-flags-569000: exit status 7 (31.3055ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "docker-flags-569000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:175: Cleaning up "docker-flags-569000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p docker-flags-569000
--- FAIL: TestDockerFlags (10.07s)
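Note: every start attempt in this test (and in the other qemu2-driver failures below) dies at the same point: socket_vmnet_client cannot reach the socket_vmnet daemon's unix socket ("Failed to connect to \"/var/run/socket_vmnet\": Connection refused"), so QEMU is never launched and the profile ends up Stopped. The ssh/systemctl assertions that follow are therefore secondary to the environment. Below is a minimal Go sketch of the same probe the driver effectively performs; the file name and standalone-program form are illustrative only and are not part of the minikube test suite.

	// probe_socket_vmnet.go - illustrative only; checks whether anything is
	// listening on the unix socket the qemu2 driver expects. An error such as
	// "connect: connection refused" reproduces the failure seen in the log above.
	package main

	import (
		"fmt"
		"net"
		"os"
		"time"
	)

	func main() {
		const sock = "/var/run/socket_vmnet" // SocketVMnetPath from the cluster config above
		conn, err := net.DialTimeout("unix", sock, 2*time.Second)
		if err != nil {
			fmt.Fprintf(os.Stderr, "socket_vmnet not reachable at %s: %v\n", sock, err)
			os.Exit(1)
		}
		conn.Close()
		fmt.Printf("socket_vmnet is listening at %s\n", sock)
	}

If this probe fails on the Jenkins agent, the likely remedy is to (re)start the socket_vmnet service on the host before the run; whether that is a Homebrew service or a launchd plist depends on how socket_vmnet was installed on this agent, which is not visible from the log.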

                                                
                                    
TestForceSystemdFlag (11.33s)

                                                
                                                
=== RUN   TestForceSystemdFlag
=== PAUSE TestForceSystemdFlag

                                                
                                                

                                                
                                                
=== CONT  TestForceSystemdFlag
docker_test.go:91: (dbg) Run:  out/minikube-darwin-arm64 start -p force-systemd-flag-517000 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=qemu2 
docker_test.go:91: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p force-systemd-flag-517000 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=qemu2 : exit status 80 (11.10918425s)

                                                
                                                
-- stdout --
	* [force-systemd-flag-517000] minikube v1.32.0 on Darwin 14.3.1 (arm64)
	  - MINIKUBE_LOCATION=18429
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18429-15072/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18429-15072/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "force-systemd-flag-517000" primary control-plane node in "force-systemd-flag-517000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "force-systemd-flag-517000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0318 04:32:10.296859   17195 out.go:291] Setting OutFile to fd 1 ...
	I0318 04:32:10.297015   17195 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0318 04:32:10.297019   17195 out.go:304] Setting ErrFile to fd 2...
	I0318 04:32:10.297021   17195 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0318 04:32:10.297148   17195 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18429-15072/.minikube/bin
	I0318 04:32:10.298173   17195 out.go:298] Setting JSON to false
	I0318 04:32:10.314014   17195 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":9103,"bootTime":1710752427,"procs":480,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.3.1","kernelVersion":"23.3.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0318 04:32:10.314076   17195 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0318 04:32:10.320726   17195 out.go:177] * [force-systemd-flag-517000] minikube v1.32.0 on Darwin 14.3.1 (arm64)
	I0318 04:32:10.328665   17195 out.go:177]   - MINIKUBE_LOCATION=18429
	I0318 04:32:10.333717   17195 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/18429-15072/kubeconfig
	I0318 04:32:10.328696   17195 notify.go:220] Checking for updates...
	I0318 04:32:10.341602   17195 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0318 04:32:10.345657   17195 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0318 04:32:10.348726   17195 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18429-15072/.minikube
	I0318 04:32:10.351651   17195 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0318 04:32:10.355036   17195 config.go:182] Loaded profile config "force-systemd-env-191000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I0318 04:32:10.355109   17195 config.go:182] Loaded profile config "multinode-969000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I0318 04:32:10.355164   17195 driver.go:392] Setting default libvirt URI to qemu:///system
	I0318 04:32:10.359614   17195 out.go:177] * Using the qemu2 driver based on user configuration
	I0318 04:32:10.366657   17195 start.go:297] selected driver: qemu2
	I0318 04:32:10.366662   17195 start.go:901] validating driver "qemu2" against <nil>
	I0318 04:32:10.366667   17195 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0318 04:32:10.368917   17195 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0318 04:32:10.372558   17195 out.go:177] * Automatically selected the socket_vmnet network
	I0318 04:32:10.375727   17195 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0318 04:32:10.375771   17195 cni.go:84] Creating CNI manager for ""
	I0318 04:32:10.375779   17195 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0318 04:32:10.375783   17195 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0318 04:32:10.375810   17195 start.go:340] cluster config:
	{Name:force-systemd-flag-517000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:2048 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:force-systemd-flag-517000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster
.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet Static
IP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0318 04:32:10.380423   17195 iso.go:125] acquiring lock: {Name:mkb8143674083e0c7a46a3ed751b3800392bcd24 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0318 04:32:10.387625   17195 out.go:177] * Starting "force-systemd-flag-517000" primary control-plane node in "force-systemd-flag-517000" cluster
	I0318 04:32:10.391677   17195 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime docker
	I0318 04:32:10.391698   17195 preload.go:147] Found local preload: /Users/jenkins/minikube-integration/18429-15072/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-docker-overlay2-arm64.tar.lz4
	I0318 04:32:10.391708   17195 cache.go:56] Caching tarball of preloaded images
	I0318 04:32:10.391768   17195 preload.go:173] Found /Users/jenkins/minikube-integration/18429-15072/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0318 04:32:10.391775   17195 cache.go:59] Finished verifying existence of preloaded tar for v1.28.4 on docker
	I0318 04:32:10.391852   17195 profile.go:142] Saving config to /Users/jenkins/minikube-integration/18429-15072/.minikube/profiles/force-systemd-flag-517000/config.json ...
	I0318 04:32:10.391873   17195 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/18429-15072/.minikube/profiles/force-systemd-flag-517000/config.json: {Name:mk7620e8a434ca11305a9a1618c02f46c79e4329 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0318 04:32:10.392120   17195 start.go:360] acquireMachinesLock for force-systemd-flag-517000: {Name:mkdef65b5c2b3344d8453e477bf2f170fbff3359 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0318 04:32:10.392158   17195 start.go:364] duration metric: took 29.959µs to acquireMachinesLock for "force-systemd-flag-517000"
	I0318 04:32:10.392174   17195 start.go:93] Provisioning new machine with config: &{Name:force-systemd-flag-517000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17991/minikube-v1.32.1-1710520390-17991-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:2048 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 Kubernetes
Config:{KubernetesVersion:v1.28.4 ClusterName:force-systemd-flag-517000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror
: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0318 04:32:10.392208   17195 start.go:125] createHost starting for "" (driver="qemu2")
	I0318 04:32:10.395627   17195 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	I0318 04:32:10.413201   17195 start.go:159] libmachine.API.Create for "force-systemd-flag-517000" (driver="qemu2")
	I0318 04:32:10.413231   17195 client.go:168] LocalClient.Create starting
	I0318 04:32:10.413291   17195 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18429-15072/.minikube/certs/ca.pem
	I0318 04:32:10.413324   17195 main.go:141] libmachine: Decoding PEM data...
	I0318 04:32:10.413335   17195 main.go:141] libmachine: Parsing certificate...
	I0318 04:32:10.413380   17195 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18429-15072/.minikube/certs/cert.pem
	I0318 04:32:10.413404   17195 main.go:141] libmachine: Decoding PEM data...
	I0318 04:32:10.413412   17195 main.go:141] libmachine: Parsing certificate...
	I0318 04:32:10.413840   17195 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/18429-15072/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/18429-15072/.minikube/cache/iso/arm64/minikube-v1.32.1-1710520390-17991-arm64.iso...
	I0318 04:32:10.555030   17195 main.go:141] libmachine: Creating SSH key...
	I0318 04:32:10.669781   17195 main.go:141] libmachine: Creating Disk image...
	I0318 04:32:10.669788   17195 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0318 04:32:10.669949   17195 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/18429-15072/.minikube/machines/force-systemd-flag-517000/disk.qcow2.raw /Users/jenkins/minikube-integration/18429-15072/.minikube/machines/force-systemd-flag-517000/disk.qcow2
	I0318 04:32:10.682387   17195 main.go:141] libmachine: STDOUT: 
	I0318 04:32:10.682410   17195 main.go:141] libmachine: STDERR: 
	I0318 04:32:10.682456   17195 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/18429-15072/.minikube/machines/force-systemd-flag-517000/disk.qcow2 +20000M
	I0318 04:32:10.693002   17195 main.go:141] libmachine: STDOUT: Image resized.
	
	I0318 04:32:10.693021   17195 main.go:141] libmachine: STDERR: 
	I0318 04:32:10.693040   17195 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/18429-15072/.minikube/machines/force-systemd-flag-517000/disk.qcow2.raw and /Users/jenkins/minikube-integration/18429-15072/.minikube/machines/force-systemd-flag-517000/disk.qcow2
	I0318 04:32:10.693045   17195 main.go:141] libmachine: Starting QEMU VM...
	I0318 04:32:10.693078   17195 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2048 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18429-15072/.minikube/machines/force-systemd-flag-517000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18429-15072/.minikube/machines/force-systemd-flag-517000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18429-15072/.minikube/machines/force-systemd-flag-517000/qemu.pid -device virtio-net-pci,netdev=net0,mac=12:25:21:4d:98:9f -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18429-15072/.minikube/machines/force-systemd-flag-517000/disk.qcow2
	I0318 04:32:10.694881   17195 main.go:141] libmachine: STDOUT: 
	I0318 04:32:10.694895   17195 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0318 04:32:10.694915   17195 client.go:171] duration metric: took 281.686208ms to LocalClient.Create
	I0318 04:32:12.695333   17195 start.go:128] duration metric: took 2.303170791s to createHost
	I0318 04:32:12.695402   17195 start.go:83] releasing machines lock for "force-systemd-flag-517000", held for 2.303310958s
	W0318 04:32:12.695466   17195 start.go:713] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0318 04:32:12.704629   17195 out.go:177] * Deleting "force-systemd-flag-517000" in qemu2 ...
	W0318 04:32:12.731507   17195 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0318 04:32:12.731540   17195 start.go:728] Will try again in 5 seconds ...
	I0318 04:32:17.733545   17195 start.go:360] acquireMachinesLock for force-systemd-flag-517000: {Name:mkdef65b5c2b3344d8453e477bf2f170fbff3359 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0318 04:32:19.063360   17195 start.go:364] duration metric: took 1.32975825s to acquireMachinesLock for "force-systemd-flag-517000"
	I0318 04:32:19.063486   17195 start.go:93] Provisioning new machine with config: &{Name:force-systemd-flag-517000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17991/minikube-v1.32.1-1710520390-17991-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:2048 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 Kubernetes
Config:{KubernetesVersion:v1.28.4 ClusterName:force-systemd-flag-517000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror
: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0318 04:32:19.063887   17195 start.go:125] createHost starting for "" (driver="qemu2")
	I0318 04:32:19.069548   17195 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	I0318 04:32:19.115529   17195 start.go:159] libmachine.API.Create for "force-systemd-flag-517000" (driver="qemu2")
	I0318 04:32:19.115590   17195 client.go:168] LocalClient.Create starting
	I0318 04:32:19.115721   17195 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18429-15072/.minikube/certs/ca.pem
	I0318 04:32:19.115786   17195 main.go:141] libmachine: Decoding PEM data...
	I0318 04:32:19.115802   17195 main.go:141] libmachine: Parsing certificate...
	I0318 04:32:19.115870   17195 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18429-15072/.minikube/certs/cert.pem
	I0318 04:32:19.115914   17195 main.go:141] libmachine: Decoding PEM data...
	I0318 04:32:19.115928   17195 main.go:141] libmachine: Parsing certificate...
	I0318 04:32:19.116412   17195 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/18429-15072/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/18429-15072/.minikube/cache/iso/arm64/minikube-v1.32.1-1710520390-17991-arm64.iso...
	I0318 04:32:19.270468   17195 main.go:141] libmachine: Creating SSH key...
	I0318 04:32:19.299097   17195 main.go:141] libmachine: Creating Disk image...
	I0318 04:32:19.299104   17195 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0318 04:32:19.299281   17195 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/18429-15072/.minikube/machines/force-systemd-flag-517000/disk.qcow2.raw /Users/jenkins/minikube-integration/18429-15072/.minikube/machines/force-systemd-flag-517000/disk.qcow2
	I0318 04:32:19.311761   17195 main.go:141] libmachine: STDOUT: 
	I0318 04:32:19.311788   17195 main.go:141] libmachine: STDERR: 
	I0318 04:32:19.311837   17195 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/18429-15072/.minikube/machines/force-systemd-flag-517000/disk.qcow2 +20000M
	I0318 04:32:19.322550   17195 main.go:141] libmachine: STDOUT: Image resized.
	
	I0318 04:32:19.322565   17195 main.go:141] libmachine: STDERR: 
	I0318 04:32:19.322583   17195 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/18429-15072/.minikube/machines/force-systemd-flag-517000/disk.qcow2.raw and /Users/jenkins/minikube-integration/18429-15072/.minikube/machines/force-systemd-flag-517000/disk.qcow2
	I0318 04:32:19.322598   17195 main.go:141] libmachine: Starting QEMU VM...
	I0318 04:32:19.322641   17195 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2048 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18429-15072/.minikube/machines/force-systemd-flag-517000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18429-15072/.minikube/machines/force-systemd-flag-517000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18429-15072/.minikube/machines/force-systemd-flag-517000/qemu.pid -device virtio-net-pci,netdev=net0,mac=36:4d:c5:5e:89:fd -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18429-15072/.minikube/machines/force-systemd-flag-517000/disk.qcow2
	I0318 04:32:19.324412   17195 main.go:141] libmachine: STDOUT: 
	I0318 04:32:19.324429   17195 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0318 04:32:19.324440   17195 client.go:171] duration metric: took 208.850959ms to LocalClient.Create
	I0318 04:32:21.326615   17195 start.go:128] duration metric: took 2.262736875s to createHost
	I0318 04:32:21.326679   17195 start.go:83] releasing machines lock for "force-systemd-flag-517000", held for 2.263355666s
	W0318 04:32:21.327021   17195 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p force-systemd-flag-517000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p force-systemd-flag-517000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0318 04:32:21.341812   17195 out.go:177] 
	W0318 04:32:21.349656   17195 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0318 04:32:21.349682   17195 out.go:239] * 
	* 
	W0318 04:32:21.352223   17195 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0318 04:32:21.360594   17195 out.go:177] 

                                                
                                                
** /stderr **
docker_test.go:93: failed to start minikube with args: "out/minikube-darwin-arm64 start -p force-systemd-flag-517000 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=qemu2 " : exit status 80
docker_test.go:110: (dbg) Run:  out/minikube-darwin-arm64 -p force-systemd-flag-517000 ssh "docker info --format {{.CgroupDriver}}"
docker_test.go:110: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p force-systemd-flag-517000 ssh "docker info --format {{.CgroupDriver}}": exit status 83 (81.151459ms)

                                                
                                                
-- stdout --
	* The control-plane node force-systemd-flag-517000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p force-systemd-flag-517000"

                                                
                                                
-- /stdout --
docker_test.go:112: failed to get docker cgroup driver. args "out/minikube-darwin-arm64 -p force-systemd-flag-517000 ssh \"docker info --format {{.CgroupDriver}}\"": exit status 83
docker_test.go:106: *** TestForceSystemdFlag FAILED at 2024-03-18 04:32:21.46077 -0700 PDT m=+822.341342292
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p force-systemd-flag-517000 -n force-systemd-flag-517000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p force-systemd-flag-517000 -n force-systemd-flag-517000: exit status 7 (35.492542ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "force-systemd-flag-517000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:175: Cleaning up "force-systemd-flag-517000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p force-systemd-flag-517000
--- FAIL: TestForceSystemdFlag (11.33s)
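Note: same root cause as TestDockerFlags above; the --force-systemd assertion never gets to run because the VM is never created, so the subsequent ssh step exits with status 83 ("host is not running"). For reference, a hedged sketch of what that assertion amounts to once a cluster does start: ssh into the node and confirm Docker reports the systemd cgroup driver. The profile name is taken from this run; the standalone form is illustrative, not the actual docker_test.go code.

	// check_cgroup_driver.go - illustrative sketch, not the repo's test code.
	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	func main() {
		// Assumes a running cluster; in this run the host was Stopped, so the
		// real test step failed before any cgroup-driver check could happen.
		out, err := exec.Command("out/minikube-darwin-arm64", "-p", "force-systemd-flag-517000",
			"ssh", "docker info --format {{.CgroupDriver}}").CombinedOutput()
		if err != nil {
			fmt.Printf("ssh failed: %v\n%s", err, out)
			return
		}
		if strings.TrimSpace(string(out)) == "systemd" {
			fmt.Println("--force-systemd honored: cgroup driver is systemd")
		} else {
			fmt.Printf("unexpected cgroup driver: %q\n", strings.TrimSpace(string(out)))
		}
	}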

                                                
                                    
TestForceSystemdEnv (10.16s)

                                                
                                                
=== RUN   TestForceSystemdEnv
=== PAUSE TestForceSystemdEnv

                                                
                                                

                                                
                                                
=== CONT  TestForceSystemdEnv
docker_test.go:155: (dbg) Run:  out/minikube-darwin-arm64 start -p force-systemd-env-191000 --memory=2048 --alsologtostderr -v=5 --driver=qemu2 
docker_test.go:155: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p force-systemd-env-191000 --memory=2048 --alsologtostderr -v=5 --driver=qemu2 : exit status 80 (9.937516875s)

                                                
                                                
-- stdout --
	* [force-systemd-env-191000] minikube v1.32.0 on Darwin 14.3.1 (arm64)
	  - MINIKUBE_LOCATION=18429
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18429-15072/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18429-15072/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=true
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "force-systemd-env-191000" primary control-plane node in "force-systemd-env-191000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "force-systemd-env-191000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0318 04:32:06.506245   17173 out.go:291] Setting OutFile to fd 1 ...
	I0318 04:32:06.506400   17173 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0318 04:32:06.506404   17173 out.go:304] Setting ErrFile to fd 2...
	I0318 04:32:06.506406   17173 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0318 04:32:06.506533   17173 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18429-15072/.minikube/bin
	I0318 04:32:06.507584   17173 out.go:298] Setting JSON to false
	I0318 04:32:06.524329   17173 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":9099,"bootTime":1710752427,"procs":478,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.3.1","kernelVersion":"23.3.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0318 04:32:06.524399   17173 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0318 04:32:06.529201   17173 out.go:177] * [force-systemd-env-191000] minikube v1.32.0 on Darwin 14.3.1 (arm64)
	I0318 04:32:06.536141   17173 out.go:177]   - MINIKUBE_LOCATION=18429
	I0318 04:32:06.540143   17173 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/18429-15072/kubeconfig
	I0318 04:32:06.536162   17173 notify.go:220] Checking for updates...
	I0318 04:32:06.546103   17173 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0318 04:32:06.549131   17173 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0318 04:32:06.552058   17173 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18429-15072/.minikube
	I0318 04:32:06.555090   17173 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=true
	I0318 04:32:06.558441   17173 config.go:182] Loaded profile config "multinode-969000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I0318 04:32:06.558486   17173 driver.go:392] Setting default libvirt URI to qemu:///system
	I0318 04:32:06.563085   17173 out.go:177] * Using the qemu2 driver based on user configuration
	I0318 04:32:06.570134   17173 start.go:297] selected driver: qemu2
	I0318 04:32:06.570138   17173 start.go:901] validating driver "qemu2" against <nil>
	I0318 04:32:06.570142   17173 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0318 04:32:06.572418   17173 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0318 04:32:06.575119   17173 out.go:177] * Automatically selected the socket_vmnet network
	I0318 04:32:06.578162   17173 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0318 04:32:06.578197   17173 cni.go:84] Creating CNI manager for ""
	I0318 04:32:06.578204   17173 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0318 04:32:06.578207   17173 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0318 04:32:06.578236   17173 start.go:340] cluster config:
	{Name:force-systemd-env-191000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:2048 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:force-systemd-env-191000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.l
ocal ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP
: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0318 04:32:06.582373   17173 iso.go:125] acquiring lock: {Name:mkb8143674083e0c7a46a3ed751b3800392bcd24 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0318 04:32:06.590084   17173 out.go:177] * Starting "force-systemd-env-191000" primary control-plane node in "force-systemd-env-191000" cluster
	I0318 04:32:06.594127   17173 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime docker
	I0318 04:32:06.594142   17173 preload.go:147] Found local preload: /Users/jenkins/minikube-integration/18429-15072/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-docker-overlay2-arm64.tar.lz4
	I0318 04:32:06.594149   17173 cache.go:56] Caching tarball of preloaded images
	I0318 04:32:06.594197   17173 preload.go:173] Found /Users/jenkins/minikube-integration/18429-15072/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0318 04:32:06.594203   17173 cache.go:59] Finished verifying existence of preloaded tar for v1.28.4 on docker
	I0318 04:32:06.594252   17173 profile.go:142] Saving config to /Users/jenkins/minikube-integration/18429-15072/.minikube/profiles/force-systemd-env-191000/config.json ...
	I0318 04:32:06.594262   17173 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/18429-15072/.minikube/profiles/force-systemd-env-191000/config.json: {Name:mk82cdb498c744a93e081c336d9929b65134c41a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0318 04:32:06.594521   17173 start.go:360] acquireMachinesLock for force-systemd-env-191000: {Name:mkdef65b5c2b3344d8453e477bf2f170fbff3359 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0318 04:32:06.594556   17173 start.go:364] duration metric: took 25µs to acquireMachinesLock for "force-systemd-env-191000"
	I0318 04:32:06.594569   17173 start.go:93] Provisioning new machine with config: &{Name:force-systemd-env-191000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17991/minikube-v1.32.1-1710520390-17991-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:2048 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesC
onfig:{KubernetesVersion:v1.28.4 ClusterName:force-systemd-env-191000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror:
DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0318 04:32:06.594600   17173 start.go:125] createHost starting for "" (driver="qemu2")
	I0318 04:32:06.603116   17173 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	I0318 04:32:06.618001   17173 start.go:159] libmachine.API.Create for "force-systemd-env-191000" (driver="qemu2")
	I0318 04:32:06.618027   17173 client.go:168] LocalClient.Create starting
	I0318 04:32:06.618087   17173 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18429-15072/.minikube/certs/ca.pem
	I0318 04:32:06.618119   17173 main.go:141] libmachine: Decoding PEM data...
	I0318 04:32:06.618135   17173 main.go:141] libmachine: Parsing certificate...
	I0318 04:32:06.618181   17173 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18429-15072/.minikube/certs/cert.pem
	I0318 04:32:06.618209   17173 main.go:141] libmachine: Decoding PEM data...
	I0318 04:32:06.618217   17173 main.go:141] libmachine: Parsing certificate...
	I0318 04:32:06.618660   17173 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/18429-15072/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/18429-15072/.minikube/cache/iso/arm64/minikube-v1.32.1-1710520390-17991-arm64.iso...
	I0318 04:32:06.757051   17173 main.go:141] libmachine: Creating SSH key...
	I0318 04:32:06.968365   17173 main.go:141] libmachine: Creating Disk image...
	I0318 04:32:06.968376   17173 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0318 04:32:06.968591   17173 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/18429-15072/.minikube/machines/force-systemd-env-191000/disk.qcow2.raw /Users/jenkins/minikube-integration/18429-15072/.minikube/machines/force-systemd-env-191000/disk.qcow2
	I0318 04:32:06.982403   17173 main.go:141] libmachine: STDOUT: 
	I0318 04:32:06.982427   17173 main.go:141] libmachine: STDERR: 
	I0318 04:32:06.982495   17173 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/18429-15072/.minikube/machines/force-systemd-env-191000/disk.qcow2 +20000M
	I0318 04:32:06.995290   17173 main.go:141] libmachine: STDOUT: Image resized.
	
	I0318 04:32:06.995310   17173 main.go:141] libmachine: STDERR: 
	I0318 04:32:06.995334   17173 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/18429-15072/.minikube/machines/force-systemd-env-191000/disk.qcow2.raw and /Users/jenkins/minikube-integration/18429-15072/.minikube/machines/force-systemd-env-191000/disk.qcow2
	I0318 04:32:06.995337   17173 main.go:141] libmachine: Starting QEMU VM...
	I0318 04:32:06.995388   17173 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2048 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18429-15072/.minikube/machines/force-systemd-env-191000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18429-15072/.minikube/machines/force-systemd-env-191000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18429-15072/.minikube/machines/force-systemd-env-191000/qemu.pid -device virtio-net-pci,netdev=net0,mac=16:ed:47:96:ec:72 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18429-15072/.minikube/machines/force-systemd-env-191000/disk.qcow2
	I0318 04:32:06.997722   17173 main.go:141] libmachine: STDOUT: 
	I0318 04:32:06.997746   17173 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0318 04:32:06.997775   17173 client.go:171] duration metric: took 379.756625ms to LocalClient.Create
	I0318 04:32:08.999948   17173 start.go:128] duration metric: took 2.405395041s to createHost
	I0318 04:32:09.000044   17173 start.go:83] releasing machines lock for "force-systemd-env-191000", held for 2.405558916s
	W0318 04:32:09.000184   17173 start.go:713] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0318 04:32:09.012220   17173 out.go:177] * Deleting "force-systemd-env-191000" in qemu2 ...
	W0318 04:32:09.040468   17173 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0318 04:32:09.040500   17173 start.go:728] Will try again in 5 seconds ...
	I0318 04:32:14.041450   17173 start.go:360] acquireMachinesLock for force-systemd-env-191000: {Name:mkdef65b5c2b3344d8453e477bf2f170fbff3359 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0318 04:32:14.041829   17173 start.go:364] duration metric: took 261.292µs to acquireMachinesLock for "force-systemd-env-191000"
	I0318 04:32:14.041973   17173 start.go:93] Provisioning new machine with config: &{Name:force-systemd-env-191000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17991/minikube-v1.32.1-1710520390-17991-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:2048 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesC
onfig:{KubernetesVersion:v1.28.4 ClusterName:force-systemd-env-191000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror:
DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0318 04:32:14.042229   17173 start.go:125] createHost starting for "" (driver="qemu2")
	I0318 04:32:14.051844   17173 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	I0318 04:32:14.101776   17173 start.go:159] libmachine.API.Create for "force-systemd-env-191000" (driver="qemu2")
	I0318 04:32:14.101826   17173 client.go:168] LocalClient.Create starting
	I0318 04:32:14.101945   17173 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18429-15072/.minikube/certs/ca.pem
	I0318 04:32:14.102015   17173 main.go:141] libmachine: Decoding PEM data...
	I0318 04:32:14.102033   17173 main.go:141] libmachine: Parsing certificate...
	I0318 04:32:14.102102   17173 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18429-15072/.minikube/certs/cert.pem
	I0318 04:32:14.102148   17173 main.go:141] libmachine: Decoding PEM data...
	I0318 04:32:14.102160   17173 main.go:141] libmachine: Parsing certificate...
	I0318 04:32:14.102673   17173 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/18429-15072/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/18429-15072/.minikube/cache/iso/arm64/minikube-v1.32.1-1710520390-17991-arm64.iso...
	I0318 04:32:14.257248   17173 main.go:141] libmachine: Creating SSH key...
	I0318 04:32:14.338662   17173 main.go:141] libmachine: Creating Disk image...
	I0318 04:32:14.338668   17173 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0318 04:32:14.338847   17173 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/18429-15072/.minikube/machines/force-systemd-env-191000/disk.qcow2.raw /Users/jenkins/minikube-integration/18429-15072/.minikube/machines/force-systemd-env-191000/disk.qcow2
	I0318 04:32:14.351224   17173 main.go:141] libmachine: STDOUT: 
	I0318 04:32:14.351244   17173 main.go:141] libmachine: STDERR: 
	I0318 04:32:14.351307   17173 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/18429-15072/.minikube/machines/force-systemd-env-191000/disk.qcow2 +20000M
	I0318 04:32:14.361841   17173 main.go:141] libmachine: STDOUT: Image resized.
	
	I0318 04:32:14.361864   17173 main.go:141] libmachine: STDERR: 
	I0318 04:32:14.361876   17173 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/18429-15072/.minikube/machines/force-systemd-env-191000/disk.qcow2.raw and /Users/jenkins/minikube-integration/18429-15072/.minikube/machines/force-systemd-env-191000/disk.qcow2
	I0318 04:32:14.361881   17173 main.go:141] libmachine: Starting QEMU VM...
	I0318 04:32:14.361909   17173 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2048 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18429-15072/.minikube/machines/force-systemd-env-191000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18429-15072/.minikube/machines/force-systemd-env-191000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18429-15072/.minikube/machines/force-systemd-env-191000/qemu.pid -device virtio-net-pci,netdev=net0,mac=1a:36:49:db:b7:b4 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18429-15072/.minikube/machines/force-systemd-env-191000/disk.qcow2
	I0318 04:32:14.363636   17173 main.go:141] libmachine: STDOUT: 
	I0318 04:32:14.363652   17173 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0318 04:32:14.363667   17173 client.go:171] duration metric: took 261.844334ms to LocalClient.Create
	I0318 04:32:16.365780   17173 start.go:128] duration metric: took 2.323599833s to createHost
	I0318 04:32:16.365844   17173 start.go:83] releasing machines lock for "force-systemd-env-191000", held for 2.32406625s
	W0318 04:32:16.366318   17173 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p force-systemd-env-191000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p force-systemd-env-191000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0318 04:32:16.376845   17173 out.go:177] 
	W0318 04:32:16.384023   17173 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0318 04:32:16.384049   17173 out.go:239] * 
	* 
	W0318 04:32:16.386944   17173 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0318 04:32:16.395809   17173 out.go:177] 

                                                
                                                
** /stderr **
docker_test.go:157: failed to start minikube with args: "out/minikube-darwin-arm64 start -p force-systemd-env-191000 --memory=2048 --alsologtostderr -v=5 --driver=qemu2 " : exit status 80
docker_test.go:110: (dbg) Run:  out/minikube-darwin-arm64 -p force-systemd-env-191000 ssh "docker info --format {{.CgroupDriver}}"
docker_test.go:110: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p force-systemd-env-191000 ssh "docker info --format {{.CgroupDriver}}": exit status 83 (79.026458ms)

                                                
                                                
-- stdout --
	* The control-plane node force-systemd-env-191000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p force-systemd-env-191000"

                                                
                                                
-- /stdout --
docker_test.go:112: failed to get docker cgroup driver. args "out/minikube-darwin-arm64 -p force-systemd-env-191000 ssh \"docker info --format {{.CgroupDriver}}\"": exit status 83
docker_test.go:166: *** TestForceSystemdEnv FAILED at 2024-03-18 04:32:16.493599 -0700 PDT m=+817.374006001
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p force-systemd-env-191000 -n force-systemd-env-191000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p force-systemd-env-191000 -n force-systemd-env-191000: exit status 7 (34.625459ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "force-systemd-env-191000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:175: Cleaning up "force-systemd-env-191000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p force-systemd-env-191000
--- FAIL: TestForceSystemdEnv (10.16s)
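Note: the failure mode above repeats throughout this run: /opt/socket_vmnet/bin/socket_vmnet_client exits with 'Failed to connect to "/var/run/socket_vmnet": Connection refused', so libmachine never gets a qemu2 VM to provision. A minimal host-side check is sketched below; it assumes the layout already visible in this log (/opt/socket_vmnet, /var/run/socket_vmnet) and that the daemon is managed with Homebrew services, neither of which this report itself confirms.

	# Is anything serving the socket the tests expect?
	ls -l /var/run/socket_vmnet
	pgrep -fl socket_vmnet
	# Restart the daemon via Homebrew services (assumes socket_vmnet was installed with brew).
	sudo brew services restart socket_vmnet
	# Or run it directly in the foreground; the gateway address here is illustrative, not taken from this run.
	sudo /opt/socket_vmnet/bin/socket_vmnet --vmnet-gateway=192.168.105.1 /var/run/socket_vmnet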

                                                
                                    
TestErrorSpam/setup (9.88s)

                                                
                                                
=== RUN   TestErrorSpam/setup
error_spam_test.go:81: (dbg) Run:  out/minikube-darwin-arm64 start -p nospam-742000 -n=1 --memory=2250 --wait=false --log_dir=/var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-742000 --driver=qemu2 
error_spam_test.go:81: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p nospam-742000 -n=1 --memory=2250 --wait=false --log_dir=/var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-742000 --driver=qemu2 : exit status 80 (9.879666209s)

                                                
                                                
-- stdout --
	* [nospam-742000] minikube v1.32.0 on Darwin 14.3.1 (arm64)
	  - MINIKUBE_LOCATION=18429
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18429-15072/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18429-15072/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "nospam-742000" primary control-plane node in "nospam-742000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2250MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "nospam-742000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2250MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p nospam-742000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
error_spam_test.go:83: "out/minikube-darwin-arm64 start -p nospam-742000 -n=1 --memory=2250 --wait=false --log_dir=/var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-742000 --driver=qemu2 " failed: exit status 80
error_spam_test.go:96: unexpected stderr: "! StartHost failed, but will try again: creating host: create: creating: Failed to connect to \"/var/run/socket_vmnet\": Connection refused: exit status 1"
error_spam_test.go:96: unexpected stderr: "* Failed to start qemu2 VM. Running \"minikube delete -p nospam-742000\" may fix it: creating host: create: creating: Failed to connect to \"/var/run/socket_vmnet\": Connection refused: exit status 1"
error_spam_test.go:96: unexpected stderr: "X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to \"/var/run/socket_vmnet\": Connection refused: exit status 1"
error_spam_test.go:96: unexpected stderr: "* "
error_spam_test.go:96: unexpected stderr: "╭─────────────────────────────────────────────────────────────────────────────────────────────╮"
error_spam_test.go:96: unexpected stderr: "│                                                                                             │"
error_spam_test.go:96: unexpected stderr: "│    * If the above advice does not help, please let us know:                                 │"
error_spam_test.go:96: unexpected stderr: "│      https://github.com/kubernetes/minikube/issues/new/choose                               │"
error_spam_test.go:96: unexpected stderr: "│                                                                                             │"
error_spam_test.go:96: unexpected stderr: "│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │"
error_spam_test.go:96: unexpected stderr: "│                                                                                             │"
error_spam_test.go:96: unexpected stderr: "╰─────────────────────────────────────────────────────────────────────────────────────────────╯"
error_spam_test.go:110: minikube stdout:
* [nospam-742000] minikube v1.32.0 on Darwin 14.3.1 (arm64)
- MINIKUBE_LOCATION=18429
- KUBECONFIG=/Users/jenkins/minikube-integration/18429-15072/kubeconfig
- MINIKUBE_BIN=out/minikube-darwin-arm64
- MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
- MINIKUBE_HOME=/Users/jenkins/minikube-integration/18429-15072/.minikube
- MINIKUBE_FORCE_SYSTEMD=
* Using the qemu2 driver based on user configuration
* Automatically selected the socket_vmnet network
* Starting "nospam-742000" primary control-plane node in "nospam-742000" cluster
* Creating qemu2 VM (CPUs=2, Memory=2250MB, Disk=20000MB) ...
OUTPUT: 
ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused

                                                
                                                
* Deleting "nospam-742000" in qemu2 ...
* Creating qemu2 VM (CPUs=2, Memory=2250MB, Disk=20000MB) ...
OUTPUT: 
ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused

                                                
                                                

                                                
                                                

                                                
                                                
error_spam_test.go:111: minikube stderr:
! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
* Failed to start qemu2 VM. Running "minikube delete -p nospam-742000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
* 
╭─────────────────────────────────────────────────────────────────────────────────────────────╮
│                                                                                             │
│    * If the above advice does not help, please let us know:                                 │
│      https://github.com/kubernetes/minikube/issues/new/choose                               │
│                                                                                             │
│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
│                                                                                             │
╰─────────────────────────────────────────────────────────────────────────────────────────────╯
error_spam_test.go:121: missing kubeadm init sub-step "Generating certificates and keys ..."
error_spam_test.go:121: missing kubeadm init sub-step "Booting up control plane ..."
error_spam_test.go:121: missing kubeadm init sub-step "Configuring RBAC rules ..."
--- FAIL: TestErrorSpam/setup (9.88s)

                                                
                                    
TestFunctional/serial/StartWithProxy (10.27s)

                                                
                                                
=== RUN   TestFunctional/serial/StartWithProxy
functional_test.go:2230: (dbg) Run:  out/minikube-darwin-arm64 start -p functional-900000 --memory=4000 --apiserver-port=8441 --wait=all --driver=qemu2 
functional_test.go:2230: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p functional-900000 --memory=4000 --apiserver-port=8441 --wait=all --driver=qemu2 : exit status 80 (10.193772875s)

                                                
                                                
-- stdout --
	* [functional-900000] minikube v1.32.0 on Darwin 14.3.1 (arm64)
	  - MINIKUBE_LOCATION=18429
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18429-15072/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18429-15072/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "functional-900000" primary control-plane node in "functional-900000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "functional-900000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	! Local proxy ignored: not passing HTTP_PROXY=localhost:53115 to docker env.
	! Local proxy ignored: not passing HTTP_PROXY=localhost:53115 to docker env.
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! Local proxy ignored: not passing HTTP_PROXY=localhost:53115 to docker env.
	* Failed to start qemu2 VM. Running "minikube delete -p functional-900000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
functional_test.go:2232: failed minikube start. args "out/minikube-darwin-arm64 start -p functional-900000 --memory=4000 --apiserver-port=8441 --wait=all --driver=qemu2 ": exit status 80
functional_test.go:2237: start stdout=* [functional-900000] minikube v1.32.0 on Darwin 14.3.1 (arm64)
- MINIKUBE_LOCATION=18429
- KUBECONFIG=/Users/jenkins/minikube-integration/18429-15072/kubeconfig
- MINIKUBE_BIN=out/minikube-darwin-arm64
- MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
- MINIKUBE_HOME=/Users/jenkins/minikube-integration/18429-15072/.minikube
- MINIKUBE_FORCE_SYSTEMD=
* Using the qemu2 driver based on user configuration
* Automatically selected the socket_vmnet network
* Starting "functional-900000" primary control-plane node in "functional-900000" cluster
* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
OUTPUT: 
ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused

                                                
                                                
* Deleting "functional-900000" in qemu2 ...
* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
OUTPUT: 
ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused

                                                
                                                

                                                
                                                

                                                
                                                
, want: *Found network options:*
functional_test.go:2242: start stderr=! Local proxy ignored: not passing HTTP_PROXY=localhost:53115 to docker env.
! Local proxy ignored: not passing HTTP_PROXY=localhost:53115 to docker env.
! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
! Local proxy ignored: not passing HTTP_PROXY=localhost:53115 to docker env.
* Failed to start qemu2 VM. Running "minikube delete -p functional-900000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
* 
╭─────────────────────────────────────────────────────────────────────────────────────────────╮
│                                                                                             │
│    * If the above advice does not help, please let us know:                                 │
│      https://github.com/kubernetes/minikube/issues/new/choose                               │
│                                                                                             │
│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
│                                                                                             │
╰─────────────────────────────────────────────────────────────────────────────────────────────╯
, want: *You appear to be using a proxy*
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p functional-900000 -n functional-900000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p functional-900000 -n functional-900000: exit status 7 (71.137792ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "functional-900000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestFunctional/serial/StartWithProxy (10.27s)

                                                
                                    
TestFunctional/serial/SoftStart (5.26s)

                                                
                                                
=== RUN   TestFunctional/serial/SoftStart
functional_test.go:655: (dbg) Run:  out/minikube-darwin-arm64 start -p functional-900000 --alsologtostderr -v=8
functional_test.go:655: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p functional-900000 --alsologtostderr -v=8: exit status 80 (5.188507792s)

                                                
                                                
-- stdout --
	* [functional-900000] minikube v1.32.0 on Darwin 14.3.1 (arm64)
	  - MINIKUBE_LOCATION=18429
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18429-15072/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18429-15072/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting "functional-900000" primary control-plane node in "functional-900000" cluster
	* Restarting existing qemu2 VM for "functional-900000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "functional-900000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0318 04:20:48.074558   15785 out.go:291] Setting OutFile to fd 1 ...
	I0318 04:20:48.074686   15785 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0318 04:20:48.074689   15785 out.go:304] Setting ErrFile to fd 2...
	I0318 04:20:48.074695   15785 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0318 04:20:48.074822   15785 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18429-15072/.minikube/bin
	I0318 04:20:48.075808   15785 out.go:298] Setting JSON to false
	I0318 04:20:48.091997   15785 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":8421,"bootTime":1710752427,"procs":470,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.3.1","kernelVersion":"23.3.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0318 04:20:48.092057   15785 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0318 04:20:48.095617   15785 out.go:177] * [functional-900000] minikube v1.32.0 on Darwin 14.3.1 (arm64)
	I0318 04:20:48.103540   15785 out.go:177]   - MINIKUBE_LOCATION=18429
	I0318 04:20:48.103594   15785 notify.go:220] Checking for updates...
	I0318 04:20:48.107565   15785 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/18429-15072/kubeconfig
	I0318 04:20:48.110524   15785 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0318 04:20:48.114525   15785 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0318 04:20:48.117500   15785 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18429-15072/.minikube
	I0318 04:20:48.120573   15785 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0318 04:20:48.123834   15785 config.go:182] Loaded profile config "functional-900000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I0318 04:20:48.123899   15785 driver.go:392] Setting default libvirt URI to qemu:///system
	I0318 04:20:48.128485   15785 out.go:177] * Using the qemu2 driver based on existing profile
	I0318 04:20:48.135504   15785 start.go:297] selected driver: qemu2
	I0318 04:20:48.135509   15785 start.go:901] validating driver "qemu2" against &{Name:functional-900000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17991/minikube-v1.32.1-1710520390-17991-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:4000 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kub
ernetesVersion:v1.28.4 ClusterName:functional-900000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8441 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[
] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0318 04:20:48.135569   15785 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0318 04:20:48.137844   15785 cni.go:84] Creating CNI manager for ""
	I0318 04:20:48.137861   15785 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0318 04:20:48.137903   15785 start.go:340] cluster config:
	{Name:functional-900000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17991/minikube-v1.32.1-1710520390-17991-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:4000 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:functional-900000 Namespace:default APIServe
rHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8441 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:f
alse DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0318 04:20:48.142235   15785 iso.go:125] acquiring lock: {Name:mkb8143674083e0c7a46a3ed751b3800392bcd24 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0318 04:20:48.150535   15785 out.go:177] * Starting "functional-900000" primary control-plane node in "functional-900000" cluster
	I0318 04:20:48.154503   15785 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime docker
	I0318 04:20:48.154519   15785 preload.go:147] Found local preload: /Users/jenkins/minikube-integration/18429-15072/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-docker-overlay2-arm64.tar.lz4
	I0318 04:20:48.154537   15785 cache.go:56] Caching tarball of preloaded images
	I0318 04:20:48.154592   15785 preload.go:173] Found /Users/jenkins/minikube-integration/18429-15072/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0318 04:20:48.154600   15785 cache.go:59] Finished verifying existence of preloaded tar for v1.28.4 on docker
	I0318 04:20:48.154671   15785 profile.go:142] Saving config to /Users/jenkins/minikube-integration/18429-15072/.minikube/profiles/functional-900000/config.json ...
	I0318 04:20:48.155165   15785 start.go:360] acquireMachinesLock for functional-900000: {Name:mkdef65b5c2b3344d8453e477bf2f170fbff3359 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0318 04:20:48.155193   15785 start.go:364] duration metric: took 21.333µs to acquireMachinesLock for "functional-900000"
	I0318 04:20:48.155202   15785 start.go:96] Skipping create...Using existing machine configuration
	I0318 04:20:48.155206   15785 fix.go:54] fixHost starting: 
	I0318 04:20:48.155326   15785 fix.go:112] recreateIfNeeded on functional-900000: state=Stopped err=<nil>
	W0318 04:20:48.155335   15785 fix.go:138] unexpected machine state, will restart: <nil>
	I0318 04:20:48.163466   15785 out.go:177] * Restarting existing qemu2 VM for "functional-900000" ...
	I0318 04:20:48.167512   15785 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 4000 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18429-15072/.minikube/machines/functional-900000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18429-15072/.minikube/machines/functional-900000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18429-15072/.minikube/machines/functional-900000/qemu.pid -device virtio-net-pci,netdev=net0,mac=f6:42:d1:c7:a8:b2 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18429-15072/.minikube/machines/functional-900000/disk.qcow2
	I0318 04:20:48.169668   15785 main.go:141] libmachine: STDOUT: 
	I0318 04:20:48.169694   15785 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0318 04:20:48.169726   15785 fix.go:56] duration metric: took 14.519167ms for fixHost
	I0318 04:20:48.169731   15785 start.go:83] releasing machines lock for "functional-900000", held for 14.5355ms
	W0318 04:20:48.169740   15785 start.go:713] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0318 04:20:48.169775   15785 out.go:239] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0318 04:20:48.169781   15785 start.go:728] Will try again in 5 seconds ...
	I0318 04:20:53.171913   15785 start.go:360] acquireMachinesLock for functional-900000: {Name:mkdef65b5c2b3344d8453e477bf2f170fbff3359 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0318 04:20:53.172337   15785 start.go:364] duration metric: took 305.5µs to acquireMachinesLock for "functional-900000"
	I0318 04:20:53.172465   15785 start.go:96] Skipping create...Using existing machine configuration
	I0318 04:20:53.172485   15785 fix.go:54] fixHost starting: 
	I0318 04:20:53.173420   15785 fix.go:112] recreateIfNeeded on functional-900000: state=Stopped err=<nil>
	W0318 04:20:53.173454   15785 fix.go:138] unexpected machine state, will restart: <nil>
	I0318 04:20:53.177843   15785 out.go:177] * Restarting existing qemu2 VM for "functional-900000" ...
	I0318 04:20:53.186045   15785 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 4000 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18429-15072/.minikube/machines/functional-900000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18429-15072/.minikube/machines/functional-900000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18429-15072/.minikube/machines/functional-900000/qemu.pid -device virtio-net-pci,netdev=net0,mac=f6:42:d1:c7:a8:b2 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18429-15072/.minikube/machines/functional-900000/disk.qcow2
	I0318 04:20:53.196034   15785 main.go:141] libmachine: STDOUT: 
	I0318 04:20:53.196110   15785 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0318 04:20:53.196203   15785 fix.go:56] duration metric: took 23.719917ms for fixHost
	I0318 04:20:53.196222   15785 start.go:83] releasing machines lock for "functional-900000", held for 23.863292ms
	W0318 04:20:53.196403   15785 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p functional-900000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p functional-900000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0318 04:20:53.203849   15785 out.go:177] 
	W0318 04:20:53.206919   15785 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0318 04:20:53.206943   15785 out.go:239] * 
	* 
	W0318 04:20:53.209287   15785 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0318 04:20:53.216812   15785 out.go:177] 

                                                
                                                
** /stderr **
functional_test.go:657: failed to soft start minikube. args "out/minikube-darwin-arm64 start -p functional-900000 --alsologtostderr -v=8": exit status 80
functional_test.go:659: soft start took 5.190214958s for "functional-900000" cluster.
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p functional-900000 -n functional-900000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p functional-900000 -n functional-900000: exit status 7 (66.822208ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "functional-900000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestFunctional/serial/SoftStart (5.26s)

                                                
                                    
TestFunctional/serial/KubeContext (0.06s)

                                                
                                                
=== RUN   TestFunctional/serial/KubeContext
functional_test.go:677: (dbg) Run:  kubectl config current-context
functional_test.go:677: (dbg) Non-zero exit: kubectl config current-context: exit status 1 (29.618708ms)

                                                
                                                
** stderr ** 
	error: current-context is not set

                                                
                                                
** /stderr **
functional_test.go:679: failed to get current-context. args "kubectl config current-context" : exit status 1
functional_test.go:683: expected current-context = "functional-900000", but got *""*
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p functional-900000 -n functional-900000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p functional-900000 -n functional-900000: exit status 7 (30.926ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "functional-900000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestFunctional/serial/KubeContext (0.06s)

                                                
                                    
TestFunctional/serial/KubectlGetPods (0.06s)

                                                
                                                
=== RUN   TestFunctional/serial/KubectlGetPods
functional_test.go:692: (dbg) Run:  kubectl --context functional-900000 get po -A
functional_test.go:692: (dbg) Non-zero exit: kubectl --context functional-900000 get po -A: exit status 1 (25.611ms)

                                                
                                                
** stderr ** 
	Error in configuration: context was not found for specified context: functional-900000

                                                
                                                
** /stderr **
functional_test.go:694: failed to get kubectl pods: args "kubectl --context functional-900000 get po -A" : exit status 1
functional_test.go:698: expected stderr to be empty but got *"Error in configuration: context was not found for specified context: functional-900000\n"*: args "kubectl --context functional-900000 get po -A"
functional_test.go:701: expected stdout to include *kube-system* but got *""*. args: "kubectl --context functional-900000 get po -A"
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p functional-900000 -n functional-900000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p functional-900000 -n functional-900000: exit status 7 (31.096709ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "functional-900000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestFunctional/serial/KubectlGetPods (0.06s)
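Note: these kubectl failures are downstream of the earlier start failure: functional-900000 never booted, so minikube never wrote a functional-900000 context into the kubeconfig the suite points at, and every --context lookup fails with "context was not found". A quick confirmation from the same environment might look like the sketch below (kubeconfig path taken from this log; the expectation that no functional-900000 entry appears is an inference, not something this report re-checked).

	# List contexts in the kubeconfig used by the suite; functional-900000 should be absent.
	KUBECONFIG=/Users/jenkins/minikube-integration/18429-15072/kubeconfig kubectl config get-contexts
	# Cross-check with minikube's own view of the profile (same invocation the test helpers use).
	out/minikube-darwin-arm64 status --format={{.Host}} -p functional-900000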

                                                
                                    
TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.05s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node
functional_test.go:1120: (dbg) Run:  out/minikube-darwin-arm64 -p functional-900000 ssh sudo crictl images
functional_test.go:1120: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-900000 ssh sudo crictl images: exit status 83 (45.897875ms)

                                                
                                                
-- stdout --
	* The control-plane node functional-900000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-900000"

                                                
                                                
-- /stdout --
functional_test.go:1122: failed to get images by "out/minikube-darwin-arm64 -p functional-900000 ssh sudo crictl images" ssh exit status 83
functional_test.go:1126: expected sha for pause:3.3 "3d18732f8686c" to be in the output but got *
-- stdout --
	* The control-plane node functional-900000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-900000"

                                                
                                                
-- /stdout --*
--- FAIL: TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.05s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/cache_reload (0.17s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/cache_reload
functional_test.go:1143: (dbg) Run:  out/minikube-darwin-arm64 -p functional-900000 ssh sudo docker rmi registry.k8s.io/pause:latest
functional_test.go:1143: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-900000 ssh sudo docker rmi registry.k8s.io/pause:latest: exit status 83 (44.641166ms)

                                                
                                                
-- stdout --
	* The control-plane node functional-900000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-900000"

                                                
                                                
-- /stdout --
functional_test.go:1146: failed to manually delete image "out/minikube-darwin-arm64 -p functional-900000 ssh sudo docker rmi registry.k8s.io/pause:latest" : exit status 83
functional_test.go:1149: (dbg) Run:  out/minikube-darwin-arm64 -p functional-900000 ssh sudo crictl inspecti registry.k8s.io/pause:latest
functional_test.go:1149: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-900000 ssh sudo crictl inspecti registry.k8s.io/pause:latest: exit status 83 (43.943583ms)

                                                
                                                
-- stdout --
	* The control-plane node functional-900000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-900000"

                                                
                                                
-- /stdout --
functional_test.go:1154: (dbg) Run:  out/minikube-darwin-arm64 -p functional-900000 cache reload
functional_test.go:1159: (dbg) Run:  out/minikube-darwin-arm64 -p functional-900000 ssh sudo crictl inspecti registry.k8s.io/pause:latest
functional_test.go:1159: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-900000 ssh sudo crictl inspecti registry.k8s.io/pause:latest: exit status 83 (44.81975ms)

                                                
                                                
-- stdout --
	* The control-plane node functional-900000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-900000"

                                                
                                                
-- /stdout --
functional_test.go:1161: expected "out/minikube-darwin-arm64 -p functional-900000 ssh sudo crictl inspecti registry.k8s.io/pause:latest" to run successfully but got error: exit status 83
--- FAIL: TestFunctional/serial/CacheCmd/cache/cache_reload (0.17s)

                                                
                                    
TestFunctional/serial/MinikubeKubectlCmd (0.56s)

                                                
                                                
=== RUN   TestFunctional/serial/MinikubeKubectlCmd
functional_test.go:712: (dbg) Run:  out/minikube-darwin-arm64 -p functional-900000 kubectl -- --context functional-900000 get pods
functional_test.go:712: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-900000 kubectl -- --context functional-900000 get pods: exit status 1 (523.150417ms)

                                                
                                                
** stderr ** 
	Error in configuration: 
	* context was not found for specified context: functional-900000
	* no server found for cluster "functional-900000"

                                                
                                                
** /stderr **
functional_test.go:715: failed to get pods. args "out/minikube-darwin-arm64 -p functional-900000 kubectl -- --context functional-900000 get pods": exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p functional-900000 -n functional-900000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p functional-900000 -n functional-900000: exit status 7 (34.099584ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "functional-900000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestFunctional/serial/MinikubeKubectlCmd (0.56s)

                                                
                                    
TestFunctional/serial/MinikubeKubectlCmdDirectly (0.72s)

                                                
                                                
=== RUN   TestFunctional/serial/MinikubeKubectlCmdDirectly
functional_test.go:737: (dbg) Run:  out/kubectl --context functional-900000 get pods
functional_test.go:737: (dbg) Non-zero exit: out/kubectl --context functional-900000 get pods: exit status 1 (686.301833ms)

                                                
                                                
** stderr ** 
	Error in configuration: 
	* context was not found for specified context: functional-900000
	* no server found for cluster "functional-900000"

                                                
                                                
** /stderr **
functional_test.go:740: failed to run kubectl directly. args "out/kubectl --context functional-900000 get pods": exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p functional-900000 -n functional-900000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p functional-900000 -n functional-900000: exit status 7 (31.868667ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "functional-900000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestFunctional/serial/MinikubeKubectlCmdDirectly (0.72s)
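
Both kubectl subtests fail for the same underlying reason: the cluster never started, so minikube never wrote a functional-900000 context into the kubeconfig (KUBECONFIG=/Users/jenkins/minikube-integration/18429-15072/kubeconfig in this run). As an illustrative check only, not something the test performs, listing the contexts kubectl can actually see makes the "context was not found" error unsurprising:

KUBECONFIG=/Users/jenkins/minikube-integration/18429-15072/kubeconfig kubectl config get-contexts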

                                                
                                    
TestFunctional/serial/ExtraConfig (5.27s)

                                                
                                                
=== RUN   TestFunctional/serial/ExtraConfig
functional_test.go:753: (dbg) Run:  out/minikube-darwin-arm64 start -p functional-900000 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
functional_test.go:753: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p functional-900000 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all: exit status 80 (5.200490375s)

                                                
                                                
-- stdout --
	* [functional-900000] minikube v1.32.0 on Darwin 14.3.1 (arm64)
	  - MINIKUBE_LOCATION=18429
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18429-15072/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18429-15072/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting "functional-900000" primary control-plane node in "functional-900000" cluster
	* Restarting existing qemu2 VM for "functional-900000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "functional-900000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p functional-900000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
functional_test.go:755: failed to restart minikube. args "out/minikube-darwin-arm64 start -p functional-900000 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all": exit status 80
functional_test.go:757: restart took 5.200960625s for "functional-900000" cluster.
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p functional-900000 -n functional-900000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p functional-900000 -n functional-900000: exit status 7 (72.620917ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "functional-900000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestFunctional/serial/ExtraConfig (5.27s)
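
Every restart attempt in this run dies at the same host-side step: /opt/socket_vmnet/bin/socket_vmnet_client cannot connect to /var/run/socket_vmnet (see the qemu invocation in the Last Start log below), which indicates the socket_vmnet daemon is not running on the Jenkins agent. The commands below are a hedged sketch of the usual recovery, based on the socket_vmnet README rather than anything in this report; the exact install path, gateway address, and service management (launchd vs. brew services) depend on how socket_vmnet was set up on the machine:

# check whether the unix socket the qemu2 driver expects exists at all
ls -l /var/run/socket_vmnet

# start the daemon manually (root required; flags/paths are the README defaults, adjust as needed)
sudo /opt/socket_vmnet/bin/socket_vmnet --vmnet-gateway=192.168.105.1 /var/run/socket_vmnet &

# then retry the failing start
out/minikube-darwin-arm64 start -p functional-900000 --wait=all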

                                                
                                    
TestFunctional/serial/ComponentHealth (0.06s)

                                                
                                                
=== RUN   TestFunctional/serial/ComponentHealth
functional_test.go:806: (dbg) Run:  kubectl --context functional-900000 get po -l tier=control-plane -n kube-system -o=json
functional_test.go:806: (dbg) Non-zero exit: kubectl --context functional-900000 get po -l tier=control-plane -n kube-system -o=json: exit status 1 (29.658625ms)

                                                
                                                
** stderr ** 
	error: context "functional-900000" does not exist

                                                
                                                
** /stderr **
functional_test.go:808: failed to get components. args "kubectl --context functional-900000 get po -l tier=control-plane -n kube-system -o=json": exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p functional-900000 -n functional-900000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p functional-900000 -n functional-900000: exit status 7 (32.353125ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "functional-900000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestFunctional/serial/ComponentHealth (0.06s)
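
ComponentHealth only looks at the kube-system pods labelled tier=control-plane, so with no cluster behind the context the kubectl call itself is what fails. Against a running cluster, a spot check in the spirit of what the JSON output is parsed for might look like the following (illustrative jsonpath, not taken from the test code):

kubectl --context functional-900000 -n kube-system get pods -l tier=control-plane \
  -o jsonpath='{range .items[*]}{.metadata.name}{"\t"}{.status.phase}{"\n"}{end}'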

                                                
                                    
TestFunctional/serial/LogsCmd (0.08s)

                                                
                                                
=== RUN   TestFunctional/serial/LogsCmd
functional_test.go:1232: (dbg) Run:  out/minikube-darwin-arm64 -p functional-900000 logs
functional_test.go:1232: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-900000 logs: exit status 83 (77.824917ms)

                                                
                                                
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| Command |                                   Args                                   |       Profile        |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| start   | -o=json --download-only                                                  | download-only-382000 | jenkins | v1.32.0 | 18 Mar 24 04:18 PDT |                     |
	|         | -p download-only-382000                                                  |                      |         |         |                     |                     |
	|         | --force --alsologtostderr                                                |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0                                             |                      |         |         |                     |                     |
	|         | --container-runtime=docker                                               |                      |         |         |                     |                     |
	|         | --driver=qemu2                                                           |                      |         |         |                     |                     |
	| delete  | --all                                                                    | minikube             | jenkins | v1.32.0 | 18 Mar 24 04:19 PDT | 18 Mar 24 04:19 PDT |
	| delete  | -p download-only-382000                                                  | download-only-382000 | jenkins | v1.32.0 | 18 Mar 24 04:19 PDT | 18 Mar 24 04:19 PDT |
	| start   | -o=json --download-only                                                  | download-only-509000 | jenkins | v1.32.0 | 18 Mar 24 04:19 PDT |                     |
	|         | -p download-only-509000                                                  |                      |         |         |                     |                     |
	|         | --force --alsologtostderr                                                |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.4                                             |                      |         |         |                     |                     |
	|         | --container-runtime=docker                                               |                      |         |         |                     |                     |
	|         | --driver=qemu2                                                           |                      |         |         |                     |                     |
	| delete  | --all                                                                    | minikube             | jenkins | v1.32.0 | 18 Mar 24 04:19 PDT | 18 Mar 24 04:19 PDT |
	| delete  | -p download-only-509000                                                  | download-only-509000 | jenkins | v1.32.0 | 18 Mar 24 04:19 PDT | 18 Mar 24 04:19 PDT |
	| start   | -o=json --download-only                                                  | download-only-180000 | jenkins | v1.32.0 | 18 Mar 24 04:19 PDT |                     |
	|         | -p download-only-180000                                                  |                      |         |         |                     |                     |
	|         | --force --alsologtostderr                                                |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.29.0-rc.2                                        |                      |         |         |                     |                     |
	|         | --container-runtime=docker                                               |                      |         |         |                     |                     |
	|         | --driver=qemu2                                                           |                      |         |         |                     |                     |
	| delete  | --all                                                                    | minikube             | jenkins | v1.32.0 | 18 Mar 24 04:20 PDT | 18 Mar 24 04:20 PDT |
	| delete  | -p download-only-180000                                                  | download-only-180000 | jenkins | v1.32.0 | 18 Mar 24 04:20 PDT | 18 Mar 24 04:20 PDT |
	| delete  | -p download-only-382000                                                  | download-only-382000 | jenkins | v1.32.0 | 18 Mar 24 04:20 PDT | 18 Mar 24 04:20 PDT |
	| delete  | -p download-only-509000                                                  | download-only-509000 | jenkins | v1.32.0 | 18 Mar 24 04:20 PDT | 18 Mar 24 04:20 PDT |
	| delete  | -p download-only-180000                                                  | download-only-180000 | jenkins | v1.32.0 | 18 Mar 24 04:20 PDT | 18 Mar 24 04:20 PDT |
	| start   | --download-only -p                                                       | binary-mirror-986000 | jenkins | v1.32.0 | 18 Mar 24 04:20 PDT |                     |
	|         | binary-mirror-986000                                                     |                      |         |         |                     |                     |
	|         | --alsologtostderr                                                        |                      |         |         |                     |                     |
	|         | --binary-mirror                                                          |                      |         |         |                     |                     |
	|         | http://127.0.0.1:53083                                                   |                      |         |         |                     |                     |
	|         | --driver=qemu2                                                           |                      |         |         |                     |                     |
	| delete  | -p binary-mirror-986000                                                  | binary-mirror-986000 | jenkins | v1.32.0 | 18 Mar 24 04:20 PDT | 18 Mar 24 04:20 PDT |
	| addons  | enable dashboard -p                                                      | addons-118000        | jenkins | v1.32.0 | 18 Mar 24 04:20 PDT |                     |
	|         | addons-118000                                                            |                      |         |         |                     |                     |
	| addons  | disable dashboard -p                                                     | addons-118000        | jenkins | v1.32.0 | 18 Mar 24 04:20 PDT |                     |
	|         | addons-118000                                                            |                      |         |         |                     |                     |
	| start   | -p addons-118000 --wait=true                                             | addons-118000        | jenkins | v1.32.0 | 18 Mar 24 04:20 PDT |                     |
	|         | --memory=4000 --alsologtostderr                                          |                      |         |         |                     |                     |
	|         | --addons=registry                                                        |                      |         |         |                     |                     |
	|         | --addons=metrics-server                                                  |                      |         |         |                     |                     |
	|         | --addons=volumesnapshots                                                 |                      |         |         |                     |                     |
	|         | --addons=csi-hostpath-driver                                             |                      |         |         |                     |                     |
	|         | --addons=gcp-auth                                                        |                      |         |         |                     |                     |
	|         | --addons=cloud-spanner                                                   |                      |         |         |                     |                     |
	|         | --addons=inspektor-gadget                                                |                      |         |         |                     |                     |
	|         | --addons=storage-provisioner-rancher                                     |                      |         |         |                     |                     |
	|         | --addons=nvidia-device-plugin                                            |                      |         |         |                     |                     |
	|         | --addons=yakd --driver=qemu2                                             |                      |         |         |                     |                     |
	|         |  --addons=ingress                                                        |                      |         |         |                     |                     |
	|         | --addons=ingress-dns                                                     |                      |         |         |                     |                     |
	| delete  | -p addons-118000                                                         | addons-118000        | jenkins | v1.32.0 | 18 Mar 24 04:20 PDT | 18 Mar 24 04:20 PDT |
	| start   | -p nospam-742000 -n=1 --memory=2250 --wait=false                         | nospam-742000        | jenkins | v1.32.0 | 18 Mar 24 04:20 PDT |                     |
	|         | --log_dir=/var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-742000 |                      |         |         |                     |                     |
	|         | --driver=qemu2                                                           |                      |         |         |                     |                     |
	| start   | nospam-742000 --log_dir                                                  | nospam-742000        | jenkins | v1.32.0 | 18 Mar 24 04:20 PDT |                     |
	|         | /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-742000           |                      |         |         |                     |                     |
	|         | start --dry-run                                                          |                      |         |         |                     |                     |
	| start   | nospam-742000 --log_dir                                                  | nospam-742000        | jenkins | v1.32.0 | 18 Mar 24 04:20 PDT |                     |
	|         | /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-742000           |                      |         |         |                     |                     |
	|         | start --dry-run                                                          |                      |         |         |                     |                     |
	| start   | nospam-742000 --log_dir                                                  | nospam-742000        | jenkins | v1.32.0 | 18 Mar 24 04:20 PDT |                     |
	|         | /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-742000           |                      |         |         |                     |                     |
	|         | start --dry-run                                                          |                      |         |         |                     |                     |
	| pause   | nospam-742000 --log_dir                                                  | nospam-742000        | jenkins | v1.32.0 | 18 Mar 24 04:20 PDT |                     |
	|         | /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-742000           |                      |         |         |                     |                     |
	|         | pause                                                                    |                      |         |         |                     |                     |
	| pause   | nospam-742000 --log_dir                                                  | nospam-742000        | jenkins | v1.32.0 | 18 Mar 24 04:20 PDT |                     |
	|         | /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-742000           |                      |         |         |                     |                     |
	|         | pause                                                                    |                      |         |         |                     |                     |
	| pause   | nospam-742000 --log_dir                                                  | nospam-742000        | jenkins | v1.32.0 | 18 Mar 24 04:20 PDT |                     |
	|         | /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-742000           |                      |         |         |                     |                     |
	|         | pause                                                                    |                      |         |         |                     |                     |
	| unpause | nospam-742000 --log_dir                                                  | nospam-742000        | jenkins | v1.32.0 | 18 Mar 24 04:20 PDT |                     |
	|         | /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-742000           |                      |         |         |                     |                     |
	|         | unpause                                                                  |                      |         |         |                     |                     |
	| unpause | nospam-742000 --log_dir                                                  | nospam-742000        | jenkins | v1.32.0 | 18 Mar 24 04:20 PDT |                     |
	|         | /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-742000           |                      |         |         |                     |                     |
	|         | unpause                                                                  |                      |         |         |                     |                     |
	| unpause | nospam-742000 --log_dir                                                  | nospam-742000        | jenkins | v1.32.0 | 18 Mar 24 04:20 PDT |                     |
	|         | /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-742000           |                      |         |         |                     |                     |
	|         | unpause                                                                  |                      |         |         |                     |                     |
	| stop    | nospam-742000 --log_dir                                                  | nospam-742000        | jenkins | v1.32.0 | 18 Mar 24 04:20 PDT | 18 Mar 24 04:20 PDT |
	|         | /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-742000           |                      |         |         |                     |                     |
	|         | stop                                                                     |                      |         |         |                     |                     |
	| stop    | nospam-742000 --log_dir                                                  | nospam-742000        | jenkins | v1.32.0 | 18 Mar 24 04:20 PDT | 18 Mar 24 04:20 PDT |
	|         | /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-742000           |                      |         |         |                     |                     |
	|         | stop                                                                     |                      |         |         |                     |                     |
	| stop    | nospam-742000 --log_dir                                                  | nospam-742000        | jenkins | v1.32.0 | 18 Mar 24 04:20 PDT | 18 Mar 24 04:20 PDT |
	|         | /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-742000           |                      |         |         |                     |                     |
	|         | stop                                                                     |                      |         |         |                     |                     |
	| delete  | -p nospam-742000                                                         | nospam-742000        | jenkins | v1.32.0 | 18 Mar 24 04:20 PDT | 18 Mar 24 04:20 PDT |
	| start   | -p functional-900000                                                     | functional-900000    | jenkins | v1.32.0 | 18 Mar 24 04:20 PDT |                     |
	|         | --memory=4000                                                            |                      |         |         |                     |                     |
	|         | --apiserver-port=8441                                                    |                      |         |         |                     |                     |
	|         | --wait=all --driver=qemu2                                                |                      |         |         |                     |                     |
	| start   | -p functional-900000                                                     | functional-900000    | jenkins | v1.32.0 | 18 Mar 24 04:20 PDT |                     |
	|         | --alsologtostderr -v=8                                                   |                      |         |         |                     |                     |
	| cache   | functional-900000 cache add                                              | functional-900000    | jenkins | v1.32.0 | 18 Mar 24 04:20 PDT | 18 Mar 24 04:20 PDT |
	|         | registry.k8s.io/pause:3.1                                                |                      |         |         |                     |                     |
	| cache   | functional-900000 cache add                                              | functional-900000    | jenkins | v1.32.0 | 18 Mar 24 04:20 PDT | 18 Mar 24 04:20 PDT |
	|         | registry.k8s.io/pause:3.3                                                |                      |         |         |                     |                     |
	| cache   | functional-900000 cache add                                              | functional-900000    | jenkins | v1.32.0 | 18 Mar 24 04:20 PDT | 18 Mar 24 04:20 PDT |
	|         | registry.k8s.io/pause:latest                                             |                      |         |         |                     |                     |
	| cache   | functional-900000 cache add                                              | functional-900000    | jenkins | v1.32.0 | 18 Mar 24 04:20 PDT | 18 Mar 24 04:21 PDT |
	|         | minikube-local-cache-test:functional-900000                              |                      |         |         |                     |                     |
	| cache   | functional-900000 cache delete                                           | functional-900000    | jenkins | v1.32.0 | 18 Mar 24 04:21 PDT | 18 Mar 24 04:21 PDT |
	|         | minikube-local-cache-test:functional-900000                              |                      |         |         |                     |                     |
	| cache   | delete                                                                   | minikube             | jenkins | v1.32.0 | 18 Mar 24 04:21 PDT | 18 Mar 24 04:21 PDT |
	|         | registry.k8s.io/pause:3.3                                                |                      |         |         |                     |                     |
	| cache   | list                                                                     | minikube             | jenkins | v1.32.0 | 18 Mar 24 04:21 PDT | 18 Mar 24 04:21 PDT |
	| ssh     | functional-900000 ssh sudo                                               | functional-900000    | jenkins | v1.32.0 | 18 Mar 24 04:21 PDT |                     |
	|         | crictl images                                                            |                      |         |         |                     |                     |
	| ssh     | functional-900000                                                        | functional-900000    | jenkins | v1.32.0 | 18 Mar 24 04:21 PDT |                     |
	|         | ssh sudo docker rmi                                                      |                      |         |         |                     |                     |
	|         | registry.k8s.io/pause:latest                                             |                      |         |         |                     |                     |
	| ssh     | functional-900000 ssh                                                    | functional-900000    | jenkins | v1.32.0 | 18 Mar 24 04:21 PDT |                     |
	|         | sudo crictl inspecti                                                     |                      |         |         |                     |                     |
	|         | registry.k8s.io/pause:latest                                             |                      |         |         |                     |                     |
	| cache   | functional-900000 cache reload                                           | functional-900000    | jenkins | v1.32.0 | 18 Mar 24 04:21 PDT | 18 Mar 24 04:21 PDT |
	| ssh     | functional-900000 ssh                                                    | functional-900000    | jenkins | v1.32.0 | 18 Mar 24 04:21 PDT |                     |
	|         | sudo crictl inspecti                                                     |                      |         |         |                     |                     |
	|         | registry.k8s.io/pause:latest                                             |                      |         |         |                     |                     |
	| cache   | delete                                                                   | minikube             | jenkins | v1.32.0 | 18 Mar 24 04:21 PDT | 18 Mar 24 04:21 PDT |
	|         | registry.k8s.io/pause:3.1                                                |                      |         |         |                     |                     |
	| cache   | delete                                                                   | minikube             | jenkins | v1.32.0 | 18 Mar 24 04:21 PDT | 18 Mar 24 04:21 PDT |
	|         | registry.k8s.io/pause:latest                                             |                      |         |         |                     |                     |
	| kubectl | functional-900000 kubectl --                                             | functional-900000    | jenkins | v1.32.0 | 18 Mar 24 04:21 PDT |                     |
	|         | --context functional-900000                                              |                      |         |         |                     |                     |
	|         | get pods                                                                 |                      |         |         |                     |                     |
	| start   | -p functional-900000                                                     | functional-900000    | jenkins | v1.32.0 | 18 Mar 24 04:21 PDT |                     |
	|         | --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision |                      |         |         |                     |                     |
	|         | --wait=all                                                               |                      |         |         |                     |                     |
	|---------|--------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/03/18 04:21:02
	Running on machine: MacOS-M1-Agent-2
	Binary: Built with gc go1.22.1 for darwin/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0318 04:21:02.333139   15870 out.go:291] Setting OutFile to fd 1 ...
	I0318 04:21:02.333283   15870 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0318 04:21:02.333285   15870 out.go:304] Setting ErrFile to fd 2...
	I0318 04:21:02.333287   15870 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0318 04:21:02.333413   15870 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18429-15072/.minikube/bin
	I0318 04:21:02.334437   15870 out.go:298] Setting JSON to false
	I0318 04:21:02.350372   15870 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":8435,"bootTime":1710752427,"procs":474,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.3.1","kernelVersion":"23.3.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0318 04:21:02.350432   15870 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0318 04:21:02.355322   15870 out.go:177] * [functional-900000] minikube v1.32.0 on Darwin 14.3.1 (arm64)
	I0318 04:21:02.365327   15870 out.go:177]   - MINIKUBE_LOCATION=18429
	I0318 04:21:02.365368   15870 notify.go:220] Checking for updates...
	I0318 04:21:02.368404   15870 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/18429-15072/kubeconfig
	I0318 04:21:02.371323   15870 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0318 04:21:02.375265   15870 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0318 04:21:02.378387   15870 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18429-15072/.minikube
	I0318 04:21:02.381282   15870 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0318 04:21:02.384722   15870 config.go:182] Loaded profile config "functional-900000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I0318 04:21:02.384770   15870 driver.go:392] Setting default libvirt URI to qemu:///system
	I0318 04:21:02.389301   15870 out.go:177] * Using the qemu2 driver based on existing profile
	I0318 04:21:02.396299   15870 start.go:297] selected driver: qemu2
	I0318 04:21:02.396302   15870 start.go:901] validating driver "qemu2" against &{Name:functional-900000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17991/minikube-v1.32.1-1710520390-17991-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:4000 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kub
ernetesVersion:v1.28.4 ClusterName:functional-900000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8441 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[
] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0318 04:21:02.396354   15870 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0318 04:21:02.398629   15870 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0318 04:21:02.398682   15870 cni.go:84] Creating CNI manager for ""
	I0318 04:21:02.398689   15870 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0318 04:21:02.398739   15870 start.go:340] cluster config:
	{Name:functional-900000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17991/minikube-v1.32.1-1710520390-17991-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:4000 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:functional-900000 Namespace:default APIServe
rHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8441 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[
] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0318 04:21:02.403123   15870 iso.go:125] acquiring lock: {Name:mkb8143674083e0c7a46a3ed751b3800392bcd24 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0318 04:21:02.411301   15870 out.go:177] * Starting "functional-900000" primary control-plane node in "functional-900000" cluster
	I0318 04:21:02.415281   15870 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime docker
	I0318 04:21:02.415296   15870 preload.go:147] Found local preload: /Users/jenkins/minikube-integration/18429-15072/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-docker-overlay2-arm64.tar.lz4
	I0318 04:21:02.415304   15870 cache.go:56] Caching tarball of preloaded images
	I0318 04:21:02.415369   15870 preload.go:173] Found /Users/jenkins/minikube-integration/18429-15072/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0318 04:21:02.415375   15870 cache.go:59] Finished verifying existence of preloaded tar for v1.28.4 on docker
	I0318 04:21:02.415441   15870 profile.go:142] Saving config to /Users/jenkins/minikube-integration/18429-15072/.minikube/profiles/functional-900000/config.json ...
	I0318 04:21:02.415895   15870 start.go:360] acquireMachinesLock for functional-900000: {Name:mkdef65b5c2b3344d8453e477bf2f170fbff3359 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0318 04:21:02.415928   15870 start.go:364] duration metric: took 28.125µs to acquireMachinesLock for "functional-900000"
	I0318 04:21:02.415936   15870 start.go:96] Skipping create...Using existing machine configuration
	I0318 04:21:02.415939   15870 fix.go:54] fixHost starting: 
	I0318 04:21:02.416060   15870 fix.go:112] recreateIfNeeded on functional-900000: state=Stopped err=<nil>
	W0318 04:21:02.416067   15870 fix.go:138] unexpected machine state, will restart: <nil>
	I0318 04:21:02.423276   15870 out.go:177] * Restarting existing qemu2 VM for "functional-900000" ...
	I0318 04:21:02.427387   15870 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 4000 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18429-15072/.minikube/machines/functional-900000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18429-15072/.minikube/machines/functional-900000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18429-15072/.minikube/machines/functional-900000/qemu.pid -device virtio-net-pci,netdev=net0,mac=f6:42:d1:c7:a8:b2 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18429-15072/.minikube/machines/functional-900000/disk.qcow2
	I0318 04:21:02.429646   15870 main.go:141] libmachine: STDOUT: 
	I0318 04:21:02.429670   15870 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0318 04:21:02.429701   15870 fix.go:56] duration metric: took 13.762ms for fixHost
	I0318 04:21:02.429704   15870 start.go:83] releasing machines lock for "functional-900000", held for 13.773542ms
	W0318 04:21:02.429713   15870 start.go:713] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0318 04:21:02.429748   15870 out.go:239] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0318 04:21:02.429753   15870 start.go:728] Will try again in 5 seconds ...
	I0318 04:21:07.431803   15870 start.go:360] acquireMachinesLock for functional-900000: {Name:mkdef65b5c2b3344d8453e477bf2f170fbff3359 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0318 04:21:07.432228   15870 start.go:364] duration metric: took 357.291µs to acquireMachinesLock for "functional-900000"
	I0318 04:21:07.432410   15870 start.go:96] Skipping create...Using existing machine configuration
	I0318 04:21:07.432426   15870 fix.go:54] fixHost starting: 
	I0318 04:21:07.433214   15870 fix.go:112] recreateIfNeeded on functional-900000: state=Stopped err=<nil>
	W0318 04:21:07.433231   15870 fix.go:138] unexpected machine state, will restart: <nil>
	I0318 04:21:07.437592   15870 out.go:177] * Restarting existing qemu2 VM for "functional-900000" ...
	I0318 04:21:07.445697   15870 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 4000 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18429-15072/.minikube/machines/functional-900000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18429-15072/.minikube/machines/functional-900000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18429-15072/.minikube/machines/functional-900000/qemu.pid -device virtio-net-pci,netdev=net0,mac=f6:42:d1:c7:a8:b2 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18429-15072/.minikube/machines/functional-900000/disk.qcow2
	I0318 04:21:07.455954   15870 main.go:141] libmachine: STDOUT: 
	I0318 04:21:07.455999   15870 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0318 04:21:07.456069   15870 fix.go:56] duration metric: took 23.646917ms for fixHost
	I0318 04:21:07.456079   15870 start.go:83] releasing machines lock for "functional-900000", held for 23.832042ms
	W0318 04:21:07.456233   15870 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p functional-900000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0318 04:21:07.464580   15870 out.go:177] 
	W0318 04:21:07.480665   15870 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0318 04:21:07.480712   15870 out.go:239] * 
	W0318 04:21:07.483339   15870 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0318 04:21:07.491576   15870 out.go:177] 
	
	
	* The control-plane node functional-900000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-900000"

                                                
                                                
-- /stdout --
functional_test.go:1234: out/minikube-darwin-arm64 -p functional-900000 logs failed: exit status 83
functional_test.go:1224: expected minikube logs to include word: -"Linux"- but got 
***
==> Audit <==
|---------|--------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
| Command |                                   Args                                   |       Profile        |  User   | Version |     Start Time      |      End Time       |
|---------|--------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
| start   | -o=json --download-only                                                  | download-only-382000 | jenkins | v1.32.0 | 18 Mar 24 04:18 PDT |                     |
|         | -p download-only-382000                                                  |                      |         |         |                     |                     |
|         | --force --alsologtostderr                                                |                      |         |         |                     |                     |
|         | --kubernetes-version=v1.20.0                                             |                      |         |         |                     |                     |
|         | --container-runtime=docker                                               |                      |         |         |                     |                     |
|         | --driver=qemu2                                                           |                      |         |         |                     |                     |
| delete  | --all                                                                    | minikube             | jenkins | v1.32.0 | 18 Mar 24 04:19 PDT | 18 Mar 24 04:19 PDT |
| delete  | -p download-only-382000                                                  | download-only-382000 | jenkins | v1.32.0 | 18 Mar 24 04:19 PDT | 18 Mar 24 04:19 PDT |
| start   | -o=json --download-only                                                  | download-only-509000 | jenkins | v1.32.0 | 18 Mar 24 04:19 PDT |                     |
|         | -p download-only-509000                                                  |                      |         |         |                     |                     |
|         | --force --alsologtostderr                                                |                      |         |         |                     |                     |
|         | --kubernetes-version=v1.28.4                                             |                      |         |         |                     |                     |
|         | --container-runtime=docker                                               |                      |         |         |                     |                     |
|         | --driver=qemu2                                                           |                      |         |         |                     |                     |
| delete  | --all                                                                    | minikube             | jenkins | v1.32.0 | 18 Mar 24 04:19 PDT | 18 Mar 24 04:19 PDT |
| delete  | -p download-only-509000                                                  | download-only-509000 | jenkins | v1.32.0 | 18 Mar 24 04:19 PDT | 18 Mar 24 04:19 PDT |
| start   | -o=json --download-only                                                  | download-only-180000 | jenkins | v1.32.0 | 18 Mar 24 04:19 PDT |                     |
|         | -p download-only-180000                                                  |                      |         |         |                     |                     |
|         | --force --alsologtostderr                                                |                      |         |         |                     |                     |
|         | --kubernetes-version=v1.29.0-rc.2                                        |                      |         |         |                     |                     |
|         | --container-runtime=docker                                               |                      |         |         |                     |                     |
|         | --driver=qemu2                                                           |                      |         |         |                     |                     |
| delete  | --all                                                                    | minikube             | jenkins | v1.32.0 | 18 Mar 24 04:20 PDT | 18 Mar 24 04:20 PDT |
| delete  | -p download-only-180000                                                  | download-only-180000 | jenkins | v1.32.0 | 18 Mar 24 04:20 PDT | 18 Mar 24 04:20 PDT |
| delete  | -p download-only-382000                                                  | download-only-382000 | jenkins | v1.32.0 | 18 Mar 24 04:20 PDT | 18 Mar 24 04:20 PDT |
| delete  | -p download-only-509000                                                  | download-only-509000 | jenkins | v1.32.0 | 18 Mar 24 04:20 PDT | 18 Mar 24 04:20 PDT |
| delete  | -p download-only-180000                                                  | download-only-180000 | jenkins | v1.32.0 | 18 Mar 24 04:20 PDT | 18 Mar 24 04:20 PDT |
| start   | --download-only -p                                                       | binary-mirror-986000 | jenkins | v1.32.0 | 18 Mar 24 04:20 PDT |                     |
|         | binary-mirror-986000                                                     |                      |         |         |                     |                     |
|         | --alsologtostderr                                                        |                      |         |         |                     |                     |
|         | --binary-mirror                                                          |                      |         |         |                     |                     |
|         | http://127.0.0.1:53083                                                   |                      |         |         |                     |                     |
|         | --driver=qemu2                                                           |                      |         |         |                     |                     |
| delete  | -p binary-mirror-986000                                                  | binary-mirror-986000 | jenkins | v1.32.0 | 18 Mar 24 04:20 PDT | 18 Mar 24 04:20 PDT |
| addons  | enable dashboard -p                                                      | addons-118000        | jenkins | v1.32.0 | 18 Mar 24 04:20 PDT |                     |
|         | addons-118000                                                            |                      |         |         |                     |                     |
| addons  | disable dashboard -p                                                     | addons-118000        | jenkins | v1.32.0 | 18 Mar 24 04:20 PDT |                     |
|         | addons-118000                                                            |                      |         |         |                     |                     |
| start   | -p addons-118000 --wait=true                                             | addons-118000        | jenkins | v1.32.0 | 18 Mar 24 04:20 PDT |                     |
|         | --memory=4000 --alsologtostderr                                          |                      |         |         |                     |                     |
|         | --addons=registry                                                        |                      |         |         |                     |                     |
|         | --addons=metrics-server                                                  |                      |         |         |                     |                     |
|         | --addons=volumesnapshots                                                 |                      |         |         |                     |                     |
|         | --addons=csi-hostpath-driver                                             |                      |         |         |                     |                     |
|         | --addons=gcp-auth                                                        |                      |         |         |                     |                     |
|         | --addons=cloud-spanner                                                   |                      |         |         |                     |                     |
|         | --addons=inspektor-gadget                                                |                      |         |         |                     |                     |
|         | --addons=storage-provisioner-rancher                                     |                      |         |         |                     |                     |
|         | --addons=nvidia-device-plugin                                            |                      |         |         |                     |                     |
|         | --addons=yakd --driver=qemu2                                             |                      |         |         |                     |                     |
|         |  --addons=ingress                                                        |                      |         |         |                     |                     |
|         | --addons=ingress-dns                                                     |                      |         |         |                     |                     |
| delete  | -p addons-118000                                                         | addons-118000        | jenkins | v1.32.0 | 18 Mar 24 04:20 PDT | 18 Mar 24 04:20 PDT |
| start   | -p nospam-742000 -n=1 --memory=2250 --wait=false                         | nospam-742000        | jenkins | v1.32.0 | 18 Mar 24 04:20 PDT |                     |
|         | --log_dir=/var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-742000 |                      |         |         |                     |                     |
|         | --driver=qemu2                                                           |                      |         |         |                     |                     |
| start   | nospam-742000 --log_dir                                                  | nospam-742000        | jenkins | v1.32.0 | 18 Mar 24 04:20 PDT |                     |
|         | /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-742000           |                      |         |         |                     |                     |
|         | start --dry-run                                                          |                      |         |         |                     |                     |
| start   | nospam-742000 --log_dir                                                  | nospam-742000        | jenkins | v1.32.0 | 18 Mar 24 04:20 PDT |                     |
|         | /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-742000           |                      |         |         |                     |                     |
|         | start --dry-run                                                          |                      |         |         |                     |                     |
| start   | nospam-742000 --log_dir                                                  | nospam-742000        | jenkins | v1.32.0 | 18 Mar 24 04:20 PDT |                     |
|         | /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-742000           |                      |         |         |                     |                     |
|         | start --dry-run                                                          |                      |         |         |                     |                     |
| pause   | nospam-742000 --log_dir                                                  | nospam-742000        | jenkins | v1.32.0 | 18 Mar 24 04:20 PDT |                     |
|         | /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-742000           |                      |         |         |                     |                     |
|         | pause                                                                    |                      |         |         |                     |                     |
| pause   | nospam-742000 --log_dir                                                  | nospam-742000        | jenkins | v1.32.0 | 18 Mar 24 04:20 PDT |                     |
|         | /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-742000           |                      |         |         |                     |                     |
|         | pause                                                                    |                      |         |         |                     |                     |
| pause   | nospam-742000 --log_dir                                                  | nospam-742000        | jenkins | v1.32.0 | 18 Mar 24 04:20 PDT |                     |
|         | /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-742000           |                      |         |         |                     |                     |
|         | pause                                                                    |                      |         |         |                     |                     |
| unpause | nospam-742000 --log_dir                                                  | nospam-742000        | jenkins | v1.32.0 | 18 Mar 24 04:20 PDT |                     |
|         | /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-742000           |                      |         |         |                     |                     |
|         | unpause                                                                  |                      |         |         |                     |                     |
| unpause | nospam-742000 --log_dir                                                  | nospam-742000        | jenkins | v1.32.0 | 18 Mar 24 04:20 PDT |                     |
|         | /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-742000           |                      |         |         |                     |                     |
|         | unpause                                                                  |                      |         |         |                     |                     |
| unpause | nospam-742000 --log_dir                                                  | nospam-742000        | jenkins | v1.32.0 | 18 Mar 24 04:20 PDT |                     |
|         | /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-742000           |                      |         |         |                     |                     |
|         | unpause                                                                  |                      |         |         |                     |                     |
| stop    | nospam-742000 --log_dir                                                  | nospam-742000        | jenkins | v1.32.0 | 18 Mar 24 04:20 PDT | 18 Mar 24 04:20 PDT |
|         | /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-742000           |                      |         |         |                     |                     |
|         | stop                                                                     |                      |         |         |                     |                     |
| stop    | nospam-742000 --log_dir                                                  | nospam-742000        | jenkins | v1.32.0 | 18 Mar 24 04:20 PDT | 18 Mar 24 04:20 PDT |
|         | /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-742000           |                      |         |         |                     |                     |
|         | stop                                                                     |                      |         |         |                     |                     |
| stop    | nospam-742000 --log_dir                                                  | nospam-742000        | jenkins | v1.32.0 | 18 Mar 24 04:20 PDT | 18 Mar 24 04:20 PDT |
|         | /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-742000           |                      |         |         |                     |                     |
|         | stop                                                                     |                      |         |         |                     |                     |
| delete  | -p nospam-742000                                                         | nospam-742000        | jenkins | v1.32.0 | 18 Mar 24 04:20 PDT | 18 Mar 24 04:20 PDT |
| start   | -p functional-900000                                                     | functional-900000    | jenkins | v1.32.0 | 18 Mar 24 04:20 PDT |                     |
|         | --memory=4000                                                            |                      |         |         |                     |                     |
|         | --apiserver-port=8441                                                    |                      |         |         |                     |                     |
|         | --wait=all --driver=qemu2                                                |                      |         |         |                     |                     |
| start   | -p functional-900000                                                     | functional-900000    | jenkins | v1.32.0 | 18 Mar 24 04:20 PDT |                     |
|         | --alsologtostderr -v=8                                                   |                      |         |         |                     |                     |
| cache   | functional-900000 cache add                                              | functional-900000    | jenkins | v1.32.0 | 18 Mar 24 04:20 PDT | 18 Mar 24 04:20 PDT |
|         | registry.k8s.io/pause:3.1                                                |                      |         |         |                     |                     |
| cache   | functional-900000 cache add                                              | functional-900000    | jenkins | v1.32.0 | 18 Mar 24 04:20 PDT | 18 Mar 24 04:20 PDT |
|         | registry.k8s.io/pause:3.3                                                |                      |         |         |                     |                     |
| cache   | functional-900000 cache add                                              | functional-900000    | jenkins | v1.32.0 | 18 Mar 24 04:20 PDT | 18 Mar 24 04:20 PDT |
|         | registry.k8s.io/pause:latest                                             |                      |         |         |                     |                     |
| cache   | functional-900000 cache add                                              | functional-900000    | jenkins | v1.32.0 | 18 Mar 24 04:20 PDT | 18 Mar 24 04:21 PDT |
|         | minikube-local-cache-test:functional-900000                              |                      |         |         |                     |                     |
| cache   | functional-900000 cache delete                                           | functional-900000    | jenkins | v1.32.0 | 18 Mar 24 04:21 PDT | 18 Mar 24 04:21 PDT |
|         | minikube-local-cache-test:functional-900000                              |                      |         |         |                     |                     |
| cache   | delete                                                                   | minikube             | jenkins | v1.32.0 | 18 Mar 24 04:21 PDT | 18 Mar 24 04:21 PDT |
|         | registry.k8s.io/pause:3.3                                                |                      |         |         |                     |                     |
| cache   | list                                                                     | minikube             | jenkins | v1.32.0 | 18 Mar 24 04:21 PDT | 18 Mar 24 04:21 PDT |
| ssh     | functional-900000 ssh sudo                                               | functional-900000    | jenkins | v1.32.0 | 18 Mar 24 04:21 PDT |                     |
|         | crictl images                                                            |                      |         |         |                     |                     |
| ssh     | functional-900000                                                        | functional-900000    | jenkins | v1.32.0 | 18 Mar 24 04:21 PDT |                     |
|         | ssh sudo docker rmi                                                      |                      |         |         |                     |                     |
|         | registry.k8s.io/pause:latest                                             |                      |         |         |                     |                     |
| ssh     | functional-900000 ssh                                                    | functional-900000    | jenkins | v1.32.0 | 18 Mar 24 04:21 PDT |                     |
|         | sudo crictl inspecti                                                     |                      |         |         |                     |                     |
|         | registry.k8s.io/pause:latest                                             |                      |         |         |                     |                     |
| cache   | functional-900000 cache reload                                           | functional-900000    | jenkins | v1.32.0 | 18 Mar 24 04:21 PDT | 18 Mar 24 04:21 PDT |
| ssh     | functional-900000 ssh                                                    | functional-900000    | jenkins | v1.32.0 | 18 Mar 24 04:21 PDT |                     |
|         | sudo crictl inspecti                                                     |                      |         |         |                     |                     |
|         | registry.k8s.io/pause:latest                                             |                      |         |         |                     |                     |
| cache   | delete                                                                   | minikube             | jenkins | v1.32.0 | 18 Mar 24 04:21 PDT | 18 Mar 24 04:21 PDT |
|         | registry.k8s.io/pause:3.1                                                |                      |         |         |                     |                     |
| cache   | delete                                                                   | minikube             | jenkins | v1.32.0 | 18 Mar 24 04:21 PDT | 18 Mar 24 04:21 PDT |
|         | registry.k8s.io/pause:latest                                             |                      |         |         |                     |                     |
| kubectl | functional-900000 kubectl --                                             | functional-900000    | jenkins | v1.32.0 | 18 Mar 24 04:21 PDT |                     |
|         | --context functional-900000                                              |                      |         |         |                     |                     |
|         | get pods                                                                 |                      |         |         |                     |                     |
| start   | -p functional-900000                                                     | functional-900000    | jenkins | v1.32.0 | 18 Mar 24 04:21 PDT |                     |
|         | --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision |                      |         |         |                     |                     |
|         | --wait=all                                                               |                      |         |         |                     |                     |
|---------|--------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|

==> Last Start <==
Log file created at: 2024/03/18 04:21:02
Running on machine: MacOS-M1-Agent-2
Binary: Built with gc go1.22.1 for darwin/arm64
Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
I0318 04:21:02.333139   15870 out.go:291] Setting OutFile to fd 1 ...
I0318 04:21:02.333283   15870 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0318 04:21:02.333285   15870 out.go:304] Setting ErrFile to fd 2...
I0318 04:21:02.333287   15870 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0318 04:21:02.333413   15870 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18429-15072/.minikube/bin
I0318 04:21:02.334437   15870 out.go:298] Setting JSON to false
I0318 04:21:02.350372   15870 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":8435,"bootTime":1710752427,"procs":474,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.3.1","kernelVersion":"23.3.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
W0318 04:21:02.350432   15870 start.go:137] gopshost.Virtualization returned error: not implemented yet
I0318 04:21:02.355322   15870 out.go:177] * [functional-900000] minikube v1.32.0 on Darwin 14.3.1 (arm64)
I0318 04:21:02.365327   15870 out.go:177]   - MINIKUBE_LOCATION=18429
I0318 04:21:02.365368   15870 notify.go:220] Checking for updates...
I0318 04:21:02.368404   15870 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/18429-15072/kubeconfig
I0318 04:21:02.371323   15870 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
I0318 04:21:02.375265   15870 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
I0318 04:21:02.378387   15870 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18429-15072/.minikube
I0318 04:21:02.381282   15870 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
I0318 04:21:02.384722   15870 config.go:182] Loaded profile config "functional-900000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.4
I0318 04:21:02.384770   15870 driver.go:392] Setting default libvirt URI to qemu:///system
I0318 04:21:02.389301   15870 out.go:177] * Using the qemu2 driver based on existing profile
I0318 04:21:02.396299   15870 start.go:297] selected driver: qemu2
I0318 04:21:02.396302   15870 start.go:901] validating driver "qemu2" against &{Name:functional-900000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17991/minikube-v1.32.1-1710520390-17991-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:4000 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kube
rnetesVersion:v1.28.4 ClusterName:functional-900000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8441 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[]
MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
I0318 04:21:02.396354   15870 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
I0318 04:21:02.398629   15870 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
I0318 04:21:02.398682   15870 cni.go:84] Creating CNI manager for ""
I0318 04:21:02.398689   15870 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
I0318 04:21:02.398739   15870 start.go:340] cluster config:
{Name:functional-900000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17991/minikube-v1.32.1-1710520390-17991-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:4000 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:functional-900000 Namespace:default APIServer
HAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8441 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[]
MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
I0318 04:21:02.403123   15870 iso.go:125] acquiring lock: {Name:mkb8143674083e0c7a46a3ed751b3800392bcd24 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
I0318 04:21:02.411301   15870 out.go:177] * Starting "functional-900000" primary control-plane node in "functional-900000" cluster
I0318 04:21:02.415281   15870 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime docker
I0318 04:21:02.415296   15870 preload.go:147] Found local preload: /Users/jenkins/minikube-integration/18429-15072/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-docker-overlay2-arm64.tar.lz4
I0318 04:21:02.415304   15870 cache.go:56] Caching tarball of preloaded images
I0318 04:21:02.415369   15870 preload.go:173] Found /Users/jenkins/minikube-integration/18429-15072/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-docker-overlay2-arm64.tar.lz4 in cache, skipping download
I0318 04:21:02.415375   15870 cache.go:59] Finished verifying existence of preloaded tar for v1.28.4 on docker
I0318 04:21:02.415441   15870 profile.go:142] Saving config to /Users/jenkins/minikube-integration/18429-15072/.minikube/profiles/functional-900000/config.json ...
I0318 04:21:02.415895   15870 start.go:360] acquireMachinesLock for functional-900000: {Name:mkdef65b5c2b3344d8453e477bf2f170fbff3359 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
I0318 04:21:02.415928   15870 start.go:364] duration metric: took 28.125µs to acquireMachinesLock for "functional-900000"
I0318 04:21:02.415936   15870 start.go:96] Skipping create...Using existing machine configuration
I0318 04:21:02.415939   15870 fix.go:54] fixHost starting: 
I0318 04:21:02.416060   15870 fix.go:112] recreateIfNeeded on functional-900000: state=Stopped err=<nil>
W0318 04:21:02.416067   15870 fix.go:138] unexpected machine state, will restart: <nil>
I0318 04:21:02.423276   15870 out.go:177] * Restarting existing qemu2 VM for "functional-900000" ...
I0318 04:21:02.427387   15870 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 4000 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18429-15072/.minikube/machines/functional-900000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18429-15072/.minikube/machines/functional-900000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18429-15072/.minikube/machines/functional-900000/qemu.pid -device virtio-net-pci,netdev=net0,mac=f6:42:d1:c7:a8:b2 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18429-15072/.minikube/machines/functional-900000/disk.qcow2
I0318 04:21:02.429646   15870 main.go:141] libmachine: STDOUT: 
I0318 04:21:02.429670   15870 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused

I0318 04:21:02.429701   15870 fix.go:56] duration metric: took 13.762ms for fixHost
I0318 04:21:02.429704   15870 start.go:83] releasing machines lock for "functional-900000", held for 13.773542ms
W0318 04:21:02.429713   15870 start.go:713] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
W0318 04:21:02.429748   15870 out.go:239] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
I0318 04:21:02.429753   15870 start.go:728] Will try again in 5 seconds ...
I0318 04:21:07.431803   15870 start.go:360] acquireMachinesLock for functional-900000: {Name:mkdef65b5c2b3344d8453e477bf2f170fbff3359 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
I0318 04:21:07.432228   15870 start.go:364] duration metric: took 357.291µs to acquireMachinesLock for "functional-900000"
I0318 04:21:07.432410   15870 start.go:96] Skipping create...Using existing machine configuration
I0318 04:21:07.432426   15870 fix.go:54] fixHost starting: 
I0318 04:21:07.433214   15870 fix.go:112] recreateIfNeeded on functional-900000: state=Stopped err=<nil>
W0318 04:21:07.433231   15870 fix.go:138] unexpected machine state, will restart: <nil>
I0318 04:21:07.437592   15870 out.go:177] * Restarting existing qemu2 VM for "functional-900000" ...
I0318 04:21:07.445697   15870 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 4000 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18429-15072/.minikube/machines/functional-900000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18429-15072/.minikube/machines/functional-900000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18429-15072/.minikube/machines/functional-900000/qemu.pid -device virtio-net-pci,netdev=net0,mac=f6:42:d1:c7:a8:b2 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18429-15072/.minikube/machines/functional-900000/disk.qcow2
I0318 04:21:07.455954   15870 main.go:141] libmachine: STDOUT: 
I0318 04:21:07.455999   15870 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused

I0318 04:21:07.456069   15870 fix.go:56] duration metric: took 23.646917ms for fixHost
I0318 04:21:07.456079   15870 start.go:83] releasing machines lock for "functional-900000", held for 23.832042ms
W0318 04:21:07.456233   15870 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p functional-900000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
I0318 04:21:07.464580   15870 out.go:177] 
W0318 04:21:07.480665   15870 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
W0318 04:21:07.480712   15870 out.go:239] * 
W0318 04:21:07.483339   15870 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
│                                                                                             │
│    * If the above advice does not help, please let us know:                                 │
│      https://github.com/kubernetes/minikube/issues/new/choose                               │
│                                                                                             │
│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
│                                                                                             │
╰─────────────────────────────────────────────────────────────────────────────────────────────╯
I0318 04:21:07.491576   15870 out.go:177] 

* The control-plane node functional-900000 host is not running: state=Stopped
To start a cluster, run: "minikube start -p functional-900000"
***
--- FAIL: TestFunctional/serial/LogsCmd (0.08s)

TestFunctional/serial/LogsFileCmd (0.07s)

=== RUN   TestFunctional/serial/LogsFileCmd
functional_test.go:1246: (dbg) Run:  out/minikube-darwin-arm64 -p functional-900000 logs --file /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestFunctionalserialLogsFileCmd4069429205/001/logs.txt
functional_test.go:1224: expected minikube logs to include word: -"Linux"- but got 
***
==> Audit <==
|---------|--------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
| Command |                                   Args                                   |       Profile        |  User   | Version |     Start Time      |      End Time       |
|---------|--------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
| start   | -o=json --download-only                                                  | download-only-382000 | jenkins | v1.32.0 | 18 Mar 24 04:18 PDT |                     |
|         | -p download-only-382000                                                  |                      |         |         |                     |                     |
|         | --force --alsologtostderr                                                |                      |         |         |                     |                     |
|         | --kubernetes-version=v1.20.0                                             |                      |         |         |                     |                     |
|         | --container-runtime=docker                                               |                      |         |         |                     |                     |
|         | --driver=qemu2                                                           |                      |         |         |                     |                     |
| delete  | --all                                                                    | minikube             | jenkins | v1.32.0 | 18 Mar 24 04:19 PDT | 18 Mar 24 04:19 PDT |
| delete  | -p download-only-382000                                                  | download-only-382000 | jenkins | v1.32.0 | 18 Mar 24 04:19 PDT | 18 Mar 24 04:19 PDT |
| start   | -o=json --download-only                                                  | download-only-509000 | jenkins | v1.32.0 | 18 Mar 24 04:19 PDT |                     |
|         | -p download-only-509000                                                  |                      |         |         |                     |                     |
|         | --force --alsologtostderr                                                |                      |         |         |                     |                     |
|         | --kubernetes-version=v1.28.4                                             |                      |         |         |                     |                     |
|         | --container-runtime=docker                                               |                      |         |         |                     |                     |
|         | --driver=qemu2                                                           |                      |         |         |                     |                     |
| delete  | --all                                                                    | minikube             | jenkins | v1.32.0 | 18 Mar 24 04:19 PDT | 18 Mar 24 04:19 PDT |
| delete  | -p download-only-509000                                                  | download-only-509000 | jenkins | v1.32.0 | 18 Mar 24 04:19 PDT | 18 Mar 24 04:19 PDT |
| start   | -o=json --download-only                                                  | download-only-180000 | jenkins | v1.32.0 | 18 Mar 24 04:19 PDT |                     |
|         | -p download-only-180000                                                  |                      |         |         |                     |                     |
|         | --force --alsologtostderr                                                |                      |         |         |                     |                     |
|         | --kubernetes-version=v1.29.0-rc.2                                        |                      |         |         |                     |                     |
|         | --container-runtime=docker                                               |                      |         |         |                     |                     |
|         | --driver=qemu2                                                           |                      |         |         |                     |                     |
| delete  | --all                                                                    | minikube             | jenkins | v1.32.0 | 18 Mar 24 04:20 PDT | 18 Mar 24 04:20 PDT |
| delete  | -p download-only-180000                                                  | download-only-180000 | jenkins | v1.32.0 | 18 Mar 24 04:20 PDT | 18 Mar 24 04:20 PDT |
| delete  | -p download-only-382000                                                  | download-only-382000 | jenkins | v1.32.0 | 18 Mar 24 04:20 PDT | 18 Mar 24 04:20 PDT |
| delete  | -p download-only-509000                                                  | download-only-509000 | jenkins | v1.32.0 | 18 Mar 24 04:20 PDT | 18 Mar 24 04:20 PDT |
| delete  | -p download-only-180000                                                  | download-only-180000 | jenkins | v1.32.0 | 18 Mar 24 04:20 PDT | 18 Mar 24 04:20 PDT |
| start   | --download-only -p                                                       | binary-mirror-986000 | jenkins | v1.32.0 | 18 Mar 24 04:20 PDT |                     |
|         | binary-mirror-986000                                                     |                      |         |         |                     |                     |
|         | --alsologtostderr                                                        |                      |         |         |                     |                     |
|         | --binary-mirror                                                          |                      |         |         |                     |                     |
|         | http://127.0.0.1:53083                                                   |                      |         |         |                     |                     |
|         | --driver=qemu2                                                           |                      |         |         |                     |                     |
| delete  | -p binary-mirror-986000                                                  | binary-mirror-986000 | jenkins | v1.32.0 | 18 Mar 24 04:20 PDT | 18 Mar 24 04:20 PDT |
| addons  | enable dashboard -p                                                      | addons-118000        | jenkins | v1.32.0 | 18 Mar 24 04:20 PDT |                     |
|         | addons-118000                                                            |                      |         |         |                     |                     |
| addons  | disable dashboard -p                                                     | addons-118000        | jenkins | v1.32.0 | 18 Mar 24 04:20 PDT |                     |
|         | addons-118000                                                            |                      |         |         |                     |                     |
| start   | -p addons-118000 --wait=true                                             | addons-118000        | jenkins | v1.32.0 | 18 Mar 24 04:20 PDT |                     |
|         | --memory=4000 --alsologtostderr                                          |                      |         |         |                     |                     |
|         | --addons=registry                                                        |                      |         |         |                     |                     |
|         | --addons=metrics-server                                                  |                      |         |         |                     |                     |
|         | --addons=volumesnapshots                                                 |                      |         |         |                     |                     |
|         | --addons=csi-hostpath-driver                                             |                      |         |         |                     |                     |
|         | --addons=gcp-auth                                                        |                      |         |         |                     |                     |
|         | --addons=cloud-spanner                                                   |                      |         |         |                     |                     |
|         | --addons=inspektor-gadget                                                |                      |         |         |                     |                     |
|         | --addons=storage-provisioner-rancher                                     |                      |         |         |                     |                     |
|         | --addons=nvidia-device-plugin                                            |                      |         |         |                     |                     |
|         | --addons=yakd --driver=qemu2                                             |                      |         |         |                     |                     |
|         |  --addons=ingress                                                        |                      |         |         |                     |                     |
|         | --addons=ingress-dns                                                     |                      |         |         |                     |                     |
| delete  | -p addons-118000                                                         | addons-118000        | jenkins | v1.32.0 | 18 Mar 24 04:20 PDT | 18 Mar 24 04:20 PDT |
| start   | -p nospam-742000 -n=1 --memory=2250 --wait=false                         | nospam-742000        | jenkins | v1.32.0 | 18 Mar 24 04:20 PDT |                     |
|         | --log_dir=/var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-742000 |                      |         |         |                     |                     |
|         | --driver=qemu2                                                           |                      |         |         |                     |                     |
| start   | nospam-742000 --log_dir                                                  | nospam-742000        | jenkins | v1.32.0 | 18 Mar 24 04:20 PDT |                     |
|         | /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-742000           |                      |         |         |                     |                     |
|         | start --dry-run                                                          |                      |         |         |                     |                     |
| start   | nospam-742000 --log_dir                                                  | nospam-742000        | jenkins | v1.32.0 | 18 Mar 24 04:20 PDT |                     |
|         | /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-742000           |                      |         |         |                     |                     |
|         | start --dry-run                                                          |                      |         |         |                     |                     |
| start   | nospam-742000 --log_dir                                                  | nospam-742000        | jenkins | v1.32.0 | 18 Mar 24 04:20 PDT |                     |
|         | /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-742000           |                      |         |         |                     |                     |
|         | start --dry-run                                                          |                      |         |         |                     |                     |
| pause   | nospam-742000 --log_dir                                                  | nospam-742000        | jenkins | v1.32.0 | 18 Mar 24 04:20 PDT |                     |
|         | /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-742000           |                      |         |         |                     |                     |
|         | pause                                                                    |                      |         |         |                     |                     |
| pause   | nospam-742000 --log_dir                                                  | nospam-742000        | jenkins | v1.32.0 | 18 Mar 24 04:20 PDT |                     |
|         | /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-742000           |                      |         |         |                     |                     |
|         | pause                                                                    |                      |         |         |                     |                     |
| pause   | nospam-742000 --log_dir                                                  | nospam-742000        | jenkins | v1.32.0 | 18 Mar 24 04:20 PDT |                     |
|         | /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-742000           |                      |         |         |                     |                     |
|         | pause                                                                    |                      |         |         |                     |                     |
| unpause | nospam-742000 --log_dir                                                  | nospam-742000        | jenkins | v1.32.0 | 18 Mar 24 04:20 PDT |                     |
|         | /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-742000           |                      |         |         |                     |                     |
|         | unpause                                                                  |                      |         |         |                     |                     |
| unpause | nospam-742000 --log_dir                                                  | nospam-742000        | jenkins | v1.32.0 | 18 Mar 24 04:20 PDT |                     |
|         | /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-742000           |                      |         |         |                     |                     |
|         | unpause                                                                  |                      |         |         |                     |                     |
| unpause | nospam-742000 --log_dir                                                  | nospam-742000        | jenkins | v1.32.0 | 18 Mar 24 04:20 PDT |                     |
|         | /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-742000           |                      |         |         |                     |                     |
|         | unpause                                                                  |                      |         |         |                     |                     |
| stop    | nospam-742000 --log_dir                                                  | nospam-742000        | jenkins | v1.32.0 | 18 Mar 24 04:20 PDT | 18 Mar 24 04:20 PDT |
|         | /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-742000           |                      |         |         |                     |                     |
|         | stop                                                                     |                      |         |         |                     |                     |
| stop    | nospam-742000 --log_dir                                                  | nospam-742000        | jenkins | v1.32.0 | 18 Mar 24 04:20 PDT | 18 Mar 24 04:20 PDT |
|         | /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-742000           |                      |         |         |                     |                     |
|         | stop                                                                     |                      |         |         |                     |                     |
| stop    | nospam-742000 --log_dir                                                  | nospam-742000        | jenkins | v1.32.0 | 18 Mar 24 04:20 PDT | 18 Mar 24 04:20 PDT |
|         | /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-742000           |                      |         |         |                     |                     |
|         | stop                                                                     |                      |         |         |                     |                     |
| delete  | -p nospam-742000                                                         | nospam-742000        | jenkins | v1.32.0 | 18 Mar 24 04:20 PDT | 18 Mar 24 04:20 PDT |
| start   | -p functional-900000                                                     | functional-900000    | jenkins | v1.32.0 | 18 Mar 24 04:20 PDT |                     |
|         | --memory=4000                                                            |                      |         |         |                     |                     |
|         | --apiserver-port=8441                                                    |                      |         |         |                     |                     |
|         | --wait=all --driver=qemu2                                                |                      |         |         |                     |                     |
| start   | -p functional-900000                                                     | functional-900000    | jenkins | v1.32.0 | 18 Mar 24 04:20 PDT |                     |
|         | --alsologtostderr -v=8                                                   |                      |         |         |                     |                     |
| cache   | functional-900000 cache add                                              | functional-900000    | jenkins | v1.32.0 | 18 Mar 24 04:20 PDT | 18 Mar 24 04:20 PDT |
|         | registry.k8s.io/pause:3.1                                                |                      |         |         |                     |                     |
| cache   | functional-900000 cache add                                              | functional-900000    | jenkins | v1.32.0 | 18 Mar 24 04:20 PDT | 18 Mar 24 04:20 PDT |
|         | registry.k8s.io/pause:3.3                                                |                      |         |         |                     |                     |
| cache   | functional-900000 cache add                                              | functional-900000    | jenkins | v1.32.0 | 18 Mar 24 04:20 PDT | 18 Mar 24 04:20 PDT |
|         | registry.k8s.io/pause:latest                                             |                      |         |         |                     |                     |
| cache   | functional-900000 cache add                                              | functional-900000    | jenkins | v1.32.0 | 18 Mar 24 04:20 PDT | 18 Mar 24 04:21 PDT |
|         | minikube-local-cache-test:functional-900000                              |                      |         |         |                     |                     |
| cache   | functional-900000 cache delete                                           | functional-900000    | jenkins | v1.32.0 | 18 Mar 24 04:21 PDT | 18 Mar 24 04:21 PDT |
|         | minikube-local-cache-test:functional-900000                              |                      |         |         |                     |                     |
| cache   | delete                                                                   | minikube             | jenkins | v1.32.0 | 18 Mar 24 04:21 PDT | 18 Mar 24 04:21 PDT |
|         | registry.k8s.io/pause:3.3                                                |                      |         |         |                     |                     |
| cache   | list                                                                     | minikube             | jenkins | v1.32.0 | 18 Mar 24 04:21 PDT | 18 Mar 24 04:21 PDT |
| ssh     | functional-900000 ssh sudo                                               | functional-900000    | jenkins | v1.32.0 | 18 Mar 24 04:21 PDT |                     |
|         | crictl images                                                            |                      |         |         |                     |                     |
| ssh     | functional-900000                                                        | functional-900000    | jenkins | v1.32.0 | 18 Mar 24 04:21 PDT |                     |
|         | ssh sudo docker rmi                                                      |                      |         |         |                     |                     |
|         | registry.k8s.io/pause:latest                                             |                      |         |         |                     |                     |
| ssh     | functional-900000 ssh                                                    | functional-900000    | jenkins | v1.32.0 | 18 Mar 24 04:21 PDT |                     |
|         | sudo crictl inspecti                                                     |                      |         |         |                     |                     |
|         | registry.k8s.io/pause:latest                                             |                      |         |         |                     |                     |
| cache   | functional-900000 cache reload                                           | functional-900000    | jenkins | v1.32.0 | 18 Mar 24 04:21 PDT | 18 Mar 24 04:21 PDT |
| ssh     | functional-900000 ssh                                                    | functional-900000    | jenkins | v1.32.0 | 18 Mar 24 04:21 PDT |                     |
|         | sudo crictl inspecti                                                     |                      |         |         |                     |                     |
|         | registry.k8s.io/pause:latest                                             |                      |         |         |                     |                     |
| cache   | delete                                                                   | minikube             | jenkins | v1.32.0 | 18 Mar 24 04:21 PDT | 18 Mar 24 04:21 PDT |
|         | registry.k8s.io/pause:3.1                                                |                      |         |         |                     |                     |
| cache   | delete                                                                   | minikube             | jenkins | v1.32.0 | 18 Mar 24 04:21 PDT | 18 Mar 24 04:21 PDT |
|         | registry.k8s.io/pause:latest                                             |                      |         |         |                     |                     |
| kubectl | functional-900000 kubectl --                                             | functional-900000    | jenkins | v1.32.0 | 18 Mar 24 04:21 PDT |                     |
|         | --context functional-900000                                              |                      |         |         |                     |                     |
|         | get pods                                                                 |                      |         |         |                     |                     |
| start   | -p functional-900000                                                     | functional-900000    | jenkins | v1.32.0 | 18 Mar 24 04:21 PDT |                     |
|         | --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision |                      |         |         |                     |                     |
|         | --wait=all                                                               |                      |         |         |                     |                     |
|---------|--------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|

==> Last Start <==
Log file created at: 2024/03/18 04:21:02
Running on machine: MacOS-M1-Agent-2
Binary: Built with gc go1.22.1 for darwin/arm64
Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
I0318 04:21:02.333139   15870 out.go:291] Setting OutFile to fd 1 ...
I0318 04:21:02.333283   15870 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0318 04:21:02.333285   15870 out.go:304] Setting ErrFile to fd 2...
I0318 04:21:02.333287   15870 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0318 04:21:02.333413   15870 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18429-15072/.minikube/bin
I0318 04:21:02.334437   15870 out.go:298] Setting JSON to false
I0318 04:21:02.350372   15870 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":8435,"bootTime":1710752427,"procs":474,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.3.1","kernelVersion":"23.3.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
W0318 04:21:02.350432   15870 start.go:137] gopshost.Virtualization returned error: not implemented yet
I0318 04:21:02.355322   15870 out.go:177] * [functional-900000] minikube v1.32.0 on Darwin 14.3.1 (arm64)
I0318 04:21:02.365327   15870 out.go:177]   - MINIKUBE_LOCATION=18429
I0318 04:21:02.365368   15870 notify.go:220] Checking for updates...
I0318 04:21:02.368404   15870 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/18429-15072/kubeconfig
I0318 04:21:02.371323   15870 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
I0318 04:21:02.375265   15870 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
I0318 04:21:02.378387   15870 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18429-15072/.minikube
I0318 04:21:02.381282   15870 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
I0318 04:21:02.384722   15870 config.go:182] Loaded profile config "functional-900000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.4
I0318 04:21:02.384770   15870 driver.go:392] Setting default libvirt URI to qemu:///system
I0318 04:21:02.389301   15870 out.go:177] * Using the qemu2 driver based on existing profile
I0318 04:21:02.396299   15870 start.go:297] selected driver: qemu2
I0318 04:21:02.396302   15870 start.go:901] validating driver "qemu2" against &{Name:functional-900000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17991/minikube-v1.32.1-1710520390-17991-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:4000 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kube
rnetesVersion:v1.28.4 ClusterName:functional-900000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8441 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[]
MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
I0318 04:21:02.396354   15870 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
I0318 04:21:02.398629   15870 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
I0318 04:21:02.398682   15870 cni.go:84] Creating CNI manager for ""
I0318 04:21:02.398689   15870 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
I0318 04:21:02.398739   15870 start.go:340] cluster config:
{Name:functional-900000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17991/minikube-v1.32.1-1710520390-17991-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:4000 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:functional-900000 Namespace:default APIServer
HAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8441 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[]
MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
I0318 04:21:02.403123   15870 iso.go:125] acquiring lock: {Name:mkb8143674083e0c7a46a3ed751b3800392bcd24 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
I0318 04:21:02.411301   15870 out.go:177] * Starting "functional-900000" primary control-plane node in "functional-900000" cluster
I0318 04:21:02.415281   15870 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime docker
I0318 04:21:02.415296   15870 preload.go:147] Found local preload: /Users/jenkins/minikube-integration/18429-15072/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-docker-overlay2-arm64.tar.lz4
I0318 04:21:02.415304   15870 cache.go:56] Caching tarball of preloaded images
I0318 04:21:02.415369   15870 preload.go:173] Found /Users/jenkins/minikube-integration/18429-15072/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-docker-overlay2-arm64.tar.lz4 in cache, skipping download
I0318 04:21:02.415375   15870 cache.go:59] Finished verifying existence of preloaded tar for v1.28.4 on docker
I0318 04:21:02.415441   15870 profile.go:142] Saving config to /Users/jenkins/minikube-integration/18429-15072/.minikube/profiles/functional-900000/config.json ...
I0318 04:21:02.415895   15870 start.go:360] acquireMachinesLock for functional-900000: {Name:mkdef65b5c2b3344d8453e477bf2f170fbff3359 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
I0318 04:21:02.415928   15870 start.go:364] duration metric: took 28.125µs to acquireMachinesLock for "functional-900000"
I0318 04:21:02.415936   15870 start.go:96] Skipping create...Using existing machine configuration
I0318 04:21:02.415939   15870 fix.go:54] fixHost starting: 
I0318 04:21:02.416060   15870 fix.go:112] recreateIfNeeded on functional-900000: state=Stopped err=<nil>
W0318 04:21:02.416067   15870 fix.go:138] unexpected machine state, will restart: <nil>
I0318 04:21:02.423276   15870 out.go:177] * Restarting existing qemu2 VM for "functional-900000" ...
I0318 04:21:02.427387   15870 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 4000 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18429-15072/.minikube/machines/functional-900000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18429-15072/.minikube/machines/functional-900000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18429-15072/.minikube/machines/functional-900000/qemu.pid -device virtio-net-pci,netdev=net0,mac=f6:42:d1:c7:a8:b2 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18429-15072/.minikube/machines/functional-900000/disk.qcow2
I0318 04:21:02.429646   15870 main.go:141] libmachine: STDOUT: 
I0318 04:21:02.429670   15870 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused

                                                
                                                
I0318 04:21:02.429701   15870 fix.go:56] duration metric: took 13.762ms for fixHost
I0318 04:21:02.429704   15870 start.go:83] releasing machines lock for "functional-900000", held for 13.773542ms
W0318 04:21:02.429713   15870 start.go:713] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
W0318 04:21:02.429748   15870 out.go:239] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
I0318 04:21:02.429753   15870 start.go:728] Will try again in 5 seconds ...
I0318 04:21:07.431803   15870 start.go:360] acquireMachinesLock for functional-900000: {Name:mkdef65b5c2b3344d8453e477bf2f170fbff3359 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
I0318 04:21:07.432228   15870 start.go:364] duration metric: took 357.291µs to acquireMachinesLock for "functional-900000"
I0318 04:21:07.432410   15870 start.go:96] Skipping create...Using existing machine configuration
I0318 04:21:07.432426   15870 fix.go:54] fixHost starting: 
I0318 04:21:07.433214   15870 fix.go:112] recreateIfNeeded on functional-900000: state=Stopped err=<nil>
W0318 04:21:07.433231   15870 fix.go:138] unexpected machine state, will restart: <nil>
I0318 04:21:07.437592   15870 out.go:177] * Restarting existing qemu2 VM for "functional-900000" ...
I0318 04:21:07.445697   15870 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 4000 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18429-15072/.minikube/machines/functional-900000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18429-15072/.minikube/machines/functional-900000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18429-15072/.minikube/machines/functional-900000/qemu.pid -device virtio-net-pci,netdev=net0,mac=f6:42:d1:c7:a8:b2 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18429-15072/.minikube/machines/functional-900000/disk.qcow2
I0318 04:21:07.455954   15870 main.go:141] libmachine: STDOUT: 
I0318 04:21:07.455999   15870 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused

                                                
                                                
I0318 04:21:07.456069   15870 fix.go:56] duration metric: took 23.646917ms for fixHost
I0318 04:21:07.456079   15870 start.go:83] releasing machines lock for "functional-900000", held for 23.832042ms
W0318 04:21:07.456233   15870 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p functional-900000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
I0318 04:21:07.464580   15870 out.go:177] 
W0318 04:21:07.480665   15870 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
W0318 04:21:07.480712   15870 out.go:239] * 
W0318 04:21:07.483339   15870 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
│                                                                                             │
│    * If the above advice does not help, please let us know:                                 │
│      https://github.com/kubernetes/minikube/issues/new/choose                               │
│                                                                                             │
│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
│                                                                                             │
╰─────────────────────────────────────────────────────────────────────────────────────────────╯
I0318 04:21:07.491576   15870 out.go:177] 

***
--- FAIL: TestFunctional/serial/LogsFileCmd (0.07s)

                                                
                                    
TestFunctional/serial/InvalidService (0.03s)

                                                
                                                
=== RUN   TestFunctional/serial/InvalidService
functional_test.go:2317: (dbg) Run:  kubectl --context functional-900000 apply -f testdata/invalidsvc.yaml
functional_test.go:2317: (dbg) Non-zero exit: kubectl --context functional-900000 apply -f testdata/invalidsvc.yaml: exit status 1 (26.78825ms)

                                                
                                                
** stderr ** 
	error: context "functional-900000" does not exist

                                                
                                                
** /stderr **
functional_test.go:2319: kubectl --context functional-900000 apply -f testdata/invalidsvc.yaml failed: exit status 1
--- FAIL: TestFunctional/serial/InvalidService (0.03s)

                                                
                                    
TestFunctional/parallel/DashboardCmd (0.2s)

                                                
                                                
=== RUN   TestFunctional/parallel/DashboardCmd
=== PAUSE TestFunctional/parallel/DashboardCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/DashboardCmd
functional_test.go:901: (dbg) daemon: [out/minikube-darwin-arm64 dashboard --url --port 36195 -p functional-900000 --alsologtostderr -v=1]
functional_test.go:914: output didn't produce a URL
functional_test.go:906: (dbg) stopping [out/minikube-darwin-arm64 dashboard --url --port 36195 -p functional-900000 --alsologtostderr -v=1] ...
functional_test.go:906: (dbg) [out/minikube-darwin-arm64 dashboard --url --port 36195 -p functional-900000 --alsologtostderr -v=1] stdout:
functional_test.go:906: (dbg) [out/minikube-darwin-arm64 dashboard --url --port 36195 -p functional-900000 --alsologtostderr -v=1] stderr:
I0318 04:22:00.592793   16191 out.go:291] Setting OutFile to fd 1 ...
I0318 04:22:00.593217   16191 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0318 04:22:00.593221   16191 out.go:304] Setting ErrFile to fd 2...
I0318 04:22:00.593223   16191 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0318 04:22:00.593360   16191 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18429-15072/.minikube/bin
I0318 04:22:00.593627   16191 mustload.go:65] Loading cluster: functional-900000
I0318 04:22:00.593806   16191 config.go:182] Loaded profile config "functional-900000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.4
I0318 04:22:00.598537   16191 out.go:177] * The control-plane node functional-900000 host is not running: state=Stopped
I0318 04:22:00.601566   16191 out.go:177]   To start a cluster, run: "minikube start -p functional-900000"
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p functional-900000 -n functional-900000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p functional-900000 -n functional-900000: exit status 7 (42.721125ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "functional-900000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestFunctional/parallel/DashboardCmd (0.20s)

                                                
                                    
TestFunctional/parallel/StatusCmd (0.13s)

                                                
                                                
=== RUN   TestFunctional/parallel/StatusCmd
=== PAUSE TestFunctional/parallel/StatusCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/StatusCmd
functional_test.go:850: (dbg) Run:  out/minikube-darwin-arm64 -p functional-900000 status
functional_test.go:850: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-900000 status: exit status 7 (31.911208ms)

                                                
                                                
-- stdout --
	functional-900000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
functional_test.go:852: failed to run minikube status. args "out/minikube-darwin-arm64 -p functional-900000 status" : exit status 7
functional_test.go:856: (dbg) Run:  out/minikube-darwin-arm64 -p functional-900000 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}
functional_test.go:856: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-900000 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}: exit status 7 (32.183125ms)

                                                
                                                
-- stdout --
	host:Stopped,kublet:Stopped,apiserver:Stopped,kubeconfig:Stopped

                                                
                                                
-- /stdout --
functional_test.go:858: failed to run minikube status with custom format: args "out/minikube-darwin-arm64 -p functional-900000 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}": exit status 7
functional_test.go:868: (dbg) Run:  out/minikube-darwin-arm64 -p functional-900000 status -o json
functional_test.go:868: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-900000 status -o json: exit status 7 (31.661167ms)

                                                
                                                
-- stdout --
	{"Name":"functional-900000","Host":"Stopped","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Stopped","Worker":false}

                                                
                                                
-- /stdout --
functional_test.go:870: failed to run minikube status with json output. args "out/minikube-darwin-arm64 -p functional-900000 status -o json" : exit status 7
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p functional-900000 -n functional-900000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p functional-900000 -n functional-900000: exit status 7 (31.7305ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "functional-900000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestFunctional/parallel/StatusCmd (0.13s)

                                                
                                    
TestFunctional/parallel/ServiceCmdConnect (0.14s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmdConnect
=== PAUSE TestFunctional/parallel/ServiceCmdConnect

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ServiceCmdConnect
functional_test.go:1623: (dbg) Run:  kubectl --context functional-900000 create deployment hello-node-connect --image=registry.k8s.io/echoserver-arm:1.8
functional_test.go:1623: (dbg) Non-zero exit: kubectl --context functional-900000 create deployment hello-node-connect --image=registry.k8s.io/echoserver-arm:1.8: exit status 1 (26.852ms)

                                                
                                                
** stderr ** 
	error: context "functional-900000" does not exist

                                                
                                                
** /stderr **
functional_test.go:1629: failed to create hello-node deployment with this command "kubectl --context functional-900000 create deployment hello-node-connect --image=registry.k8s.io/echoserver-arm:1.8": exit status 1.
functional_test.go:1594: service test failed - dumping debug information
functional_test.go:1595: -----------------------service failure post-mortem--------------------------------
functional_test.go:1598: (dbg) Run:  kubectl --context functional-900000 describe po hello-node-connect
functional_test.go:1598: (dbg) Non-zero exit: kubectl --context functional-900000 describe po hello-node-connect: exit status 1 (26.182708ms)

                                                
                                                
** stderr ** 
	Error in configuration: context was not found for specified context: functional-900000

                                                
                                                
** /stderr **
functional_test.go:1600: "kubectl --context functional-900000 describe po hello-node-connect" failed: exit status 1
functional_test.go:1602: hello-node pod describe:
functional_test.go:1604: (dbg) Run:  kubectl --context functional-900000 logs -l app=hello-node-connect
functional_test.go:1604: (dbg) Non-zero exit: kubectl --context functional-900000 logs -l app=hello-node-connect: exit status 1 (26.640208ms)

                                                
                                                
** stderr ** 
	Error in configuration: context was not found for specified context: functional-900000

                                                
                                                
** /stderr **
functional_test.go:1606: "kubectl --context functional-900000 logs -l app=hello-node-connect" failed: exit status 1
functional_test.go:1608: hello-node logs:
functional_test.go:1610: (dbg) Run:  kubectl --context functional-900000 describe svc hello-node-connect
functional_test.go:1610: (dbg) Non-zero exit: kubectl --context functional-900000 describe svc hello-node-connect: exit status 1 (26.127958ms)

                                                
                                                
** stderr ** 
	Error in configuration: context was not found for specified context: functional-900000

                                                
                                                
** /stderr **
functional_test.go:1612: "kubectl --context functional-900000 describe svc hello-node-connect" failed: exit status 1
functional_test.go:1614: hello-node svc describe:
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p functional-900000 -n functional-900000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p functional-900000 -n functional-900000: exit status 7 (32.325333ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "functional-900000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestFunctional/parallel/ServiceCmdConnect (0.14s)

                                                
                                    
TestFunctional/parallel/PersistentVolumeClaim (0.03s)

                                                
                                                
=== RUN   TestFunctional/parallel/PersistentVolumeClaim
=== PAUSE TestFunctional/parallel/PersistentVolumeClaim

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:45: failed waiting for storage-provisioner: client config: context "functional-900000" does not exist
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p functional-900000 -n functional-900000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p functional-900000 -n functional-900000: exit status 7 (32.097541ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "functional-900000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestFunctional/parallel/PersistentVolumeClaim (0.03s)

                                                
                                    
TestFunctional/parallel/SSHCmd (0.12s)

                                                
                                                
=== RUN   TestFunctional/parallel/SSHCmd
=== PAUSE TestFunctional/parallel/SSHCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/SSHCmd
functional_test.go:1721: (dbg) Run:  out/minikube-darwin-arm64 -p functional-900000 ssh "echo hello"
functional_test.go:1721: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-900000 ssh "echo hello": exit status 83 (44.608958ms)

                                                
                                                
-- stdout --
	* The control-plane node functional-900000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-900000"

                                                
                                                
-- /stdout --
functional_test.go:1726: failed to run an ssh command. args "out/minikube-darwin-arm64 -p functional-900000 ssh \"echo hello\"" : exit status 83
functional_test.go:1730: expected minikube ssh command output to be -"hello"- but got *"* The control-plane node functional-900000 host is not running: state=Stopped\n  To start a cluster, run: \"minikube start -p functional-900000\"\n"*. args "out/minikube-darwin-arm64 -p functional-900000 ssh \"echo hello\""
functional_test.go:1738: (dbg) Run:  out/minikube-darwin-arm64 -p functional-900000 ssh "cat /etc/hostname"
functional_test.go:1738: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-900000 ssh "cat /etc/hostname": exit status 83 (45.024958ms)

                                                
                                                
-- stdout --
	* The control-plane node functional-900000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-900000"

                                                
                                                
-- /stdout --
functional_test.go:1744: failed to run an ssh command. args "out/minikube-darwin-arm64 -p functional-900000 ssh \"cat /etc/hostname\"" : exit status 83
functional_test.go:1748: expected minikube ssh command output to be -"functional-900000"- but got *"* The control-plane node functional-900000 host is not running: state=Stopped\n  To start a cluster, run: \"minikube start -p functional-900000\"\n"*. args "out/minikube-darwin-arm64 -p functional-900000 ssh \"cat /etc/hostname\""
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p functional-900000 -n functional-900000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p functional-900000 -n functional-900000: exit status 7 (31.929667ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "functional-900000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestFunctional/parallel/SSHCmd (0.12s)

                                                
                                    
TestFunctional/parallel/CpCmd (0.28s)

                                                
                                                
=== RUN   TestFunctional/parallel/CpCmd
=== PAUSE TestFunctional/parallel/CpCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/CpCmd
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-arm64 -p functional-900000 cp testdata/cp-test.txt /home/docker/cp-test.txt
helpers_test.go:556: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-900000 cp testdata/cp-test.txt /home/docker/cp-test.txt: exit status 83 (53.188542ms)

                                                
                                                
-- stdout --
	* The control-plane node functional-900000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-900000"

                                                
                                                
-- /stdout --
helpers_test.go:561: failed to run an cp command. args "out/minikube-darwin-arm64 -p functional-900000 cp testdata/cp-test.txt /home/docker/cp-test.txt" : exit status 83
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p functional-900000 ssh -n functional-900000 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-900000 ssh -n functional-900000 "sudo cat /home/docker/cp-test.txt": exit status 83 (45.027333ms)

                                                
                                                
-- stdout --
	* The control-plane node functional-900000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-900000"

                                                
                                                
-- /stdout --
helpers_test.go:539: failed to run an cp command. args "out/minikube-darwin-arm64 -p functional-900000 ssh -n functional-900000 \"sudo cat /home/docker/cp-test.txt\"" : exit status 83
helpers_test.go:573: /testdata/cp-test.txt content mismatch (-want +got):
strings.Join({
+ 	"* ",
	"T",
- 	"est file for checking file cp process",
+ 	"he control-plane node functional-900000 host is not running: sta",
+ 	"te=Stopped\n  To start a cluster, run: \"minikube start -p functio",
+ 	"nal-900000\"\n",
}, "")
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-arm64 -p functional-900000 cp functional-900000:/home/docker/cp-test.txt /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestFunctionalparallelCpCmd3707730708/001/cp-test.txt
helpers_test.go:556: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-900000 cp functional-900000:/home/docker/cp-test.txt /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestFunctionalparallelCpCmd3707730708/001/cp-test.txt: exit status 83 (42.539792ms)

                                                
                                                
-- stdout --
	* The control-plane node functional-900000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-900000"

                                                
                                                
-- /stdout --
helpers_test.go:561: failed to run an cp command. args "out/minikube-darwin-arm64 -p functional-900000 cp functional-900000:/home/docker/cp-test.txt /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestFunctionalparallelCpCmd3707730708/001/cp-test.txt" : exit status 83
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p functional-900000 ssh -n functional-900000 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-900000 ssh -n functional-900000 "sudo cat /home/docker/cp-test.txt": exit status 83 (42.844875ms)

                                                
                                                
-- stdout --
	* The control-plane node functional-900000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-900000"

                                                
                                                
-- /stdout --
helpers_test.go:539: failed to run an cp command. args "out/minikube-darwin-arm64 -p functional-900000 ssh -n functional-900000 \"sudo cat /home/docker/cp-test.txt\"" : exit status 83
helpers_test.go:528: failed to read test file 'testdata/cp-test.txt' : open /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestFunctionalparallelCpCmd3707730708/001/cp-test.txt: no such file or directory
helpers_test.go:573: /testdata/cp-test.txt content mismatch (-want +got):
string(
- 	"* The control-plane node functional-900000 host is not running: state=Stopped\n  To start a cluster, run: \"minikube start -p functional-900000\"\n",
+ 	"",
)
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-arm64 -p functional-900000 cp testdata/cp-test.txt /tmp/does/not/exist/cp-test.txt
helpers_test.go:556: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-900000 cp testdata/cp-test.txt /tmp/does/not/exist/cp-test.txt: exit status 83 (43.803125ms)

                                                
                                                
-- stdout --
	* The control-plane node functional-900000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-900000"

                                                
                                                
-- /stdout --
helpers_test.go:561: failed to run an cp command. args "out/minikube-darwin-arm64 -p functional-900000 cp testdata/cp-test.txt /tmp/does/not/exist/cp-test.txt" : exit status 83
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p functional-900000 ssh -n functional-900000 "sudo cat /tmp/does/not/exist/cp-test.txt"
helpers_test.go:534: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-900000 ssh -n functional-900000 "sudo cat /tmp/does/not/exist/cp-test.txt": exit status 83 (47.792ms)

                                                
                                                
-- stdout --
	* The control-plane node functional-900000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-900000"

                                                
                                                
-- /stdout --
helpers_test.go:539: failed to run an cp command. args "out/minikube-darwin-arm64 -p functional-900000 ssh -n functional-900000 \"sudo cat /tmp/does/not/exist/cp-test.txt\"" : exit status 83
helpers_test.go:573: /testdata/cp-test.txt content mismatch (-want +got):
strings.Join({
+ 	"* ",
	"T",
- 	"est file for checking file cp process",
+ 	"he control-plane node functional-900000 host is not running: sta",
+ 	"te=Stopped\n  To start a cluster, run: \"minikube start -p functio",
+ 	"nal-900000\"\n",
}, "")
--- FAIL: TestFunctional/parallel/CpCmd (0.28s)

                                                
                                    
TestFunctional/parallel/FileSync (0.08s)

                                                
                                                
=== RUN   TestFunctional/parallel/FileSync
=== PAUSE TestFunctional/parallel/FileSync

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/FileSync
functional_test.go:1925: Checking for existence of /etc/test/nested/copy/15481/hosts within VM
functional_test.go:1927: (dbg) Run:  out/minikube-darwin-arm64 -p functional-900000 ssh "sudo cat /etc/test/nested/copy/15481/hosts"
functional_test.go:1927: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-900000 ssh "sudo cat /etc/test/nested/copy/15481/hosts": exit status 83 (42.254625ms)

                                                
                                                
-- stdout --
	* The control-plane node functional-900000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-900000"

                                                
                                                
-- /stdout --
functional_test.go:1929: out/minikube-darwin-arm64 -p functional-900000 ssh "sudo cat /etc/test/nested/copy/15481/hosts" failed: exit status 83
functional_test.go:1932: file sync test content: * The control-plane node functional-900000 host is not running: state=Stopped
To start a cluster, run: "minikube start -p functional-900000"
functional_test.go:1942: /etc/sync.test content mismatch (-want +got):
strings.Join({
+ 	"* ",
	"T",
- 	"est file for checking file sync process",
+ 	"he control-plane node functional-900000 host is not running: sta",
+ 	"te=Stopped\n  To start a cluster, run: \"minikube start -p functio",
+ 	"nal-900000\"\n",
}, "")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p functional-900000 -n functional-900000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p functional-900000 -n functional-900000: exit status 7 (32.627917ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "functional-900000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestFunctional/parallel/FileSync (0.08s)

                                                
                                    
TestFunctional/parallel/CertSync (0.3s)

                                                
                                                
=== RUN   TestFunctional/parallel/CertSync
=== PAUSE TestFunctional/parallel/CertSync

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1968: Checking for existence of /etc/ssl/certs/15481.pem within VM
functional_test.go:1969: (dbg) Run:  out/minikube-darwin-arm64 -p functional-900000 ssh "sudo cat /etc/ssl/certs/15481.pem"
functional_test.go:1969: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-900000 ssh "sudo cat /etc/ssl/certs/15481.pem": exit status 83 (43.609041ms)

                                                
                                                
-- stdout --
	* The control-plane node functional-900000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-900000"

                                                
                                                
-- /stdout --
functional_test.go:1971: failed to check existence of "/etc/ssl/certs/15481.pem" inside minikube. args "out/minikube-darwin-arm64 -p functional-900000 ssh \"sudo cat /etc/ssl/certs/15481.pem\"": exit status 83
functional_test.go:1977: failed verify pem file. minikube_test.pem -> /etc/ssl/certs/15481.pem mismatch (-want +got):
(
	"""
- 	-----BEGIN CERTIFICATE-----
- 	MIIDsDCCApgCCQD5n0OIsOYIjDANBgkqhkiG9w0BAQsFADCBmTELMAkGA1UEBhMC
- 	VVMxEzARBgNVBAgMCkNhbGlmb3JuaWExFjAUBgNVBAcMDVNhbiBGcmFuY2lzY28x
- 	ETAPBgNVBAoMCG1pbmlrdWJlMRYwFAYDVQQLDA1QYXJ0eSBQYXJyb3RzMREwDwYD
- 	VQQDDAhtaW5pa3ViZTEfMB0GCSqGSIb3DQEJARYQbWluaWt1YmVAY25jZi5pbzAe
- 	Fw0yMDAzMDQyMTU2MjZaFw0yMTAzMDQyMTU2MjZaMIGZMQswCQYDVQQGEwJVUzET
- 	MBEGA1UECAwKQ2FsaWZvcm5pYTEWMBQGA1UEBwwNU2FuIEZyYW5jaXNjbzERMA8G
- 	A1UECgwIbWluaWt1YmUxFjAUBgNVBAsMDVBhcnR5IFBhcnJvdHMxETAPBgNVBAMM
- 	CG1pbmlrdWJlMR8wHQYJKoZIhvcNAQkBFhBtaW5pa3ViZUBjbmNmLmlvMIIBIjAN
- 	BgkqhkiG9w0BAQEFAAOCAQ8AMIIBCgKCAQEA/qVMQ/234ul5yWI1yaHvV4pZ5Ffy
- 	M0bSMjzZUwlsvzerXzF3WrdpeZs5GzBNBWL/Db9KziGHCtfX9j5okJqPvB2lxdL5
- 	d5hFIYSORSemYLX2kdlnlykY5fzmFLKIUO9xXs0YNF4JUMEBgGK6n/BdLvXDUULZ
- 	26QOKs6+iH7TAL4RtozxQ8YXKQArdmpeAvxy2PSZGvVk1htKtyuKQsiFqH3oRleK
- 	3mljXfC5LsoIJHqd/8lAsckH87+IfwYnJ1CNJM2gueaCf+HmudVrvXfHaszh1Wh1
- 	9HKPE95Azi6CKoBGlRGFxt8UR72YIcTjC/lYxzbHeCpU7RCiXfsC0iMTlQIDAQAB
- 	MA0GCSqGSIb3DQEBCwUAA4IBAQBhsKnghyBki4NOnK5gHm7ow+7S+xvkjJhXBQ6i
- 	/xQD4/GCZ1tH5iFHXmo+bt4jB9hvKLyN5M5a8TlDwwqTLIoPDQJh37UpSCwbY/6z
- 	nE2aP3N2ue1/DeY60tgAh1c1uJDMeTiFbalJqSkneaHpNfvEQhUORFoN4yQSVEYg
- 	+T9mzTAWQ55TeBgbRevmA25nXHdPAae1MvJWDeG+aJfhq1I2hCwaitJ3iSwgn2ew
- 	637It/aBkMLhsCKTHxlXDGUX401ddbc0ZiC308cyMbis3iBeh4RBjkFxP8eIWFmK
- 	sos/dyqdua742L1cOKYFbLJfjA1VyxJQUxQvWKkbaq0xi7ao
- 	-----END CERTIFICATE-----
+ 	* The control-plane node functional-900000 host is not running: state=Stopped
+ 	  To start a cluster, run: "minikube start -p functional-900000"
	"""
)
functional_test.go:1968: Checking for existence of /usr/share/ca-certificates/15481.pem within VM
functional_test.go:1969: (dbg) Run:  out/minikube-darwin-arm64 -p functional-900000 ssh "sudo cat /usr/share/ca-certificates/15481.pem"
functional_test.go:1969: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-900000 ssh "sudo cat /usr/share/ca-certificates/15481.pem": exit status 83 (45.678791ms)

                                                
                                                
-- stdout --
	* The control-plane node functional-900000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-900000"

                                                
                                                
-- /stdout --
functional_test.go:1971: failed to check existence of "/usr/share/ca-certificates/15481.pem" inside minikube. args "out/minikube-darwin-arm64 -p functional-900000 ssh \"sudo cat /usr/share/ca-certificates/15481.pem\"": exit status 83
functional_test.go:1977: failed verify pem file. minikube_test.pem -> /usr/share/ca-certificates/15481.pem mismatch (-want +got):
(
	"""
- 	-----BEGIN CERTIFICATE-----
- 	MIIDsDCCApgCCQD5n0OIsOYIjDANBgkqhkiG9w0BAQsFADCBmTELMAkGA1UEBhMC
- 	VVMxEzARBgNVBAgMCkNhbGlmb3JuaWExFjAUBgNVBAcMDVNhbiBGcmFuY2lzY28x
- 	ETAPBgNVBAoMCG1pbmlrdWJlMRYwFAYDVQQLDA1QYXJ0eSBQYXJyb3RzMREwDwYD
- 	VQQDDAhtaW5pa3ViZTEfMB0GCSqGSIb3DQEJARYQbWluaWt1YmVAY25jZi5pbzAe
- 	Fw0yMDAzMDQyMTU2MjZaFw0yMTAzMDQyMTU2MjZaMIGZMQswCQYDVQQGEwJVUzET
- 	MBEGA1UECAwKQ2FsaWZvcm5pYTEWMBQGA1UEBwwNU2FuIEZyYW5jaXNjbzERMA8G
- 	A1UECgwIbWluaWt1YmUxFjAUBgNVBAsMDVBhcnR5IFBhcnJvdHMxETAPBgNVBAMM
- 	CG1pbmlrdWJlMR8wHQYJKoZIhvcNAQkBFhBtaW5pa3ViZUBjbmNmLmlvMIIBIjAN
- 	BgkqhkiG9w0BAQEFAAOCAQ8AMIIBCgKCAQEA/qVMQ/234ul5yWI1yaHvV4pZ5Ffy
- 	M0bSMjzZUwlsvzerXzF3WrdpeZs5GzBNBWL/Db9KziGHCtfX9j5okJqPvB2lxdL5
- 	d5hFIYSORSemYLX2kdlnlykY5fzmFLKIUO9xXs0YNF4JUMEBgGK6n/BdLvXDUULZ
- 	26QOKs6+iH7TAL4RtozxQ8YXKQArdmpeAvxy2PSZGvVk1htKtyuKQsiFqH3oRleK
- 	3mljXfC5LsoIJHqd/8lAsckH87+IfwYnJ1CNJM2gueaCf+HmudVrvXfHaszh1Wh1
- 	9HKPE95Azi6CKoBGlRGFxt8UR72YIcTjC/lYxzbHeCpU7RCiXfsC0iMTlQIDAQAB
- 	MA0GCSqGSIb3DQEBCwUAA4IBAQBhsKnghyBki4NOnK5gHm7ow+7S+xvkjJhXBQ6i
- 	/xQD4/GCZ1tH5iFHXmo+bt4jB9hvKLyN5M5a8TlDwwqTLIoPDQJh37UpSCwbY/6z
- 	nE2aP3N2ue1/DeY60tgAh1c1uJDMeTiFbalJqSkneaHpNfvEQhUORFoN4yQSVEYg
- 	+T9mzTAWQ55TeBgbRevmA25nXHdPAae1MvJWDeG+aJfhq1I2hCwaitJ3iSwgn2ew
- 	637It/aBkMLhsCKTHxlXDGUX401ddbc0ZiC308cyMbis3iBeh4RBjkFxP8eIWFmK
- 	sos/dyqdua742L1cOKYFbLJfjA1VyxJQUxQvWKkbaq0xi7ao
- 	-----END CERTIFICATE-----
+ 	* The control-plane node functional-900000 host is not running: state=Stopped
+ 	  To start a cluster, run: "minikube start -p functional-900000"
	"""
)
functional_test.go:1968: Checking for existence of /etc/ssl/certs/51391683.0 within VM
functional_test.go:1969: (dbg) Run:  out/minikube-darwin-arm64 -p functional-900000 ssh "sudo cat /etc/ssl/certs/51391683.0"
functional_test.go:1969: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-900000 ssh "sudo cat /etc/ssl/certs/51391683.0": exit status 83 (44.774208ms)

                                                
                                                
-- stdout --
	* The control-plane node functional-900000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-900000"

                                                
                                                
-- /stdout --
functional_test.go:1971: failed to check existence of "/etc/ssl/certs/51391683.0" inside minikube. args "out/minikube-darwin-arm64 -p functional-900000 ssh \"sudo cat /etc/ssl/certs/51391683.0\"": exit status 83
functional_test.go:1977: failed verify pem file. minikube_test.pem -> /etc/ssl/certs/51391683.0 mismatch (-want +got):
(
	"""
- 	-----BEGIN CERTIFICATE-----
- 	MIIDsDCCApgCCQD5n0OIsOYIjDANBgkqhkiG9w0BAQsFADCBmTELMAkGA1UEBhMC
- 	VVMxEzARBgNVBAgMCkNhbGlmb3JuaWExFjAUBgNVBAcMDVNhbiBGcmFuY2lzY28x
- 	ETAPBgNVBAoMCG1pbmlrdWJlMRYwFAYDVQQLDA1QYXJ0eSBQYXJyb3RzMREwDwYD
- 	VQQDDAhtaW5pa3ViZTEfMB0GCSqGSIb3DQEJARYQbWluaWt1YmVAY25jZi5pbzAe
- 	Fw0yMDAzMDQyMTU2MjZaFw0yMTAzMDQyMTU2MjZaMIGZMQswCQYDVQQGEwJVUzET
- 	MBEGA1UECAwKQ2FsaWZvcm5pYTEWMBQGA1UEBwwNU2FuIEZyYW5jaXNjbzERMA8G
- 	A1UECgwIbWluaWt1YmUxFjAUBgNVBAsMDVBhcnR5IFBhcnJvdHMxETAPBgNVBAMM
- 	CG1pbmlrdWJlMR8wHQYJKoZIhvcNAQkBFhBtaW5pa3ViZUBjbmNmLmlvMIIBIjAN
- 	BgkqhkiG9w0BAQEFAAOCAQ8AMIIBCgKCAQEA/qVMQ/234ul5yWI1yaHvV4pZ5Ffy
- 	M0bSMjzZUwlsvzerXzF3WrdpeZs5GzBNBWL/Db9KziGHCtfX9j5okJqPvB2lxdL5
- 	d5hFIYSORSemYLX2kdlnlykY5fzmFLKIUO9xXs0YNF4JUMEBgGK6n/BdLvXDUULZ
- 	26QOKs6+iH7TAL4RtozxQ8YXKQArdmpeAvxy2PSZGvVk1htKtyuKQsiFqH3oRleK
- 	3mljXfC5LsoIJHqd/8lAsckH87+IfwYnJ1CNJM2gueaCf+HmudVrvXfHaszh1Wh1
- 	9HKPE95Azi6CKoBGlRGFxt8UR72YIcTjC/lYxzbHeCpU7RCiXfsC0iMTlQIDAQAB
- 	MA0GCSqGSIb3DQEBCwUAA4IBAQBhsKnghyBki4NOnK5gHm7ow+7S+xvkjJhXBQ6i
- 	/xQD4/GCZ1tH5iFHXmo+bt4jB9hvKLyN5M5a8TlDwwqTLIoPDQJh37UpSCwbY/6z
- 	nE2aP3N2ue1/DeY60tgAh1c1uJDMeTiFbalJqSkneaHpNfvEQhUORFoN4yQSVEYg
- 	+T9mzTAWQ55TeBgbRevmA25nXHdPAae1MvJWDeG+aJfhq1I2hCwaitJ3iSwgn2ew
- 	637It/aBkMLhsCKTHxlXDGUX401ddbc0ZiC308cyMbis3iBeh4RBjkFxP8eIWFmK
- 	sos/dyqdua742L1cOKYFbLJfjA1VyxJQUxQvWKkbaq0xi7ao
- 	-----END CERTIFICATE-----
+ 	* The control-plane node functional-900000 host is not running: state=Stopped
+ 	  To start a cluster, run: "minikube start -p functional-900000"
	"""
)
functional_test.go:1995: Checking for existence of /etc/ssl/certs/154812.pem within VM
functional_test.go:1996: (dbg) Run:  out/minikube-darwin-arm64 -p functional-900000 ssh "sudo cat /etc/ssl/certs/154812.pem"
functional_test.go:1996: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-900000 ssh "sudo cat /etc/ssl/certs/154812.pem": exit status 83 (42.637542ms)

                                                
                                                
-- stdout --
	* The control-plane node functional-900000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-900000"

                                                
                                                
-- /stdout --
functional_test.go:1998: failed to check existence of "/etc/ssl/certs/154812.pem" inside minikube. args "out/minikube-darwin-arm64 -p functional-900000 ssh \"sudo cat /etc/ssl/certs/154812.pem\"": exit status 83
functional_test.go:2004: failed verify pem file. minikube_test2.pem -> /etc/ssl/certs/154812.pem mismatch (-want +got):
(
	"""
- 	-----BEGIN CERTIFICATE-----
- 	MIIEwDCCAqgCCQCUeXrVemI4eTANBgkqhkiG9w0BAQsFADAiMQswCQYDVQQGEwJV
- 	UzETMBEGA1UECAwKQ2FsaWZvcm5pYTAeFw0yMTA3MjEyMDM4MDdaFw0yMTA4MjAy
- 	MDM4MDdaMCIxCzAJBgNVBAYTAlVTMRMwEQYDVQQIDApDYWxpZm9ybmlhMIICIjAN
- 	BgkqhkiG9w0BAQEFAAOCAg8AMIICCgKCAgEAu1+sCiosrYIy83a+KtPdoGsKun+O
- 	jKhETWQrum5FGxqKyQzhHN8q6iZBI94m1sZb2xIJDcKaAsHpKf1z/5pkrWQW4JeL
- 	riqJ5U0kENy6mWGoFzPLQHkKMXSYUEkRjrgJphh5zLkWDzx6uwxHIrs5yaIwn71Z
- 	enYezHL8NyMd10+up1lNvH+xGqnSJ3ge7Rj+r5XbA3avvaJ9AVdiJMU5ArMw4Snu
- 	dLDSUueWFedmXoQud083EvDbMY61blNKUR6BKBJLPopH+9NnUP7FRU25lvTsA8qT
- 	zz/KertMFhCfwvK4y7a7+GbIPD3fItpm23GORuViEGgeRMH7GGSNVfm4VtK5mI4c
- 	XK9BrCE+FXowHOWU3MTH1qsvTigd3JZc8BKTaePJGMIDT1HIAaoK7SOoZOLka7bi
- 	IiLaojrIWY6tE5IIl46cQWLg09G+kjKVfEEvEHNe0t22I9etGeUUEwlcMITHmEdE
- 	WrXytC8dSNFObSG5B2MX2Ovm+yNblcK7TI7lW/tkbxtKs56J1mNmi4LXXBM8FdF8
- 	w9MpJc+ySVj2+f+eBE08amKjC9VVbBzNSw7MRaI9fFY5AAifJ8g55F3/KCNq5aTd
- 	rBADtAa5kQkTVjfMBhG++0Ow4f55hm73oJAy/qxb09OY7Vk9ky/K3l8GfWv8ozIF
- 	w+Oq6vdsspvtVJ8CAwEAATANBgkqhkiG9w0BAQsFAAOCAgEAGKVxsf13kYGaQJ+J
- 	6eyRZXlV5Bp+9EGtMPGsuVv2HJa4oxMBn7Xc/bUhjY9Is/ZwfOpPUPO/nQtSSPmO
- 	aozQj/27p8HDTW201fwLNiZMcppBdJvIQdDzCh6e2ikg3lqsw2BoLX1vbgc9HPml
- 	P8QCHEz2lricGdTuMRtBgH5x/ZkZGLbADQBeyoPTsPaQceRt5hPYXWifqiHhcJoL
- 	2T+XgbaHJ4lEhCU0IXJG0vlLuAyxQzO3gMeHK8BlLt/h/JCiDndo63a4XCkenmY8
- 	8/6Y9Lgh+O3954YgwdXBIS33CzhY7c+tfpag1hwpDHro/zsyLwdN2JxZqWymfg8T
- 	RyIeJ5VpY+CGm1fVTx84twQbiM241eZDYaW6Ap5/ZuxPbzY++KDMZHMuJYURorAU
- 	JE/SE6WltXpKTvtzyPNKt8fgPQmFzlBosDZCf/5EiyVqbLzideF1l+Rd//5gRBg0
- 	B63fOlCdxGtDeX1gKzekV4sXQIsWKW1BSCEInJt5lOS8Ex4JTXy8crwKo7hv0zPc
- 	sOjAXbtDYlPf/jPFGKzkgFACB87Bx4ZUZMp/ShOzjMt20MLGLJvCGCQIHetUz+GG
- 	/LTwSNhgWCheYkbDwXhCmqbN249xE3fNHC6zQ/IMJ30v/UWN4RldGfFzmoVBRUUX
- 	eQ7g5kXER7H/Lh/2V9FyaRhPpZM=
- 	-----END CERTIFICATE-----
+ 	* The control-plane node functional-900000 host is not running: state=Stopped
+ 	  To start a cluster, run: "minikube start -p functional-900000"
	"""
)
functional_test.go:1995: Checking for existence of /usr/share/ca-certificates/154812.pem within VM
functional_test.go:1996: (dbg) Run:  out/minikube-darwin-arm64 -p functional-900000 ssh "sudo cat /usr/share/ca-certificates/154812.pem"
functional_test.go:1996: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-900000 ssh "sudo cat /usr/share/ca-certificates/154812.pem": exit status 83 (42.695709ms)

                                                
                                                
-- stdout --
	* The control-plane node functional-900000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-900000"

                                                
                                                
-- /stdout --
functional_test.go:1998: failed to check existence of "/usr/share/ca-certificates/154812.pem" inside minikube. args "out/minikube-darwin-arm64 -p functional-900000 ssh \"sudo cat /usr/share/ca-certificates/154812.pem\"": exit status 83
functional_test.go:2004: failed verify pem file. minikube_test2.pem -> /usr/share/ca-certificates/154812.pem mismatch (-want +got):
(
	"""
- 	-----BEGIN CERTIFICATE-----
- 	MIIEwDCCAqgCCQCUeXrVemI4eTANBgkqhkiG9w0BAQsFADAiMQswCQYDVQQGEwJV
- 	UzETMBEGA1UECAwKQ2FsaWZvcm5pYTAeFw0yMTA3MjEyMDM4MDdaFw0yMTA4MjAy
- 	MDM4MDdaMCIxCzAJBgNVBAYTAlVTMRMwEQYDVQQIDApDYWxpZm9ybmlhMIICIjAN
- 	BgkqhkiG9w0BAQEFAAOCAg8AMIICCgKCAgEAu1+sCiosrYIy83a+KtPdoGsKun+O
- 	jKhETWQrum5FGxqKyQzhHN8q6iZBI94m1sZb2xIJDcKaAsHpKf1z/5pkrWQW4JeL
- 	riqJ5U0kENy6mWGoFzPLQHkKMXSYUEkRjrgJphh5zLkWDzx6uwxHIrs5yaIwn71Z
- 	enYezHL8NyMd10+up1lNvH+xGqnSJ3ge7Rj+r5XbA3avvaJ9AVdiJMU5ArMw4Snu
- 	dLDSUueWFedmXoQud083EvDbMY61blNKUR6BKBJLPopH+9NnUP7FRU25lvTsA8qT
- 	zz/KertMFhCfwvK4y7a7+GbIPD3fItpm23GORuViEGgeRMH7GGSNVfm4VtK5mI4c
- 	XK9BrCE+FXowHOWU3MTH1qsvTigd3JZc8BKTaePJGMIDT1HIAaoK7SOoZOLka7bi
- 	IiLaojrIWY6tE5IIl46cQWLg09G+kjKVfEEvEHNe0t22I9etGeUUEwlcMITHmEdE
- 	WrXytC8dSNFObSG5B2MX2Ovm+yNblcK7TI7lW/tkbxtKs56J1mNmi4LXXBM8FdF8
- 	w9MpJc+ySVj2+f+eBE08amKjC9VVbBzNSw7MRaI9fFY5AAifJ8g55F3/KCNq5aTd
- 	rBADtAa5kQkTVjfMBhG++0Ow4f55hm73oJAy/qxb09OY7Vk9ky/K3l8GfWv8ozIF
- 	w+Oq6vdsspvtVJ8CAwEAATANBgkqhkiG9w0BAQsFAAOCAgEAGKVxsf13kYGaQJ+J
- 	6eyRZXlV5Bp+9EGtMPGsuVv2HJa4oxMBn7Xc/bUhjY9Is/ZwfOpPUPO/nQtSSPmO
- 	aozQj/27p8HDTW201fwLNiZMcppBdJvIQdDzCh6e2ikg3lqsw2BoLX1vbgc9HPml
- 	P8QCHEz2lricGdTuMRtBgH5x/ZkZGLbADQBeyoPTsPaQceRt5hPYXWifqiHhcJoL
- 	2T+XgbaHJ4lEhCU0IXJG0vlLuAyxQzO3gMeHK8BlLt/h/JCiDndo63a4XCkenmY8
- 	8/6Y9Lgh+O3954YgwdXBIS33CzhY7c+tfpag1hwpDHro/zsyLwdN2JxZqWymfg8T
- 	RyIeJ5VpY+CGm1fVTx84twQbiM241eZDYaW6Ap5/ZuxPbzY++KDMZHMuJYURorAU
- 	JE/SE6WltXpKTvtzyPNKt8fgPQmFzlBosDZCf/5EiyVqbLzideF1l+Rd//5gRBg0
- 	B63fOlCdxGtDeX1gKzekV4sXQIsWKW1BSCEInJt5lOS8Ex4JTXy8crwKo7hv0zPc
- 	sOjAXbtDYlPf/jPFGKzkgFACB87Bx4ZUZMp/ShOzjMt20MLGLJvCGCQIHetUz+GG
- 	/LTwSNhgWCheYkbDwXhCmqbN249xE3fNHC6zQ/IMJ30v/UWN4RldGfFzmoVBRUUX
- 	eQ7g5kXER7H/Lh/2V9FyaRhPpZM=
- 	-----END CERTIFICATE-----
+ 	* The control-plane node functional-900000 host is not running: state=Stopped
+ 	  To start a cluster, run: "minikube start -p functional-900000"
	"""
)
functional_test.go:1995: Checking for existence of /etc/ssl/certs/3ec20f2e.0 within VM
functional_test.go:1996: (dbg) Run:  out/minikube-darwin-arm64 -p functional-900000 ssh "sudo cat /etc/ssl/certs/3ec20f2e.0"
functional_test.go:1996: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-900000 ssh "sudo cat /etc/ssl/certs/3ec20f2e.0": exit status 83 (44.785208ms)

                                                
                                                
-- stdout --
	* The control-plane node functional-900000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-900000"

                                                
                                                
-- /stdout --
functional_test.go:1998: failed to check existence of "/etc/ssl/certs/3ec20f2e.0" inside minikube. args "out/minikube-darwin-arm64 -p functional-900000 ssh \"sudo cat /etc/ssl/certs/3ec20f2e.0\"": exit status 83
functional_test.go:2004: failed verify pem file. minikube_test2.pem -> /etc/ssl/certs/3ec20f2e.0 mismatch (-want +got):
(
	"""
- 	-----BEGIN CERTIFICATE-----
- 	MIIEwDCCAqgCCQCUeXrVemI4eTANBgkqhkiG9w0BAQsFADAiMQswCQYDVQQGEwJV
- 	UzETMBEGA1UECAwKQ2FsaWZvcm5pYTAeFw0yMTA3MjEyMDM4MDdaFw0yMTA4MjAy
- 	MDM4MDdaMCIxCzAJBgNVBAYTAlVTMRMwEQYDVQQIDApDYWxpZm9ybmlhMIICIjAN
- 	BgkqhkiG9w0BAQEFAAOCAg8AMIICCgKCAgEAu1+sCiosrYIy83a+KtPdoGsKun+O
- 	jKhETWQrum5FGxqKyQzhHN8q6iZBI94m1sZb2xIJDcKaAsHpKf1z/5pkrWQW4JeL
- 	riqJ5U0kENy6mWGoFzPLQHkKMXSYUEkRjrgJphh5zLkWDzx6uwxHIrs5yaIwn71Z
- 	enYezHL8NyMd10+up1lNvH+xGqnSJ3ge7Rj+r5XbA3avvaJ9AVdiJMU5ArMw4Snu
- 	dLDSUueWFedmXoQud083EvDbMY61blNKUR6BKBJLPopH+9NnUP7FRU25lvTsA8qT
- 	zz/KertMFhCfwvK4y7a7+GbIPD3fItpm23GORuViEGgeRMH7GGSNVfm4VtK5mI4c
- 	XK9BrCE+FXowHOWU3MTH1qsvTigd3JZc8BKTaePJGMIDT1HIAaoK7SOoZOLka7bi
- 	IiLaojrIWY6tE5IIl46cQWLg09G+kjKVfEEvEHNe0t22I9etGeUUEwlcMITHmEdE
- 	WrXytC8dSNFObSG5B2MX2Ovm+yNblcK7TI7lW/tkbxtKs56J1mNmi4LXXBM8FdF8
- 	w9MpJc+ySVj2+f+eBE08amKjC9VVbBzNSw7MRaI9fFY5AAifJ8g55F3/KCNq5aTd
- 	rBADtAa5kQkTVjfMBhG++0Ow4f55hm73oJAy/qxb09OY7Vk9ky/K3l8GfWv8ozIF
- 	w+Oq6vdsspvtVJ8CAwEAATANBgkqhkiG9w0BAQsFAAOCAgEAGKVxsf13kYGaQJ+J
- 	6eyRZXlV5Bp+9EGtMPGsuVv2HJa4oxMBn7Xc/bUhjY9Is/ZwfOpPUPO/nQtSSPmO
- 	aozQj/27p8HDTW201fwLNiZMcppBdJvIQdDzCh6e2ikg3lqsw2BoLX1vbgc9HPml
- 	P8QCHEz2lricGdTuMRtBgH5x/ZkZGLbADQBeyoPTsPaQceRt5hPYXWifqiHhcJoL
- 	2T+XgbaHJ4lEhCU0IXJG0vlLuAyxQzO3gMeHK8BlLt/h/JCiDndo63a4XCkenmY8
- 	8/6Y9Lgh+O3954YgwdXBIS33CzhY7c+tfpag1hwpDHro/zsyLwdN2JxZqWymfg8T
- 	RyIeJ5VpY+CGm1fVTx84twQbiM241eZDYaW6Ap5/ZuxPbzY++KDMZHMuJYURorAU
- 	JE/SE6WltXpKTvtzyPNKt8fgPQmFzlBosDZCf/5EiyVqbLzideF1l+Rd//5gRBg0
- 	B63fOlCdxGtDeX1gKzekV4sXQIsWKW1BSCEInJt5lOS8Ex4JTXy8crwKo7hv0zPc
- 	sOjAXbtDYlPf/jPFGKzkgFACB87Bx4ZUZMp/ShOzjMt20MLGLJvCGCQIHetUz+GG
- 	/LTwSNhgWCheYkbDwXhCmqbN249xE3fNHC6zQ/IMJ30v/UWN4RldGfFzmoVBRUUX
- 	eQ7g5kXER7H/Lh/2V9FyaRhPpZM=
- 	-----END CERTIFICATE-----
+ 	* The control-plane node functional-900000 host is not running: state=Stopped
+ 	  To start a cluster, run: "minikube start -p functional-900000"
	"""
)
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p functional-900000 -n functional-900000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p functional-900000 -n functional-900000: exit status 7 (32.250292ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "functional-900000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestFunctional/parallel/CertSync (0.30s)

                                                
                                    
TestFunctional/parallel/NodeLabels (0.06s)

                                                
                                                
=== RUN   TestFunctional/parallel/NodeLabels
=== PAUSE TestFunctional/parallel/NodeLabels

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/NodeLabels
functional_test.go:218: (dbg) Run:  kubectl --context functional-900000 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'"
functional_test.go:218: (dbg) Non-zero exit: kubectl --context functional-900000 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'": exit status 1 (26.054041ms)

                                                
                                                
** stderr ** 
	Error in configuration: context was not found for specified context: functional-900000

                                                
                                                
** /stderr **
functional_test.go:220: failed to 'kubectl get nodes' with args "kubectl --context functional-900000 get nodes --output=go-template \"--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'\"": exit status 1
functional_test.go:226: expected to have label "minikube.k8s.io/commit" in node labels but got : 
** stderr ** 
	Error in configuration: context was not found for specified context: functional-900000

                                                
                                                
** /stderr **
functional_test.go:226: expected to have label "minikube.k8s.io/version" in node labels but got : 
** stderr ** 
	Error in configuration: context was not found for specified context: functional-900000

                                                
                                                
** /stderr **
functional_test.go:226: expected to have label "minikube.k8s.io/updated_at" in node labels but got : 
** stderr ** 
	Error in configuration: context was not found for specified context: functional-900000

                                                
                                                
** /stderr **
functional_test.go:226: expected to have label "minikube.k8s.io/name" in node labels but got : 
** stderr ** 
	Error in configuration: context was not found for specified context: functional-900000

                                                
                                                
** /stderr **
functional_test.go:226: expected to have label "minikube.k8s.io/primary" in node labels but got : 
** stderr ** 
	Error in configuration: context was not found for specified context: functional-900000

                                                
                                                
** /stderr **
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p functional-900000 -n functional-900000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p functional-900000 -n functional-900000: exit status 7 (31.785167ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "functional-900000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestFunctional/parallel/NodeLabels (0.06s)

                                                
                                    
TestFunctional/parallel/NonActiveRuntimeDisabled (0.04s)

                                                
                                                
=== RUN   TestFunctional/parallel/NonActiveRuntimeDisabled
=== PAUSE TestFunctional/parallel/NonActiveRuntimeDisabled

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/NonActiveRuntimeDisabled
functional_test.go:2023: (dbg) Run:  out/minikube-darwin-arm64 -p functional-900000 ssh "sudo systemctl is-active crio"
functional_test.go:2023: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-900000 ssh "sudo systemctl is-active crio": exit status 83 (42.395167ms)

                                                
                                                
-- stdout --
	* The control-plane node functional-900000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-900000"

                                                
                                                
-- /stdout --
functional_test.go:2026: output of 
-- stdout --
	* The control-plane node functional-900000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-900000"

                                                
                                                
-- /stdout --: exit status 83
functional_test.go:2029: For runtime "docker": expected "crio" to be inactive but got "* The control-plane node functional-900000 host is not running: state=Stopped\n  To start a cluster, run: \"minikube start -p functional-900000\"\n" 
--- FAIL: TestFunctional/parallel/NonActiveRuntimeDisabled (0.04s)

                                                
                                    
TestFunctional/parallel/Version/components (0.04s)

                                                
                                                
=== RUN   TestFunctional/parallel/Version/components
=== PAUSE TestFunctional/parallel/Version/components

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/Version/components
functional_test.go:2266: (dbg) Run:  out/minikube-darwin-arm64 -p functional-900000 version -o=json --components
functional_test.go:2266: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-900000 version -o=json --components: exit status 83 (43.87575ms)

                                                
                                                
-- stdout --
	* The control-plane node functional-900000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-900000"

                                                
                                                
-- /stdout --
functional_test.go:2268: error version: exit status 83
functional_test.go:2273: expected to see "buildctl" in the minikube version --components but got:
* The control-plane node functional-900000 host is not running: state=Stopped
To start a cluster, run: "minikube start -p functional-900000"
functional_test.go:2273: expected to see "commit" in the minikube version --components but got:
* The control-plane node functional-900000 host is not running: state=Stopped
To start a cluster, run: "minikube start -p functional-900000"
functional_test.go:2273: expected to see "containerd" in the minikube version --components but got:
* The control-plane node functional-900000 host is not running: state=Stopped
To start a cluster, run: "minikube start -p functional-900000"
functional_test.go:2273: expected to see "crictl" in the minikube version --components but got:
* The control-plane node functional-900000 host is not running: state=Stopped
To start a cluster, run: "minikube start -p functional-900000"
functional_test.go:2273: expected to see "crio" in the minikube version --components but got:
* The control-plane node functional-900000 host is not running: state=Stopped
To start a cluster, run: "minikube start -p functional-900000"
functional_test.go:2273: expected to see "ctr" in the minikube version --components but got:
* The control-plane node functional-900000 host is not running: state=Stopped
To start a cluster, run: "minikube start -p functional-900000"
functional_test.go:2273: expected to see "docker" in the minikube version --components but got:
* The control-plane node functional-900000 host is not running: state=Stopped
To start a cluster, run: "minikube start -p functional-900000"
functional_test.go:2273: expected to see "minikubeVersion" in the minikube version --components but got:
* The control-plane node functional-900000 host is not running: state=Stopped
To start a cluster, run: "minikube start -p functional-900000"
functional_test.go:2273: expected to see "podman" in the minikube version --components but got:
* The control-plane node functional-900000 host is not running: state=Stopped
To start a cluster, run: "minikube start -p functional-900000"
functional_test.go:2273: expected to see "crun" in the minikube version --components but got:
* The control-plane node functional-900000 host is not running: state=Stopped
To start a cluster, run: "minikube start -p functional-900000"
--- FAIL: TestFunctional/parallel/Version/components (0.04s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageListShort (0.04s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListShort
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListShort

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListShort
functional_test.go:260: (dbg) Run:  out/minikube-darwin-arm64 -p functional-900000 image ls --format short --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-darwin-arm64 -p functional-900000 image ls --format short --alsologtostderr:

                                                
                                                
functional_test.go:268: (dbg) Stderr: out/minikube-darwin-arm64 -p functional-900000 image ls --format short --alsologtostderr:
I0318 04:22:01.007864   16206 out.go:291] Setting OutFile to fd 1 ...
I0318 04:22:01.008061   16206 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0318 04:22:01.008064   16206 out.go:304] Setting ErrFile to fd 2...
I0318 04:22:01.008066   16206 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0318 04:22:01.008194   16206 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18429-15072/.minikube/bin
I0318 04:22:01.008630   16206 config.go:182] Loaded profile config "functional-900000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.4
I0318 04:22:01.008697   16206 config.go:182] Loaded profile config "functional-900000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.4
functional_test.go:274: expected registry.k8s.io/pause to be listed with minikube but the image is not there
--- FAIL: TestFunctional/parallel/ImageCommands/ImageListShort (0.04s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageListTable (0.04s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListTable
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListTable

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListTable
functional_test.go:260: (dbg) Run:  out/minikube-darwin-arm64 -p functional-900000 image ls --format table --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-darwin-arm64 -p functional-900000 image ls --format table --alsologtostderr:
|-------|-----|----------|------|
| Image | Tag | Image ID | Size |
|-------|-----|----------|------|
|-------|-----|----------|------|
functional_test.go:268: (dbg) Stderr: out/minikube-darwin-arm64 -p functional-900000 image ls --format table --alsologtostderr:
I0318 04:22:01.241219   16218 out.go:291] Setting OutFile to fd 1 ...
I0318 04:22:01.241380   16218 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0318 04:22:01.241383   16218 out.go:304] Setting ErrFile to fd 2...
I0318 04:22:01.241385   16218 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0318 04:22:01.241519   16218 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18429-15072/.minikube/bin
I0318 04:22:01.242018   16218 config.go:182] Loaded profile config "functional-900000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.4
I0318 04:22:01.242079   16218 config.go:182] Loaded profile config "functional-900000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.4
functional_test.go:274: expected | registry.k8s.io/pause to be listed with minikube but the image is not there
--- FAIL: TestFunctional/parallel/ImageCommands/ImageListTable (0.04s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageListJson (0.04s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListJson
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListJson

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListJson
functional_test.go:260: (dbg) Run:  out/minikube-darwin-arm64 -p functional-900000 image ls --format json --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-darwin-arm64 -p functional-900000 image ls --format json --alsologtostderr:
[]
functional_test.go:268: (dbg) Stderr: out/minikube-darwin-arm64 -p functional-900000 image ls --format json --alsologtostderr:
I0318 04:22:01.203352   16216 out.go:291] Setting OutFile to fd 1 ...
I0318 04:22:01.203479   16216 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0318 04:22:01.203482   16216 out.go:304] Setting ErrFile to fd 2...
I0318 04:22:01.203484   16216 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0318 04:22:01.203613   16216 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18429-15072/.minikube/bin
I0318 04:22:01.204039   16216 config.go:182] Loaded profile config "functional-900000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.4
I0318 04:22:01.204101   16216 config.go:182] Loaded profile config "functional-900000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.4
functional_test.go:274: expected ["registry.k8s.io/pause to be listed with minikube but the image is not there
--- FAIL: TestFunctional/parallel/ImageCommands/ImageListJson (0.04s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageListYaml (0.04s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListYaml
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListYaml

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListYaml
functional_test.go:260: (dbg) Run:  out/minikube-darwin-arm64 -p functional-900000 image ls --format yaml --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-darwin-arm64 -p functional-900000 image ls --format yaml --alsologtostderr:
[]

                                                
                                                
functional_test.go:268: (dbg) Stderr: out/minikube-darwin-arm64 -p functional-900000 image ls --format yaml --alsologtostderr:
I0318 04:22:01.044241   16208 out.go:291] Setting OutFile to fd 1 ...
I0318 04:22:01.044486   16208 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0318 04:22:01.044492   16208 out.go:304] Setting ErrFile to fd 2...
I0318 04:22:01.044495   16208 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0318 04:22:01.044677   16208 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18429-15072/.minikube/bin
I0318 04:22:01.045323   16208 config.go:182] Loaded profile config "functional-900000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.4
I0318 04:22:01.045386   16208 config.go:182] Loaded profile config "functional-900000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.4
functional_test.go:274: expected - registry.k8s.io/pause to be listed with minikube but the image is not there
--- FAIL: TestFunctional/parallel/ImageCommands/ImageListYaml (0.04s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageBuild (0.12s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageBuild
=== PAUSE TestFunctional/parallel/ImageCommands/ImageBuild

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageBuild
functional_test.go:307: (dbg) Run:  out/minikube-darwin-arm64 -p functional-900000 ssh pgrep buildkitd
functional_test.go:307: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-900000 ssh pgrep buildkitd: exit status 83 (43.740417ms)

                                                
                                                
-- stdout --
	* The control-plane node functional-900000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-900000"

                                                
                                                
-- /stdout --
functional_test.go:314: (dbg) Run:  out/minikube-darwin-arm64 -p functional-900000 image build -t localhost/my-image:functional-900000 testdata/build --alsologtostderr
functional_test.go:322: (dbg) Stderr: out/minikube-darwin-arm64 -p functional-900000 image build -t localhost/my-image:functional-900000 testdata/build --alsologtostderr:
I0318 04:22:01.126121   16212 out.go:291] Setting OutFile to fd 1 ...
I0318 04:22:01.127113   16212 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0318 04:22:01.127118   16212 out.go:304] Setting ErrFile to fd 2...
I0318 04:22:01.127121   16212 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0318 04:22:01.127267   16212 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18429-15072/.minikube/bin
I0318 04:22:01.127657   16212 config.go:182] Loaded profile config "functional-900000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.4
I0318 04:22:01.128088   16212 config.go:182] Loaded profile config "functional-900000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.4
I0318 04:22:01.128310   16212 build_images.go:133] succeeded building to: 
I0318 04:22:01.128314   16212 build_images.go:134] failed building to: 
functional_test.go:447: (dbg) Run:  out/minikube-darwin-arm64 -p functional-900000 image ls
functional_test.go:442: expected "localhost/my-image:functional-900000" to be loaded into minikube but the image is not there
--- FAIL: TestFunctional/parallel/ImageCommands/ImageBuild (0.12s)

                                                
                                    
TestFunctional/parallel/DockerEnv/bash (0.05s)

                                                
                                                
=== RUN   TestFunctional/parallel/DockerEnv/bash
functional_test.go:495: (dbg) Run:  /bin/bash -c "eval $(out/minikube-darwin-arm64 -p functional-900000 docker-env) && out/minikube-darwin-arm64 status -p functional-900000"
functional_test.go:495: (dbg) Non-zero exit: /bin/bash -c "eval $(out/minikube-darwin-arm64 -p functional-900000 docker-env) && out/minikube-darwin-arm64 status -p functional-900000": exit status 1 (47.476583ms)
functional_test.go:501: failed to do status after eval-ing docker-env. error: exit status 1
--- FAIL: TestFunctional/parallel/DockerEnv/bash (0.05s)

                                                
                                    
TestFunctional/parallel/UpdateContextCmd/no_changes (0.04s)

                                                
                                                
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_changes
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_changes

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_changes
functional_test.go:2115: (dbg) Run:  out/minikube-darwin-arm64 -p functional-900000 update-context --alsologtostderr -v=2
functional_test.go:2115: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-900000 update-context --alsologtostderr -v=2: exit status 83 (43.787542ms)

                                                
                                                
-- stdout --
	* The control-plane node functional-900000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-900000"

                                                
                                                
-- /stdout --
** stderr ** 
	I0318 04:22:00.875389   16200 out.go:291] Setting OutFile to fd 1 ...
	I0318 04:22:00.875979   16200 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0318 04:22:00.875983   16200 out.go:304] Setting ErrFile to fd 2...
	I0318 04:22:00.875986   16200 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0318 04:22:00.876136   16200 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18429-15072/.minikube/bin
	I0318 04:22:00.876364   16200 mustload.go:65] Loading cluster: functional-900000
	I0318 04:22:00.876581   16200 config.go:182] Loaded profile config "functional-900000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I0318 04:22:00.880003   16200 out.go:177] * The control-plane node functional-900000 host is not running: state=Stopped
	I0318 04:22:00.884008   16200 out.go:177]   To start a cluster, run: "minikube start -p functional-900000"

                                                
                                                
** /stderr **
functional_test.go:2117: failed to run minikube update-context: args "out/minikube-darwin-arm64 -p functional-900000 update-context --alsologtostderr -v=2": exit status 83
functional_test.go:2122: update-context: got="* The control-plane node functional-900000 host is not running: state=Stopped\n  To start a cluster, run: \"minikube start -p functional-900000\"\n", want=*"No changes"*
--- FAIL: TestFunctional/parallel/UpdateContextCmd/no_changes (0.04s)

                                                
                                    
TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.04s)

                                                
                                                
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
functional_test.go:2115: (dbg) Run:  out/minikube-darwin-arm64 -p functional-900000 update-context --alsologtostderr -v=2
functional_test.go:2115: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-900000 update-context --alsologtostderr -v=2: exit status 83 (43.49175ms)

                                                
                                                
-- stdout --
	* The control-plane node functional-900000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-900000"

                                                
                                                
-- /stdout --
** stderr ** 
	I0318 04:22:00.964652   16204 out.go:291] Setting OutFile to fd 1 ...
	I0318 04:22:00.964802   16204 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0318 04:22:00.964806   16204 out.go:304] Setting ErrFile to fd 2...
	I0318 04:22:00.964811   16204 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0318 04:22:00.964951   16204 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18429-15072/.minikube/bin
	I0318 04:22:00.965168   16204 mustload.go:65] Loading cluster: functional-900000
	I0318 04:22:00.965369   16204 config.go:182] Loaded profile config "functional-900000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I0318 04:22:00.968898   16204 out.go:177] * The control-plane node functional-900000 host is not running: state=Stopped
	I0318 04:22:00.972941   16204 out.go:177]   To start a cluster, run: "minikube start -p functional-900000"

                                                
                                                
** /stderr **
functional_test.go:2117: failed to run minikube update-context: args "out/minikube-darwin-arm64 -p functional-900000 update-context --alsologtostderr -v=2": exit status 83
functional_test.go:2122: update-context: got="* The control-plane node functional-900000 host is not running: state=Stopped\n  To start a cluster, run: \"minikube start -p functional-900000\"\n", want=*"context has been updated"*
--- FAIL: TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.04s)

                                                
                                    
TestFunctional/parallel/UpdateContextCmd/no_clusters (0.05s)

                                                
                                                
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_clusters
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_clusters

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_clusters
functional_test.go:2115: (dbg) Run:  out/minikube-darwin-arm64 -p functional-900000 update-context --alsologtostderr -v=2
functional_test.go:2115: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-900000 update-context --alsologtostderr -v=2: exit status 83 (44.781125ms)

                                                
                                                
-- stdout --
	* The control-plane node functional-900000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-900000"

                                                
                                                
-- /stdout --
** stderr ** 
	I0318 04:22:00.919482   16202 out.go:291] Setting OutFile to fd 1 ...
	I0318 04:22:00.919641   16202 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0318 04:22:00.919644   16202 out.go:304] Setting ErrFile to fd 2...
	I0318 04:22:00.919646   16202 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0318 04:22:00.919761   16202 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18429-15072/.minikube/bin
	I0318 04:22:00.919981   16202 mustload.go:65] Loading cluster: functional-900000
	I0318 04:22:00.920174   16202 config.go:182] Loaded profile config "functional-900000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I0318 04:22:00.925022   16202 out.go:177] * The control-plane node functional-900000 host is not running: state=Stopped
	I0318 04:22:00.929006   16202 out.go:177]   To start a cluster, run: "minikube start -p functional-900000"

                                                
                                                
** /stderr **
functional_test.go:2117: failed to run minikube update-context: args "out/minikube-darwin-arm64 -p functional-900000 update-context --alsologtostderr -v=2": exit status 83
functional_test.go:2122: update-context: got="* The control-plane node functional-900000 host is not running: state=Stopped\n  To start a cluster, run: \"minikube start -p functional-900000\"\n", want=*"context has been updated"*
--- FAIL: TestFunctional/parallel/UpdateContextCmd/no_clusters (0.05s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/DeployApp (0.03s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/DeployApp
functional_test.go:1433: (dbg) Run:  kubectl --context functional-900000 create deployment hello-node --image=registry.k8s.io/echoserver-arm:1.8
functional_test.go:1433: (dbg) Non-zero exit: kubectl --context functional-900000 create deployment hello-node --image=registry.k8s.io/echoserver-arm:1.8: exit status 1 (26.381333ms)

                                                
                                                
** stderr ** 
	error: context "functional-900000" does not exist

                                                
                                                
** /stderr **
functional_test.go:1439: failed to create hello-node deployment with this command "kubectl --context functional-900000 create deployment hello-node --image=registry.k8s.io/echoserver-arm:1.8": exit status 1.
--- FAIL: TestFunctional/parallel/ServiceCmd/DeployApp (0.03s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/List (0.05s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/List
functional_test.go:1455: (dbg) Run:  out/minikube-darwin-arm64 -p functional-900000 service list
functional_test.go:1455: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-900000 service list: exit status 83 (45.367792ms)

                                                
                                                
-- stdout --
	* The control-plane node functional-900000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-900000"

                                                
                                                
-- /stdout --
functional_test.go:1457: failed to do service list. args "out/minikube-darwin-arm64 -p functional-900000 service list" : exit status 83
functional_test.go:1460: expected 'service list' to contain *hello-node* but got -"* The control-plane node functional-900000 host is not running: state=Stopped\n  To start a cluster, run: \"minikube start -p functional-900000\"\n"-
--- FAIL: TestFunctional/parallel/ServiceCmd/List (0.05s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/JSONOutput (0.04s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/JSONOutput
functional_test.go:1485: (dbg) Run:  out/minikube-darwin-arm64 -p functional-900000 service list -o json
functional_test.go:1485: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-900000 service list -o json: exit status 83 (44.849292ms)

                                                
                                                
-- stdout --
	* The control-plane node functional-900000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-900000"

                                                
                                                
-- /stdout --
functional_test.go:1487: failed to list services with json format. args "out/minikube-darwin-arm64 -p functional-900000 service list -o json": exit status 83
--- FAIL: TestFunctional/parallel/ServiceCmd/JSONOutput (0.04s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/HTTPS (0.04s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/HTTPS
functional_test.go:1505: (dbg) Run:  out/minikube-darwin-arm64 -p functional-900000 service --namespace=default --https --url hello-node
functional_test.go:1505: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-900000 service --namespace=default --https --url hello-node: exit status 83 (44.697125ms)

                                                
                                                
-- stdout --
	* The control-plane node functional-900000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-900000"

                                                
                                                
-- /stdout --
functional_test.go:1507: failed to get service url. args "out/minikube-darwin-arm64 -p functional-900000 service --namespace=default --https --url hello-node" : exit status 83
--- FAIL: TestFunctional/parallel/ServiceCmd/HTTPS (0.04s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/Format (0.04s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/Format
functional_test.go:1536: (dbg) Run:  out/minikube-darwin-arm64 -p functional-900000 service hello-node --url --format={{.IP}}
functional_test.go:1536: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-900000 service hello-node --url --format={{.IP}}: exit status 83 (43.880584ms)

                                                
                                                
-- stdout --
	* The control-plane node functional-900000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-900000"

                                                
                                                
-- /stdout --
functional_test.go:1538: failed to get service url with custom format. args "out/minikube-darwin-arm64 -p functional-900000 service hello-node --url --format={{.IP}}": exit status 83
functional_test.go:1544: "* The control-plane node functional-900000 host is not running: state=Stopped\n  To start a cluster, run: \"minikube start -p functional-900000\"" is not a valid IP
--- FAIL: TestFunctional/parallel/ServiceCmd/Format (0.04s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/URL (0.04s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/URL
functional_test.go:1555: (dbg) Run:  out/minikube-darwin-arm64 -p functional-900000 service hello-node --url
functional_test.go:1555: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-900000 service hello-node --url: exit status 83 (43.872334ms)

                                                
                                                
-- stdout --
	* The control-plane node functional-900000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-900000"

                                                
                                                
-- /stdout --
functional_test.go:1557: failed to get service url. args: "out/minikube-darwin-arm64 -p functional-900000 service hello-node --url": exit status 83
functional_test.go:1561: found endpoint for hello-node: * The control-plane node functional-900000 host is not running: state=Stopped
To start a cluster, run: "minikube start -p functional-900000"
functional_test.go:1565: failed to parse "* The control-plane node functional-900000 host is not running: state=Stopped\n  To start a cluster, run: \"minikube start -p functional-900000\"": parse "* The control-plane node functional-900000 host is not running: state=Stopped\n  To start a cluster, run: \"minikube start -p functional-900000\"": net/url: invalid control character in URL
--- FAIL: TestFunctional/parallel/ServiceCmd/URL (0.04s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.08s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-darwin-arm64 -p functional-900000 tunnel --alsologtostderr]
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-darwin-arm64 -p functional-900000 tunnel --alsologtostderr]
functional_test_tunnel_test.go:190: tunnel command failed with unexpected error: exit code 83. stderr: I0318 04:21:10.531123   15988 out.go:291] Setting OutFile to fd 1 ...
I0318 04:21:10.531293   15988 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0318 04:21:10.531296   15988 out.go:304] Setting ErrFile to fd 2...
I0318 04:21:10.531299   15988 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0318 04:21:10.531441   15988 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18429-15072/.minikube/bin
I0318 04:21:10.531653   15988 mustload.go:65] Loading cluster: functional-900000
I0318 04:21:10.531856   15988 config.go:182] Loaded profile config "functional-900000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.4
I0318 04:21:10.535478   15988 out.go:177] * The control-plane node functional-900000 host is not running: state=Stopped
I0318 04:21:10.548443   15988 out.go:177]   To start a cluster, run: "minikube start -p functional-900000"

                                                
                                                
stdout: * The control-plane node functional-900000 host is not running: state=Stopped
To start a cluster, run: "minikube start -p functional-900000"
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-darwin-arm64 -p functional-900000 tunnel --alsologtostderr] ...
helpers_test.go:502: unable to terminate pid 15989: os: process already finished
functional_test_tunnel_test.go:194: read stdout failed: read |0: file already closed
functional_test_tunnel_test.go:194: (dbg) [out/minikube-darwin-arm64 -p functional-900000 tunnel --alsologtostderr] stdout:
functional_test_tunnel_test.go:194: read stderr failed: read |0: file already closed
functional_test_tunnel_test.go:194: (dbg) [out/minikube-darwin-arm64 -p functional-900000 tunnel --alsologtostderr] stderr:
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-darwin-arm64 -p functional-900000 tunnel --alsologtostderr] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_tunnel_test.go:194: read stdout failed: read |0: file already closed
functional_test_tunnel_test.go:194: (dbg) [out/minikube-darwin-arm64 -p functional-900000 tunnel --alsologtostderr] stdout:
functional_test_tunnel_test.go:194: read stderr failed: read |0: file already closed
functional_test_tunnel_test.go:194: (dbg) [out/minikube-darwin-arm64 -p functional-900000 tunnel --alsologtostderr] stderr:
--- FAIL: TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.08s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup
functional_test_tunnel_test.go:208: failed to get Kubernetes client for "functional-900000": client config: context "functional-900000" does not exist
--- FAIL: TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (0.00s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/AccessDirect (102.84s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessDirect
functional_test_tunnel_test.go:288: failed to hit nginx at "http://": Temporary Error: Get "http:": http: no Host in request URL
functional_test_tunnel_test.go:290: (dbg) Run:  kubectl --context functional-900000 get svc nginx-svc
functional_test_tunnel_test.go:290: (dbg) Non-zero exit: kubectl --context functional-900000 get svc nginx-svc: exit status 1 (68.71975ms)

                                                
                                                
** stderr ** 
	Error in configuration: context was not found for specified context: functional-900000

                                                
                                                
** /stderr **
functional_test_tunnel_test.go:292: kubectl --context functional-900000 get svc nginx-svc failed: exit status 1
functional_test_tunnel_test.go:294: failed to kubectl get svc nginx-svc:
functional_test_tunnel_test.go:301: expected body to contain "Welcome to nginx!", but got *""*
--- FAIL: TestFunctional/parallel/TunnelCmd/serial/AccessDirect (102.84s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageLoadDaemon (1.34s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadDaemon
functional_test.go:354: (dbg) Run:  out/minikube-darwin-arm64 -p functional-900000 image load --daemon gcr.io/google-containers/addon-resizer:functional-900000 --alsologtostderr
functional_test.go:354: (dbg) Done: out/minikube-darwin-arm64 -p functional-900000 image load --daemon gcr.io/google-containers/addon-resizer:functional-900000 --alsologtostderr: (1.300973s)
functional_test.go:447: (dbg) Run:  out/minikube-darwin-arm64 -p functional-900000 image ls
functional_test.go:442: expected "gcr.io/google-containers/addon-resizer:functional-900000" to be loaded into minikube but the image is not there
--- FAIL: TestFunctional/parallel/ImageCommands/ImageLoadDaemon (1.34s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageReloadDaemon (1.35s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageReloadDaemon
functional_test.go:364: (dbg) Run:  out/minikube-darwin-arm64 -p functional-900000 image load --daemon gcr.io/google-containers/addon-resizer:functional-900000 --alsologtostderr
functional_test.go:364: (dbg) Done: out/minikube-darwin-arm64 -p functional-900000 image load --daemon gcr.io/google-containers/addon-resizer:functional-900000 --alsologtostderr: (1.311652709s)
functional_test.go:447: (dbg) Run:  out/minikube-darwin-arm64 -p functional-900000 image ls
functional_test.go:442: expected "gcr.io/google-containers/addon-resizer:functional-900000" to be loaded into minikube but the image is not there
--- FAIL: TestFunctional/parallel/ImageCommands/ImageReloadDaemon (1.35s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (6.71s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon
functional_test.go:234: (dbg) Run:  docker pull gcr.io/google-containers/addon-resizer:1.8.9
functional_test.go:234: (dbg) Done: docker pull gcr.io/google-containers/addon-resizer:1.8.9: (5.472016583s)
functional_test.go:239: (dbg) Run:  docker tag gcr.io/google-containers/addon-resizer:1.8.9 gcr.io/google-containers/addon-resizer:functional-900000
functional_test.go:244: (dbg) Run:  out/minikube-darwin-arm64 -p functional-900000 image load --daemon gcr.io/google-containers/addon-resizer:functional-900000 --alsologtostderr
functional_test.go:244: (dbg) Done: out/minikube-darwin-arm64 -p functional-900000 image load --daemon gcr.io/google-containers/addon-resizer:functional-900000 --alsologtostderr: (1.164339625s)
functional_test.go:447: (dbg) Run:  out/minikube-darwin-arm64 -p functional-900000 image ls
functional_test.go:442: expected "gcr.io/google-containers/addon-resizer:functional-900000" to be loaded into minikube but the image is not there
--- FAIL: TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (6.71s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.04s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveToFile
functional_test.go:379: (dbg) Run:  out/minikube-darwin-arm64 -p functional-900000 image save gcr.io/google-containers/addon-resizer:functional-900000 /Users/jenkins/workspace/addon-resizer-save.tar --alsologtostderr
functional_test.go:385: expected "/Users/jenkins/workspace/addon-resizer-save.tar" to exist after `image save`, but doesn't exist
--- FAIL: TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.04s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageLoadFromFile (0.07s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadFromFile
functional_test.go:408: (dbg) Run:  out/minikube-darwin-arm64 -p functional-900000 image load /Users/jenkins/workspace/addon-resizer-save.tar --alsologtostderr
functional_test.go:447: (dbg) Run:  out/minikube-darwin-arm64 -p functional-900000 image ls
functional_test.go:442: expected "gcr.io/google-containers/addon-resizer:functional-900000" to be loaded into minikube but the image is not there
--- FAIL: TestFunctional/parallel/ImageCommands/ImageLoadFromFile (0.07s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (15.07s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig
functional_test_tunnel_test.go:319: (dbg) Run:  dig +time=5 +tries=3 @10.96.0.10 nginx-svc.default.svc.cluster.local. A
functional_test_tunnel_test.go:319: (dbg) Non-zero exit: dig +time=5 +tries=3 @10.96.0.10 nginx-svc.default.svc.cluster.local. A: exit status 9 (15.030521916s)

                                                
                                                
-- stdout --
	
	; <<>> DiG 9.10.6 <<>> +time=5 +tries=3 @10.96.0.10 nginx-svc.default.svc.cluster.local. A
	; (1 server found)
	;; global options: +cmd
	;; connection timed out; no servers could be reached

                                                
                                                
-- /stdout --
functional_test_tunnel_test.go:322: failed to resolve DNS name: exit status 9
functional_test_tunnel_test.go:329: expected body to contain "ANSWER: 1", but got *"\n; <<>> DiG 9.10.6 <<>> +time=5 +tries=3 @10.96.0.10 nginx-svc.default.svc.cluster.local. A\n; (1 server found)\n;; global options: +cmd\n;; connection timed out; no servers could be reached\n"*
functional_test_tunnel_test.go:332: (dbg) Run:  scutil --dns
functional_test_tunnel_test.go:336: debug for DNS configuration:
DNS configuration

                                                
                                                
resolver #1
nameserver[0] : 207.254.72.253
nameserver[1] : 207.254.72.254
nameserver[2] : 8.8.8.8
flags    : Request A records
reach    : 0x00000002 (Reachable)

                                                
                                                
resolver #2
domain   : local
options  : mdns
timeout  : 5
flags    : Request A records
reach    : 0x00000000 (Not Reachable)
order    : 300000

                                                
                                                
resolver #3
domain   : 254.169.in-addr.arpa
options  : mdns
timeout  : 5
flags    : Request A records
reach    : 0x00000000 (Not Reachable)
order    : 300200

                                                
                                                
resolver #4
domain   : 8.e.f.ip6.arpa
options  : mdns
timeout  : 5
flags    : Request A records
reach    : 0x00000000 (Not Reachable)
order    : 300400

                                                
                                                
resolver #5
domain   : 9.e.f.ip6.arpa
options  : mdns
timeout  : 5
flags    : Request A records
reach    : 0x00000000 (Not Reachable)
order    : 300600

                                                
                                                
resolver #6
domain   : a.e.f.ip6.arpa
options  : mdns
timeout  : 5
flags    : Request A records
reach    : 0x00000000 (Not Reachable)
order    : 300800

                                                
                                                
resolver #7
domain   : b.e.f.ip6.arpa
options  : mdns
timeout  : 5
flags    : Request A records
reach    : 0x00000000 (Not Reachable)
order    : 301000

                                                
                                                
resolver #8
domain   : cluster.local
nameserver[0] : 10.96.0.10
flags    : Request A records
reach    : 0x00000002 (Reachable)
order    : 1

                                                
                                                
DNS configuration (for scoped queries)

                                                
                                                
resolver #1
nameserver[0] : 207.254.72.253
nameserver[1] : 207.254.72.254
nameserver[2] : 8.8.8.8
if_index : 16 (en0)
flags    : Scoped, Request A records
reach    : 0x00000002 (Reachable)
--- FAIL: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (15.07s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (29.28s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS
functional_test_tunnel_test.go:419: failed to hit nginx with DNS forwarded "http://nginx-svc.default.svc.cluster.local.": Temporary Error: Get "http://nginx-svc.default.svc.cluster.local.": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
functional_test_tunnel_test.go:426: expected body to contain "Welcome to nginx!", but got *""*
--- FAIL: TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (29.28s)

                                                
                                    
TestMultiControlPlane/serial/StartCluster (9.99s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/StartCluster
ha_test.go:101: (dbg) Run:  out/minikube-darwin-arm64 start -p ha-218000 --wait=true --memory=2200 --ha -v=7 --alsologtostderr --driver=qemu2 
ha_test.go:101: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p ha-218000 --wait=true --memory=2200 --ha -v=7 --alsologtostderr --driver=qemu2 : exit status 80 (9.923603417s)

                                                
                                                
-- stdout --
	* [ha-218000] minikube v1.32.0 on Darwin 14.3.1 (arm64)
	  - MINIKUBE_LOCATION=18429
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18429-15072/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18429-15072/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "ha-218000" primary control-plane node in "ha-218000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "ha-218000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0318 04:23:48.344540   16258 out.go:291] Setting OutFile to fd 1 ...
	I0318 04:23:48.344703   16258 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0318 04:23:48.344706   16258 out.go:304] Setting ErrFile to fd 2...
	I0318 04:23:48.344708   16258 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0318 04:23:48.344832   16258 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18429-15072/.minikube/bin
	I0318 04:23:48.345902   16258 out.go:298] Setting JSON to false
	I0318 04:23:48.362118   16258 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":8601,"bootTime":1710752427,"procs":477,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.3.1","kernelVersion":"23.3.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0318 04:23:48.362184   16258 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0318 04:23:48.369065   16258 out.go:177] * [ha-218000] minikube v1.32.0 on Darwin 14.3.1 (arm64)
	I0318 04:23:48.377057   16258 out.go:177]   - MINIKUBE_LOCATION=18429
	I0318 04:23:48.380161   16258 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/18429-15072/kubeconfig
	I0318 04:23:48.377096   16258 notify.go:220] Checking for updates...
	I0318 04:23:48.386071   16258 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0318 04:23:48.389147   16258 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0318 04:23:48.392122   16258 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18429-15072/.minikube
	I0318 04:23:48.395128   16258 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0318 04:23:48.398290   16258 driver.go:392] Setting default libvirt URI to qemu:///system
	I0318 04:23:48.403144   16258 out.go:177] * Using the qemu2 driver based on user configuration
	I0318 04:23:48.410068   16258 start.go:297] selected driver: qemu2
	I0318 04:23:48.410074   16258 start.go:901] validating driver "qemu2" against <nil>
	I0318 04:23:48.410082   16258 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0318 04:23:48.412398   16258 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0318 04:23:48.416062   16258 out.go:177] * Automatically selected the socket_vmnet network
	I0318 04:23:48.419085   16258 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0318 04:23:48.419135   16258 cni.go:84] Creating CNI manager for ""
	I0318 04:23:48.419140   16258 cni.go:136] multinode detected (0 nodes found), recommending kindnet
	I0318 04:23:48.419144   16258 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I0318 04:23:48.419179   16258 start.go:340] cluster config:
	{Name:ha-218000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:ha-218000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker C
RISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client Soc
ketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0318 04:23:48.423660   16258 iso.go:125] acquiring lock: {Name:mkb8143674083e0c7a46a3ed751b3800392bcd24 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0318 04:23:48.431112   16258 out.go:177] * Starting "ha-218000" primary control-plane node in "ha-218000" cluster
	I0318 04:23:48.435093   16258 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime docker
	I0318 04:23:48.435111   16258 preload.go:147] Found local preload: /Users/jenkins/minikube-integration/18429-15072/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-docker-overlay2-arm64.tar.lz4
	I0318 04:23:48.435121   16258 cache.go:56] Caching tarball of preloaded images
	I0318 04:23:48.435182   16258 preload.go:173] Found /Users/jenkins/minikube-integration/18429-15072/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0318 04:23:48.435188   16258 cache.go:59] Finished verifying existence of preloaded tar for v1.28.4 on docker
	I0318 04:23:48.435429   16258 profile.go:142] Saving config to /Users/jenkins/minikube-integration/18429-15072/.minikube/profiles/ha-218000/config.json ...
	I0318 04:23:48.435440   16258 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/18429-15072/.minikube/profiles/ha-218000/config.json: {Name:mk888e0b41b5859535be10ecfe0818723064772c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0318 04:23:48.435662   16258 start.go:360] acquireMachinesLock for ha-218000: {Name:mkdef65b5c2b3344d8453e477bf2f170fbff3359 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0318 04:23:48.435695   16258 start.go:364] duration metric: took 27.291µs to acquireMachinesLock for "ha-218000"
	I0318 04:23:48.435709   16258 start.go:93] Provisioning new machine with config: &{Name:ha-218000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17991/minikube-v1.32.1-1710520390-17991-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kubernet
esVersion:v1.28.4 ClusterName:ha-218000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0
MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0318 04:23:48.435734   16258 start.go:125] createHost starting for "" (driver="qemu2")
	I0318 04:23:48.444047   16258 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0318 04:23:48.462457   16258 start.go:159] libmachine.API.Create for "ha-218000" (driver="qemu2")
	I0318 04:23:48.462484   16258 client.go:168] LocalClient.Create starting
	I0318 04:23:48.462543   16258 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18429-15072/.minikube/certs/ca.pem
	I0318 04:23:48.462577   16258 main.go:141] libmachine: Decoding PEM data...
	I0318 04:23:48.462585   16258 main.go:141] libmachine: Parsing certificate...
	I0318 04:23:48.462631   16258 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18429-15072/.minikube/certs/cert.pem
	I0318 04:23:48.462654   16258 main.go:141] libmachine: Decoding PEM data...
	I0318 04:23:48.462664   16258 main.go:141] libmachine: Parsing certificate...
	I0318 04:23:48.463094   16258 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/18429-15072/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/18429-15072/.minikube/cache/iso/arm64/minikube-v1.32.1-1710520390-17991-arm64.iso...
	I0318 04:23:48.605477   16258 main.go:141] libmachine: Creating SSH key...
	I0318 04:23:48.710968   16258 main.go:141] libmachine: Creating Disk image...
	I0318 04:23:48.710975   16258 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0318 04:23:48.711182   16258 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/18429-15072/.minikube/machines/ha-218000/disk.qcow2.raw /Users/jenkins/minikube-integration/18429-15072/.minikube/machines/ha-218000/disk.qcow2
	I0318 04:23:48.723599   16258 main.go:141] libmachine: STDOUT: 
	I0318 04:23:48.723635   16258 main.go:141] libmachine: STDERR: 
	I0318 04:23:48.723687   16258 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/18429-15072/.minikube/machines/ha-218000/disk.qcow2 +20000M
	I0318 04:23:48.734255   16258 main.go:141] libmachine: STDOUT: Image resized.
	
	I0318 04:23:48.734281   16258 main.go:141] libmachine: STDERR: 
	I0318 04:23:48.734300   16258 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/18429-15072/.minikube/machines/ha-218000/disk.qcow2.raw and /Users/jenkins/minikube-integration/18429-15072/.minikube/machines/ha-218000/disk.qcow2
	I0318 04:23:48.734305   16258 main.go:141] libmachine: Starting QEMU VM...
	I0318 04:23:48.734336   16258 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18429-15072/.minikube/machines/ha-218000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18429-15072/.minikube/machines/ha-218000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18429-15072/.minikube/machines/ha-218000/qemu.pid -device virtio-net-pci,netdev=net0,mac=aa:b7:d6:4e:2f:21 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18429-15072/.minikube/machines/ha-218000/disk.qcow2
	I0318 04:23:48.736105   16258 main.go:141] libmachine: STDOUT: 
	I0318 04:23:48.736125   16258 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0318 04:23:48.736146   16258 client.go:171] duration metric: took 273.663042ms to LocalClient.Create
	I0318 04:23:50.738261   16258 start.go:128] duration metric: took 2.302580958s to createHost
	I0318 04:23:50.738307   16258 start.go:83] releasing machines lock for "ha-218000", held for 2.302680208s
	W0318 04:23:50.738378   16258 start.go:713] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0318 04:23:50.755622   16258 out.go:177] * Deleting "ha-218000" in qemu2 ...
	W0318 04:23:50.780632   16258 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0318 04:23:50.780665   16258 start.go:728] Will try again in 5 seconds ...
	I0318 04:23:55.782709   16258 start.go:360] acquireMachinesLock for ha-218000: {Name:mkdef65b5c2b3344d8453e477bf2f170fbff3359 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0318 04:23:55.783180   16258 start.go:364] duration metric: took 371.541µs to acquireMachinesLock for "ha-218000"
	I0318 04:23:55.783305   16258 start.go:93] Provisioning new machine with config: &{Name:ha-218000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17991/minikube-v1.32.1-1710520390-17991-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kubernet
esVersion:v1.28.4 ClusterName:ha-218000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0
MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0318 04:23:55.783582   16258 start.go:125] createHost starting for "" (driver="qemu2")
	I0318 04:23:55.795263   16258 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0318 04:23:55.843273   16258 start.go:159] libmachine.API.Create for "ha-218000" (driver="qemu2")
	I0318 04:23:55.843328   16258 client.go:168] LocalClient.Create starting
	I0318 04:23:55.843436   16258 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18429-15072/.minikube/certs/ca.pem
	I0318 04:23:55.843511   16258 main.go:141] libmachine: Decoding PEM data...
	I0318 04:23:55.843525   16258 main.go:141] libmachine: Parsing certificate...
	I0318 04:23:55.843583   16258 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18429-15072/.minikube/certs/cert.pem
	I0318 04:23:55.843623   16258 main.go:141] libmachine: Decoding PEM data...
	I0318 04:23:55.843636   16258 main.go:141] libmachine: Parsing certificate...
	I0318 04:23:55.844248   16258 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/18429-15072/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/18429-15072/.minikube/cache/iso/arm64/minikube-v1.32.1-1710520390-17991-arm64.iso...
	I0318 04:23:55.995889   16258 main.go:141] libmachine: Creating SSH key...
	I0318 04:23:56.153269   16258 main.go:141] libmachine: Creating Disk image...
	I0318 04:23:56.153278   16258 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0318 04:23:56.153500   16258 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/18429-15072/.minikube/machines/ha-218000/disk.qcow2.raw /Users/jenkins/minikube-integration/18429-15072/.minikube/machines/ha-218000/disk.qcow2
	I0318 04:23:56.166328   16258 main.go:141] libmachine: STDOUT: 
	I0318 04:23:56.166346   16258 main.go:141] libmachine: STDERR: 
	I0318 04:23:56.166408   16258 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/18429-15072/.minikube/machines/ha-218000/disk.qcow2 +20000M
	I0318 04:23:56.177045   16258 main.go:141] libmachine: STDOUT: Image resized.
	
	I0318 04:23:56.177062   16258 main.go:141] libmachine: STDERR: 
	I0318 04:23:56.177078   16258 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/18429-15072/.minikube/machines/ha-218000/disk.qcow2.raw and /Users/jenkins/minikube-integration/18429-15072/.minikube/machines/ha-218000/disk.qcow2
	I0318 04:23:56.177082   16258 main.go:141] libmachine: Starting QEMU VM...
	I0318 04:23:56.177112   16258 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18429-15072/.minikube/machines/ha-218000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18429-15072/.minikube/machines/ha-218000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18429-15072/.minikube/machines/ha-218000/qemu.pid -device virtio-net-pci,netdev=net0,mac=fa:62:1d:92:22:c9 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18429-15072/.minikube/machines/ha-218000/disk.qcow2
	I0318 04:23:56.178854   16258 main.go:141] libmachine: STDOUT: 
	I0318 04:23:56.178869   16258 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0318 04:23:56.178881   16258 client.go:171] duration metric: took 335.558459ms to LocalClient.Create
	I0318 04:23:58.180983   16258 start.go:128] duration metric: took 2.397447291s to createHost
	I0318 04:23:58.181038   16258 start.go:83] releasing machines lock for "ha-218000", held for 2.397908292s
	W0318 04:23:58.181435   16258 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p ha-218000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p ha-218000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0318 04:23:58.192106   16258 out.go:177] 
	W0318 04:23:58.200105   16258 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0318 04:23:58.200140   16258 out.go:239] * 
	* 
	W0318 04:23:58.202771   16258 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0318 04:23:58.220657   16258 out.go:177] 

                                                
                                                
** /stderr **
ha_test.go:103: failed to fresh-start ha (multi-control plane) cluster. args "out/minikube-darwin-arm64 start -p ha-218000 --wait=true --memory=2200 --ha -v=7 --alsologtostderr --driver=qemu2 " : exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-218000 -n ha-218000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-218000 -n ha-218000: exit status 7 (68.246541ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "ha-218000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/StartCluster (9.99s)
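Both VM creation attempts in this run fail at the same point: socket_vmnet_client cannot connect to the unix socket at /var/run/socket_vmnet, so the retry five seconds later cannot succeed either. A minimal sketch of that reachability check, using only the socket path from the log, is below; if the socket_vmnet daemon is not running it reports the same "connection refused".

```go
// socketcheck.go - minimal sketch reproducing the failure mode above: dial the
// socket_vmnet unix socket that socket_vmnet_client needs. With no daemon
// listening this returns "connection refused" (or "no such file or directory"
// if the socket path does not exist at all).
package main

import (
	"fmt"
	"net"
	"time"
)

func main() {
	conn, err := net.DialTimeout("unix", "/var/run/socket_vmnet", 2*time.Second)
	if err != nil {
		fmt.Println("socket_vmnet not reachable:", err)
		return
	}
	conn.Close()
	fmt.Println("socket_vmnet is accepting connections")
}
```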

                                                
                                    
TestMultiControlPlane/serial/DeployApp (107.04s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DeployApp
ha_test.go:128: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-218000 -- apply -f ./testdata/ha/ha-pod-dns-test.yaml
ha_test.go:128: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p ha-218000 -- apply -f ./testdata/ha/ha-pod-dns-test.yaml: exit status 1 (60.274083ms)

                                                
                                                
** stderr ** 
	error: cluster "ha-218000" does not exist

                                                
                                                
** /stderr **
ha_test.go:130: failed to create busybox deployment to ha (multi-control plane) cluster
ha_test.go:133: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-218000 -- rollout status deployment/busybox
ha_test.go:133: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p ha-218000 -- rollout status deployment/busybox: exit status 1 (58.298334ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "ha-218000"

                                                
                                                
** /stderr **
ha_test.go:135: failed to deploy busybox to ha (multi-control plane) cluster
ha_test.go:140: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-218000 -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:140: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p ha-218000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (57.720667ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "ha-218000"

                                                
                                                
** /stderr **
ha_test.go:143: failed to retrieve Pod IPs (may be temporary): exit status 1
ha_test.go:140: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-218000 -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:140: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p ha-218000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (106.64425ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "ha-218000"

                                                
                                                
** /stderr **
ha_test.go:143: failed to retrieve Pod IPs (may be temporary): exit status 1
ha_test.go:140: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-218000 -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:140: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p ha-218000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (107.205ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "ha-218000"

                                                
                                                
** /stderr **
ha_test.go:143: failed to retrieve Pod IPs (may be temporary): exit status 1
ha_test.go:140: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-218000 -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:140: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p ha-218000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (104.686333ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "ha-218000"

                                                
                                                
** /stderr **
ha_test.go:143: failed to retrieve Pod IPs (may be temporary): exit status 1
ha_test.go:140: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-218000 -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:140: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p ha-218000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (105.723ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "ha-218000"

                                                
                                                
** /stderr **
ha_test.go:143: failed to retrieve Pod IPs (may be temporary): exit status 1
ha_test.go:140: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-218000 -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:140: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p ha-218000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (103.54525ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "ha-218000"

                                                
                                                
** /stderr **
ha_test.go:143: failed to retrieve Pod IPs (may be temporary): exit status 1
ha_test.go:140: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-218000 -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:140: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p ha-218000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (106.838084ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "ha-218000"

                                                
                                                
** /stderr **
ha_test.go:143: failed to retrieve Pod IPs (may be temporary): exit status 1
ha_test.go:140: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-218000 -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:140: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p ha-218000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (108.585208ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "ha-218000"

                                                
                                                
** /stderr **
ha_test.go:143: failed to retrieve Pod IPs (may be temporary): exit status 1
ha_test.go:140: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-218000 -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:140: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p ha-218000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (105.311333ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "ha-218000"

                                                
                                                
** /stderr **
ha_test.go:143: failed to retrieve Pod IPs (may be temporary): exit status 1
ha_test.go:140: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-218000 -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:140: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p ha-218000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (104.545ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "ha-218000"

                                                
                                                
** /stderr **
ha_test.go:143: failed to retrieve Pod IPs (may be temporary): exit status 1
ha_test.go:140: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-218000 -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:140: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p ha-218000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (104.580209ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "ha-218000"

                                                
                                                
** /stderr **
ha_test.go:143: failed to retrieve Pod IPs (may be temporary): exit status 1
ha_test.go:159: failed to resolve pod IPs: failed to retrieve Pod IPs (may be temporary): exit status 1
ha_test.go:163: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-218000 -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:163: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p ha-218000 -- get pods -o jsonpath='{.items[*].metadata.name}': exit status 1 (57.673333ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "ha-218000"

                                                
                                                
** /stderr **
ha_test.go:165: failed get Pod names
ha_test.go:171: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-218000 -- exec  -- nslookup kubernetes.io
ha_test.go:171: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p ha-218000 -- exec  -- nslookup kubernetes.io: exit status 1 (57.641042ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "ha-218000"

                                                
                                                
** /stderr **
ha_test.go:173: Pod  could not resolve 'kubernetes.io': exit status 1
ha_test.go:181: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-218000 -- exec  -- nslookup kubernetes.default
ha_test.go:181: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p ha-218000 -- exec  -- nslookup kubernetes.default: exit status 1 (57.882625ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "ha-218000"

                                                
                                                
** /stderr **
ha_test.go:183: Pod  could not resolve 'kubernetes.default': exit status 1
ha_test.go:189: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-218000 -- exec  -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p ha-218000 -- exec  -- nslookup kubernetes.default.svc.cluster.local: exit status 1 (57.583291ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "ha-218000"

                                                
                                                
** /stderr **
ha_test.go:191: Pod  could not resolve local service (kubernetes.default.svc.cluster.local): exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-218000 -n ha-218000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-218000 -n ha-218000: exit status 7 (32.605292ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "ha-218000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/DeployApp (107.04s)

                                                
                                    
TestMultiControlPlane/serial/PingHostFromPods (0.09s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/PingHostFromPods
ha_test.go:199: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-218000 -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:199: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p ha-218000 -- get pods -o jsonpath='{.items[*].metadata.name}': exit status 1 (57.074959ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "ha-218000"

                                                
                                                
** /stderr **
ha_test.go:201: failed to get Pod names: exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-218000 -n ha-218000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-218000 -n ha-218000: exit status 7 (32.129458ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "ha-218000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/PingHostFromPods (0.09s)

                                                
                                    
TestMultiControlPlane/serial/AddWorkerNode (0.08s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/AddWorkerNode
ha_test.go:228: (dbg) Run:  out/minikube-darwin-arm64 node add -p ha-218000 -v=7 --alsologtostderr
ha_test.go:228: (dbg) Non-zero exit: out/minikube-darwin-arm64 node add -p ha-218000 -v=7 --alsologtostderr: exit status 83 (45.170917ms)

                                                
                                                
-- stdout --
	* The control-plane node ha-218000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p ha-218000"

                                                
                                                
-- /stdout --
** stderr ** 
	I0318 04:25:45.459268   16343 out.go:291] Setting OutFile to fd 1 ...
	I0318 04:25:45.459843   16343 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0318 04:25:45.459847   16343 out.go:304] Setting ErrFile to fd 2...
	I0318 04:25:45.459853   16343 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0318 04:25:45.460009   16343 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18429-15072/.minikube/bin
	I0318 04:25:45.460227   16343 mustload.go:65] Loading cluster: ha-218000
	I0318 04:25:45.460425   16343 config.go:182] Loaded profile config "ha-218000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I0318 04:25:45.465436   16343 out.go:177] * The control-plane node ha-218000 host is not running: state=Stopped
	I0318 04:25:45.469410   16343 out.go:177]   To start a cluster, run: "minikube start -p ha-218000"

                                                
                                                
** /stderr **
ha_test.go:230: failed to add worker node to current ha (multi-control plane) cluster. args "out/minikube-darwin-arm64 node add -p ha-218000 -v=7 --alsologtostderr" : exit status 83
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-218000 -n ha-218000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-218000 -n ha-218000: exit status 7 (31.261958ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "ha-218000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/AddWorkerNode (0.08s)

                                                
                                    
TestMultiControlPlane/serial/NodeLabels (0.06s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/NodeLabels
ha_test.go:255: (dbg) Run:  kubectl --context ha-218000 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
ha_test.go:255: (dbg) Non-zero exit: kubectl --context ha-218000 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]": exit status 1 (26.487541ms)

                                                
                                                
** stderr ** 
	Error in configuration: context was not found for specified context: ha-218000

                                                
                                                
** /stderr **
ha_test.go:257: failed to 'kubectl get nodes' with args "kubectl --context ha-218000 get nodes -o \"jsonpath=[{range .items[*]}{.metadata.labels},{end}]\"": exit status 1
ha_test.go:264: failed to decode json from label list: args "kubectl --context ha-218000 get nodes -o \"jsonpath=[{range .items[*]}{.metadata.labels},{end}]\"": unexpected end of JSON input
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-218000 -n ha-218000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-218000 -n ha-218000: exit status 7 (32.258625ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "ha-218000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/NodeLabels (0.06s)

                                                
                                    
TestMultiControlPlane/serial/HAppyAfterClusterStart (0.11s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/HAppyAfterClusterStart
ha_test.go:281: (dbg) Run:  out/minikube-darwin-arm64 profile list --output json
ha_test.go:304: expected profile "ha-218000" in json of 'profile list' to include 4 nodes but have 1 nodes. got *"{\"invalid\":[],\"valid\":[{\"Name\":\"ha-218000\",\"Status\":\"Stopped\",\"Config\":{\"Name\":\"ha-218000\",\"KeepContext\":false,\"EmbedCerts\":false,\"MinikubeISO\":\"https://storage.googleapis.com/minikube-builds/iso/17991/minikube-v1.32.1-1710520390-17991-arm64.iso\",\"KicBaseImage\":\"gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f\",\"Memory\":2200,\"CPUs\":2,\"DiskSize\":20000,\"Driver\":\"qemu2\",\"HyperkitVpnKitSock\":\"\",\"HyperkitVSockPorts\":[],\"DockerEnv\":null,\"ContainerVolumeMounts\":null,\"InsecureRegistry\":null,\"RegistryMirror\":[],\"HostOnlyCIDR\":\"192.168.59.1/24\",\"HypervVirtualSwitch\":\"\",\"HypervUseExternalSwitch\":false,\"HypervExternalAdapter\":\"\",\"KVMNetwork\":\"default\",\"KVMQemuURI\":\"qemu:///system\",\"KVMGPU\":false,\"KVMHidden\":false,\"KVMNUMACount\":1,\"APIServerPo
rt\":8443,\"DockerOpt\":null,\"DisableDriverMounts\":false,\"NFSShare\":[],\"NFSSharesRoot\":\"/nfsshares\",\"UUID\":\"\",\"NoVTXCheck\":false,\"DNSProxy\":false,\"HostDNSResolver\":true,\"HostOnlyNicType\":\"virtio\",\"NatNicType\":\"virtio\",\"SSHIPAddress\":\"\",\"SSHUser\":\"root\",\"SSHKey\":\"\",\"SSHPort\":22,\"KubernetesConfig\":{\"KubernetesVersion\":\"v1.28.4\",\"ClusterName\":\"ha-218000\",\"Namespace\":\"default\",\"APIServerHAVIP\":\"\",\"APIServerName\":\"minikubeCA\",\"APIServerNames\":null,\"APIServerIPs\":null,\"DNSDomain\":\"cluster.local\",\"ContainerRuntime\":\"docker\",\"CRISocket\":\"\",\"NetworkPlugin\":\"cni\",\"FeatureGates\":\"\",\"ServiceCIDR\":\"10.96.0.0/12\",\"ImageRepository\":\"\",\"LoadBalancerStartIP\":\"\",\"LoadBalancerEndIP\":\"\",\"CustomIngressCert\":\"\",\"RegistryAliases\":\"\",\"ExtraOptions\":null,\"ShouldLoadCachedImages\":true,\"EnableDefaultCNI\":false,\"CNI\":\"\"},\"Nodes\":[{\"Name\":\"\",\"IP\":\"\",\"Port\":8443,\"KubernetesVersion\":\"v1.28.4\",\"ContainerRu
ntime\":\"docker\",\"ControlPlane\":true,\"Worker\":true}],\"Addons\":null,\"CustomAddonImages\":null,\"CustomAddonRegistries\":null,\"VerifyComponents\":{\"apiserver\":true,\"apps_running\":true,\"default_sa\":true,\"extra\":true,\"kubelet\":true,\"node_ready\":true,\"system_pods\":true},\"StartHostTimeout\":360000000000,\"ScheduledStop\":null,\"ExposedPorts\":[],\"ListenAddress\":\"\",\"Network\":\"socket_vmnet\",\"Subnet\":\"\",\"MultiNodeRequested\":true,\"ExtraDisks\":0,\"CertExpiration\":94608000000000000,\"Mount\":false,\"MountString\":\"/Users:/minikube-host\",\"Mount9PVersion\":\"9p2000.L\",\"MountGID\":\"docker\",\"MountIP\":\"\",\"MountMSize\":262144,\"MountOptions\":[],\"MountPort\":0,\"MountType\":\"9p\",\"MountUID\":\"docker\",\"BinaryMirror\":\"\",\"DisableOptimizations\":false,\"DisableMetrics\":false,\"CustomQemuFirmwarePath\":\"\",\"SocketVMnetClientPath\":\"/opt/socket_vmnet/bin/socket_vmnet_client\",\"SocketVMnetPath\":\"/var/run/socket_vmnet\",\"StaticIP\":\"\",\"SSHAuthSock\":\"\",\"SSHA
gentPID\":0,\"GPUs\":\"\",\"AutoPauseInterval\":60000000000},\"Active\":false}]}"*. args: "out/minikube-darwin-arm64 profile list --output json"
ha_test.go:307: expected profile "ha-218000" in json of 'profile list' to have "HAppy" status but have "Stopped" status. got *"{\"invalid\":[],\"valid\":[{\"Name\":\"ha-218000\",\"Status\":\"Stopped\",\"Config\":{\"Name\":\"ha-218000\",\"KeepContext\":false,\"EmbedCerts\":false,\"MinikubeISO\":\"https://storage.googleapis.com/minikube-builds/iso/17991/minikube-v1.32.1-1710520390-17991-arm64.iso\",\"KicBaseImage\":\"gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f\",\"Memory\":2200,\"CPUs\":2,\"DiskSize\":20000,\"Driver\":\"qemu2\",\"HyperkitVpnKitSock\":\"\",\"HyperkitVSockPorts\":[],\"DockerEnv\":null,\"ContainerVolumeMounts\":null,\"InsecureRegistry\":null,\"RegistryMirror\":[],\"HostOnlyCIDR\":\"192.168.59.1/24\",\"HypervVirtualSwitch\":\"\",\"HypervUseExternalSwitch\":false,\"HypervExternalAdapter\":\"\",\"KVMNetwork\":\"default\",\"KVMQemuURI\":\"qemu:///system\",\"KVMGPU\":false,\"KVMHidden\":false,\"KVMNUMACount\":1,
\"APIServerPort\":8443,\"DockerOpt\":null,\"DisableDriverMounts\":false,\"NFSShare\":[],\"NFSSharesRoot\":\"/nfsshares\",\"UUID\":\"\",\"NoVTXCheck\":false,\"DNSProxy\":false,\"HostDNSResolver\":true,\"HostOnlyNicType\":\"virtio\",\"NatNicType\":\"virtio\",\"SSHIPAddress\":\"\",\"SSHUser\":\"root\",\"SSHKey\":\"\",\"SSHPort\":22,\"KubernetesConfig\":{\"KubernetesVersion\":\"v1.28.4\",\"ClusterName\":\"ha-218000\",\"Namespace\":\"default\",\"APIServerHAVIP\":\"\",\"APIServerName\":\"minikubeCA\",\"APIServerNames\":null,\"APIServerIPs\":null,\"DNSDomain\":\"cluster.local\",\"ContainerRuntime\":\"docker\",\"CRISocket\":\"\",\"NetworkPlugin\":\"cni\",\"FeatureGates\":\"\",\"ServiceCIDR\":\"10.96.0.0/12\",\"ImageRepository\":\"\",\"LoadBalancerStartIP\":\"\",\"LoadBalancerEndIP\":\"\",\"CustomIngressCert\":\"\",\"RegistryAliases\":\"\",\"ExtraOptions\":null,\"ShouldLoadCachedImages\":true,\"EnableDefaultCNI\":false,\"CNI\":\"\"},\"Nodes\":[{\"Name\":\"\",\"IP\":\"\",\"Port\":8443,\"KubernetesVersion\":\"v1.28.4\",
\"ContainerRuntime\":\"docker\",\"ControlPlane\":true,\"Worker\":true}],\"Addons\":null,\"CustomAddonImages\":null,\"CustomAddonRegistries\":null,\"VerifyComponents\":{\"apiserver\":true,\"apps_running\":true,\"default_sa\":true,\"extra\":true,\"kubelet\":true,\"node_ready\":true,\"system_pods\":true},\"StartHostTimeout\":360000000000,\"ScheduledStop\":null,\"ExposedPorts\":[],\"ListenAddress\":\"\",\"Network\":\"socket_vmnet\",\"Subnet\":\"\",\"MultiNodeRequested\":true,\"ExtraDisks\":0,\"CertExpiration\":94608000000000000,\"Mount\":false,\"MountString\":\"/Users:/minikube-host\",\"Mount9PVersion\":\"9p2000.L\",\"MountGID\":\"docker\",\"MountIP\":\"\",\"MountMSize\":262144,\"MountOptions\":[],\"MountPort\":0,\"MountType\":\"9p\",\"MountUID\":\"docker\",\"BinaryMirror\":\"\",\"DisableOptimizations\":false,\"DisableMetrics\":false,\"CustomQemuFirmwarePath\":\"\",\"SocketVMnetClientPath\":\"/opt/socket_vmnet/bin/socket_vmnet_client\",\"SocketVMnetPath\":\"/var/run/socket_vmnet\",\"StaticIP\":\"\",\"SSHAuthSock\
":\"\",\"SSHAgentPID\":0,\"GPUs\":\"\",\"AutoPauseInterval\":60000000000},\"Active\":false}]}"*. args: "out/minikube-darwin-arm64 profile list --output json"
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-218000 -n ha-218000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-218000 -n ha-218000: exit status 7 (32.251375ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "ha-218000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/HAppyAfterClusterStart (0.11s)
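The two assertions above decode `profile list --output json` and check the node count and status of the "ha-218000" profile; since the cluster was never created, the profile still carries a single node and a "Stopped" status instead of the expected four nodes and "HAppy". A minimal sketch of that decode is below, limited to fields visible in the JSON above; the struct names are hypothetical, not minikube's own types.

```go
// minimal sketch of decoding `minikube profile list --output json` and checking
// node count and status; only fields visible in the output above are modeled.
package main

import (
	"encoding/json"
	"fmt"
)

type profileList struct {
	Valid []struct {
		Name   string `json:"Name"`
		Status string `json:"Status"`
		Config struct {
			Nodes []struct {
				Name         string `json:"Name"`
				ControlPlane bool   `json:"ControlPlane"`
				Worker       bool   `json:"Worker"`
			} `json:"Nodes"`
		} `json:"Config"`
	} `json:"valid"`
}

func main() {
	// Trimmed-down version of the output shown in the failure above.
	raw := []byte(`{"invalid":[],"valid":[{"Name":"ha-218000","Status":"Stopped",
	  "Config":{"Nodes":[{"Name":"","ControlPlane":true,"Worker":true}]}}]}`)

	var pl profileList
	if err := json.Unmarshal(raw, &pl); err != nil {
		fmt.Println("decode failed:", err)
		return
	}
	for _, p := range pl.Valid {
		// The test expects 4 nodes and "HAppy"; this run has 1 node and "Stopped".
		fmt.Printf("%s: status=%s nodes=%d\n", p.Name, p.Status, len(p.Config.Nodes))
	}
}
```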

                                                
                                    
TestMultiControlPlane/serial/CopyFile (0.07s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/CopyFile
ha_test.go:326: (dbg) Run:  out/minikube-darwin-arm64 -p ha-218000 status --output json -v=7 --alsologtostderr
ha_test.go:326: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p ha-218000 status --output json -v=7 --alsologtostderr: exit status 7 (32.633958ms)

                                                
                                                
-- stdout --
	{"Name":"ha-218000","Host":"Stopped","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Stopped","Worker":false}

                                                
                                                
-- /stdout --
** stderr ** 
	I0318 04:25:45.702405   16356 out.go:291] Setting OutFile to fd 1 ...
	I0318 04:25:45.702534   16356 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0318 04:25:45.702539   16356 out.go:304] Setting ErrFile to fd 2...
	I0318 04:25:45.702541   16356 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0318 04:25:45.702675   16356 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18429-15072/.minikube/bin
	I0318 04:25:45.702806   16356 out.go:298] Setting JSON to true
	I0318 04:25:45.702821   16356 mustload.go:65] Loading cluster: ha-218000
	I0318 04:25:45.702872   16356 notify.go:220] Checking for updates...
	I0318 04:25:45.703013   16356 config.go:182] Loaded profile config "ha-218000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I0318 04:25:45.703019   16356 status.go:255] checking status of ha-218000 ...
	I0318 04:25:45.703223   16356 status.go:330] ha-218000 host status = "Stopped" (err=<nil>)
	I0318 04:25:45.703227   16356 status.go:343] host is not running, skipping remaining checks
	I0318 04:25:45.703229   16356 status.go:257] ha-218000 status: &{Name:ha-218000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
ha_test.go:333: failed to decode json from status: args "out/minikube-darwin-arm64 -p ha-218000 status --output json -v=7 --alsologtostderr": json: cannot unmarshal object into Go value of type []cmd.Status
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-218000 -n ha-218000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-218000 -n ha-218000: exit status 7 (31.951709ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "ha-218000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/CopyFile (0.07s)
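The decode error above ("cannot unmarshal object into Go value of type []cmd.Status") occurs because `status --output json` printed a single JSON object for the one-node profile while the test decodes into a slice. A minimal sketch of a decode that tolerates both shapes is below; the local Status struct only mirrors the fields visible in the stdout above and is not minikube's cmd.Status.

```go
// minimal sketch of decoding `minikube status --output json` whether it prints
// one object (single node, as in the stdout above) or an array (multi-node).
package main

import (
	"encoding/json"
	"fmt"
)

type Status struct {
	Name       string
	Host       string
	Kubelet    string
	APIServer  string
	Kubeconfig string
	Worker     bool
}

func decodeStatuses(out []byte) ([]Status, error) {
	var many []Status
	if err := json.Unmarshal(out, &many); err == nil {
		return many, nil
	}
	// Fall back to the single-object form that made the []cmd.Status decode fail.
	var one Status
	if err := json.Unmarshal(out, &one); err != nil {
		return nil, err
	}
	return []Status{one}, nil
}

func main() {
	out := []byte(`{"Name":"ha-218000","Host":"Stopped","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Stopped","Worker":false}`)
	statuses, err := decodeStatuses(out)
	if err != nil {
		fmt.Println("decode failed:", err)
		return
	}
	fmt.Printf("%+v\n", statuses)
}
```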

                                                
                                    
TestMultiControlPlane/serial/StopSecondaryNode (0.11s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/StopSecondaryNode
ha_test.go:363: (dbg) Run:  out/minikube-darwin-arm64 -p ha-218000 node stop m02 -v=7 --alsologtostderr
ha_test.go:363: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p ha-218000 node stop m02 -v=7 --alsologtostderr: exit status 85 (49.008084ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0318 04:25:45.766696   16360 out.go:291] Setting OutFile to fd 1 ...
	I0318 04:25:45.767362   16360 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0318 04:25:45.767368   16360 out.go:304] Setting ErrFile to fd 2...
	I0318 04:25:45.767371   16360 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0318 04:25:45.767674   16360 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18429-15072/.minikube/bin
	I0318 04:25:45.768059   16360 mustload.go:65] Loading cluster: ha-218000
	I0318 04:25:45.768252   16360 config.go:182] Loaded profile config "ha-218000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I0318 04:25:45.771823   16360 out.go:177] 
	W0318 04:25:45.775787   16360 out.go:239] X Exiting due to GUEST_NODE_RETRIEVE: retrieving node: Could not find node m02
	X Exiting due to GUEST_NODE_RETRIEVE: retrieving node: Could not find node m02
	W0318 04:25:45.775791   16360 out.go:239] * 
	* 
	W0318 04:25:45.778382   16360 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                         │
	│    * If the above advice does not help, please let us know:                                                             │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
	│                                                                                                                         │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
	│    * Please also attach the following file to the GitHub issue:                                                         │
	│    * - /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/minikube_node_8ce24bb09be8aab84076d51946735f62cbf80299_0.log    │
	│                                                                                                                         │
	╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                         │
	│    * If the above advice does not help, please let us know:                                                             │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
	│                                                                                                                         │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
	│    * Please also attach the following file to the GitHub issue:                                                         │
	│    * - /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/minikube_node_8ce24bb09be8aab84076d51946735f62cbf80299_0.log    │
	│                                                                                                                         │
	╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
	I0318 04:25:45.781751   16360 out.go:177] 

                                                
                                                
** /stderr **
ha_test.go:365: secondary control-plane node stop returned an error. args "out/minikube-darwin-arm64 -p ha-218000 node stop m02 -v=7 --alsologtostderr": exit status 85
ha_test.go:369: (dbg) Run:  out/minikube-darwin-arm64 -p ha-218000 status -v=7 --alsologtostderr
ha_test.go:369: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p ha-218000 status -v=7 --alsologtostderr: exit status 7 (32.363666ms)

                                                
                                                
-- stdout --
	ha-218000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0318 04:25:45.816525   16362 out.go:291] Setting OutFile to fd 1 ...
	I0318 04:25:45.816697   16362 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0318 04:25:45.816701   16362 out.go:304] Setting ErrFile to fd 2...
	I0318 04:25:45.816703   16362 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0318 04:25:45.816837   16362 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18429-15072/.minikube/bin
	I0318 04:25:45.816962   16362 out.go:298] Setting JSON to false
	I0318 04:25:45.816974   16362 mustload.go:65] Loading cluster: ha-218000
	I0318 04:25:45.817041   16362 notify.go:220] Checking for updates...
	I0318 04:25:45.817170   16362 config.go:182] Loaded profile config "ha-218000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I0318 04:25:45.817177   16362 status.go:255] checking status of ha-218000 ...
	I0318 04:25:45.817380   16362 status.go:330] ha-218000 host status = "Stopped" (err=<nil>)
	I0318 04:25:45.817385   16362 status.go:343] host is not running, skipping remaining checks
	I0318 04:25:45.817387   16362 status.go:257] ha-218000 status: &{Name:ha-218000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
ha_test.go:375: status says not all three control-plane nodes are present: args "out/minikube-darwin-arm64 -p ha-218000 status -v=7 --alsologtostderr": ha-218000
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped

                                                
                                                
ha_test.go:378: status says not three hosts are running: args "out/minikube-darwin-arm64 -p ha-218000 status -v=7 --alsologtostderr": ha-218000
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped

                                                
                                                
ha_test.go:381: status says not three kubelets are running: args "out/minikube-darwin-arm64 -p ha-218000 status -v=7 --alsologtostderr": ha-218000
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped

                                                
                                                
ha_test.go:384: status says not two apiservers are running: args "out/minikube-darwin-arm64 -p ha-218000 status -v=7 --alsologtostderr": ha-218000
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped

                                                
                                                
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-218000 -n ha-218000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-218000 -n ha-218000: exit status 7 (32.365792ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "ha-218000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/StopSecondaryNode (0.11s)
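For anyone trying to reproduce this failure outside the test harness, here is a minimal, hypothetical sketch (not part of ha_test.go; the binary path and profile name are copied from the log above) that runs the same two commands and reports their exit codes. In this run, `node stop m02` exited 85 (GUEST_NODE_RETRIEVE, node m02 not found) and the follow-up `status` exited 7 because the profile's only host was stopped.

```go
// Hypothetical reproduction sketch, not part of the minikube test suite.
// It shells out to the same commands the test runs and prints their exit codes.
package main

import (
	"errors"
	"fmt"
	"os/exec"
)

const minikube = "out/minikube-darwin-arm64" // path used by the CI job above

func exitCode(args ...string) int {
	if err := exec.Command(minikube, args...).Run(); err != nil {
		var ee *exec.ExitError
		if errors.As(err, &ee) {
			return ee.ExitCode()
		}
		return -1 // the command could not be started at all
	}
	return 0
}

func main() {
	// exit status 85 in the log above maps to GUEST_NODE_RETRIEVE (node m02 not found)
	fmt.Println("node stop:", exitCode("-p", "ha-218000", "node", "stop", "m02"))
	// exit status 7 above accompanied "host is not running, skipping remaining checks"
	fmt.Println("status:   ", exitCode("-p", "ha-218000", "status"))
}
```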

                                                
                                    
TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (0.1s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop
ha_test.go:390: (dbg) Run:  out/minikube-darwin-arm64 profile list --output json
ha_test.go:413: expected profile "ha-218000" in json of 'profile list' to have "Degraded" status but have "Stopped" status. got *"{\"invalid\":[],\"valid\":[{\"Name\":\"ha-218000\",\"Status\":\"Stopped\",\"Config\":{\"Name\":\"ha-218000\",\"KeepContext\":false,\"EmbedCerts\":false,\"MinikubeISO\":\"https://storage.googleapis.com/minikube-builds/iso/17991/minikube-v1.32.1-1710520390-17991-arm64.iso\",\"KicBaseImage\":\"gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f\",\"Memory\":2200,\"CPUs\":2,\"DiskSize\":20000,\"Driver\":\"qemu2\",\"HyperkitVpnKitSock\":\"\",\"HyperkitVSockPorts\":[],\"DockerEnv\":null,\"ContainerVolumeMounts\":null,\"InsecureRegistry\":null,\"RegistryMirror\":[],\"HostOnlyCIDR\":\"192.168.59.1/24\",\"HypervVirtualSwitch\":\"\",\"HypervUseExternalSwitch\":false,\"HypervExternalAdapter\":\"\",\"KVMNetwork\":\"default\",\"KVMQemuURI\":\"qemu:///system\",\"KVMGPU\":false,\"KVMHidden\":false,\"KVMNUMACount\"
:1,\"APIServerPort\":8443,\"DockerOpt\":null,\"DisableDriverMounts\":false,\"NFSShare\":[],\"NFSSharesRoot\":\"/nfsshares\",\"UUID\":\"\",\"NoVTXCheck\":false,\"DNSProxy\":false,\"HostDNSResolver\":true,\"HostOnlyNicType\":\"virtio\",\"NatNicType\":\"virtio\",\"SSHIPAddress\":\"\",\"SSHUser\":\"root\",\"SSHKey\":\"\",\"SSHPort\":22,\"KubernetesConfig\":{\"KubernetesVersion\":\"v1.28.4\",\"ClusterName\":\"ha-218000\",\"Namespace\":\"default\",\"APIServerHAVIP\":\"\",\"APIServerName\":\"minikubeCA\",\"APIServerNames\":null,\"APIServerIPs\":null,\"DNSDomain\":\"cluster.local\",\"ContainerRuntime\":\"docker\",\"CRISocket\":\"\",\"NetworkPlugin\":\"cni\",\"FeatureGates\":\"\",\"ServiceCIDR\":\"10.96.0.0/12\",\"ImageRepository\":\"\",\"LoadBalancerStartIP\":\"\",\"LoadBalancerEndIP\":\"\",\"CustomIngressCert\":\"\",\"RegistryAliases\":\"\",\"ExtraOptions\":null,\"ShouldLoadCachedImages\":true,\"EnableDefaultCNI\":false,\"CNI\":\"\"},\"Nodes\":[{\"Name\":\"\",\"IP\":\"\",\"Port\":8443,\"KubernetesVersion\":\"v1.28.4
\",\"ContainerRuntime\":\"docker\",\"ControlPlane\":true,\"Worker\":true}],\"Addons\":null,\"CustomAddonImages\":null,\"CustomAddonRegistries\":null,\"VerifyComponents\":{\"apiserver\":true,\"apps_running\":true,\"default_sa\":true,\"extra\":true,\"kubelet\":true,\"node_ready\":true,\"system_pods\":true},\"StartHostTimeout\":360000000000,\"ScheduledStop\":null,\"ExposedPorts\":[],\"ListenAddress\":\"\",\"Network\":\"socket_vmnet\",\"Subnet\":\"\",\"MultiNodeRequested\":true,\"ExtraDisks\":0,\"CertExpiration\":94608000000000000,\"Mount\":false,\"MountString\":\"/Users:/minikube-host\",\"Mount9PVersion\":\"9p2000.L\",\"MountGID\":\"docker\",\"MountIP\":\"\",\"MountMSize\":262144,\"MountOptions\":[],\"MountPort\":0,\"MountType\":\"9p\",\"MountUID\":\"docker\",\"BinaryMirror\":\"\",\"DisableOptimizations\":false,\"DisableMetrics\":false,\"CustomQemuFirmwarePath\":\"\",\"SocketVMnetClientPath\":\"/opt/socket_vmnet/bin/socket_vmnet_client\",\"SocketVMnetPath\":\"/var/run/socket_vmnet\",\"StaticIP\":\"\",\"SSHAuthSo
ck\":\"\",\"SSHAgentPID\":0,\"GPUs\":\"\",\"AutoPauseInterval\":60000000000},\"Active\":false}]}"*. args: "out/minikube-darwin-arm64 profile list --output json"
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-218000 -n ha-218000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-218000 -n ha-218000: exit status 7 (31.491167ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "ha-218000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (0.10s)
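The assertion at ha_test.go:413 inspects the JSON printed by `profile list --output json` and expects a "Degraded" cluster status, but finds "Stopped". The escaped blob in the log is hard to read; the sketch below (hypothetical, standard library only) decodes just the fields the assertion looks at, with a struct limited to keys visible in the output above.

```go
// Minimal sketch (not from the test suite) for inspecting the output of
// `minikube profile list --output json`. The struct only declares the fields
// visible in the report above; all other keys are ignored by encoding/json.
package main

import (
	"encoding/json"
	"fmt"
	"os/exec"
)

type profileList struct {
	Valid []struct {
		Name   string `json:"Name"`
		Status string `json:"Status"`
		Config struct {
			Nodes []struct {
				ControlPlane bool `json:"ControlPlane"`
				Worker       bool `json:"Worker"`
			} `json:"Nodes"`
		} `json:"Config"`
	} `json:"valid"`
}

func main() {
	out, err := exec.Command("out/minikube-darwin-arm64", "profile", "list", "--output", "json").Output()
	if err != nil {
		fmt.Println("profile list failed:", err)
		return
	}
	var pl profileList
	if err := json.Unmarshal(out, &pl); err != nil {
		fmt.Println("decode failed:", err)
		return
	}
	for _, p := range pl.Valid {
		// In the failing run above, Status is "Stopped" and only one node is listed.
		fmt.Printf("%s: status=%s nodes=%d\n", p.Name, p.Status, len(p.Config.Nodes))
	}
}
```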

                                                
                                    
TestMultiControlPlane/serial/RestartSecondaryNode (48.72s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/RestartSecondaryNode
ha_test.go:420: (dbg) Run:  out/minikube-darwin-arm64 -p ha-218000 node start m02 -v=7 --alsologtostderr
ha_test.go:420: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p ha-218000 node start m02 -v=7 --alsologtostderr: exit status 85 (51.183083ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0318 04:25:45.986531   16372 out.go:291] Setting OutFile to fd 1 ...
	I0318 04:25:45.986898   16372 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0318 04:25:45.986902   16372 out.go:304] Setting ErrFile to fd 2...
	I0318 04:25:45.986905   16372 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0318 04:25:45.987058   16372 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18429-15072/.minikube/bin
	I0318 04:25:45.987286   16372 mustload.go:65] Loading cluster: ha-218000
	I0318 04:25:45.987476   16372 config.go:182] Loaded profile config "ha-218000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I0318 04:25:45.991261   16372 out.go:177] 
	W0318 04:25:45.995018   16372 out.go:239] X Exiting due to GUEST_NODE_RETRIEVE: retrieving node: Could not find node m02
	X Exiting due to GUEST_NODE_RETRIEVE: retrieving node: Could not find node m02
	W0318 04:25:45.995023   16372 out.go:239] * 
	* 
	W0318 04:25:45.997040   16372 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                         │
	│    * If the above advice does not help, please let us know:                                                             │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
	│                                                                                                                         │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
	│    * Please also attach the following file to the GitHub issue:                                                         │
	│    * - /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/minikube_node_6a758bccf1d363a5d0799efcdea444172a621e97_0.log    │
	│                                                                                                                         │
	╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                         │
	│    * If the above advice does not help, please let us know:                                                             │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
	│                                                                                                                         │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
	│    * Please also attach the following file to the GitHub issue:                                                         │
	│    * - /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/minikube_node_6a758bccf1d363a5d0799efcdea444172a621e97_0.log    │
	│                                                                                                                         │
	╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
	I0318 04:25:46.002084   16372 out.go:177] 

                                                
                                                
** /stderr **
ha_test.go:422: I0318 04:25:45.986531   16372 out.go:291] Setting OutFile to fd 1 ...
I0318 04:25:45.986898   16372 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0318 04:25:45.986902   16372 out.go:304] Setting ErrFile to fd 2...
I0318 04:25:45.986905   16372 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0318 04:25:45.987058   16372 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18429-15072/.minikube/bin
I0318 04:25:45.987286   16372 mustload.go:65] Loading cluster: ha-218000
I0318 04:25:45.987476   16372 config.go:182] Loaded profile config "ha-218000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.4
I0318 04:25:45.991261   16372 out.go:177] 
W0318 04:25:45.995018   16372 out.go:239] X Exiting due to GUEST_NODE_RETRIEVE: retrieving node: Could not find node m02
X Exiting due to GUEST_NODE_RETRIEVE: retrieving node: Could not find node m02
W0318 04:25:45.995023   16372 out.go:239] * 
* 
W0318 04:25:45.997040   16372 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
│                                                                                                                         │
│    * If the above advice does not help, please let us know:                                                             │
│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
│                                                                                                                         │
│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
│    * Please also attach the following file to the GitHub issue:                                                         │
│    * - /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/minikube_node_6a758bccf1d363a5d0799efcdea444172a621e97_0.log    │
│                                                                                                                         │
╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
│                                                                                                                         │
│    * If the above advice does not help, please let us know:                                                             │
│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
│                                                                                                                         │
│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
│    * Please also attach the following file to the GitHub issue:                                                         │
│    * - /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/minikube_node_6a758bccf1d363a5d0799efcdea444172a621e97_0.log    │
│                                                                                                                         │
╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
I0318 04:25:46.002084   16372 out.go:177] 
ha_test.go:423: secondary control-plane node start returned an error. args "out/minikube-darwin-arm64 -p ha-218000 node start m02 -v=7 --alsologtostderr": exit status 85
ha_test.go:428: (dbg) Run:  out/minikube-darwin-arm64 -p ha-218000 status -v=7 --alsologtostderr
ha_test.go:428: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p ha-218000 status -v=7 --alsologtostderr: exit status 7 (32.17175ms)

                                                
                                                
-- stdout --
	ha-218000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0318 04:25:46.037549   16374 out.go:291] Setting OutFile to fd 1 ...
	I0318 04:25:46.037712   16374 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0318 04:25:46.037715   16374 out.go:304] Setting ErrFile to fd 2...
	I0318 04:25:46.037717   16374 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0318 04:25:46.037839   16374 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18429-15072/.minikube/bin
	I0318 04:25:46.037965   16374 out.go:298] Setting JSON to false
	I0318 04:25:46.037976   16374 mustload.go:65] Loading cluster: ha-218000
	I0318 04:25:46.038034   16374 notify.go:220] Checking for updates...
	I0318 04:25:46.038188   16374 config.go:182] Loaded profile config "ha-218000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I0318 04:25:46.038195   16374 status.go:255] checking status of ha-218000 ...
	I0318 04:25:46.038409   16374 status.go:330] ha-218000 host status = "Stopped" (err=<nil>)
	I0318 04:25:46.038412   16374 status.go:343] host is not running, skipping remaining checks
	I0318 04:25:46.038414   16374 status.go:257] ha-218000 status: &{Name:ha-218000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
ha_test.go:428: (dbg) Run:  out/minikube-darwin-arm64 -p ha-218000 status -v=7 --alsologtostderr
ha_test.go:428: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p ha-218000 status -v=7 --alsologtostderr: exit status 7 (76.929292ms)

                                                
                                                
-- stdout --
	ha-218000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0318 04:25:46.863620   16376 out.go:291] Setting OutFile to fd 1 ...
	I0318 04:25:46.863773   16376 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0318 04:25:46.863777   16376 out.go:304] Setting ErrFile to fd 2...
	I0318 04:25:46.863786   16376 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0318 04:25:46.863920   16376 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18429-15072/.minikube/bin
	I0318 04:25:46.864049   16376 out.go:298] Setting JSON to false
	I0318 04:25:46.864063   16376 mustload.go:65] Loading cluster: ha-218000
	I0318 04:25:46.864095   16376 notify.go:220] Checking for updates...
	I0318 04:25:46.864275   16376 config.go:182] Loaded profile config "ha-218000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I0318 04:25:46.864283   16376 status.go:255] checking status of ha-218000 ...
	I0318 04:25:46.864552   16376 status.go:330] ha-218000 host status = "Stopped" (err=<nil>)
	I0318 04:25:46.864557   16376 status.go:343] host is not running, skipping remaining checks
	I0318 04:25:46.864560   16376 status.go:257] ha-218000 status: &{Name:ha-218000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
ha_test.go:428: (dbg) Run:  out/minikube-darwin-arm64 -p ha-218000 status -v=7 --alsologtostderr
ha_test.go:428: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p ha-218000 status -v=7 --alsologtostderr: exit status 7 (77.142417ms)

                                                
                                                
-- stdout --
	ha-218000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0318 04:25:48.531136   16378 out.go:291] Setting OutFile to fd 1 ...
	I0318 04:25:48.531284   16378 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0318 04:25:48.531288   16378 out.go:304] Setting ErrFile to fd 2...
	I0318 04:25:48.531291   16378 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0318 04:25:48.531443   16378 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18429-15072/.minikube/bin
	I0318 04:25:48.531605   16378 out.go:298] Setting JSON to false
	I0318 04:25:48.531619   16378 mustload.go:65] Loading cluster: ha-218000
	I0318 04:25:48.531659   16378 notify.go:220] Checking for updates...
	I0318 04:25:48.531839   16378 config.go:182] Loaded profile config "ha-218000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I0318 04:25:48.531846   16378 status.go:255] checking status of ha-218000 ...
	I0318 04:25:48.532081   16378 status.go:330] ha-218000 host status = "Stopped" (err=<nil>)
	I0318 04:25:48.532085   16378 status.go:343] host is not running, skipping remaining checks
	I0318 04:25:48.532088   16378 status.go:257] ha-218000 status: &{Name:ha-218000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
ha_test.go:428: (dbg) Run:  out/minikube-darwin-arm64 -p ha-218000 status -v=7 --alsologtostderr
ha_test.go:428: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p ha-218000 status -v=7 --alsologtostderr: exit status 7 (77.194625ms)

                                                
                                                
-- stdout --
	ha-218000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0318 04:25:50.692785   16380 out.go:291] Setting OutFile to fd 1 ...
	I0318 04:25:50.692975   16380 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0318 04:25:50.692980   16380 out.go:304] Setting ErrFile to fd 2...
	I0318 04:25:50.692982   16380 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0318 04:25:50.693136   16380 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18429-15072/.minikube/bin
	I0318 04:25:50.693290   16380 out.go:298] Setting JSON to false
	I0318 04:25:50.693304   16380 mustload.go:65] Loading cluster: ha-218000
	I0318 04:25:50.693346   16380 notify.go:220] Checking for updates...
	I0318 04:25:50.693539   16380 config.go:182] Loaded profile config "ha-218000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I0318 04:25:50.693547   16380 status.go:255] checking status of ha-218000 ...
	I0318 04:25:50.693797   16380 status.go:330] ha-218000 host status = "Stopped" (err=<nil>)
	I0318 04:25:50.693802   16380 status.go:343] host is not running, skipping remaining checks
	I0318 04:25:50.693805   16380 status.go:257] ha-218000 status: &{Name:ha-218000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
ha_test.go:428: (dbg) Run:  out/minikube-darwin-arm64 -p ha-218000 status -v=7 --alsologtostderr
ha_test.go:428: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p ha-218000 status -v=7 --alsologtostderr: exit status 7 (77.228625ms)

                                                
                                                
-- stdout --
	ha-218000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0318 04:25:55.332713   16382 out.go:291] Setting OutFile to fd 1 ...
	I0318 04:25:55.332884   16382 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0318 04:25:55.332888   16382 out.go:304] Setting ErrFile to fd 2...
	I0318 04:25:55.332892   16382 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0318 04:25:55.333049   16382 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18429-15072/.minikube/bin
	I0318 04:25:55.333222   16382 out.go:298] Setting JSON to false
	I0318 04:25:55.333236   16382 mustload.go:65] Loading cluster: ha-218000
	I0318 04:25:55.333278   16382 notify.go:220] Checking for updates...
	I0318 04:25:55.333501   16382 config.go:182] Loaded profile config "ha-218000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I0318 04:25:55.333510   16382 status.go:255] checking status of ha-218000 ...
	I0318 04:25:55.333774   16382 status.go:330] ha-218000 host status = "Stopped" (err=<nil>)
	I0318 04:25:55.333779   16382 status.go:343] host is not running, skipping remaining checks
	I0318 04:25:55.333782   16382 status.go:257] ha-218000 status: &{Name:ha-218000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
ha_test.go:428: (dbg) Run:  out/minikube-darwin-arm64 -p ha-218000 status -v=7 --alsologtostderr
ha_test.go:428: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p ha-218000 status -v=7 --alsologtostderr: exit status 7 (75.701375ms)

                                                
                                                
-- stdout --
	ha-218000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0318 04:25:58.806108   16387 out.go:291] Setting OutFile to fd 1 ...
	I0318 04:25:58.806270   16387 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0318 04:25:58.806275   16387 out.go:304] Setting ErrFile to fd 2...
	I0318 04:25:58.806277   16387 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0318 04:25:58.806430   16387 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18429-15072/.minikube/bin
	I0318 04:25:58.806588   16387 out.go:298] Setting JSON to false
	I0318 04:25:58.806606   16387 mustload.go:65] Loading cluster: ha-218000
	I0318 04:25:58.806644   16387 notify.go:220] Checking for updates...
	I0318 04:25:58.806901   16387 config.go:182] Loaded profile config "ha-218000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I0318 04:25:58.806910   16387 status.go:255] checking status of ha-218000 ...
	I0318 04:25:58.807182   16387 status.go:330] ha-218000 host status = "Stopped" (err=<nil>)
	I0318 04:25:58.807187   16387 status.go:343] host is not running, skipping remaining checks
	I0318 04:25:58.807190   16387 status.go:257] ha-218000 status: &{Name:ha-218000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
ha_test.go:428: (dbg) Run:  out/minikube-darwin-arm64 -p ha-218000 status -v=7 --alsologtostderr
ha_test.go:428: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p ha-218000 status -v=7 --alsologtostderr: exit status 7 (80.501292ms)

                                                
                                                
-- stdout --
	ha-218000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0318 04:26:09.801426   16389 out.go:291] Setting OutFile to fd 1 ...
	I0318 04:26:09.801597   16389 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0318 04:26:09.801602   16389 out.go:304] Setting ErrFile to fd 2...
	I0318 04:26:09.801605   16389 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0318 04:26:09.802107   16389 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18429-15072/.minikube/bin
	I0318 04:26:09.802390   16389 out.go:298] Setting JSON to false
	I0318 04:26:09.802411   16389 mustload.go:65] Loading cluster: ha-218000
	I0318 04:26:09.802692   16389 notify.go:220] Checking for updates...
	I0318 04:26:09.803042   16389 config.go:182] Loaded profile config "ha-218000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I0318 04:26:09.803058   16389 status.go:255] checking status of ha-218000 ...
	I0318 04:26:09.803328   16389 status.go:330] ha-218000 host status = "Stopped" (err=<nil>)
	I0318 04:26:09.803334   16389 status.go:343] host is not running, skipping remaining checks
	I0318 04:26:09.803337   16389 status.go:257] ha-218000 status: &{Name:ha-218000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
ha_test.go:428: (dbg) Run:  out/minikube-darwin-arm64 -p ha-218000 status -v=7 --alsologtostderr
ha_test.go:428: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p ha-218000 status -v=7 --alsologtostderr: exit status 7 (76.912542ms)

                                                
                                                
-- stdout --
	ha-218000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0318 04:26:17.751825   16391 out.go:291] Setting OutFile to fd 1 ...
	I0318 04:26:17.752051   16391 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0318 04:26:17.752056   16391 out.go:304] Setting ErrFile to fd 2...
	I0318 04:26:17.752059   16391 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0318 04:26:17.752221   16391 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18429-15072/.minikube/bin
	I0318 04:26:17.752373   16391 out.go:298] Setting JSON to false
	I0318 04:26:17.752388   16391 mustload.go:65] Loading cluster: ha-218000
	I0318 04:26:17.752426   16391 notify.go:220] Checking for updates...
	I0318 04:26:17.752645   16391 config.go:182] Loaded profile config "ha-218000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I0318 04:26:17.752654   16391 status.go:255] checking status of ha-218000 ...
	I0318 04:26:17.752911   16391 status.go:330] ha-218000 host status = "Stopped" (err=<nil>)
	I0318 04:26:17.752917   16391 status.go:343] host is not running, skipping remaining checks
	I0318 04:26:17.752920   16391 status.go:257] ha-218000 status: &{Name:ha-218000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
ha_test.go:428: (dbg) Run:  out/minikube-darwin-arm64 -p ha-218000 status -v=7 --alsologtostderr
ha_test.go:428: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p ha-218000 status -v=7 --alsologtostderr: exit status 7 (77.341583ms)

                                                
                                                
-- stdout --
	ha-218000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0318 04:26:34.640960   16393 out.go:291] Setting OutFile to fd 1 ...
	I0318 04:26:34.641370   16393 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0318 04:26:34.641376   16393 out.go:304] Setting ErrFile to fd 2...
	I0318 04:26:34.641379   16393 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0318 04:26:34.641622   16393 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18429-15072/.minikube/bin
	I0318 04:26:34.641844   16393 out.go:298] Setting JSON to false
	I0318 04:26:34.641863   16393 mustload.go:65] Loading cluster: ha-218000
	I0318 04:26:34.642026   16393 notify.go:220] Checking for updates...
	I0318 04:26:34.642498   16393 config.go:182] Loaded profile config "ha-218000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I0318 04:26:34.642514   16393 status.go:255] checking status of ha-218000 ...
	I0318 04:26:34.642789   16393 status.go:330] ha-218000 host status = "Stopped" (err=<nil>)
	I0318 04:26:34.642795   16393 status.go:343] host is not running, skipping remaining checks
	I0318 04:26:34.642798   16393 status.go:257] ha-218000 status: &{Name:ha-218000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
ha_test.go:432: failed to run minikube status. args "out/minikube-darwin-arm64 -p ha-218000 status -v=7 --alsologtostderr" : exit status 7
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-218000 -n ha-218000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-218000 -n ha-218000: exit status 7 (34.327042ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "ha-218000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/RestartSecondaryNode (48.72s)
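ha_test.go:428 re-runs `minikube status` at increasing intervals (the timestamps above run from 04:25:46 to 04:26:34) before giving up at ha_test.go:432; since the VM never comes up, every poll returns exit status 7. The loop below is an illustrative poll-until-running sketch using only the standard library; the delay schedule is an assumption, not the test's actual retry logic.

```go
// Illustrative polling loop (not the test's actual retry logic): re-run
// `minikube status` with a growing delay until it exits 0 or a deadline passes.
package main

import (
	"errors"
	"fmt"
	"os/exec"
	"time"
)

func statusExitCode(profile string) int {
	err := exec.Command("out/minikube-darwin-arm64", "-p", profile, "status").Run()
	if err == nil {
		return 0
	}
	var ee *exec.ExitError
	if errors.As(err, &ee) {
		return ee.ExitCode()
	}
	return -1
}

func main() {
	deadline := time.Now().Add(45 * time.Second) // roughly the window seen in the log
	delay := time.Second
	for time.Now().Before(deadline) {
		code := statusExitCode("ha-218000")
		fmt.Printf("%s status exit=%d\n", time.Now().Format("15:04:05"), code)
		if code == 0 {
			return // all components reported running
		}
		time.Sleep(delay)
		if delay < 16*time.Second {
			delay *= 2 // back off between polls, as the timestamps above suggest
		}
	}
	fmt.Println("gave up: host never came back (every poll exited 7 in this run)")
}
```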

                                                
                                    
TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (0.11s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart
ha_test.go:281: (dbg) Run:  out/minikube-darwin-arm64 profile list --output json
ha_test.go:304: expected profile "ha-218000" in json of 'profile list' to include 4 nodes but have 1 nodes. got *"{\"invalid\":[],\"valid\":[{\"Name\":\"ha-218000\",\"Status\":\"Stopped\",\"Config\":{\"Name\":\"ha-218000\",\"KeepContext\":false,\"EmbedCerts\":false,\"MinikubeISO\":\"https://storage.googleapis.com/minikube-builds/iso/17991/minikube-v1.32.1-1710520390-17991-arm64.iso\",\"KicBaseImage\":\"gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f\",\"Memory\":2200,\"CPUs\":2,\"DiskSize\":20000,\"Driver\":\"qemu2\",\"HyperkitVpnKitSock\":\"\",\"HyperkitVSockPorts\":[],\"DockerEnv\":null,\"ContainerVolumeMounts\":null,\"InsecureRegistry\":null,\"RegistryMirror\":[],\"HostOnlyCIDR\":\"192.168.59.1/24\",\"HypervVirtualSwitch\":\"\",\"HypervUseExternalSwitch\":false,\"HypervExternalAdapter\":\"\",\"KVMNetwork\":\"default\",\"KVMQemuURI\":\"qemu:///system\",\"KVMGPU\":false,\"KVMHidden\":false,\"KVMNUMACount\":1,\"APIServerPo
rt\":8443,\"DockerOpt\":null,\"DisableDriverMounts\":false,\"NFSShare\":[],\"NFSSharesRoot\":\"/nfsshares\",\"UUID\":\"\",\"NoVTXCheck\":false,\"DNSProxy\":false,\"HostDNSResolver\":true,\"HostOnlyNicType\":\"virtio\",\"NatNicType\":\"virtio\",\"SSHIPAddress\":\"\",\"SSHUser\":\"root\",\"SSHKey\":\"\",\"SSHPort\":22,\"KubernetesConfig\":{\"KubernetesVersion\":\"v1.28.4\",\"ClusterName\":\"ha-218000\",\"Namespace\":\"default\",\"APIServerHAVIP\":\"\",\"APIServerName\":\"minikubeCA\",\"APIServerNames\":null,\"APIServerIPs\":null,\"DNSDomain\":\"cluster.local\",\"ContainerRuntime\":\"docker\",\"CRISocket\":\"\",\"NetworkPlugin\":\"cni\",\"FeatureGates\":\"\",\"ServiceCIDR\":\"10.96.0.0/12\",\"ImageRepository\":\"\",\"LoadBalancerStartIP\":\"\",\"LoadBalancerEndIP\":\"\",\"CustomIngressCert\":\"\",\"RegistryAliases\":\"\",\"ExtraOptions\":null,\"ShouldLoadCachedImages\":true,\"EnableDefaultCNI\":false,\"CNI\":\"\"},\"Nodes\":[{\"Name\":\"\",\"IP\":\"\",\"Port\":8443,\"KubernetesVersion\":\"v1.28.4\",\"ContainerRu
ntime\":\"docker\",\"ControlPlane\":true,\"Worker\":true}],\"Addons\":null,\"CustomAddonImages\":null,\"CustomAddonRegistries\":null,\"VerifyComponents\":{\"apiserver\":true,\"apps_running\":true,\"default_sa\":true,\"extra\":true,\"kubelet\":true,\"node_ready\":true,\"system_pods\":true},\"StartHostTimeout\":360000000000,\"ScheduledStop\":null,\"ExposedPorts\":[],\"ListenAddress\":\"\",\"Network\":\"socket_vmnet\",\"Subnet\":\"\",\"MultiNodeRequested\":true,\"ExtraDisks\":0,\"CertExpiration\":94608000000000000,\"Mount\":false,\"MountString\":\"/Users:/minikube-host\",\"Mount9PVersion\":\"9p2000.L\",\"MountGID\":\"docker\",\"MountIP\":\"\",\"MountMSize\":262144,\"MountOptions\":[],\"MountPort\":0,\"MountType\":\"9p\",\"MountUID\":\"docker\",\"BinaryMirror\":\"\",\"DisableOptimizations\":false,\"DisableMetrics\":false,\"CustomQemuFirmwarePath\":\"\",\"SocketVMnetClientPath\":\"/opt/socket_vmnet/bin/socket_vmnet_client\",\"SocketVMnetPath\":\"/var/run/socket_vmnet\",\"StaticIP\":\"\",\"SSHAuthSock\":\"\",\"SSHA
gentPID\":0,\"GPUs\":\"\",\"AutoPauseInterval\":60000000000},\"Active\":false}]}"*. args: "out/minikube-darwin-arm64 profile list --output json"
ha_test.go:307: expected profile "ha-218000" in json of 'profile list' to have "HAppy" status but have "Stopped" status. got *"{\"invalid\":[],\"valid\":[{\"Name\":\"ha-218000\",\"Status\":\"Stopped\",\"Config\":{\"Name\":\"ha-218000\",\"KeepContext\":false,\"EmbedCerts\":false,\"MinikubeISO\":\"https://storage.googleapis.com/minikube-builds/iso/17991/minikube-v1.32.1-1710520390-17991-arm64.iso\",\"KicBaseImage\":\"gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f\",\"Memory\":2200,\"CPUs\":2,\"DiskSize\":20000,\"Driver\":\"qemu2\",\"HyperkitVpnKitSock\":\"\",\"HyperkitVSockPorts\":[],\"DockerEnv\":null,\"ContainerVolumeMounts\":null,\"InsecureRegistry\":null,\"RegistryMirror\":[],\"HostOnlyCIDR\":\"192.168.59.1/24\",\"HypervVirtualSwitch\":\"\",\"HypervUseExternalSwitch\":false,\"HypervExternalAdapter\":\"\",\"KVMNetwork\":\"default\",\"KVMQemuURI\":\"qemu:///system\",\"KVMGPU\":false,\"KVMHidden\":false,\"KVMNUMACount\":1,
\"APIServerPort\":8443,\"DockerOpt\":null,\"DisableDriverMounts\":false,\"NFSShare\":[],\"NFSSharesRoot\":\"/nfsshares\",\"UUID\":\"\",\"NoVTXCheck\":false,\"DNSProxy\":false,\"HostDNSResolver\":true,\"HostOnlyNicType\":\"virtio\",\"NatNicType\":\"virtio\",\"SSHIPAddress\":\"\",\"SSHUser\":\"root\",\"SSHKey\":\"\",\"SSHPort\":22,\"KubernetesConfig\":{\"KubernetesVersion\":\"v1.28.4\",\"ClusterName\":\"ha-218000\",\"Namespace\":\"default\",\"APIServerHAVIP\":\"\",\"APIServerName\":\"minikubeCA\",\"APIServerNames\":null,\"APIServerIPs\":null,\"DNSDomain\":\"cluster.local\",\"ContainerRuntime\":\"docker\",\"CRISocket\":\"\",\"NetworkPlugin\":\"cni\",\"FeatureGates\":\"\",\"ServiceCIDR\":\"10.96.0.0/12\",\"ImageRepository\":\"\",\"LoadBalancerStartIP\":\"\",\"LoadBalancerEndIP\":\"\",\"CustomIngressCert\":\"\",\"RegistryAliases\":\"\",\"ExtraOptions\":null,\"ShouldLoadCachedImages\":true,\"EnableDefaultCNI\":false,\"CNI\":\"\"},\"Nodes\":[{\"Name\":\"\",\"IP\":\"\",\"Port\":8443,\"KubernetesVersion\":\"v1.28.4\",
\"ContainerRuntime\":\"docker\",\"ControlPlane\":true,\"Worker\":true}],\"Addons\":null,\"CustomAddonImages\":null,\"CustomAddonRegistries\":null,\"VerifyComponents\":{\"apiserver\":true,\"apps_running\":true,\"default_sa\":true,\"extra\":true,\"kubelet\":true,\"node_ready\":true,\"system_pods\":true},\"StartHostTimeout\":360000000000,\"ScheduledStop\":null,\"ExposedPorts\":[],\"ListenAddress\":\"\",\"Network\":\"socket_vmnet\",\"Subnet\":\"\",\"MultiNodeRequested\":true,\"ExtraDisks\":0,\"CertExpiration\":94608000000000000,\"Mount\":false,\"MountString\":\"/Users:/minikube-host\",\"Mount9PVersion\":\"9p2000.L\",\"MountGID\":\"docker\",\"MountIP\":\"\",\"MountMSize\":262144,\"MountOptions\":[],\"MountPort\":0,\"MountType\":\"9p\",\"MountUID\":\"docker\",\"BinaryMirror\":\"\",\"DisableOptimizations\":false,\"DisableMetrics\":false,\"CustomQemuFirmwarePath\":\"\",\"SocketVMnetClientPath\":\"/opt/socket_vmnet/bin/socket_vmnet_client\",\"SocketVMnetPath\":\"/var/run/socket_vmnet\",\"StaticIP\":\"\",\"SSHAuthSock\
":\"\",\"SSHAgentPID\":0,\"GPUs\":\"\",\"AutoPauseInterval\":60000000000},\"Active\":false}]}"*. args: "out/minikube-darwin-arm64 profile list --output json"
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-218000 -n ha-218000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-218000 -n ha-218000: exit status 7 (32.242333ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "ha-218000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (0.11s)

                                                
                                    
TestMultiControlPlane/serial/RestartClusterKeepsNodes (7.16s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/RestartClusterKeepsNodes
ha_test.go:456: (dbg) Run:  out/minikube-darwin-arm64 node list -p ha-218000 -v=7 --alsologtostderr
ha_test.go:462: (dbg) Run:  out/minikube-darwin-arm64 stop -p ha-218000 -v=7 --alsologtostderr
ha_test.go:462: (dbg) Done: out/minikube-darwin-arm64 stop -p ha-218000 -v=7 --alsologtostderr: (1.79642675s)
ha_test.go:467: (dbg) Run:  out/minikube-darwin-arm64 start -p ha-218000 --wait=true -v=7 --alsologtostderr
ha_test.go:467: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p ha-218000 --wait=true -v=7 --alsologtostderr: exit status 80 (5.226578292s)

                                                
                                                
-- stdout --
	* [ha-218000] minikube v1.32.0 on Darwin 14.3.1 (arm64)
	  - MINIKUBE_LOCATION=18429
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18429-15072/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18429-15072/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting "ha-218000" primary control-plane node in "ha-218000" cluster
	* Restarting existing qemu2 VM for "ha-218000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "ha-218000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0318 04:26:36.679351   16418 out.go:291] Setting OutFile to fd 1 ...
	I0318 04:26:36.679490   16418 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0318 04:26:36.679494   16418 out.go:304] Setting ErrFile to fd 2...
	I0318 04:26:36.679497   16418 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0318 04:26:36.679668   16418 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18429-15072/.minikube/bin
	I0318 04:26:36.680812   16418 out.go:298] Setting JSON to false
	I0318 04:26:36.699416   16418 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":8769,"bootTime":1710752427,"procs":476,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.3.1","kernelVersion":"23.3.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0318 04:26:36.699481   16418 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0318 04:26:36.704849   16418 out.go:177] * [ha-218000] minikube v1.32.0 on Darwin 14.3.1 (arm64)
	I0318 04:26:36.712824   16418 out.go:177]   - MINIKUBE_LOCATION=18429
	I0318 04:26:36.712877   16418 notify.go:220] Checking for updates...
	I0318 04:26:36.719720   16418 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/18429-15072/kubeconfig
	I0318 04:26:36.722813   16418 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0318 04:26:36.725773   16418 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0318 04:26:36.728735   16418 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18429-15072/.minikube
	I0318 04:26:36.731797   16418 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0318 04:26:36.733552   16418 config.go:182] Loaded profile config "ha-218000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I0318 04:26:36.733612   16418 driver.go:392] Setting default libvirt URI to qemu:///system
	I0318 04:26:36.737747   16418 out.go:177] * Using the qemu2 driver based on existing profile
	I0318 04:26:36.744627   16418 start.go:297] selected driver: qemu2
	I0318 04:26:36.744634   16418 start.go:901] validating driver "qemu2" against &{Name:ha-218000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17991/minikube-v1.32.1-1710520390-17991-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesV
ersion:v1.28.4 ClusterName:ha-218000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 Mou
ntType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0318 04:26:36.744695   16418 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0318 04:26:36.747059   16418 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0318 04:26:36.747113   16418 cni.go:84] Creating CNI manager for ""
	I0318 04:26:36.747118   16418 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I0318 04:26:36.747170   16418 start.go:340] cluster config:
	{Name:ha-218000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17991/minikube-v1.32.1-1710520390-17991-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:ha-218000 Namespace:default APIServerHAVIP: APIServe
rName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetri
cs:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0318 04:26:36.751791   16418 iso.go:125] acquiring lock: {Name:mkb8143674083e0c7a46a3ed751b3800392bcd24 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0318 04:26:36.758793   16418 out.go:177] * Starting "ha-218000" primary control-plane node in "ha-218000" cluster
	I0318 04:26:36.762738   16418 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime docker
	I0318 04:26:36.762753   16418 preload.go:147] Found local preload: /Users/jenkins/minikube-integration/18429-15072/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-docker-overlay2-arm64.tar.lz4
	I0318 04:26:36.762765   16418 cache.go:56] Caching tarball of preloaded images
	I0318 04:26:36.762836   16418 preload.go:173] Found /Users/jenkins/minikube-integration/18429-15072/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0318 04:26:36.762842   16418 cache.go:59] Finished verifying existence of preloaded tar for v1.28.4 on docker
	I0318 04:26:36.762901   16418 profile.go:142] Saving config to /Users/jenkins/minikube-integration/18429-15072/.minikube/profiles/ha-218000/config.json ...
	I0318 04:26:36.763380   16418 start.go:360] acquireMachinesLock for ha-218000: {Name:mkdef65b5c2b3344d8453e477bf2f170fbff3359 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0318 04:26:36.763417   16418 start.go:364] duration metric: took 30.542µs to acquireMachinesLock for "ha-218000"
	I0318 04:26:36.763428   16418 start.go:96] Skipping create...Using existing machine configuration
	I0318 04:26:36.763432   16418 fix.go:54] fixHost starting: 
	I0318 04:26:36.763550   16418 fix.go:112] recreateIfNeeded on ha-218000: state=Stopped err=<nil>
	W0318 04:26:36.763559   16418 fix.go:138] unexpected machine state, will restart: <nil>
	I0318 04:26:36.771743   16418 out.go:177] * Restarting existing qemu2 VM for "ha-218000" ...
	I0318 04:26:36.775763   16418 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18429-15072/.minikube/machines/ha-218000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18429-15072/.minikube/machines/ha-218000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18429-15072/.minikube/machines/ha-218000/qemu.pid -device virtio-net-pci,netdev=net0,mac=fa:62:1d:92:22:c9 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18429-15072/.minikube/machines/ha-218000/disk.qcow2
	I0318 04:26:36.777955   16418 main.go:141] libmachine: STDOUT: 
	I0318 04:26:36.777976   16418 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0318 04:26:36.778007   16418 fix.go:56] duration metric: took 14.574334ms for fixHost
	I0318 04:26:36.778012   16418 start.go:83] releasing machines lock for "ha-218000", held for 14.590042ms
	W0318 04:26:36.778020   16418 start.go:713] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0318 04:26:36.778054   16418 out.go:239] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0318 04:26:36.778059   16418 start.go:728] Will try again in 5 seconds ...
	I0318 04:26:41.779874   16418 start.go:360] acquireMachinesLock for ha-218000: {Name:mkdef65b5c2b3344d8453e477bf2f170fbff3359 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0318 04:26:41.780231   16418 start.go:364] duration metric: took 291.541µs to acquireMachinesLock for "ha-218000"
	I0318 04:26:41.780359   16418 start.go:96] Skipping create...Using existing machine configuration
	I0318 04:26:41.780379   16418 fix.go:54] fixHost starting: 
	I0318 04:26:41.781050   16418 fix.go:112] recreateIfNeeded on ha-218000: state=Stopped err=<nil>
	W0318 04:26:41.781080   16418 fix.go:138] unexpected machine state, will restart: <nil>
	I0318 04:26:41.790375   16418 out.go:177] * Restarting existing qemu2 VM for "ha-218000" ...
	I0318 04:26:41.794574   16418 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18429-15072/.minikube/machines/ha-218000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18429-15072/.minikube/machines/ha-218000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18429-15072/.minikube/machines/ha-218000/qemu.pid -device virtio-net-pci,netdev=net0,mac=fa:62:1d:92:22:c9 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18429-15072/.minikube/machines/ha-218000/disk.qcow2
	I0318 04:26:41.804408   16418 main.go:141] libmachine: STDOUT: 
	I0318 04:26:41.804512   16418 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0318 04:26:41.804597   16418 fix.go:56] duration metric: took 24.216375ms for fixHost
	I0318 04:26:41.804614   16418 start.go:83] releasing machines lock for "ha-218000", held for 24.360083ms
	W0318 04:26:41.804754   16418 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p ha-218000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p ha-218000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0318 04:26:41.811494   16418 out.go:177] 
	W0318 04:26:41.814492   16418 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0318 04:26:41.814521   16418 out.go:239] * 
	* 
	W0318 04:26:41.817365   16418 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0318 04:26:41.826397   16418 out.go:177] 

                                                
                                                
** /stderr **
ha_test.go:469: failed to run minikube start. args "out/minikube-darwin-arm64 node list -p ha-218000 -v=7 --alsologtostderr" : exit status 80
ha_test.go:472: (dbg) Run:  out/minikube-darwin-arm64 node list -p ha-218000
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-218000 -n ha-218000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-218000 -n ha-218000: exit status 7 (34.551958ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "ha-218000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/RestartClusterKeepsNodes (7.16s)
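Note: every failure in this group reduces to the same step. libmachine launches qemu-system-aarch64 through /opt/socket_vmnet/bin/socket_vmnet_client, and the client cannot reach the /var/run/socket_vmnet Unix socket ("Connection refused"), so the VM never starts and the profile stays Stopped. A minimal standalone sketch of that check (not part of the test suite; file name and messages are illustrative) simply dials the same socket the driver uses:

    // check_socket_vmnet.go - illustrative only: reproduces the driver's failing
    // step by dialing the socket_vmnet Unix socket directly. "connection refused"
    // here means the socket_vmnet daemon is not listening at that path.
    package main

    import (
    	"fmt"
    	"net"
    	"time"
    )

    func main() {
    	conn, err := net.DialTimeout("unix", "/var/run/socket_vmnet", 2*time.Second)
    	if err != nil {
    		// Expected on this agent given the failures above, e.g. "connect: connection refused".
    		fmt.Println("socket_vmnet not reachable:", err)
    		return
    	}
    	conn.Close()
    	fmt.Println("socket_vmnet is accepting connections")
    }

If this dial fails the same way on the Jenkins agent, the socket_vmnet daemon is not running (or not listening at /var/run/socket_vmnet), and none of the qemu2 starts that follow can succeed.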

                                                
                                    
TestMultiControlPlane/serial/DeleteSecondaryNode (0.11s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DeleteSecondaryNode
ha_test.go:487: (dbg) Run:  out/minikube-darwin-arm64 -p ha-218000 node delete m03 -v=7 --alsologtostderr
ha_test.go:487: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p ha-218000 node delete m03 -v=7 --alsologtostderr: exit status 83 (42.832708ms)

                                                
                                                
-- stdout --
	* The control-plane node ha-218000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p ha-218000"

                                                
                                                
-- /stdout --
** stderr ** 
	I0318 04:26:41.978421   16430 out.go:291] Setting OutFile to fd 1 ...
	I0318 04:26:41.978835   16430 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0318 04:26:41.978839   16430 out.go:304] Setting ErrFile to fd 2...
	I0318 04:26:41.978841   16430 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0318 04:26:41.978990   16430 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18429-15072/.minikube/bin
	I0318 04:26:41.979201   16430 mustload.go:65] Loading cluster: ha-218000
	I0318 04:26:41.979389   16430 config.go:182] Loaded profile config "ha-218000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I0318 04:26:41.983434   16430 out.go:177] * The control-plane node ha-218000 host is not running: state=Stopped
	I0318 04:26:41.986290   16430 out.go:177]   To start a cluster, run: "minikube start -p ha-218000"

                                                
                                                
** /stderr **
ha_test.go:489: node delete returned an error. args "out/minikube-darwin-arm64 -p ha-218000 node delete m03 -v=7 --alsologtostderr": exit status 83
ha_test.go:493: (dbg) Run:  out/minikube-darwin-arm64 -p ha-218000 status -v=7 --alsologtostderr
ha_test.go:493: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p ha-218000 status -v=7 --alsologtostderr: exit status 7 (32.250666ms)

                                                
                                                
-- stdout --
	ha-218000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0318 04:26:42.021769   16432 out.go:291] Setting OutFile to fd 1 ...
	I0318 04:26:42.021919   16432 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0318 04:26:42.021923   16432 out.go:304] Setting ErrFile to fd 2...
	I0318 04:26:42.021925   16432 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0318 04:26:42.022050   16432 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18429-15072/.minikube/bin
	I0318 04:26:42.022158   16432 out.go:298] Setting JSON to false
	I0318 04:26:42.022172   16432 mustload.go:65] Loading cluster: ha-218000
	I0318 04:26:42.022247   16432 notify.go:220] Checking for updates...
	I0318 04:26:42.022390   16432 config.go:182] Loaded profile config "ha-218000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I0318 04:26:42.022397   16432 status.go:255] checking status of ha-218000 ...
	I0318 04:26:42.022608   16432 status.go:330] ha-218000 host status = "Stopped" (err=<nil>)
	I0318 04:26:42.022612   16432 status.go:343] host is not running, skipping remaining checks
	I0318 04:26:42.022615   16432 status.go:257] ha-218000 status: &{Name:ha-218000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
ha_test.go:495: failed to run minikube status. args "out/minikube-darwin-arm64 -p ha-218000 status -v=7 --alsologtostderr" : exit status 7
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-218000 -n ha-218000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-218000 -n ha-218000: exit status 7 (31.958959ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "ha-218000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/DeleteSecondaryNode (0.11s)

                                                
                                    
TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.1s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete
ha_test.go:390: (dbg) Run:  out/minikube-darwin-arm64 profile list --output json
ha_test.go:413: expected profile "ha-218000" in json of 'profile list' to have "Degraded" status but have "Stopped" status. got *"{\"invalid\":[],\"valid\":[{\"Name\":\"ha-218000\",\"Status\":\"Stopped\",\"Config\":{\"Name\":\"ha-218000\",\"KeepContext\":false,\"EmbedCerts\":false,\"MinikubeISO\":\"https://storage.googleapis.com/minikube-builds/iso/17991/minikube-v1.32.1-1710520390-17991-arm64.iso\",\"KicBaseImage\":\"gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f\",\"Memory\":2200,\"CPUs\":2,\"DiskSize\":20000,\"Driver\":\"qemu2\",\"HyperkitVpnKitSock\":\"\",\"HyperkitVSockPorts\":[],\"DockerEnv\":null,\"ContainerVolumeMounts\":null,\"InsecureRegistry\":null,\"RegistryMirror\":[],\"HostOnlyCIDR\":\"192.168.59.1/24\",\"HypervVirtualSwitch\":\"\",\"HypervUseExternalSwitch\":false,\"HypervExternalAdapter\":\"\",\"KVMNetwork\":\"default\",\"KVMQemuURI\":\"qemu:///system\",\"KVMGPU\":false,\"KVMHidden\":false,\"KVMNUMACount\"
:1,\"APIServerPort\":8443,\"DockerOpt\":null,\"DisableDriverMounts\":false,\"NFSShare\":[],\"NFSSharesRoot\":\"/nfsshares\",\"UUID\":\"\",\"NoVTXCheck\":false,\"DNSProxy\":false,\"HostDNSResolver\":true,\"HostOnlyNicType\":\"virtio\",\"NatNicType\":\"virtio\",\"SSHIPAddress\":\"\",\"SSHUser\":\"root\",\"SSHKey\":\"\",\"SSHPort\":22,\"KubernetesConfig\":{\"KubernetesVersion\":\"v1.28.4\",\"ClusterName\":\"ha-218000\",\"Namespace\":\"default\",\"APIServerHAVIP\":\"\",\"APIServerName\":\"minikubeCA\",\"APIServerNames\":null,\"APIServerIPs\":null,\"DNSDomain\":\"cluster.local\",\"ContainerRuntime\":\"docker\",\"CRISocket\":\"\",\"NetworkPlugin\":\"cni\",\"FeatureGates\":\"\",\"ServiceCIDR\":\"10.96.0.0/12\",\"ImageRepository\":\"\",\"LoadBalancerStartIP\":\"\",\"LoadBalancerEndIP\":\"\",\"CustomIngressCert\":\"\",\"RegistryAliases\":\"\",\"ExtraOptions\":null,\"ShouldLoadCachedImages\":true,\"EnableDefaultCNI\":false,\"CNI\":\"\"},\"Nodes\":[{\"Name\":\"\",\"IP\":\"\",\"Port\":8443,\"KubernetesVersion\":\"v1.28.4
\",\"ContainerRuntime\":\"docker\",\"ControlPlane\":true,\"Worker\":true}],\"Addons\":null,\"CustomAddonImages\":null,\"CustomAddonRegistries\":null,\"VerifyComponents\":{\"apiserver\":true,\"apps_running\":true,\"default_sa\":true,\"extra\":true,\"kubelet\":true,\"node_ready\":true,\"system_pods\":true},\"StartHostTimeout\":360000000000,\"ScheduledStop\":null,\"ExposedPorts\":[],\"ListenAddress\":\"\",\"Network\":\"socket_vmnet\",\"Subnet\":\"\",\"MultiNodeRequested\":true,\"ExtraDisks\":0,\"CertExpiration\":94608000000000000,\"Mount\":false,\"MountString\":\"/Users:/minikube-host\",\"Mount9PVersion\":\"9p2000.L\",\"MountGID\":\"docker\",\"MountIP\":\"\",\"MountMSize\":262144,\"MountOptions\":[],\"MountPort\":0,\"MountType\":\"9p\",\"MountUID\":\"docker\",\"BinaryMirror\":\"\",\"DisableOptimizations\":false,\"DisableMetrics\":false,\"CustomQemuFirmwarePath\":\"\",\"SocketVMnetClientPath\":\"/opt/socket_vmnet/bin/socket_vmnet_client\",\"SocketVMnetPath\":\"/var/run/socket_vmnet\",\"StaticIP\":\"\",\"SSHAuthSo
ck\":\"\",\"SSHAgentPID\":0,\"GPUs\":\"\",\"AutoPauseInterval\":60000000000},\"Active\":false}]}"*. args: "out/minikube-darwin-arm64 profile list --output json"
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-218000 -n ha-218000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-218000 -n ha-218000: exit status 7 (32.155334ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "ha-218000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.10s)
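For reference, the assertion above parses the output of "out/minikube-darwin-arm64 profile list --output json". A minimal sketch of the same status check, assuming only the JSON shape visible in the log (a top-level "valid" array whose entries carry "Name" and "Status"); this helper is illustrative and not part of ha_test.go:

    // profile_status.go - illustrative only: prints Name/Status for each valid
    // profile from `minikube profile list --output json`.
    package main

    import (
    	"encoding/json"
    	"fmt"
    	"os"
    	"os/exec"
    )

    type profileList struct {
    	Valid []struct {
    		Name   string `json:"Name"`
    		Status string `json:"Status"`
    	} `json:"valid"`
    }

    func main() {
    	out, err := exec.Command("out/minikube-darwin-arm64", "profile", "list", "--output", "json").Output()
    	if err != nil {
    		fmt.Fprintln(os.Stderr, "profile list failed:", err)
    		os.Exit(1)
    	}
    	var pl profileList
    	if err := json.Unmarshal(out, &pl); err != nil {
    		fmt.Fprintln(os.Stderr, "unexpected JSON:", err)
    		os.Exit(1)
    	}
    	for _, p := range pl.Valid {
    		// The test wants "Degraded" for ha-218000 at this point; the run above reports "Stopped".
    		fmt.Printf("%s: %s\n", p.Name, p.Status)
    	}
    }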

                                                
                                    
TestMultiControlPlane/serial/StopCluster (2.22s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/StopCluster
ha_test.go:531: (dbg) Run:  out/minikube-darwin-arm64 -p ha-218000 stop -v=7 --alsologtostderr
ha_test.go:531: (dbg) Done: out/minikube-darwin-arm64 -p ha-218000 stop -v=7 --alsologtostderr: (2.119002209s)
ha_test.go:537: (dbg) Run:  out/minikube-darwin-arm64 -p ha-218000 status -v=7 --alsologtostderr
ha_test.go:537: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p ha-218000 status -v=7 --alsologtostderr: exit status 7 (67.354292ms)

                                                
                                                
-- stdout --
	ha-218000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0318 04:26:44.345084   16454 out.go:291] Setting OutFile to fd 1 ...
	I0318 04:26:44.345259   16454 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0318 04:26:44.345264   16454 out.go:304] Setting ErrFile to fd 2...
	I0318 04:26:44.345267   16454 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0318 04:26:44.345409   16454 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18429-15072/.minikube/bin
	I0318 04:26:44.345560   16454 out.go:298] Setting JSON to false
	I0318 04:26:44.345581   16454 mustload.go:65] Loading cluster: ha-218000
	I0318 04:26:44.345609   16454 notify.go:220] Checking for updates...
	I0318 04:26:44.345787   16454 config.go:182] Loaded profile config "ha-218000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I0318 04:26:44.345795   16454 status.go:255] checking status of ha-218000 ...
	I0318 04:26:44.346049   16454 status.go:330] ha-218000 host status = "Stopped" (err=<nil>)
	I0318 04:26:44.346053   16454 status.go:343] host is not running, skipping remaining checks
	I0318 04:26:44.346056   16454 status.go:257] ha-218000 status: &{Name:ha-218000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
ha_test.go:543: status says not two control-plane nodes are present: args "out/minikube-darwin-arm64 -p ha-218000 status -v=7 --alsologtostderr": ha-218000
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped

                                                
                                                
ha_test.go:549: status says not three kubelets are stopped: args "out/minikube-darwin-arm64 -p ha-218000 status -v=7 --alsologtostderr": ha-218000
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped

                                                
                                                
ha_test.go:552: status says not two apiservers are stopped: args "out/minikube-darwin-arm64 -p ha-218000 status -v=7 --alsologtostderr": ha-218000
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped

                                                
                                                
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-218000 -n ha-218000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-218000 -n ha-218000: exit status 7 (34.227167ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "ha-218000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/StopCluster (2.22s)

                                                
                                    
TestMultiControlPlane/serial/RestartCluster (5.25s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/RestartCluster
ha_test.go:560: (dbg) Run:  out/minikube-darwin-arm64 start -p ha-218000 --wait=true -v=7 --alsologtostderr --driver=qemu2 
ha_test.go:560: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p ha-218000 --wait=true -v=7 --alsologtostderr --driver=qemu2 : exit status 80 (5.183415542s)

                                                
                                                
-- stdout --
	* [ha-218000] minikube v1.32.0 on Darwin 14.3.1 (arm64)
	  - MINIKUBE_LOCATION=18429
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18429-15072/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18429-15072/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting "ha-218000" primary control-plane node in "ha-218000" cluster
	* Restarting existing qemu2 VM for "ha-218000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "ha-218000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0318 04:26:44.411693   16458 out.go:291] Setting OutFile to fd 1 ...
	I0318 04:26:44.411816   16458 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0318 04:26:44.411819   16458 out.go:304] Setting ErrFile to fd 2...
	I0318 04:26:44.411821   16458 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0318 04:26:44.411937   16458 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18429-15072/.minikube/bin
	I0318 04:26:44.412921   16458 out.go:298] Setting JSON to false
	I0318 04:26:44.428980   16458 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":8777,"bootTime":1710752427,"procs":476,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.3.1","kernelVersion":"23.3.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0318 04:26:44.429040   16458 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0318 04:26:44.433141   16458 out.go:177] * [ha-218000] minikube v1.32.0 on Darwin 14.3.1 (arm64)
	I0318 04:26:44.441104   16458 out.go:177]   - MINIKUBE_LOCATION=18429
	I0318 04:26:44.441197   16458 notify.go:220] Checking for updates...
	I0318 04:26:44.445059   16458 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/18429-15072/kubeconfig
	I0318 04:26:44.448058   16458 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0318 04:26:44.451079   16458 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0318 04:26:44.454052   16458 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18429-15072/.minikube
	I0318 04:26:44.457050   16458 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0318 04:26:44.460357   16458 config.go:182] Loaded profile config "ha-218000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I0318 04:26:44.460617   16458 driver.go:392] Setting default libvirt URI to qemu:///system
	I0318 04:26:44.464992   16458 out.go:177] * Using the qemu2 driver based on existing profile
	I0318 04:26:44.472094   16458 start.go:297] selected driver: qemu2
	I0318 04:26:44.472100   16458 start.go:901] validating driver "qemu2" against &{Name:ha-218000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17991/minikube-v1.32.1-1710520390-17991-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesV
ersion:v1.28.4 ClusterName:ha-218000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 Mou
ntType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0318 04:26:44.472179   16458 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0318 04:26:44.474449   16458 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0318 04:26:44.474497   16458 cni.go:84] Creating CNI manager for ""
	I0318 04:26:44.474503   16458 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I0318 04:26:44.474554   16458 start.go:340] cluster config:
	{Name:ha-218000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17991/minikube-v1.32.1-1710520390-17991-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:ha-218000 Namespace:default APIServerHAVIP: APIServe
rName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetri
cs:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0318 04:26:44.478843   16458 iso.go:125] acquiring lock: {Name:mkb8143674083e0c7a46a3ed751b3800392bcd24 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0318 04:26:44.486055   16458 out.go:177] * Starting "ha-218000" primary control-plane node in "ha-218000" cluster
	I0318 04:26:44.489911   16458 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime docker
	I0318 04:26:44.489926   16458 preload.go:147] Found local preload: /Users/jenkins/minikube-integration/18429-15072/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-docker-overlay2-arm64.tar.lz4
	I0318 04:26:44.489937   16458 cache.go:56] Caching tarball of preloaded images
	I0318 04:26:44.489981   16458 preload.go:173] Found /Users/jenkins/minikube-integration/18429-15072/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0318 04:26:44.489987   16458 cache.go:59] Finished verifying existence of preloaded tar for v1.28.4 on docker
	I0318 04:26:44.490052   16458 profile.go:142] Saving config to /Users/jenkins/minikube-integration/18429-15072/.minikube/profiles/ha-218000/config.json ...
	I0318 04:26:44.490504   16458 start.go:360] acquireMachinesLock for ha-218000: {Name:mkdef65b5c2b3344d8453e477bf2f170fbff3359 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0318 04:26:44.490532   16458 start.go:364] duration metric: took 22.084µs to acquireMachinesLock for "ha-218000"
	I0318 04:26:44.490540   16458 start.go:96] Skipping create...Using existing machine configuration
	I0318 04:26:44.490546   16458 fix.go:54] fixHost starting: 
	I0318 04:26:44.490658   16458 fix.go:112] recreateIfNeeded on ha-218000: state=Stopped err=<nil>
	W0318 04:26:44.490666   16458 fix.go:138] unexpected machine state, will restart: <nil>
	I0318 04:26:44.499068   16458 out.go:177] * Restarting existing qemu2 VM for "ha-218000" ...
	I0318 04:26:44.503031   16458 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18429-15072/.minikube/machines/ha-218000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18429-15072/.minikube/machines/ha-218000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18429-15072/.minikube/machines/ha-218000/qemu.pid -device virtio-net-pci,netdev=net0,mac=fa:62:1d:92:22:c9 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18429-15072/.minikube/machines/ha-218000/disk.qcow2
	I0318 04:26:44.504951   16458 main.go:141] libmachine: STDOUT: 
	I0318 04:26:44.504971   16458 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0318 04:26:44.504998   16458 fix.go:56] duration metric: took 14.452667ms for fixHost
	I0318 04:26:44.505003   16458 start.go:83] releasing machines lock for "ha-218000", held for 14.468667ms
	W0318 04:26:44.505010   16458 start.go:713] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0318 04:26:44.505058   16458 out.go:239] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0318 04:26:44.505063   16458 start.go:728] Will try again in 5 seconds ...
	I0318 04:26:49.507078   16458 start.go:360] acquireMachinesLock for ha-218000: {Name:mkdef65b5c2b3344d8453e477bf2f170fbff3359 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0318 04:26:49.507455   16458 start.go:364] duration metric: took 295.083µs to acquireMachinesLock for "ha-218000"
	I0318 04:26:49.507582   16458 start.go:96] Skipping create...Using existing machine configuration
	I0318 04:26:49.507603   16458 fix.go:54] fixHost starting: 
	I0318 04:26:49.508255   16458 fix.go:112] recreateIfNeeded on ha-218000: state=Stopped err=<nil>
	W0318 04:26:49.508281   16458 fix.go:138] unexpected machine state, will restart: <nil>
	I0318 04:26:49.513504   16458 out.go:177] * Restarting existing qemu2 VM for "ha-218000" ...
	I0318 04:26:49.517786   16458 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18429-15072/.minikube/machines/ha-218000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18429-15072/.minikube/machines/ha-218000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18429-15072/.minikube/machines/ha-218000/qemu.pid -device virtio-net-pci,netdev=net0,mac=fa:62:1d:92:22:c9 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18429-15072/.minikube/machines/ha-218000/disk.qcow2
	I0318 04:26:49.527555   16458 main.go:141] libmachine: STDOUT: 
	I0318 04:26:49.527703   16458 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0318 04:26:49.527768   16458 fix.go:56] duration metric: took 20.168333ms for fixHost
	I0318 04:26:49.527788   16458 start.go:83] releasing machines lock for "ha-218000", held for 20.313208ms
	W0318 04:26:49.527948   16458 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p ha-218000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p ha-218000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0318 04:26:49.536565   16458 out.go:177] 
	W0318 04:26:49.540676   16458 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0318 04:26:49.540700   16458 out.go:239] * 
	* 
	W0318 04:26:49.543200   16458 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0318 04:26:49.550640   16458 out.go:177] 

                                                
                                                
** /stderr **
ha_test.go:562: failed to start cluster. args "out/minikube-darwin-arm64 start -p ha-218000 --wait=true -v=7 --alsologtostderr --driver=qemu2 " : exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-218000 -n ha-218000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-218000 -n ha-218000: exit status 7 (67.513167ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "ha-218000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/RestartCluster (5.25s)

                                                
                                    
TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.11s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DegradedAfterClusterRestart
ha_test.go:390: (dbg) Run:  out/minikube-darwin-arm64 profile list --output json
ha_test.go:413: expected profile "ha-218000" in json of 'profile list' to have "Degraded" status but have "Stopped" status. got *"{\"invalid\":[],\"valid\":[{\"Name\":\"ha-218000\",\"Status\":\"Stopped\",\"Config\":{\"Name\":\"ha-218000\",\"KeepContext\":false,\"EmbedCerts\":false,\"MinikubeISO\":\"https://storage.googleapis.com/minikube-builds/iso/17991/minikube-v1.32.1-1710520390-17991-arm64.iso\",\"KicBaseImage\":\"gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f\",\"Memory\":2200,\"CPUs\":2,\"DiskSize\":20000,\"Driver\":\"qemu2\",\"HyperkitVpnKitSock\":\"\",\"HyperkitVSockPorts\":[],\"DockerEnv\":null,\"ContainerVolumeMounts\":null,\"InsecureRegistry\":null,\"RegistryMirror\":[],\"HostOnlyCIDR\":\"192.168.59.1/24\",\"HypervVirtualSwitch\":\"\",\"HypervUseExternalSwitch\":false,\"HypervExternalAdapter\":\"\",\"KVMNetwork\":\"default\",\"KVMQemuURI\":\"qemu:///system\",\"KVMGPU\":false,\"KVMHidden\":false,\"KVMNUMACount\"
:1,\"APIServerPort\":8443,\"DockerOpt\":null,\"DisableDriverMounts\":false,\"NFSShare\":[],\"NFSSharesRoot\":\"/nfsshares\",\"UUID\":\"\",\"NoVTXCheck\":false,\"DNSProxy\":false,\"HostDNSResolver\":true,\"HostOnlyNicType\":\"virtio\",\"NatNicType\":\"virtio\",\"SSHIPAddress\":\"\",\"SSHUser\":\"root\",\"SSHKey\":\"\",\"SSHPort\":22,\"KubernetesConfig\":{\"KubernetesVersion\":\"v1.28.4\",\"ClusterName\":\"ha-218000\",\"Namespace\":\"default\",\"APIServerHAVIP\":\"\",\"APIServerName\":\"minikubeCA\",\"APIServerNames\":null,\"APIServerIPs\":null,\"DNSDomain\":\"cluster.local\",\"ContainerRuntime\":\"docker\",\"CRISocket\":\"\",\"NetworkPlugin\":\"cni\",\"FeatureGates\":\"\",\"ServiceCIDR\":\"10.96.0.0/12\",\"ImageRepository\":\"\",\"LoadBalancerStartIP\":\"\",\"LoadBalancerEndIP\":\"\",\"CustomIngressCert\":\"\",\"RegistryAliases\":\"\",\"ExtraOptions\":null,\"ShouldLoadCachedImages\":true,\"EnableDefaultCNI\":false,\"CNI\":\"\"},\"Nodes\":[{\"Name\":\"\",\"IP\":\"\",\"Port\":8443,\"KubernetesVersion\":\"v1.28.4
\",\"ContainerRuntime\":\"docker\",\"ControlPlane\":true,\"Worker\":true}],\"Addons\":null,\"CustomAddonImages\":null,\"CustomAddonRegistries\":null,\"VerifyComponents\":{\"apiserver\":true,\"apps_running\":true,\"default_sa\":true,\"extra\":true,\"kubelet\":true,\"node_ready\":true,\"system_pods\":true},\"StartHostTimeout\":360000000000,\"ScheduledStop\":null,\"ExposedPorts\":[],\"ListenAddress\":\"\",\"Network\":\"socket_vmnet\",\"Subnet\":\"\",\"MultiNodeRequested\":true,\"ExtraDisks\":0,\"CertExpiration\":94608000000000000,\"Mount\":false,\"MountString\":\"/Users:/minikube-host\",\"Mount9PVersion\":\"9p2000.L\",\"MountGID\":\"docker\",\"MountIP\":\"\",\"MountMSize\":262144,\"MountOptions\":[],\"MountPort\":0,\"MountType\":\"9p\",\"MountUID\":\"docker\",\"BinaryMirror\":\"\",\"DisableOptimizations\":false,\"DisableMetrics\":false,\"CustomQemuFirmwarePath\":\"\",\"SocketVMnetClientPath\":\"/opt/socket_vmnet/bin/socket_vmnet_client\",\"SocketVMnetPath\":\"/var/run/socket_vmnet\",\"StaticIP\":\"\",\"SSHAuthSo
ck\":\"\",\"SSHAgentPID\":0,\"GPUs\":\"\",\"AutoPauseInterval\":60000000000},\"Active\":false}]}"*. args: "out/minikube-darwin-arm64 profile list --output json"
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-218000 -n ha-218000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-218000 -n ha-218000: exit status 7 (31.9705ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "ha-218000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.11s)

                                                
                                    
TestMultiControlPlane/serial/AddSecondaryNode (0.08s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/AddSecondaryNode
ha_test.go:605: (dbg) Run:  out/minikube-darwin-arm64 node add -p ha-218000 --control-plane -v=7 --alsologtostderr
ha_test.go:605: (dbg) Non-zero exit: out/minikube-darwin-arm64 node add -p ha-218000 --control-plane -v=7 --alsologtostderr: exit status 83 (43.439ms)

                                                
                                                
-- stdout --
	* The control-plane node ha-218000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p ha-218000"

                                                
                                                
-- /stdout --
** stderr ** 
	I0318 04:26:49.773095   16474 out.go:291] Setting OutFile to fd 1 ...
	I0318 04:26:49.773252   16474 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0318 04:26:49.773255   16474 out.go:304] Setting ErrFile to fd 2...
	I0318 04:26:49.773258   16474 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0318 04:26:49.773387   16474 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18429-15072/.minikube/bin
	I0318 04:26:49.773626   16474 mustload.go:65] Loading cluster: ha-218000
	I0318 04:26:49.773825   16474 config.go:182] Loaded profile config "ha-218000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I0318 04:26:49.778316   16474 out.go:177] * The control-plane node ha-218000 host is not running: state=Stopped
	I0318 04:26:49.782255   16474 out.go:177]   To start a cluster, run: "minikube start -p ha-218000"

                                                
                                                
** /stderr **
ha_test.go:607: failed to add control-plane node to current ha (multi-control plane) cluster. args "out/minikube-darwin-arm64 node add -p ha-218000 --control-plane -v=7 --alsologtostderr" : exit status 83
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-218000 -n ha-218000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-218000 -n ha-218000: exit status 7 (31.745167ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "ha-218000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/AddSecondaryNode (0.08s)

                                                
                                    
TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (0.1s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd
ha_test.go:281: (dbg) Run:  out/minikube-darwin-arm64 profile list --output json
ha_test.go:304: expected profile "ha-218000" in json of 'profile list' to include 4 nodes but have 1 nodes. got *"{\"invalid\":[],\"valid\":[{\"Name\":\"ha-218000\",\"Status\":\"Stopped\",\"Config\":{\"Name\":\"ha-218000\",\"KeepContext\":false,\"EmbedCerts\":false,\"MinikubeISO\":\"https://storage.googleapis.com/minikube-builds/iso/17991/minikube-v1.32.1-1710520390-17991-arm64.iso\",\"KicBaseImage\":\"gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f\",\"Memory\":2200,\"CPUs\":2,\"DiskSize\":20000,\"Driver\":\"qemu2\",\"HyperkitVpnKitSock\":\"\",\"HyperkitVSockPorts\":[],\"DockerEnv\":null,\"ContainerVolumeMounts\":null,\"InsecureRegistry\":null,\"RegistryMirror\":[],\"HostOnlyCIDR\":\"192.168.59.1/24\",\"HypervVirtualSwitch\":\"\",\"HypervUseExternalSwitch\":false,\"HypervExternalAdapter\":\"\",\"KVMNetwork\":\"default\",\"KVMQemuURI\":\"qemu:///system\",\"KVMGPU\":false,\"KVMHidden\":false,\"KVMNUMACount\":1,\"APIServerPo
rt\":8443,\"DockerOpt\":null,\"DisableDriverMounts\":false,\"NFSShare\":[],\"NFSSharesRoot\":\"/nfsshares\",\"UUID\":\"\",\"NoVTXCheck\":false,\"DNSProxy\":false,\"HostDNSResolver\":true,\"HostOnlyNicType\":\"virtio\",\"NatNicType\":\"virtio\",\"SSHIPAddress\":\"\",\"SSHUser\":\"root\",\"SSHKey\":\"\",\"SSHPort\":22,\"KubernetesConfig\":{\"KubernetesVersion\":\"v1.28.4\",\"ClusterName\":\"ha-218000\",\"Namespace\":\"default\",\"APIServerHAVIP\":\"\",\"APIServerName\":\"minikubeCA\",\"APIServerNames\":null,\"APIServerIPs\":null,\"DNSDomain\":\"cluster.local\",\"ContainerRuntime\":\"docker\",\"CRISocket\":\"\",\"NetworkPlugin\":\"cni\",\"FeatureGates\":\"\",\"ServiceCIDR\":\"10.96.0.0/12\",\"ImageRepository\":\"\",\"LoadBalancerStartIP\":\"\",\"LoadBalancerEndIP\":\"\",\"CustomIngressCert\":\"\",\"RegistryAliases\":\"\",\"ExtraOptions\":null,\"ShouldLoadCachedImages\":true,\"EnableDefaultCNI\":false,\"CNI\":\"\"},\"Nodes\":[{\"Name\":\"\",\"IP\":\"\",\"Port\":8443,\"KubernetesVersion\":\"v1.28.4\",\"ContainerRu
ntime\":\"docker\",\"ControlPlane\":true,\"Worker\":true}],\"Addons\":null,\"CustomAddonImages\":null,\"CustomAddonRegistries\":null,\"VerifyComponents\":{\"apiserver\":true,\"apps_running\":true,\"default_sa\":true,\"extra\":true,\"kubelet\":true,\"node_ready\":true,\"system_pods\":true},\"StartHostTimeout\":360000000000,\"ScheduledStop\":null,\"ExposedPorts\":[],\"ListenAddress\":\"\",\"Network\":\"socket_vmnet\",\"Subnet\":\"\",\"MultiNodeRequested\":true,\"ExtraDisks\":0,\"CertExpiration\":94608000000000000,\"Mount\":false,\"MountString\":\"/Users:/minikube-host\",\"Mount9PVersion\":\"9p2000.L\",\"MountGID\":\"docker\",\"MountIP\":\"\",\"MountMSize\":262144,\"MountOptions\":[],\"MountPort\":0,\"MountType\":\"9p\",\"MountUID\":\"docker\",\"BinaryMirror\":\"\",\"DisableOptimizations\":false,\"DisableMetrics\":false,\"CustomQemuFirmwarePath\":\"\",\"SocketVMnetClientPath\":\"/opt/socket_vmnet/bin/socket_vmnet_client\",\"SocketVMnetPath\":\"/var/run/socket_vmnet\",\"StaticIP\":\"\",\"SSHAuthSock\":\"\",\"SSHA
gentPID\":0,\"GPUs\":\"\",\"AutoPauseInterval\":60000000000},\"Active\":false}]}"*. args: "out/minikube-darwin-arm64 profile list --output json"
ha_test.go:307: expected profile "ha-218000" in json of 'profile list' to have "HAppy" status but have "Stopped" status. got *"{\"invalid\":[],\"valid\":[{\"Name\":\"ha-218000\",\"Status\":\"Stopped\",\"Config\":{\"Name\":\"ha-218000\",\"KeepContext\":false,\"EmbedCerts\":false,\"MinikubeISO\":\"https://storage.googleapis.com/minikube-builds/iso/17991/minikube-v1.32.1-1710520390-17991-arm64.iso\",\"KicBaseImage\":\"gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f\",\"Memory\":2200,\"CPUs\":2,\"DiskSize\":20000,\"Driver\":\"qemu2\",\"HyperkitVpnKitSock\":\"\",\"HyperkitVSockPorts\":[],\"DockerEnv\":null,\"ContainerVolumeMounts\":null,\"InsecureRegistry\":null,\"RegistryMirror\":[],\"HostOnlyCIDR\":\"192.168.59.1/24\",\"HypervVirtualSwitch\":\"\",\"HypervUseExternalSwitch\":false,\"HypervExternalAdapter\":\"\",\"KVMNetwork\":\"default\",\"KVMQemuURI\":\"qemu:///system\",\"KVMGPU\":false,\"KVMHidden\":false,\"KVMNUMACount\":1,
\"APIServerPort\":8443,\"DockerOpt\":null,\"DisableDriverMounts\":false,\"NFSShare\":[],\"NFSSharesRoot\":\"/nfsshares\",\"UUID\":\"\",\"NoVTXCheck\":false,\"DNSProxy\":false,\"HostDNSResolver\":true,\"HostOnlyNicType\":\"virtio\",\"NatNicType\":\"virtio\",\"SSHIPAddress\":\"\",\"SSHUser\":\"root\",\"SSHKey\":\"\",\"SSHPort\":22,\"KubernetesConfig\":{\"KubernetesVersion\":\"v1.28.4\",\"ClusterName\":\"ha-218000\",\"Namespace\":\"default\",\"APIServerHAVIP\":\"\",\"APIServerName\":\"minikubeCA\",\"APIServerNames\":null,\"APIServerIPs\":null,\"DNSDomain\":\"cluster.local\",\"ContainerRuntime\":\"docker\",\"CRISocket\":\"\",\"NetworkPlugin\":\"cni\",\"FeatureGates\":\"\",\"ServiceCIDR\":\"10.96.0.0/12\",\"ImageRepository\":\"\",\"LoadBalancerStartIP\":\"\",\"LoadBalancerEndIP\":\"\",\"CustomIngressCert\":\"\",\"RegistryAliases\":\"\",\"ExtraOptions\":null,\"ShouldLoadCachedImages\":true,\"EnableDefaultCNI\":false,\"CNI\":\"\"},\"Nodes\":[{\"Name\":\"\",\"IP\":\"\",\"Port\":8443,\"KubernetesVersion\":\"v1.28.4\",
\"ContainerRuntime\":\"docker\",\"ControlPlane\":true,\"Worker\":true}],\"Addons\":null,\"CustomAddonImages\":null,\"CustomAddonRegistries\":null,\"VerifyComponents\":{\"apiserver\":true,\"apps_running\":true,\"default_sa\":true,\"extra\":true,\"kubelet\":true,\"node_ready\":true,\"system_pods\":true},\"StartHostTimeout\":360000000000,\"ScheduledStop\":null,\"ExposedPorts\":[],\"ListenAddress\":\"\",\"Network\":\"socket_vmnet\",\"Subnet\":\"\",\"MultiNodeRequested\":true,\"ExtraDisks\":0,\"CertExpiration\":94608000000000000,\"Mount\":false,\"MountString\":\"/Users:/minikube-host\",\"Mount9PVersion\":\"9p2000.L\",\"MountGID\":\"docker\",\"MountIP\":\"\",\"MountMSize\":262144,\"MountOptions\":[],\"MountPort\":0,\"MountType\":\"9p\",\"MountUID\":\"docker\",\"BinaryMirror\":\"\",\"DisableOptimizations\":false,\"DisableMetrics\":false,\"CustomQemuFirmwarePath\":\"\",\"SocketVMnetClientPath\":\"/opt/socket_vmnet/bin/socket_vmnet_client\",\"SocketVMnetPath\":\"/var/run/socket_vmnet\",\"StaticIP\":\"\",\"SSHAuthSock\
":\"\",\"SSHAgentPID\":0,\"GPUs\":\"\",\"AutoPauseInterval\":60000000000},\"Active\":false}]}"*. args: "out/minikube-darwin-arm64 profile list --output json"
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-218000 -n ha-218000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-218000 -n ha-218000: exit status 7 (31.528042ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "ha-218000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (0.10s)
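The node-count assertion above reads the same profile-list JSON, this time the Config.Nodes array. A similar illustrative sketch (again assuming the structure shown in the log; not part of the test suite) prints how many nodes each profile carries — here it would report 1 for ha-218000 where the test expects 4:

    // profile_nodes.go - illustrative only: counts Config.Nodes per valid profile.
    package main

    import (
    	"encoding/json"
    	"fmt"
    	"os"
    	"os/exec"
    )

    type profiles struct {
    	Valid []struct {
    		Name   string `json:"Name"`
    		Config struct {
    			Nodes []struct {
    				Name         string `json:"Name"`
    				ControlPlane bool   `json:"ControlPlane"`
    			} `json:"Nodes"`
    		} `json:"Config"`
    	} `json:"valid"`
    }

    func main() {
    	out, err := exec.Command("out/minikube-darwin-arm64", "profile", "list", "--output", "json").Output()
    	if err != nil {
    		fmt.Fprintln(os.Stderr, "profile list failed:", err)
    		os.Exit(1)
    	}
    	var ps profiles
    	if err := json.Unmarshal(out, &ps); err != nil {
    		fmt.Fprintln(os.Stderr, "unexpected JSON:", err)
    		os.Exit(1)
    	}
    	for _, p := range ps.Valid {
    		fmt.Printf("%s: %d node(s)\n", p.Name, len(p.Config.Nodes))
    	}
    }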

                                                
                                    
TestImageBuild/serial/Setup (9.98s)

                                                
                                                
=== RUN   TestImageBuild/serial/Setup
image_test.go:69: (dbg) Run:  out/minikube-darwin-arm64 start -p image-594000 --driver=qemu2 
image_test.go:69: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p image-594000 --driver=qemu2 : exit status 80 (9.904206333s)

                                                
                                                
-- stdout --
	* [image-594000] minikube v1.32.0 on Darwin 14.3.1 (arm64)
	  - MINIKUBE_LOCATION=18429
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18429-15072/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18429-15072/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "image-594000" primary control-plane node in "image-594000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "image-594000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p image-594000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
image_test.go:70: failed to start minikube with args: "out/minikube-darwin-arm64 start -p image-594000 --driver=qemu2 " : exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p image-594000 -n image-594000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p image-594000 -n image-594000: exit status 7 (70.223ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "image-594000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestImageBuild/serial/Setup (9.98s)

                                                
                                    
TestJSONOutput/start/Command (9.78s)

                                                
                                                
=== RUN   TestJSONOutput/start/Command
json_output_test.go:63: (dbg) Run:  out/minikube-darwin-arm64 start -p json-output-510000 --output=json --user=testUser --memory=2200 --wait=true --driver=qemu2 
json_output_test.go:63: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p json-output-510000 --output=json --user=testUser --memory=2200 --wait=true --driver=qemu2 : exit status 80 (9.780628125s)

                                                
                                                
-- stdout --
	{"specversion":"1.0","id":"2517fdbb-ec24-40c9-82e8-419c15766ffe","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[json-output-510000] minikube v1.32.0 on Darwin 14.3.1 (arm64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"cf200f1c-a608-4136-89db-46f726092d6b","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=18429"}}
	{"specversion":"1.0","id":"94d54285-ecfd-499f-9adb-33f258a3c3d0","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/Users/jenkins/minikube-integration/18429-15072/kubeconfig"}}
	{"specversion":"1.0","id":"864a1720-3887-42dc-8518-4b38c0fbe3df","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-darwin-arm64"}}
	{"specversion":"1.0","id":"bdf73c2c-997d-4b0c-a666-936199ce9dd0","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"cb3a53f3-df32-4ee8-a7ca-76a100f2f988","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/Users/jenkins/minikube-integration/18429-15072/.minikube"}}
	{"specversion":"1.0","id":"3fcd223a-f522-421f-b9f5-33a0953db247","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"76266aad-0f19-4192-8f4a-1cd745c5bae5","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"1","message":"Using the qemu2 driver based on user configuration","name":"Selecting Driver","totalsteps":"19"}}
	{"specversion":"1.0","id":"46231242-39ba-4f5d-ad97-c6f3b4aa7b6c","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"Automatically selected the socket_vmnet network"}}
	{"specversion":"1.0","id":"2bc7a981-2db2-4a74-b33d-1e081dd2b1d5","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"3","message":"Starting \"json-output-510000\" primary control-plane node in \"json-output-510000\" cluster","name":"Starting Node","totalsteps":"19"}}
	{"specversion":"1.0","id":"fe28b0ed-f3be-4cf3-9823-a7ab62e8be1c","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"9","message":"Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...","name":"Creating VM","totalsteps":"19"}}
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	{"specversion":"1.0","id":"160f567b-24a3-40b7-9769-a050f3cafcd9","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"9","message":"Deleting \"json-output-510000\" in qemu2 ...","name":"Creating VM","totalsteps":"19"}}
	{"specversion":"1.0","id":"f051e003-c2d1-448c-9907-b07b3959c488","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"message":"StartHost failed, but will try again: creating host: create: creating: Failed to connect to \"/var/run/socket_vmnet\": Connection refused: exit status 1"}}
	{"specversion":"1.0","id":"ce6638f4-b02e-432e-a0ea-4d21ae0e7046","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"9","message":"Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...","name":"Creating VM","totalsteps":"19"}}
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	{"specversion":"1.0","id":"51ca1ff4-18f8-48ba-833f-fb50911f9fd1","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"message":"Failed to start qemu2 VM. Running \"minikube delete -p json-output-510000\" may fix it: creating host: create: creating: Failed to connect to \"/var/run/socket_vmnet\": Connection refused: exit status 1"}}
	{"specversion":"1.0","id":"5fe3ca9e-954e-4ed3-834e-87fa6c671937","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"80","issues":"","message":"error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to \"/var/run/socket_vmnet\": Connection refused: exit status 1","name":"GUEST_PROVISION","url":""}}
	{"specversion":"1.0","id":"bc99162f-950e-4427-9912-365ae3fd7924","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"message":"╭───────────────────────────────────────────────────────────────────────────────────────────╮\n│                                                                                           │\n│    If the above advice does not help, please let us know:                                 │\n│    https://github.com/kubernetes/minikube/issues/new/choose                               │\n│                                                                                           │\n│    Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │\n│
│\n╰───────────────────────────────────────────────────────────────────────────────────────────╯"}}

                                                
                                                
-- /stdout --
json_output_test.go:65: failed to clean up: args "out/minikube-darwin-arm64 start -p json-output-510000 --output=json --user=testUser --memory=2200 --wait=true --driver=qemu2 ": exit status 80
json_output_test.go:213: unable to marshal output: OUTPUT: 
json_output_test.go:70: converting to cloud events: invalid character 'O' looking for beginning of value
--- FAIL: TestJSONOutput/start/Command (9.78s)
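Two things fail here: the start itself (the same socket_vmnet "Connection refused" as above) and, as a consequence, the CloudEvents check at json_output_test.go:70. The test decodes stdout line by line as JSON events, but socket_vmnet_client writes the plain-text lines "OUTPUT: " and "ERROR: ..." into the same stream, and the leading character of "OUTPUT" is what produces "invalid character 'O' looking for beginning of value". A small illustration of that decode failure follows; the loop and names are mine, the JSON line is abridged from the events above, and only the "OUTPUT: " line is verbatim.

// A sketch of the failure mode, assuming the test decodes stdout line by line.
package main

import (
	"encoding/json"
	"fmt"
)

func main() {
	lines := []string{
		`{"specversion":"1.0","type":"io.k8s.sigs.minikube.step","data":{"message":"Creating qemu2 VM ..."}}`,
		`OUTPUT: `, // written by socket_vmnet_client, not valid JSON
	}
	for _, ln := range lines {
		var ev map[string]interface{}
		if err := json.Unmarshal([]byte(ln), &ev); err != nil {
			// Reproduces: invalid character 'O' looking for beginning of value
			fmt.Printf("cannot parse %q as a cloud event: %v\n", ln, err)
			return
		}
		fmt.Printf("parsed event of type %v\n", ev["type"])
	}
}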

                                                
                                    
TestJSONOutput/pause/Command (0.08s)

                                                
                                                
=== RUN   TestJSONOutput/pause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-darwin-arm64 pause -p json-output-510000 --output=json --user=testUser
json_output_test.go:63: (dbg) Non-zero exit: out/minikube-darwin-arm64 pause -p json-output-510000 --output=json --user=testUser: exit status 83 (84.312167ms)

                                                
                                                
-- stdout --
	{"specversion":"1.0","id":"52dfa544-5b1a-4c69-b773-5b7cc676bf6a","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"The control-plane node json-output-510000 host is not running: state=Stopped"}}
	{"specversion":"1.0","id":"9a947f36-87cc-48b9-aed4-d2147b0b2404","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"To start a cluster, run: \"minikube start -p json-output-510000\""}}

                                                
                                                
-- /stdout --
json_output_test.go:65: failed to clean up: args "out/minikube-darwin-arm64 pause -p json-output-510000 --output=json --user=testUser": exit status 83
--- FAIL: TestJSONOutput/pause/Command (0.08s)

                                                
                                    
TestJSONOutput/unpause/Command (0.05s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-darwin-arm64 unpause -p json-output-510000 --output=json --user=testUser
json_output_test.go:63: (dbg) Non-zero exit: out/minikube-darwin-arm64 unpause -p json-output-510000 --output=json --user=testUser: exit status 83 (49.132459ms)

                                                
                                                
-- stdout --
	* The control-plane node json-output-510000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p json-output-510000"

                                                
                                                
-- /stdout --
json_output_test.go:65: failed to clean up: args "out/minikube-darwin-arm64 unpause -p json-output-510000 --output=json --user=testUser": exit status 83
json_output_test.go:213: unable to marshal output: * The control-plane node json-output-510000 host is not running: state=Stopped
json_output_test.go:70: converting to cloud events: invalid character '*' looking for beginning of value
--- FAIL: TestJSONOutput/unpause/Command (0.05s)

                                                
                                    
TestMinikubeProfile (10.82s)

                                                
                                                
=== RUN   TestMinikubeProfile
minikube_profile_test.go:44: (dbg) Run:  out/minikube-darwin-arm64 start -p first-882000 --driver=qemu2 
minikube_profile_test.go:44: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p first-882000 --driver=qemu2 : exit status 80 (10.373510416s)

                                                
                                                
-- stdout --
	* [first-882000] minikube v1.32.0 on Darwin 14.3.1 (arm64)
	  - MINIKUBE_LOCATION=18429
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18429-15072/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18429-15072/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "first-882000" primary control-plane node in "first-882000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "first-882000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p first-882000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
minikube_profile_test.go:46: test pre-condition failed. args "out/minikube-darwin-arm64 start -p first-882000 --driver=qemu2 ": exit status 80
panic.go:626: *** TestMinikubeProfile FAILED at 2024-03-18 04:27:24.450702 -0700 PDT m=+525.321363417
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p second-884000 -n second-884000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p second-884000 -n second-884000: exit status 85 (80.124084ms)

                                                
                                                
-- stdout --
	* Profile "second-884000" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p second-884000"

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 85 (may be ok)
helpers_test.go:241: "second-884000" host is not running, skipping log retrieval (state="* Profile \"second-884000\" not found. Run \"minikube profile list\" to view all profiles.\n  To start a cluster, run: \"minikube start -p second-884000\"")
helpers_test.go:175: Cleaning up "second-884000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p second-884000
panic.go:626: *** TestMinikubeProfile FAILED at 2024-03-18 04:27:24.762408 -0700 PDT m=+525.633079584
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p first-882000 -n first-882000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p first-882000 -n first-882000: exit status 7 (31.376791ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "first-882000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:175: Cleaning up "first-882000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p first-882000
--- FAIL: TestMinikubeProfile (10.82s)
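The post-mortem above runs out/minikube-darwin-arm64 status --format={{.Host}} and treats the non-zero exit codes as informational: 85 means the "second-884000" profile was never created, 7 means "first-882000" exists but is Stopped. Below is a hedged sketch of reading that status the same way; the binary path, flag, and profile name are taken from the log, while the exit-code handling is an assumption about how the helper tolerates a non-running host.

// A sketch of the status check used in the post-mortem above.
package main

import (
	"errors"
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	cmd := exec.Command("out/minikube-darwin-arm64", "status", "--format={{.Host}}", "-p", "first-882000")
	out, err := cmd.Output() // stdout still carries the host state on a non-zero exit
	state := strings.TrimSpace(string(out))
	if err != nil {
		var exitErr *exec.ExitError
		if errors.As(err, &exitErr) {
			// Exit status 7 ("Stopped") and 85 ("profile not found") are reported
			// but tolerated by helpers_test.go, which then skips log retrieval.
			fmt.Printf("status exited %d (may be ok), host state: %q\n", exitErr.ExitCode(), state)
			return
		}
		fmt.Println("could not run status:", err)
		return
	}
	fmt.Println("host state:", state)
}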

                                                
                                    
TestMountStart/serial/StartWithMountFirst (10.96s)

                                                
                                                
=== RUN   TestMountStart/serial/StartWithMountFirst
mount_start_test.go:98: (dbg) Run:  out/minikube-darwin-arm64 start -p mount-start-1-376000 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=qemu2 
mount_start_test.go:98: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p mount-start-1-376000 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=qemu2 : exit status 80 (10.88739525s)

                                                
                                                
-- stdout --
	* [mount-start-1-376000] minikube v1.32.0 on Darwin 14.3.1 (arm64)
	  - MINIKUBE_LOCATION=18429
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18429-15072/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18429-15072/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting minikube without Kubernetes in cluster mount-start-1-376000
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "mount-start-1-376000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p mount-start-1-376000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
mount_start_test.go:100: failed to start minikube with args: "out/minikube-darwin-arm64 start -p mount-start-1-376000 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=qemu2 " : exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p mount-start-1-376000 -n mount-start-1-376000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p mount-start-1-376000 -n mount-start-1-376000: exit status 7 (69.663458ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "mount-start-1-376000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMountStart/serial/StartWithMountFirst (10.96s)

                                                
                                    
TestMultiNode/serial/FreshStart2Nodes (9.85s)

                                                
                                                
=== RUN   TestMultiNode/serial/FreshStart2Nodes
multinode_test.go:96: (dbg) Run:  out/minikube-darwin-arm64 start -p multinode-969000 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=qemu2 
multinode_test.go:96: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p multinode-969000 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=qemu2 : exit status 80 (9.773196291s)

                                                
                                                
-- stdout --
	* [multinode-969000] minikube v1.32.0 on Darwin 14.3.1 (arm64)
	  - MINIKUBE_LOCATION=18429
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18429-15072/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18429-15072/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "multinode-969000" primary control-plane node in "multinode-969000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "multinode-969000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0318 04:27:36.211041   16637 out.go:291] Setting OutFile to fd 1 ...
	I0318 04:27:36.211180   16637 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0318 04:27:36.211184   16637 out.go:304] Setting ErrFile to fd 2...
	I0318 04:27:36.211186   16637 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0318 04:27:36.211305   16637 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18429-15072/.minikube/bin
	I0318 04:27:36.212486   16637 out.go:298] Setting JSON to false
	I0318 04:27:36.228808   16637 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":8829,"bootTime":1710752427,"procs":476,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.3.1","kernelVersion":"23.3.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0318 04:27:36.228893   16637 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0318 04:27:36.234055   16637 out.go:177] * [multinode-969000] minikube v1.32.0 on Darwin 14.3.1 (arm64)
	I0318 04:27:36.240979   16637 out.go:177]   - MINIKUBE_LOCATION=18429
	I0318 04:27:36.241014   16637 notify.go:220] Checking for updates...
	I0318 04:27:36.245020   16637 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/18429-15072/kubeconfig
	I0318 04:27:36.248944   16637 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0318 04:27:36.251990   16637 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0318 04:27:36.255033   16637 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18429-15072/.minikube
	I0318 04:27:36.257993   16637 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0318 04:27:36.261165   16637 driver.go:392] Setting default libvirt URI to qemu:///system
	I0318 04:27:36.264996   16637 out.go:177] * Using the qemu2 driver based on user configuration
	I0318 04:27:36.271977   16637 start.go:297] selected driver: qemu2
	I0318 04:27:36.271984   16637 start.go:901] validating driver "qemu2" against <nil>
	I0318 04:27:36.271990   16637 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0318 04:27:36.274263   16637 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0318 04:27:36.277033   16637 out.go:177] * Automatically selected the socket_vmnet network
	I0318 04:27:36.280040   16637 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0318 04:27:36.280077   16637 cni.go:84] Creating CNI manager for ""
	I0318 04:27:36.280081   16637 cni.go:136] multinode detected (0 nodes found), recommending kindnet
	I0318 04:27:36.280086   16637 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I0318 04:27:36.280122   16637 start.go:340] cluster config:
	{Name:multinode-969000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:multinode-969000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRu
ntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vm
net_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0318 04:27:36.284430   16637 iso.go:125] acquiring lock: {Name:mkb8143674083e0c7a46a3ed751b3800392bcd24 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0318 04:27:36.292007   16637 out.go:177] * Starting "multinode-969000" primary control-plane node in "multinode-969000" cluster
	I0318 04:27:36.295946   16637 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime docker
	I0318 04:27:36.295977   16637 preload.go:147] Found local preload: /Users/jenkins/minikube-integration/18429-15072/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-docker-overlay2-arm64.tar.lz4
	I0318 04:27:36.295988   16637 cache.go:56] Caching tarball of preloaded images
	I0318 04:27:36.296053   16637 preload.go:173] Found /Users/jenkins/minikube-integration/18429-15072/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0318 04:27:36.296059   16637 cache.go:59] Finished verifying existence of preloaded tar for v1.28.4 on docker
	I0318 04:27:36.296295   16637 profile.go:142] Saving config to /Users/jenkins/minikube-integration/18429-15072/.minikube/profiles/multinode-969000/config.json ...
	I0318 04:27:36.296307   16637 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/18429-15072/.minikube/profiles/multinode-969000/config.json: {Name:mk3f33a4e5874b0bd3f3dd29e03cecc3418e434a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0318 04:27:36.296523   16637 start.go:360] acquireMachinesLock for multinode-969000: {Name:mkdef65b5c2b3344d8453e477bf2f170fbff3359 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0318 04:27:36.296554   16637 start.go:364] duration metric: took 25.042µs to acquireMachinesLock for "multinode-969000"
	I0318 04:27:36.296567   16637 start.go:93] Provisioning new machine with config: &{Name:multinode-969000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17991/minikube-v1.32.1-1710520390-17991-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{K
ubernetesVersion:v1.28.4 ClusterName:multinode-969000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[
] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0318 04:27:36.296604   16637 start.go:125] createHost starting for "" (driver="qemu2")
	I0318 04:27:36.304950   16637 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0318 04:27:36.321899   16637 start.go:159] libmachine.API.Create for "multinode-969000" (driver="qemu2")
	I0318 04:27:36.321926   16637 client.go:168] LocalClient.Create starting
	I0318 04:27:36.321993   16637 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18429-15072/.minikube/certs/ca.pem
	I0318 04:27:36.322022   16637 main.go:141] libmachine: Decoding PEM data...
	I0318 04:27:36.322037   16637 main.go:141] libmachine: Parsing certificate...
	I0318 04:27:36.322081   16637 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18429-15072/.minikube/certs/cert.pem
	I0318 04:27:36.322105   16637 main.go:141] libmachine: Decoding PEM data...
	I0318 04:27:36.322111   16637 main.go:141] libmachine: Parsing certificate...
	I0318 04:27:36.322458   16637 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/18429-15072/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/18429-15072/.minikube/cache/iso/arm64/minikube-v1.32.1-1710520390-17991-arm64.iso...
	I0318 04:27:36.464808   16637 main.go:141] libmachine: Creating SSH key...
	I0318 04:27:36.506184   16637 main.go:141] libmachine: Creating Disk image...
	I0318 04:27:36.506190   16637 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0318 04:27:36.506390   16637 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/18429-15072/.minikube/machines/multinode-969000/disk.qcow2.raw /Users/jenkins/minikube-integration/18429-15072/.minikube/machines/multinode-969000/disk.qcow2
	I0318 04:27:36.518317   16637 main.go:141] libmachine: STDOUT: 
	I0318 04:27:36.518338   16637 main.go:141] libmachine: STDERR: 
	I0318 04:27:36.518411   16637 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/18429-15072/.minikube/machines/multinode-969000/disk.qcow2 +20000M
	I0318 04:27:36.528820   16637 main.go:141] libmachine: STDOUT: Image resized.
	
	I0318 04:27:36.528840   16637 main.go:141] libmachine: STDERR: 
	I0318 04:27:36.528856   16637 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/18429-15072/.minikube/machines/multinode-969000/disk.qcow2.raw and /Users/jenkins/minikube-integration/18429-15072/.minikube/machines/multinode-969000/disk.qcow2
	I0318 04:27:36.528860   16637 main.go:141] libmachine: Starting QEMU VM...
	I0318 04:27:36.528892   16637 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18429-15072/.minikube/machines/multinode-969000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18429-15072/.minikube/machines/multinode-969000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18429-15072/.minikube/machines/multinode-969000/qemu.pid -device virtio-net-pci,netdev=net0,mac=a2:7e:11:aa:6a:d7 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18429-15072/.minikube/machines/multinode-969000/disk.qcow2
	I0318 04:27:36.530576   16637 main.go:141] libmachine: STDOUT: 
	I0318 04:27:36.530594   16637 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0318 04:27:36.530613   16637 client.go:171] duration metric: took 208.688458ms to LocalClient.Create
	I0318 04:27:38.532782   16637 start.go:128] duration metric: took 2.236215291s to createHost
	I0318 04:27:38.532857   16637 start.go:83] releasing machines lock for "multinode-969000", held for 2.236367292s
	W0318 04:27:38.532925   16637 start.go:713] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0318 04:27:38.544076   16637 out.go:177] * Deleting "multinode-969000" in qemu2 ...
	W0318 04:27:38.572664   16637 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0318 04:27:38.572698   16637 start.go:728] Will try again in 5 seconds ...
	I0318 04:27:43.574747   16637 start.go:360] acquireMachinesLock for multinode-969000: {Name:mkdef65b5c2b3344d8453e477bf2f170fbff3359 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0318 04:27:43.575270   16637 start.go:364] duration metric: took 408.209µs to acquireMachinesLock for "multinode-969000"
	I0318 04:27:43.575394   16637 start.go:93] Provisioning new machine with config: &{Name:multinode-969000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17991/minikube-v1.32.1-1710520390-17991-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{K
ubernetesVersion:v1.28.4 ClusterName:multinode-969000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[
] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0318 04:27:43.575699   16637 start.go:125] createHost starting for "" (driver="qemu2")
	I0318 04:27:43.586353   16637 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0318 04:27:43.635702   16637 start.go:159] libmachine.API.Create for "multinode-969000" (driver="qemu2")
	I0318 04:27:43.635761   16637 client.go:168] LocalClient.Create starting
	I0318 04:27:43.635880   16637 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18429-15072/.minikube/certs/ca.pem
	I0318 04:27:43.635945   16637 main.go:141] libmachine: Decoding PEM data...
	I0318 04:27:43.635958   16637 main.go:141] libmachine: Parsing certificate...
	I0318 04:27:43.636015   16637 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18429-15072/.minikube/certs/cert.pem
	I0318 04:27:43.636061   16637 main.go:141] libmachine: Decoding PEM data...
	I0318 04:27:43.636071   16637 main.go:141] libmachine: Parsing certificate...
	I0318 04:27:43.636585   16637 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/18429-15072/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/18429-15072/.minikube/cache/iso/arm64/minikube-v1.32.1-1710520390-17991-arm64.iso...
	I0318 04:27:43.788448   16637 main.go:141] libmachine: Creating SSH key...
	I0318 04:27:43.882392   16637 main.go:141] libmachine: Creating Disk image...
	I0318 04:27:43.882397   16637 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0318 04:27:43.882592   16637 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/18429-15072/.minikube/machines/multinode-969000/disk.qcow2.raw /Users/jenkins/minikube-integration/18429-15072/.minikube/machines/multinode-969000/disk.qcow2
	I0318 04:27:43.895099   16637 main.go:141] libmachine: STDOUT: 
	I0318 04:27:43.895120   16637 main.go:141] libmachine: STDERR: 
	I0318 04:27:43.895184   16637 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/18429-15072/.minikube/machines/multinode-969000/disk.qcow2 +20000M
	I0318 04:27:43.905861   16637 main.go:141] libmachine: STDOUT: Image resized.
	
	I0318 04:27:43.905888   16637 main.go:141] libmachine: STDERR: 
	I0318 04:27:43.905897   16637 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/18429-15072/.minikube/machines/multinode-969000/disk.qcow2.raw and /Users/jenkins/minikube-integration/18429-15072/.minikube/machines/multinode-969000/disk.qcow2
	I0318 04:27:43.905901   16637 main.go:141] libmachine: Starting QEMU VM...
	I0318 04:27:43.905930   16637 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18429-15072/.minikube/machines/multinode-969000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18429-15072/.minikube/machines/multinode-969000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18429-15072/.minikube/machines/multinode-969000/qemu.pid -device virtio-net-pci,netdev=net0,mac=e6:33:ef:22:a5:cb -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18429-15072/.minikube/machines/multinode-969000/disk.qcow2
	I0318 04:27:43.907664   16637 main.go:141] libmachine: STDOUT: 
	I0318 04:27:43.907681   16637 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0318 04:27:43.907697   16637 client.go:171] duration metric: took 271.939084ms to LocalClient.Create
	I0318 04:27:45.909867   16637 start.go:128] duration metric: took 2.334203833s to createHost
	I0318 04:27:45.909976   16637 start.go:83] releasing machines lock for "multinode-969000", held for 2.334747125s
	W0318 04:27:45.910292   16637 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p multinode-969000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p multinode-969000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0318 04:27:45.924841   16637 out.go:177] 
	W0318 04:27:45.928986   16637 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0318 04:27:45.929030   16637 out.go:239] * 
	* 
	W0318 04:27:45.931590   16637 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0318 04:27:45.939852   16637 out.go:177] 

                                                
                                                
** /stderr **
multinode_test.go:98: failed to start cluster. args "out/minikube-darwin-arm64 start -p multinode-969000 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=qemu2 " : exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-969000 -n multinode-969000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-969000 -n multinode-969000: exit status 7 (69.62025ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-969000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/FreshStart2Nodes (9.85s)
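The --alsologtostderr trace above makes the host-creation sequence explicit: libmachine downloads the boot2docker ISO, generates the SSH key, prepares the qcow2 disk with qemu-img, and only then fails when launching qemu-system-aarch64 through socket_vmnet_client. The following is a rough sketch of the disk-preparation step only, mirroring the two qemu-img invocations logged at 04:27:36; the file names are placeholders for the profile's machines directory, not the real paths.

// A sketch of the disk-image preparation visible in the trace above, driven
// through os/exec. Paths are placeholders; the real ones live under the
// profile's .minikube/machines directory.
package main

import (
	"log"
	"os/exec"
)

func run(name string, args ...string) {
	out, err := exec.Command(name, args...).CombinedOutput()
	if err != nil {
		log.Fatalf("%s %v failed: %v\n%s", name, args, err, out)
	}
}

func main() {
	raw := "disk.qcow2.raw" // placeholder for .../machines/<profile>/disk.qcow2.raw
	img := "disk.qcow2"     // placeholder for .../machines/<profile>/disk.qcow2

	// Matches: qemu-img convert -f raw -O qcow2 <raw> <img>
	run("qemu-img", "convert", "-f", "raw", "-O", "qcow2", raw, img)
	// Matches: qemu-img resize <img> +20000M
	run("qemu-img", "resize", img, "+20000M")
	// The next step in the trace, launching qemu-system-aarch64 via
	// socket_vmnet_client, is where this run aborts with "Connection refused".
}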

                                                
                                    
TestMultiNode/serial/DeployApp2Nodes (119.63s)

                                                
                                                
=== RUN   TestMultiNode/serial/DeployApp2Nodes
multinode_test.go:493: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-969000 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml
multinode_test.go:493: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-969000 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml: exit status 1 (59.556ms)

                                                
                                                
** stderr ** 
	error: cluster "multinode-969000" does not exist

                                                
                                                
** /stderr **
multinode_test.go:495: failed to create busybox deployment to multinode cluster
multinode_test.go:498: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-969000 -- rollout status deployment/busybox
multinode_test.go:498: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-969000 -- rollout status deployment/busybox: exit status 1 (57.799542ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "multinode-969000"

                                                
                                                
** /stderr **
multinode_test.go:500: failed to deploy busybox to multinode cluster
multinode_test.go:505: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-969000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:505: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-969000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (58.695042ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "multinode-969000"

                                                
                                                
** /stderr **
multinode_test.go:508: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:505: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-969000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:505: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-969000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (103.148334ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "multinode-969000"

                                                
                                                
** /stderr **
multinode_test.go:508: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:505: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-969000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:505: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-969000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (108.414333ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "multinode-969000"

                                                
                                                
** /stderr **
multinode_test.go:508: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:505: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-969000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:505: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-969000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (106.129375ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "multinode-969000"

                                                
                                                
** /stderr **
multinode_test.go:508: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:505: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-969000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:505: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-969000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (105.472167ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "multinode-969000"

                                                
                                                
** /stderr **
multinode_test.go:508: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:505: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-969000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:505: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-969000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (106.466417ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "multinode-969000"

                                                
                                                
** /stderr **
multinode_test.go:508: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:505: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-969000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:505: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-969000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (107.098833ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "multinode-969000"

                                                
                                                
** /stderr **
multinode_test.go:508: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:505: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-969000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:505: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-969000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (104.290958ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "multinode-969000"

                                                
                                                
** /stderr **
multinode_test.go:508: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:505: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-969000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:505: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-969000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (104.770166ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "multinode-969000"

                                                
                                                
** /stderr **
multinode_test.go:508: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:505: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-969000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:505: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-969000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (103.747583ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "multinode-969000"

                                                
                                                
** /stderr **
multinode_test.go:508: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:505: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-969000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:505: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-969000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (104.50625ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "multinode-969000"

                                                
                                                
** /stderr **
multinode_test.go:508: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:524: failed to resolve pod IPs: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:528: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-969000 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:528: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-969000 -- get pods -o jsonpath='{.items[*].metadata.name}': exit status 1 (57.708917ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "multinode-969000"

                                                
                                                
** /stderr **
multinode_test.go:530: failed get Pod names
multinode_test.go:536: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-969000 -- exec  -- nslookup kubernetes.io
multinode_test.go:536: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-969000 -- exec  -- nslookup kubernetes.io: exit status 1 (57.906916ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "multinode-969000"

                                                
                                                
** /stderr **
multinode_test.go:538: Pod  could not resolve 'kubernetes.io': exit status 1
multinode_test.go:546: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-969000 -- exec  -- nslookup kubernetes.default
multinode_test.go:546: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-969000 -- exec  -- nslookup kubernetes.default: exit status 1 (57.302791ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "multinode-969000"

                                                
                                                
** /stderr **
multinode_test.go:548: Pod  could not resolve 'kubernetes.default': exit status 1
multinode_test.go:554: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-969000 -- exec  -- nslookup kubernetes.default.svc.cluster.local
multinode_test.go:554: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-969000 -- exec  -- nslookup kubernetes.default.svc.cluster.local: exit status 1 (57.253166ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "multinode-969000"

                                                
                                                
** /stderr **
multinode_test.go:556: Pod  could not resolve local service (kubernetes.default.svc.cluster.local): exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-969000 -n multinode-969000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-969000 -n multinode-969000: exit status 7 (31.679125ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-969000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/DeployApp2Nodes (119.63s)

TestMultiNode/serial/PingHostFrom2Pods (0.09s)

=== RUN   TestMultiNode/serial/PingHostFrom2Pods
multinode_test.go:564: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-969000 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:564: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-969000 -- get pods -o jsonpath='{.items[*].metadata.name}': exit status 1 (57.432875ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "multinode-969000"

                                                
                                                
** /stderr **
multinode_test.go:566: failed to get Pod names: exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-969000 -n multinode-969000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-969000 -n multinode-969000: exit status 7 (31.833333ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-969000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/PingHostFrom2Pods (0.09s)

TestMultiNode/serial/AddNode (0.08s)

=== RUN   TestMultiNode/serial/AddNode
multinode_test.go:121: (dbg) Run:  out/minikube-darwin-arm64 node add -p multinode-969000 -v 3 --alsologtostderr
multinode_test.go:121: (dbg) Non-zero exit: out/minikube-darwin-arm64 node add -p multinode-969000 -v 3 --alsologtostderr: exit status 83 (44.502125ms)

                                                
                                                
-- stdout --
	* The control-plane node multinode-969000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p multinode-969000"

                                                
                                                
-- /stdout --
** stderr ** 
	I0318 04:29:45.769210   16732 out.go:291] Setting OutFile to fd 1 ...
	I0318 04:29:45.769369   16732 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0318 04:29:45.769373   16732 out.go:304] Setting ErrFile to fd 2...
	I0318 04:29:45.769375   16732 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0318 04:29:45.769499   16732 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18429-15072/.minikube/bin
	I0318 04:29:45.769746   16732 mustload.go:65] Loading cluster: multinode-969000
	I0318 04:29:45.769948   16732 config.go:182] Loaded profile config "multinode-969000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I0318 04:29:45.775405   16732 out.go:177] * The control-plane node multinode-969000 host is not running: state=Stopped
	I0318 04:29:45.779204   16732 out.go:177]   To start a cluster, run: "minikube start -p multinode-969000"

                                                
                                                
** /stderr **
multinode_test.go:123: failed to add node to current cluster. args "out/minikube-darwin-arm64 node add -p multinode-969000 -v 3 --alsologtostderr" : exit status 83
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-969000 -n multinode-969000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-969000 -n multinode-969000: exit status 7 (31.816375ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-969000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/AddNode (0.08s)

TestMultiNode/serial/MultiNodeLabels (0.06s)

=== RUN   TestMultiNode/serial/MultiNodeLabels
multinode_test.go:221: (dbg) Run:  kubectl --context multinode-969000 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
multinode_test.go:221: (dbg) Non-zero exit: kubectl --context multinode-969000 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]": exit status 1 (25.642916ms)

                                                
                                                
** stderr ** 
	Error in configuration: context was not found for specified context: multinode-969000

                                                
                                                
** /stderr **
multinode_test.go:223: failed to 'kubectl get nodes' with args "kubectl --context multinode-969000 get nodes -o \"jsonpath=[{range .items[*]}{.metadata.labels},{end}]\"": exit status 1
multinode_test.go:230: failed to decode json from label list: args "kubectl --context multinode-969000 get nodes -o \"jsonpath=[{range .items[*]}{.metadata.labels},{end}]\"": unexpected end of JSON input
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-969000 -n multinode-969000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-969000 -n multinode-969000: exit status 7 (31.4185ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-969000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/MultiNodeLabels (0.06s)

TestMultiNode/serial/ProfileList (0.11s)

=== RUN   TestMultiNode/serial/ProfileList
multinode_test.go:143: (dbg) Run:  out/minikube-darwin-arm64 profile list --output json
multinode_test.go:166: expected profile "multinode-969000" in json of 'profile list' include 3 nodes but have 1 nodes. got *"{\"invalid\":[],\"valid\":[{\"Name\":\"multinode-969000\",\"Status\":\"Stopped\",\"Config\":{\"Name\":\"multinode-969000\",\"KeepContext\":false,\"EmbedCerts\":false,\"MinikubeISO\":\"https://storage.googleapis.com/minikube-builds/iso/17991/minikube-v1.32.1-1710520390-17991-arm64.iso\",\"KicBaseImage\":\"gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f\",\"Memory\":2200,\"CPUs\":2,\"DiskSize\":20000,\"Driver\":\"qemu2\",\"HyperkitVpnKitSock\":\"\",\"HyperkitVSockPorts\":[],\"DockerEnv\":null,\"ContainerVolumeMounts\":null,\"InsecureRegistry\":null,\"RegistryMirror\":[],\"HostOnlyCIDR\":\"192.168.59.1/24\",\"HypervVirtualSwitch\":\"\",\"HypervUseExternalSwitch\":false,\"HypervExternalAdapter\":\"\",\"KVMNetwork\":\"default\",\"KVMQemuURI\":\"qemu:///system\",\"KVMGPU\":false,\"KVMHidden\":false,\"KVMNU
MACount\":1,\"APIServerPort\":8443,\"DockerOpt\":null,\"DisableDriverMounts\":false,\"NFSShare\":[],\"NFSSharesRoot\":\"/nfsshares\",\"UUID\":\"\",\"NoVTXCheck\":false,\"DNSProxy\":false,\"HostDNSResolver\":true,\"HostOnlyNicType\":\"virtio\",\"NatNicType\":\"virtio\",\"SSHIPAddress\":\"\",\"SSHUser\":\"root\",\"SSHKey\":\"\",\"SSHPort\":22,\"KubernetesConfig\":{\"KubernetesVersion\":\"v1.28.4\",\"ClusterName\":\"multinode-969000\",\"Namespace\":\"default\",\"APIServerHAVIP\":\"\",\"APIServerName\":\"minikubeCA\",\"APIServerNames\":null,\"APIServerIPs\":null,\"DNSDomain\":\"cluster.local\",\"ContainerRuntime\":\"docker\",\"CRISocket\":\"\",\"NetworkPlugin\":\"cni\",\"FeatureGates\":\"\",\"ServiceCIDR\":\"10.96.0.0/12\",\"ImageRepository\":\"\",\"LoadBalancerStartIP\":\"\",\"LoadBalancerEndIP\":\"\",\"CustomIngressCert\":\"\",\"RegistryAliases\":\"\",\"ExtraOptions\":null,\"ShouldLoadCachedImages\":true,\"EnableDefaultCNI\":false,\"CNI\":\"\"},\"Nodes\":[{\"Name\":\"\",\"IP\":\"\",\"Port\":8443,\"KubernetesVer
sion\":\"v1.28.4\",\"ContainerRuntime\":\"docker\",\"ControlPlane\":true,\"Worker\":true}],\"Addons\":null,\"CustomAddonImages\":null,\"CustomAddonRegistries\":null,\"VerifyComponents\":{\"apiserver\":true,\"apps_running\":true,\"default_sa\":true,\"extra\":true,\"kubelet\":true,\"node_ready\":true,\"system_pods\":true},\"StartHostTimeout\":360000000000,\"ScheduledStop\":null,\"ExposedPorts\":[],\"ListenAddress\":\"\",\"Network\":\"socket_vmnet\",\"Subnet\":\"\",\"MultiNodeRequested\":true,\"ExtraDisks\":0,\"CertExpiration\":94608000000000000,\"Mount\":false,\"MountString\":\"/Users:/minikube-host\",\"Mount9PVersion\":\"9p2000.L\",\"MountGID\":\"docker\",\"MountIP\":\"\",\"MountMSize\":262144,\"MountOptions\":[],\"MountPort\":0,\"MountType\":\"9p\",\"MountUID\":\"docker\",\"BinaryMirror\":\"\",\"DisableOptimizations\":false,\"DisableMetrics\":false,\"CustomQemuFirmwarePath\":\"\",\"SocketVMnetClientPath\":\"/opt/socket_vmnet/bin/socket_vmnet_client\",\"SocketVMnetPath\":\"/var/run/socket_vmnet\",\"StaticIP\":
\"\",\"SSHAuthSock\":\"\",\"SSHAgentPID\":0,\"GPUs\":\"\",\"AutoPauseInterval\":60000000000},\"Active\":false}]}"*. args: "out/minikube-darwin-arm64 profile list --output json"
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-969000 -n multinode-969000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-969000 -n multinode-969000: exit status 7 (32.062333ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-969000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/ProfileList (0.11s)

TestMultiNode/serial/CopyFile (0.06s)

=== RUN   TestMultiNode/serial/CopyFile
multinode_test.go:184: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-969000 status --output json --alsologtostderr
multinode_test.go:184: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-969000 status --output json --alsologtostderr: exit status 7 (31.512ms)

                                                
                                                
-- stdout --
	{"Name":"multinode-969000","Host":"Stopped","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Stopped","Worker":false}

                                                
                                                
-- /stdout --
** stderr ** 
	I0318 04:29:46.008737   16746 out.go:291] Setting OutFile to fd 1 ...
	I0318 04:29:46.008876   16746 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0318 04:29:46.008879   16746 out.go:304] Setting ErrFile to fd 2...
	I0318 04:29:46.008881   16746 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0318 04:29:46.009018   16746 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18429-15072/.minikube/bin
	I0318 04:29:46.009137   16746 out.go:298] Setting JSON to true
	I0318 04:29:46.009149   16746 mustload.go:65] Loading cluster: multinode-969000
	I0318 04:29:46.009215   16746 notify.go:220] Checking for updates...
	I0318 04:29:46.009366   16746 config.go:182] Loaded profile config "multinode-969000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I0318 04:29:46.009374   16746 status.go:255] checking status of multinode-969000 ...
	I0318 04:29:46.009566   16746 status.go:330] multinode-969000 host status = "Stopped" (err=<nil>)
	I0318 04:29:46.009570   16746 status.go:343] host is not running, skipping remaining checks
	I0318 04:29:46.009573   16746 status.go:257] multinode-969000 status: &{Name:multinode-969000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
multinode_test.go:191: failed to decode json from status: args "out/minikube-darwin-arm64 -p multinode-969000 status --output json --alsologtostderr": json: cannot unmarshal object into Go value of type []cmd.Status
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-969000 -n multinode-969000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-969000 -n multinode-969000: exit status 7 (32.203833ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-969000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/CopyFile (0.06s)

TestMultiNode/serial/StopNode (0.14s)

=== RUN   TestMultiNode/serial/StopNode
multinode_test.go:248: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-969000 node stop m03
multinode_test.go:248: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-969000 node stop m03: exit status 85 (49.061959ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_NODE_RETRIEVE: retrieving node: Could not find node m03
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                         │
	│    * If the above advice does not help, please let us know:                                                             │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
	│                                                                                                                         │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
	│    * Please also attach the following file to the GitHub issue:                                                         │
	│    * - /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/minikube_node_295f67d8757edd996fe5c1e7ccde72c355ccf4dc_0.log    │
	│                                                                                                                         │
	╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
multinode_test.go:250: node stop returned an error. args "out/minikube-darwin-arm64 -p multinode-969000 node stop m03": exit status 85
multinode_test.go:254: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-969000 status
multinode_test.go:254: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-969000 status: exit status 7 (31.513166ms)

                                                
                                                
-- stdout --
	multinode-969000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
multinode_test.go:261: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-969000 status --alsologtostderr
multinode_test.go:261: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-969000 status --alsologtostderr: exit status 7 (31.71ms)

                                                
                                                
-- stdout --
	multinode-969000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0318 04:29:46.154000   16754 out.go:291] Setting OutFile to fd 1 ...
	I0318 04:29:46.154174   16754 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0318 04:29:46.154177   16754 out.go:304] Setting ErrFile to fd 2...
	I0318 04:29:46.154180   16754 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0318 04:29:46.154320   16754 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18429-15072/.minikube/bin
	I0318 04:29:46.154439   16754 out.go:298] Setting JSON to false
	I0318 04:29:46.154451   16754 mustload.go:65] Loading cluster: multinode-969000
	I0318 04:29:46.154509   16754 notify.go:220] Checking for updates...
	I0318 04:29:46.154650   16754 config.go:182] Loaded profile config "multinode-969000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I0318 04:29:46.154657   16754 status.go:255] checking status of multinode-969000 ...
	I0318 04:29:46.154865   16754 status.go:330] multinode-969000 host status = "Stopped" (err=<nil>)
	I0318 04:29:46.154869   16754 status.go:343] host is not running, skipping remaining checks
	I0318 04:29:46.154872   16754 status.go:257] multinode-969000 status: &{Name:multinode-969000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
multinode_test.go:267: incorrect number of running kubelets: args "out/minikube-darwin-arm64 -p multinode-969000 status --alsologtostderr": multinode-969000
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped

                                                
                                                
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-969000 -n multinode-969000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-969000 -n multinode-969000: exit status 7 (31.482ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-969000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/StopNode (0.14s)

TestMultiNode/serial/StartAfterStop (57.22s)

=== RUN   TestMultiNode/serial/StartAfterStop
multinode_test.go:282: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-969000 node start m03 -v=7 --alsologtostderr
multinode_test.go:282: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-969000 node start m03 -v=7 --alsologtostderr: exit status 85 (48.489125ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0318 04:29:46.218333   16758 out.go:291] Setting OutFile to fd 1 ...
	I0318 04:29:46.218734   16758 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0318 04:29:46.218737   16758 out.go:304] Setting ErrFile to fd 2...
	I0318 04:29:46.218740   16758 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0318 04:29:46.218897   16758 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18429-15072/.minikube/bin
	I0318 04:29:46.219117   16758 mustload.go:65] Loading cluster: multinode-969000
	I0318 04:29:46.219316   16758 config.go:182] Loaded profile config "multinode-969000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I0318 04:29:46.223454   16758 out.go:177] 
	W0318 04:29:46.227430   16758 out.go:239] X Exiting due to GUEST_NODE_RETRIEVE: retrieving node: Could not find node m03
	X Exiting due to GUEST_NODE_RETRIEVE: retrieving node: Could not find node m03
	W0318 04:29:46.227436   16758 out.go:239] * 
	* 
	W0318 04:29:46.229583   16758 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                         │
	│    * If the above advice does not help, please let us know:                                                             │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
	│                                                                                                                         │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
	│    * Please also attach the following file to the GitHub issue:                                                         │
	│    * - /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/minikube_node_1c3a1297795327375b61f3ff5a4ef34c9b2fc69b_0.log    │
	│                                                                                                                         │
	╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                         │
	│    * If the above advice does not help, please let us know:                                                             │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
	│                                                                                                                         │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
	│    * Please also attach the following file to the GitHub issue:                                                         │
	│    * - /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/minikube_node_1c3a1297795327375b61f3ff5a4ef34c9b2fc69b_0.log    │
	│                                                                                                                         │
	╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
	I0318 04:29:46.233410   16758 out.go:177] 

                                                
                                                
** /stderr **
multinode_test.go:284: I0318 04:29:46.218333   16758 out.go:291] Setting OutFile to fd 1 ...
I0318 04:29:46.218734   16758 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0318 04:29:46.218737   16758 out.go:304] Setting ErrFile to fd 2...
I0318 04:29:46.218740   16758 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0318 04:29:46.218897   16758 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18429-15072/.minikube/bin
I0318 04:29:46.219117   16758 mustload.go:65] Loading cluster: multinode-969000
I0318 04:29:46.219316   16758 config.go:182] Loaded profile config "multinode-969000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.4
I0318 04:29:46.223454   16758 out.go:177] 
W0318 04:29:46.227430   16758 out.go:239] X Exiting due to GUEST_NODE_RETRIEVE: retrieving node: Could not find node m03
X Exiting due to GUEST_NODE_RETRIEVE: retrieving node: Could not find node m03
W0318 04:29:46.227436   16758 out.go:239] * 
* 
W0318 04:29:46.229583   16758 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
│                                                                                                                         │
│    * If the above advice does not help, please let us know:                                                             │
│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
│                                                                                                                         │
│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
│    * Please also attach the following file to the GitHub issue:                                                         │
│    * - /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/minikube_node_1c3a1297795327375b61f3ff5a4ef34c9b2fc69b_0.log    │
│                                                                                                                         │
╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
│                                                                                                                         │
│    * If the above advice does not help, please let us know:                                                             │
│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
│                                                                                                                         │
│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
│    * Please also attach the following file to the GitHub issue:                                                         │
│    * - /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/minikube_node_1c3a1297795327375b61f3ff5a4ef34c9b2fc69b_0.log    │
│                                                                                                                         │
╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
I0318 04:29:46.233410   16758 out.go:177] 
multinode_test.go:285: node start returned an error. args "out/minikube-darwin-arm64 -p multinode-969000 node start m03 -v=7 --alsologtostderr": exit status 85
multinode_test.go:290: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-969000 status -v=7 --alsologtostderr
multinode_test.go:290: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-969000 status -v=7 --alsologtostderr: exit status 7 (32.485833ms)

                                                
                                                
-- stdout --
	multinode-969000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0318 04:29:46.266931   16760 out.go:291] Setting OutFile to fd 1 ...
	I0318 04:29:46.267075   16760 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0318 04:29:46.267078   16760 out.go:304] Setting ErrFile to fd 2...
	I0318 04:29:46.267081   16760 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0318 04:29:46.267191   16760 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18429-15072/.minikube/bin
	I0318 04:29:46.267314   16760 out.go:298] Setting JSON to false
	I0318 04:29:46.267325   16760 mustload.go:65] Loading cluster: multinode-969000
	I0318 04:29:46.267384   16760 notify.go:220] Checking for updates...
	I0318 04:29:46.267503   16760 config.go:182] Loaded profile config "multinode-969000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I0318 04:29:46.267510   16760 status.go:255] checking status of multinode-969000 ...
	I0318 04:29:46.267735   16760 status.go:330] multinode-969000 host status = "Stopped" (err=<nil>)
	I0318 04:29:46.267739   16760 status.go:343] host is not running, skipping remaining checks
	I0318 04:29:46.267741   16760 status.go:257] multinode-969000 status: &{Name:multinode-969000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
multinode_test.go:290: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-969000 status -v=7 --alsologtostderr
multinode_test.go:290: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-969000 status -v=7 --alsologtostderr: exit status 7 (80.621375ms)

                                                
                                                
-- stdout --
	multinode-969000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0318 04:29:47.212715   16762 out.go:291] Setting OutFile to fd 1 ...
	I0318 04:29:47.212901   16762 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0318 04:29:47.212905   16762 out.go:304] Setting ErrFile to fd 2...
	I0318 04:29:47.212908   16762 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0318 04:29:47.213066   16762 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18429-15072/.minikube/bin
	I0318 04:29:47.213251   16762 out.go:298] Setting JSON to false
	I0318 04:29:47.213269   16762 mustload.go:65] Loading cluster: multinode-969000
	I0318 04:29:47.213302   16762 notify.go:220] Checking for updates...
	I0318 04:29:47.213529   16762 config.go:182] Loaded profile config "multinode-969000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I0318 04:29:47.213538   16762 status.go:255] checking status of multinode-969000 ...
	I0318 04:29:47.213814   16762 status.go:330] multinode-969000 host status = "Stopped" (err=<nil>)
	I0318 04:29:47.213819   16762 status.go:343] host is not running, skipping remaining checks
	I0318 04:29:47.213822   16762 status.go:257] multinode-969000 status: &{Name:multinode-969000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
multinode_test.go:290: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-969000 status -v=7 --alsologtostderr
multinode_test.go:290: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-969000 status -v=7 --alsologtostderr: exit status 7 (74.569667ms)

                                                
                                                
-- stdout --
	multinode-969000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0318 04:29:49.233330   16764 out.go:291] Setting OutFile to fd 1 ...
	I0318 04:29:49.233549   16764 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0318 04:29:49.233553   16764 out.go:304] Setting ErrFile to fd 2...
	I0318 04:29:49.233556   16764 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0318 04:29:49.233709   16764 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18429-15072/.minikube/bin
	I0318 04:29:49.233849   16764 out.go:298] Setting JSON to false
	I0318 04:29:49.233869   16764 mustload.go:65] Loading cluster: multinode-969000
	I0318 04:29:49.233909   16764 notify.go:220] Checking for updates...
	I0318 04:29:49.234102   16764 config.go:182] Loaded profile config "multinode-969000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I0318 04:29:49.234110   16764 status.go:255] checking status of multinode-969000 ...
	I0318 04:29:49.234343   16764 status.go:330] multinode-969000 host status = "Stopped" (err=<nil>)
	I0318 04:29:49.234347   16764 status.go:343] host is not running, skipping remaining checks
	I0318 04:29:49.234350   16764 status.go:257] multinode-969000 status: &{Name:multinode-969000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
multinode_test.go:290: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-969000 status -v=7 --alsologtostderr
multinode_test.go:290: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-969000 status -v=7 --alsologtostderr: exit status 7 (78.372334ms)

                                                
                                                
-- stdout --
	multinode-969000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0318 04:29:52.324253   16766 out.go:291] Setting OutFile to fd 1 ...
	I0318 04:29:52.324437   16766 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0318 04:29:52.324441   16766 out.go:304] Setting ErrFile to fd 2...
	I0318 04:29:52.324444   16766 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0318 04:29:52.324603   16766 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18429-15072/.minikube/bin
	I0318 04:29:52.324750   16766 out.go:298] Setting JSON to false
	I0318 04:29:52.324765   16766 mustload.go:65] Loading cluster: multinode-969000
	I0318 04:29:52.324800   16766 notify.go:220] Checking for updates...
	I0318 04:29:52.325021   16766 config.go:182] Loaded profile config "multinode-969000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I0318 04:29:52.325030   16766 status.go:255] checking status of multinode-969000 ...
	I0318 04:29:52.325287   16766 status.go:330] multinode-969000 host status = "Stopped" (err=<nil>)
	I0318 04:29:52.325291   16766 status.go:343] host is not running, skipping remaining checks
	I0318 04:29:52.325295   16766 status.go:257] multinode-969000 status: &{Name:multinode-969000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
multinode_test.go:290: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-969000 status -v=7 --alsologtostderr
multinode_test.go:290: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-969000 status -v=7 --alsologtostderr: exit status 7 (75.614417ms)

                                                
                                                
-- stdout --
	multinode-969000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0318 04:29:56.012495   16768 out.go:291] Setting OutFile to fd 1 ...
	I0318 04:29:56.012676   16768 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0318 04:29:56.012680   16768 out.go:304] Setting ErrFile to fd 2...
	I0318 04:29:56.012683   16768 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0318 04:29:56.012851   16768 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18429-15072/.minikube/bin
	I0318 04:29:56.013010   16768 out.go:298] Setting JSON to false
	I0318 04:29:56.013030   16768 mustload.go:65] Loading cluster: multinode-969000
	I0318 04:29:56.013061   16768 notify.go:220] Checking for updates...
	I0318 04:29:56.013307   16768 config.go:182] Loaded profile config "multinode-969000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I0318 04:29:56.013316   16768 status.go:255] checking status of multinode-969000 ...
	I0318 04:29:56.013572   16768 status.go:330] multinode-969000 host status = "Stopped" (err=<nil>)
	I0318 04:29:56.013576   16768 status.go:343] host is not running, skipping remaining checks
	I0318 04:29:56.013579   16768 status.go:257] multinode-969000 status: &{Name:multinode-969000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
multinode_test.go:290: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-969000 status -v=7 --alsologtostderr
multinode_test.go:290: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-969000 status -v=7 --alsologtostderr: exit status 7 (76.205458ms)

                                                
                                                
-- stdout --
	multinode-969000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0318 04:29:59.592198   16770 out.go:291] Setting OutFile to fd 1 ...
	I0318 04:29:59.592350   16770 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0318 04:29:59.592355   16770 out.go:304] Setting ErrFile to fd 2...
	I0318 04:29:59.592358   16770 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0318 04:29:59.592524   16770 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18429-15072/.minikube/bin
	I0318 04:29:59.592680   16770 out.go:298] Setting JSON to false
	I0318 04:29:59.592696   16770 mustload.go:65] Loading cluster: multinode-969000
	I0318 04:29:59.592728   16770 notify.go:220] Checking for updates...
	I0318 04:29:59.592948   16770 config.go:182] Loaded profile config "multinode-969000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I0318 04:29:59.592959   16770 status.go:255] checking status of multinode-969000 ...
	I0318 04:29:59.593174   16770 status.go:330] multinode-969000 host status = "Stopped" (err=<nil>)
	I0318 04:29:59.593178   16770 status.go:343] host is not running, skipping remaining checks
	I0318 04:29:59.593181   16770 status.go:257] multinode-969000 status: &{Name:multinode-969000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
multinode_test.go:290: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-969000 status -v=7 --alsologtostderr
multinode_test.go:290: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-969000 status -v=7 --alsologtostderr: exit status 7 (77.623375ms)

                                                
                                                
-- stdout --
	multinode-969000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0318 04:30:07.813476   16787 out.go:291] Setting OutFile to fd 1 ...
	I0318 04:30:07.813669   16787 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0318 04:30:07.813674   16787 out.go:304] Setting ErrFile to fd 2...
	I0318 04:30:07.813677   16787 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0318 04:30:07.813864   16787 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18429-15072/.minikube/bin
	I0318 04:30:07.814013   16787 out.go:298] Setting JSON to false
	I0318 04:30:07.814028   16787 mustload.go:65] Loading cluster: multinode-969000
	I0318 04:30:07.814064   16787 notify.go:220] Checking for updates...
	I0318 04:30:07.814263   16787 config.go:182] Loaded profile config "multinode-969000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I0318 04:30:07.814272   16787 status.go:255] checking status of multinode-969000 ...
	I0318 04:30:07.814547   16787 status.go:330] multinode-969000 host status = "Stopped" (err=<nil>)
	I0318 04:30:07.814552   16787 status.go:343] host is not running, skipping remaining checks
	I0318 04:30:07.814555   16787 status.go:257] multinode-969000 status: &{Name:multinode-969000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
multinode_test.go:290: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-969000 status -v=7 --alsologtostderr
multinode_test.go:290: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-969000 status -v=7 --alsologtostderr: exit status 7 (77.454625ms)

                                                
                                                
-- stdout --
	multinode-969000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0318 04:30:20.652132   16789 out.go:291] Setting OutFile to fd 1 ...
	I0318 04:30:20.652336   16789 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0318 04:30:20.652340   16789 out.go:304] Setting ErrFile to fd 2...
	I0318 04:30:20.652342   16789 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0318 04:30:20.652518   16789 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18429-15072/.minikube/bin
	I0318 04:30:20.652696   16789 out.go:298] Setting JSON to false
	I0318 04:30:20.652711   16789 mustload.go:65] Loading cluster: multinode-969000
	I0318 04:30:20.652751   16789 notify.go:220] Checking for updates...
	I0318 04:30:20.652961   16789 config.go:182] Loaded profile config "multinode-969000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I0318 04:30:20.652969   16789 status.go:255] checking status of multinode-969000 ...
	I0318 04:30:20.653240   16789 status.go:330] multinode-969000 host status = "Stopped" (err=<nil>)
	I0318 04:30:20.653245   16789 status.go:343] host is not running, skipping remaining checks
	I0318 04:30:20.653248   16789 status.go:257] multinode-969000 status: &{Name:multinode-969000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
multinode_test.go:290: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-969000 status -v=7 --alsologtostderr
multinode_test.go:290: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-969000 status -v=7 --alsologtostderr: exit status 7 (77.056333ms)

                                                
                                                
-- stdout --
	multinode-969000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0318 04:30:43.367376   16794 out.go:291] Setting OutFile to fd 1 ...
	I0318 04:30:43.367569   16794 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0318 04:30:43.367573   16794 out.go:304] Setting ErrFile to fd 2...
	I0318 04:30:43.367576   16794 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0318 04:30:43.367745   16794 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18429-15072/.minikube/bin
	I0318 04:30:43.367920   16794 out.go:298] Setting JSON to false
	I0318 04:30:43.367936   16794 mustload.go:65] Loading cluster: multinode-969000
	I0318 04:30:43.367965   16794 notify.go:220] Checking for updates...
	I0318 04:30:43.368195   16794 config.go:182] Loaded profile config "multinode-969000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I0318 04:30:43.368203   16794 status.go:255] checking status of multinode-969000 ...
	I0318 04:30:43.368481   16794 status.go:330] multinode-969000 host status = "Stopped" (err=<nil>)
	I0318 04:30:43.368486   16794 status.go:343] host is not running, skipping remaining checks
	I0318 04:30:43.368489   16794 status.go:257] multinode-969000 status: &{Name:multinode-969000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
multinode_test.go:294: failed to run minikube status. args "out/minikube-darwin-arm64 -p multinode-969000 status -v=7 --alsologtostderr" : exit status 7
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-969000 -n multinode-969000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-969000 -n multinode-969000: exit status 7 (34.151708ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-969000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/StartAfterStop (57.22s)

TestMultiNode/serial/RestartKeepsNodes (8.27s)

=== RUN   TestMultiNode/serial/RestartKeepsNodes
multinode_test.go:314: (dbg) Run:  out/minikube-darwin-arm64 node list -p multinode-969000
multinode_test.go:321: (dbg) Run:  out/minikube-darwin-arm64 stop -p multinode-969000
multinode_test.go:321: (dbg) Done: out/minikube-darwin-arm64 stop -p multinode-969000: (2.895603125s)
multinode_test.go:326: (dbg) Run:  out/minikube-darwin-arm64 start -p multinode-969000 --wait=true -v=8 --alsologtostderr
multinode_test.go:326: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p multinode-969000 --wait=true -v=8 --alsologtostderr: exit status 80 (5.233873625s)

                                                
                                                
-- stdout --
	* [multinode-969000] minikube v1.32.0 on Darwin 14.3.1 (arm64)
	  - MINIKUBE_LOCATION=18429
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18429-15072/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18429-15072/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting "multinode-969000" primary control-plane node in "multinode-969000" cluster
	* Restarting existing qemu2 VM for "multinode-969000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "multinode-969000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0318 04:30:46.399405   16818 out.go:291] Setting OutFile to fd 1 ...
	I0318 04:30:46.399569   16818 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0318 04:30:46.399574   16818 out.go:304] Setting ErrFile to fd 2...
	I0318 04:30:46.399576   16818 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0318 04:30:46.399734   16818 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18429-15072/.minikube/bin
	I0318 04:30:46.400876   16818 out.go:298] Setting JSON to false
	I0318 04:30:46.419785   16818 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":9019,"bootTime":1710752427,"procs":476,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.3.1","kernelVersion":"23.3.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0318 04:30:46.419844   16818 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0318 04:30:46.424721   16818 out.go:177] * [multinode-969000] minikube v1.32.0 on Darwin 14.3.1 (arm64)
	I0318 04:30:46.432885   16818 out.go:177]   - MINIKUBE_LOCATION=18429
	I0318 04:30:46.432937   16818 notify.go:220] Checking for updates...
	I0318 04:30:46.440843   16818 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/18429-15072/kubeconfig
	I0318 04:30:46.443840   16818 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0318 04:30:46.446780   16818 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0318 04:30:46.449799   16818 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18429-15072/.minikube
	I0318 04:30:46.452823   16818 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0318 04:30:46.456095   16818 config.go:182] Loaded profile config "multinode-969000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I0318 04:30:46.456150   16818 driver.go:392] Setting default libvirt URI to qemu:///system
	I0318 04:30:46.460819   16818 out.go:177] * Using the qemu2 driver based on existing profile
	I0318 04:30:46.467782   16818 start.go:297] selected driver: qemu2
	I0318 04:30:46.467788   16818 start.go:901] validating driver "qemu2" against &{Name:multinode-969000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17991/minikube-v1.32.1-1710520390-17991-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kube
rnetesVersion:v1.28.4 ClusterName:multinode-969000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] M
ountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0318 04:30:46.467840   16818 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0318 04:30:46.470240   16818 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0318 04:30:46.470280   16818 cni.go:84] Creating CNI manager for ""
	I0318 04:30:46.470287   16818 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I0318 04:30:46.470334   16818 start.go:340] cluster config:
	{Name:multinode-969000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17991/minikube-v1.32.1-1710520390-17991-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:multinode-969000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0318 04:30:46.474927   16818 iso.go:125] acquiring lock: {Name:mkb8143674083e0c7a46a3ed751b3800392bcd24 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0318 04:30:46.481867   16818 out.go:177] * Starting "multinode-969000" primary control-plane node in "multinode-969000" cluster
	I0318 04:30:46.485789   16818 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime docker
	I0318 04:30:46.485807   16818 preload.go:147] Found local preload: /Users/jenkins/minikube-integration/18429-15072/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-docker-overlay2-arm64.tar.lz4
	I0318 04:30:46.485815   16818 cache.go:56] Caching tarball of preloaded images
	I0318 04:30:46.485866   16818 preload.go:173] Found /Users/jenkins/minikube-integration/18429-15072/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0318 04:30:46.485871   16818 cache.go:59] Finished verifying existence of preloaded tar for v1.28.4 on docker
	I0318 04:30:46.485929   16818 profile.go:142] Saving config to /Users/jenkins/minikube-integration/18429-15072/.minikube/profiles/multinode-969000/config.json ...
	I0318 04:30:46.486384   16818 start.go:360] acquireMachinesLock for multinode-969000: {Name:mkdef65b5c2b3344d8453e477bf2f170fbff3359 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0318 04:30:46.486418   16818 start.go:364] duration metric: took 27.583µs to acquireMachinesLock for "multinode-969000"
	I0318 04:30:46.486433   16818 start.go:96] Skipping create...Using existing machine configuration
	I0318 04:30:46.486439   16818 fix.go:54] fixHost starting: 
	I0318 04:30:46.486577   16818 fix.go:112] recreateIfNeeded on multinode-969000: state=Stopped err=<nil>
	W0318 04:30:46.486587   16818 fix.go:138] unexpected machine state, will restart: <nil>
	I0318 04:30:46.490669   16818 out.go:177] * Restarting existing qemu2 VM for "multinode-969000" ...
	I0318 04:30:46.498870   16818 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18429-15072/.minikube/machines/multinode-969000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18429-15072/.minikube/machines/multinode-969000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18429-15072/.minikube/machines/multinode-969000/qemu.pid -device virtio-net-pci,netdev=net0,mac=e6:33:ef:22:a5:cb -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18429-15072/.minikube/machines/multinode-969000/disk.qcow2
	I0318 04:30:46.501055   16818 main.go:141] libmachine: STDOUT: 
	I0318 04:30:46.501081   16818 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0318 04:30:46.501111   16818 fix.go:56] duration metric: took 14.672167ms for fixHost
	I0318 04:30:46.501118   16818 start.go:83] releasing machines lock for "multinode-969000", held for 14.695125ms
	W0318 04:30:46.501125   16818 start.go:713] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0318 04:30:46.501163   16818 out.go:239] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0318 04:30:46.501168   16818 start.go:728] Will try again in 5 seconds ...
	I0318 04:30:51.503114   16818 start.go:360] acquireMachinesLock for multinode-969000: {Name:mkdef65b5c2b3344d8453e477bf2f170fbff3359 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0318 04:30:51.503476   16818 start.go:364] duration metric: took 290.625µs to acquireMachinesLock for "multinode-969000"
	I0318 04:30:51.503600   16818 start.go:96] Skipping create...Using existing machine configuration
	I0318 04:30:51.503618   16818 fix.go:54] fixHost starting: 
	I0318 04:30:51.504286   16818 fix.go:112] recreateIfNeeded on multinode-969000: state=Stopped err=<nil>
	W0318 04:30:51.504312   16818 fix.go:138] unexpected machine state, will restart: <nil>
	I0318 04:30:51.513624   16818 out.go:177] * Restarting existing qemu2 VM for "multinode-969000" ...
	I0318 04:30:51.517815   16818 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18429-15072/.minikube/machines/multinode-969000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18429-15072/.minikube/machines/multinode-969000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18429-15072/.minikube/machines/multinode-969000/qemu.pid -device virtio-net-pci,netdev=net0,mac=e6:33:ef:22:a5:cb -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18429-15072/.minikube/machines/multinode-969000/disk.qcow2
	I0318 04:30:51.527438   16818 main.go:141] libmachine: STDOUT: 
	I0318 04:30:51.527506   16818 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0318 04:30:51.527643   16818 fix.go:56] duration metric: took 23.991708ms for fixHost
	I0318 04:30:51.527668   16818 start.go:83] releasing machines lock for "multinode-969000", held for 24.171167ms
	W0318 04:30:51.527829   16818 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p multinode-969000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p multinode-969000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0318 04:30:51.535682   16818 out.go:177] 
	W0318 04:30:51.539756   16818 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0318 04:30:51.539791   16818 out.go:239] * 
	* 
	W0318 04:30:51.542590   16818 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0318 04:30:51.551647   16818 out.go:177] 

                                                
                                                
** /stderr **
multinode_test.go:328: failed to run minikube start. args "out/minikube-darwin-arm64 node list -p multinode-969000" : exit status 80
multinode_test.go:331: (dbg) Run:  out/minikube-darwin-arm64 node list -p multinode-969000
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-969000 -n multinode-969000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-969000 -n multinode-969000: exit status 7 (34.146708ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-969000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/RestartKeepsNodes (8.27s)
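Each restart attempt above fails before the guest boots: qemu-system-aarch64 is launched through /opt/socket_vmnet/bin/socket_vmnet_client, and that client cannot reach the socket_vmnet daemon at /var/run/socket_vmnet ("Connection refused"). A minimal check of the daemon on the build host, independent of minikube, might look like the sketch below; it assumes only the socket path shown in the log, and none of these commands come from the test itself:

	# Is anything serving the unix socket the qemu2 driver dials?
	ls -l /var/run/socket_vmnet
	pgrep -fl socket_vmnet
	# Reproduce the failing connection attempt without qemu;
	# "Connection refused" here means the daemon is not accepting connections.
	nc -U /var/run/socket_vmnet < /dev/null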

                                                
                                    
TestMultiNode/serial/DeleteNode (0.11s)

                                                
                                                
=== RUN   TestMultiNode/serial/DeleteNode
multinode_test.go:416: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-969000 node delete m03
multinode_test.go:416: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-969000 node delete m03: exit status 83 (44.194208ms)

                                                
                                                
-- stdout --
	* The control-plane node multinode-969000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p multinode-969000"

                                                
                                                
-- /stdout --
multinode_test.go:418: node delete returned an error. args "out/minikube-darwin-arm64 -p multinode-969000 node delete m03": exit status 83
multinode_test.go:422: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-969000 status --alsologtostderr
multinode_test.go:422: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-969000 status --alsologtostderr: exit status 7 (32.180875ms)

                                                
                                                
-- stdout --
	multinode-969000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0318 04:30:51.747039   16832 out.go:291] Setting OutFile to fd 1 ...
	I0318 04:30:51.747202   16832 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0318 04:30:51.747205   16832 out.go:304] Setting ErrFile to fd 2...
	I0318 04:30:51.747208   16832 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0318 04:30:51.747348   16832 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18429-15072/.minikube/bin
	I0318 04:30:51.747476   16832 out.go:298] Setting JSON to false
	I0318 04:30:51.747491   16832 mustload.go:65] Loading cluster: multinode-969000
	I0318 04:30:51.747545   16832 notify.go:220] Checking for updates...
	I0318 04:30:51.747662   16832 config.go:182] Loaded profile config "multinode-969000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I0318 04:30:51.747669   16832 status.go:255] checking status of multinode-969000 ...
	I0318 04:30:51.747892   16832 status.go:330] multinode-969000 host status = "Stopped" (err=<nil>)
	I0318 04:30:51.747896   16832 status.go:343] host is not running, skipping remaining checks
	I0318 04:30:51.747905   16832 status.go:257] multinode-969000 status: &{Name:multinode-969000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
multinode_test.go:424: failed to run minikube status. args "out/minikube-darwin-arm64 -p multinode-969000 status --alsologtostderr" : exit status 7
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-969000 -n multinode-969000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-969000 -n multinode-969000: exit status 7 (31.58075ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-969000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/DeleteNode (0.11s)

                                                
                                    
TestMultiNode/serial/StopMultiNode (2.19s)

                                                
                                                
=== RUN   TestMultiNode/serial/StopMultiNode
multinode_test.go:345: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-969000 stop
multinode_test.go:345: (dbg) Done: out/minikube-darwin-arm64 -p multinode-969000 stop: (2.055368042s)
multinode_test.go:351: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-969000 status
multinode_test.go:351: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-969000 status: exit status 7 (66.211208ms)

                                                
                                                
-- stdout --
	multinode-969000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
multinode_test.go:358: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-969000 status --alsologtostderr
multinode_test.go:358: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-969000 status --alsologtostderr: exit status 7 (33.555541ms)

                                                
                                                
-- stdout --
	multinode-969000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0318 04:30:53.934367   16850 out.go:291] Setting OutFile to fd 1 ...
	I0318 04:30:53.934512   16850 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0318 04:30:53.934515   16850 out.go:304] Setting ErrFile to fd 2...
	I0318 04:30:53.934517   16850 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0318 04:30:53.934647   16850 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18429-15072/.minikube/bin
	I0318 04:30:53.934758   16850 out.go:298] Setting JSON to false
	I0318 04:30:53.934770   16850 mustload.go:65] Loading cluster: multinode-969000
	I0318 04:30:53.934833   16850 notify.go:220] Checking for updates...
	I0318 04:30:53.934976   16850 config.go:182] Loaded profile config "multinode-969000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I0318 04:30:53.934983   16850 status.go:255] checking status of multinode-969000 ...
	I0318 04:30:53.935193   16850 status.go:330] multinode-969000 host status = "Stopped" (err=<nil>)
	I0318 04:30:53.935197   16850 status.go:343] host is not running, skipping remaining checks
	I0318 04:30:53.935199   16850 status.go:257] multinode-969000 status: &{Name:multinode-969000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
multinode_test.go:364: incorrect number of stopped hosts: args "out/minikube-darwin-arm64 -p multinode-969000 status --alsologtostderr": multinode-969000
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped

                                                
                                                
multinode_test.go:368: incorrect number of stopped kubelets: args "out/minikube-darwin-arm64 -p multinode-969000 status --alsologtostderr": multinode-969000
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped

                                                
                                                
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-969000 -n multinode-969000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-969000 -n multinode-969000: exit status 7 (32.038208ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-969000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/StopMultiNode (2.19s)

                                                
                                    
TestMultiNode/serial/RestartMultiNode (5.28s)

                                                
                                                
=== RUN   TestMultiNode/serial/RestartMultiNode
multinode_test.go:376: (dbg) Run:  out/minikube-darwin-arm64 start -p multinode-969000 --wait=true -v=8 --alsologtostderr --driver=qemu2 
multinode_test.go:376: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p multinode-969000 --wait=true -v=8 --alsologtostderr --driver=qemu2 : exit status 80 (5.20332325s)

                                                
                                                
-- stdout --
	* [multinode-969000] minikube v1.32.0 on Darwin 14.3.1 (arm64)
	  - MINIKUBE_LOCATION=18429
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18429-15072/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18429-15072/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting "multinode-969000" primary control-plane node in "multinode-969000" cluster
	* Restarting existing qemu2 VM for "multinode-969000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "multinode-969000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0318 04:30:53.998089   16854 out.go:291] Setting OutFile to fd 1 ...
	I0318 04:30:53.998212   16854 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0318 04:30:53.998215   16854 out.go:304] Setting ErrFile to fd 2...
	I0318 04:30:53.998217   16854 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0318 04:30:53.998340   16854 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18429-15072/.minikube/bin
	I0318 04:30:53.999325   16854 out.go:298] Setting JSON to false
	I0318 04:30:54.015577   16854 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":9027,"bootTime":1710752427,"procs":474,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.3.1","kernelVersion":"23.3.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0318 04:30:54.015634   16854 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0318 04:30:54.019585   16854 out.go:177] * [multinode-969000] minikube v1.32.0 on Darwin 14.3.1 (arm64)
	I0318 04:30:54.027433   16854 out.go:177]   - MINIKUBE_LOCATION=18429
	I0318 04:30:54.027491   16854 notify.go:220] Checking for updates...
	I0318 04:30:54.035251   16854 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/18429-15072/kubeconfig
	I0318 04:30:54.038387   16854 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0318 04:30:54.041421   16854 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0318 04:30:54.049362   16854 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18429-15072/.minikube
	I0318 04:30:54.056394   16854 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0318 04:30:54.060639   16854 config.go:182] Loaded profile config "multinode-969000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I0318 04:30:54.060906   16854 driver.go:392] Setting default libvirt URI to qemu:///system
	I0318 04:30:54.065376   16854 out.go:177] * Using the qemu2 driver based on existing profile
	I0318 04:30:54.071416   16854 start.go:297] selected driver: qemu2
	I0318 04:30:54.071422   16854 start.go:901] validating driver "qemu2" against &{Name:multinode-969000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17991/minikube-v1.32.1-1710520390-17991-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:multinode-969000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0318 04:30:54.071479   16854 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0318 04:30:54.073877   16854 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0318 04:30:54.073925   16854 cni.go:84] Creating CNI manager for ""
	I0318 04:30:54.073931   16854 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I0318 04:30:54.073978   16854 start.go:340] cluster config:
	{Name:multinode-969000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17991/minikube-v1.32.1-1710520390-17991-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:multinode-969000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0318 04:30:54.078668   16854 iso.go:125] acquiring lock: {Name:mkb8143674083e0c7a46a3ed751b3800392bcd24 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0318 04:30:54.086391   16854 out.go:177] * Starting "multinode-969000" primary control-plane node in "multinode-969000" cluster
	I0318 04:30:54.090409   16854 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime docker
	I0318 04:30:54.090423   16854 preload.go:147] Found local preload: /Users/jenkins/minikube-integration/18429-15072/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-docker-overlay2-arm64.tar.lz4
	I0318 04:30:54.090434   16854 cache.go:56] Caching tarball of preloaded images
	I0318 04:30:54.090487   16854 preload.go:173] Found /Users/jenkins/minikube-integration/18429-15072/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0318 04:30:54.090493   16854 cache.go:59] Finished verifying existence of preloaded tar for v1.28.4 on docker
	I0318 04:30:54.090547   16854 profile.go:142] Saving config to /Users/jenkins/minikube-integration/18429-15072/.minikube/profiles/multinode-969000/config.json ...
	I0318 04:30:54.091006   16854 start.go:360] acquireMachinesLock for multinode-969000: {Name:mkdef65b5c2b3344d8453e477bf2f170fbff3359 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0318 04:30:54.091038   16854 start.go:364] duration metric: took 25.625µs to acquireMachinesLock for "multinode-969000"
	I0318 04:30:54.091049   16854 start.go:96] Skipping create...Using existing machine configuration
	I0318 04:30:54.091056   16854 fix.go:54] fixHost starting: 
	I0318 04:30:54.091182   16854 fix.go:112] recreateIfNeeded on multinode-969000: state=Stopped err=<nil>
	W0318 04:30:54.091191   16854 fix.go:138] unexpected machine state, will restart: <nil>
	I0318 04:30:54.099416   16854 out.go:177] * Restarting existing qemu2 VM for "multinode-969000" ...
	I0318 04:30:54.103287   16854 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18429-15072/.minikube/machines/multinode-969000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18429-15072/.minikube/machines/multinode-969000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18429-15072/.minikube/machines/multinode-969000/qemu.pid -device virtio-net-pci,netdev=net0,mac=e6:33:ef:22:a5:cb -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18429-15072/.minikube/machines/multinode-969000/disk.qcow2
	I0318 04:30:54.105446   16854 main.go:141] libmachine: STDOUT: 
	I0318 04:30:54.105471   16854 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0318 04:30:54.105503   16854 fix.go:56] duration metric: took 14.447667ms for fixHost
	I0318 04:30:54.105510   16854 start.go:83] releasing machines lock for "multinode-969000", held for 14.467125ms
	W0318 04:30:54.105518   16854 start.go:713] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0318 04:30:54.105561   16854 out.go:239] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0318 04:30:54.105567   16854 start.go:728] Will try again in 5 seconds ...
	I0318 04:30:59.107573   16854 start.go:360] acquireMachinesLock for multinode-969000: {Name:mkdef65b5c2b3344d8453e477bf2f170fbff3359 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0318 04:30:59.107947   16854 start.go:364] duration metric: took 279.167µs to acquireMachinesLock for "multinode-969000"
	I0318 04:30:59.108070   16854 start.go:96] Skipping create...Using existing machine configuration
	I0318 04:30:59.108092   16854 fix.go:54] fixHost starting: 
	I0318 04:30:59.108776   16854 fix.go:112] recreateIfNeeded on multinode-969000: state=Stopped err=<nil>
	W0318 04:30:59.108802   16854 fix.go:138] unexpected machine state, will restart: <nil>
	I0318 04:30:59.119157   16854 out.go:177] * Restarting existing qemu2 VM for "multinode-969000" ...
	I0318 04:30:59.125356   16854 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18429-15072/.minikube/machines/multinode-969000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18429-15072/.minikube/machines/multinode-969000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18429-15072/.minikube/machines/multinode-969000/qemu.pid -device virtio-net-pci,netdev=net0,mac=e6:33:ef:22:a5:cb -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18429-15072/.minikube/machines/multinode-969000/disk.qcow2
	I0318 04:30:59.135359   16854 main.go:141] libmachine: STDOUT: 
	I0318 04:30:59.135437   16854 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0318 04:30:59.135515   16854 fix.go:56] duration metric: took 27.42625ms for fixHost
	I0318 04:30:59.135546   16854 start.go:83] releasing machines lock for "multinode-969000", held for 27.577667ms
	W0318 04:30:59.135763   16854 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p multinode-969000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p multinode-969000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0318 04:30:59.144301   16854 out.go:177] 
	W0318 04:30:59.147195   16854 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0318 04:30:59.147272   16854 out.go:239] * 
	* 
	W0318 04:30:59.150075   16854 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0318 04:30:59.158114   16854 out.go:177] 

                                                
                                                
** /stderr **
multinode_test.go:378: failed to start cluster. args "out/minikube-darwin-arm64 start -p multinode-969000 --wait=true -v=8 --alsologtostderr --driver=qemu2 " : exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-969000 -n multinode-969000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-969000 -n multinode-969000: exit status 7 (71.02675ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-969000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/RestartMultiNode (5.28s)

                                                
                                    
TestMultiNode/serial/ValidateNameConflict (20.22s)

                                                
                                                
=== RUN   TestMultiNode/serial/ValidateNameConflict
multinode_test.go:455: (dbg) Run:  out/minikube-darwin-arm64 node list -p multinode-969000
multinode_test.go:464: (dbg) Run:  out/minikube-darwin-arm64 start -p multinode-969000-m01 --driver=qemu2 
multinode_test.go:464: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p multinode-969000-m01 --driver=qemu2 : exit status 80 (10.017295208s)

                                                
                                                
-- stdout --
	* [multinode-969000-m01] minikube v1.32.0 on Darwin 14.3.1 (arm64)
	  - MINIKUBE_LOCATION=18429
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18429-15072/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18429-15072/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "multinode-969000-m01" primary control-plane node in "multinode-969000-m01" cluster
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "multinode-969000-m01" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p multinode-969000-m01" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
multinode_test.go:472: (dbg) Run:  out/minikube-darwin-arm64 start -p multinode-969000-m02 --driver=qemu2 
multinode_test.go:472: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p multinode-969000-m02 --driver=qemu2 : exit status 80 (9.9464715s)

                                                
                                                
-- stdout --
	* [multinode-969000-m02] minikube v1.32.0 on Darwin 14.3.1 (arm64)
	  - MINIKUBE_LOCATION=18429
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18429-15072/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18429-15072/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "multinode-969000-m02" primary control-plane node in "multinode-969000-m02" cluster
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "multinode-969000-m02" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p multinode-969000-m02" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
multinode_test.go:474: failed to start profile. args "out/minikube-darwin-arm64 start -p multinode-969000-m02 --driver=qemu2 " : exit status 80
multinode_test.go:479: (dbg) Run:  out/minikube-darwin-arm64 node add -p multinode-969000
multinode_test.go:479: (dbg) Non-zero exit: out/minikube-darwin-arm64 node add -p multinode-969000: exit status 83 (83.638291ms)

                                                
                                                
-- stdout --
	* The control-plane node multinode-969000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p multinode-969000"

                                                
                                                
-- /stdout --
multinode_test.go:484: (dbg) Run:  out/minikube-darwin-arm64 delete -p multinode-969000-m02
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-969000 -n multinode-969000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-969000 -n multinode-969000: exit status 7 (32.175792ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-969000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/ValidateNameConflict (20.22s)
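The same connection-refused failure also hits the freshly created multinode-969000-m01 and multinode-969000-m02 profiles, so the problem is host-level (the socket_vmnet daemon) rather than anything specific to the multinode-969000 profile. A possible recovery sketch, assuming socket_vmnet was installed via Homebrew and is managed as a root launchd service (the report does not show how it was installed on this agent):

	# Restart the daemon, then retry one of the failing starts.
	sudo brew services restart socket_vmnet
	out/minikube-darwin-arm64 delete -p multinode-969000
	out/minikube-darwin-arm64 start -p multinode-969000 --driver=qemu2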

                                                
                                    
TestPreload (10.07s)

                                                
                                                
=== RUN   TestPreload
preload_test.go:44: (dbg) Run:  out/minikube-darwin-arm64 start -p test-preload-447000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=qemu2  --kubernetes-version=v1.24.4
preload_test.go:44: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p test-preload-447000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=qemu2  --kubernetes-version=v1.24.4: exit status 80 (9.89582625s)

                                                
                                                
-- stdout --
	* [test-preload-447000] minikube v1.32.0 on Darwin 14.3.1 (arm64)
	  - MINIKUBE_LOCATION=18429
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18429-15072/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18429-15072/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "test-preload-447000" primary control-plane node in "test-preload-447000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "test-preload-447000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0318 04:31:19.643405   16915 out.go:291] Setting OutFile to fd 1 ...
	I0318 04:31:19.643542   16915 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0318 04:31:19.643546   16915 out.go:304] Setting ErrFile to fd 2...
	I0318 04:31:19.643548   16915 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0318 04:31:19.643683   16915 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18429-15072/.minikube/bin
	I0318 04:31:19.644754   16915 out.go:298] Setting JSON to false
	I0318 04:31:19.660719   16915 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":9052,"bootTime":1710752427,"procs":479,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.3.1","kernelVersion":"23.3.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0318 04:31:19.660786   16915 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0318 04:31:19.666392   16915 out.go:177] * [test-preload-447000] minikube v1.32.0 on Darwin 14.3.1 (arm64)
	I0318 04:31:19.679305   16915 out.go:177]   - MINIKUBE_LOCATION=18429
	I0318 04:31:19.674313   16915 notify.go:220] Checking for updates...
	I0318 04:31:19.685251   16915 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/18429-15072/kubeconfig
	I0318 04:31:19.688299   16915 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0318 04:31:19.692260   16915 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0318 04:31:19.695435   16915 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18429-15072/.minikube
	I0318 04:31:19.698269   16915 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0318 04:31:19.701780   16915 config.go:182] Loaded profile config "multinode-969000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I0318 04:31:19.701835   16915 driver.go:392] Setting default libvirt URI to qemu:///system
	I0318 04:31:19.706315   16915 out.go:177] * Using the qemu2 driver based on user configuration
	I0318 04:31:19.713307   16915 start.go:297] selected driver: qemu2
	I0318 04:31:19.713312   16915 start.go:901] validating driver "qemu2" against <nil>
	I0318 04:31:19.713317   16915 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0318 04:31:19.715751   16915 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0318 04:31:19.720280   16915 out.go:177] * Automatically selected the socket_vmnet network
	I0318 04:31:19.724424   16915 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0318 04:31:19.724470   16915 cni.go:84] Creating CNI manager for ""
	I0318 04:31:19.724478   16915 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0318 04:31:19.724488   16915 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0318 04:31:19.724538   16915 start.go:340] cluster config:
	{Name:test-preload-447000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.4 ClusterName:test-preload-447000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.24.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0318 04:31:19.729509   16915 iso.go:125] acquiring lock: {Name:mkb8143674083e0c7a46a3ed751b3800392bcd24 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0318 04:31:19.738327   16915 out.go:177] * Starting "test-preload-447000" primary control-plane node in "test-preload-447000" cluster
	I0318 04:31:19.741274   16915 preload.go:132] Checking if preload exists for k8s version v1.24.4 and runtime docker
	I0318 04:31:19.741355   16915 profile.go:142] Saving config to /Users/jenkins/minikube-integration/18429-15072/.minikube/profiles/test-preload-447000/config.json ...
	I0318 04:31:19.741371   16915 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/18429-15072/.minikube/profiles/test-preload-447000/config.json: {Name:mk35e367595c7be1486afa244c9589d4379a9512 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0318 04:31:19.741385   16915 cache.go:107] acquiring lock: {Name:mk368de4369b4269f4f86d0406c895e179ee8d50 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0318 04:31:19.741413   16915 cache.go:107] acquiring lock: {Name:mkc96c1480c5a2d914c1c49e292170f8344a2688 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0318 04:31:19.741431   16915 cache.go:107] acquiring lock: {Name:mkf7b7fb61123462cfdc3712a5f7461103dbb5ed Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0318 04:31:19.741416   16915 cache.go:107] acquiring lock: {Name:mk38f29fc3fa1019ecc7c0e492b5fc094edf4558 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0318 04:31:19.741650   16915 cache.go:107] acquiring lock: {Name:mk25e0a3aa1e25e23d4f79dfcc11bfb8c4044a19 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0318 04:31:19.741683   16915 cache.go:107] acquiring lock: {Name:mk98038f77bfef2a9096f12134262c39d8954fae Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0318 04:31:19.741678   16915 cache.go:107] acquiring lock: {Name:mk94236b195d5009f6d918343080f8fc544c9cc8 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0318 04:31:19.741713   16915 cache.go:107] acquiring lock: {Name:mk4f54de42c37f65a36d26926076c6fcf580f9f4 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0318 04:31:19.741799   16915 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.24.4
	I0318 04:31:19.741807   16915 image.go:134] retrieving image: registry.k8s.io/pause:3.7
	I0318 04:31:19.741855   16915 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0318 04:31:19.741922   16915 start.go:360] acquireMachinesLock for test-preload-447000: {Name:mkdef65b5c2b3344d8453e477bf2f170fbff3359 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0318 04:31:19.741958   16915 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.24.4
	I0318 04:31:19.741972   16915 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.24.4
	I0318 04:31:19.741975   16915 start.go:364] duration metric: took 38.75µs to acquireMachinesLock for "test-preload-447000"
	I0318 04:31:19.741977   16915 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.24.4
	I0318 04:31:19.742027   16915 image.go:134] retrieving image: registry.k8s.io/etcd:3.5.3-0
	I0318 04:31:19.742007   16915 image.go:134] retrieving image: registry.k8s.io/coredns/coredns:v1.8.6
	I0318 04:31:19.741996   16915 start.go:93] Provisioning new machine with config: &{Name:test-preload-447000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17991/minikube-v1.32.1-1710520390-17991-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.4 ClusterName:test-preload-447000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.24.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.24.4 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0318 04:31:19.742152   16915 start.go:125] createHost starting for "" (driver="qemu2")
	I0318 04:31:19.746153   16915 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0318 04:31:19.755544   16915 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.24.4: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.24.4
	I0318 04:31:19.755629   16915 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.24.4: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.24.4
	I0318 04:31:19.756221   16915 image.go:177] daemon lookup for registry.k8s.io/pause:3.7: Error response from daemon: No such image: registry.k8s.io/pause:3.7
	I0318 04:31:19.761049   16915 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0318 04:31:19.761205   16915 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.24.4: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.24.4
	I0318 04:31:19.761228   16915 image.go:177] daemon lookup for registry.k8s.io/etcd:3.5.3-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.3-0
	I0318 04:31:19.761265   16915 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.24.4: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.24.4
	I0318 04:31:19.761321   16915 image.go:177] daemon lookup for registry.k8s.io/coredns/coredns:v1.8.6: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.8.6
	I0318 04:31:19.765303   16915 start.go:159] libmachine.API.Create for "test-preload-447000" (driver="qemu2")
	I0318 04:31:19.765323   16915 client.go:168] LocalClient.Create starting
	I0318 04:31:19.765397   16915 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18429-15072/.minikube/certs/ca.pem
	I0318 04:31:19.765429   16915 main.go:141] libmachine: Decoding PEM data...
	I0318 04:31:19.765442   16915 main.go:141] libmachine: Parsing certificate...
	I0318 04:31:19.765490   16915 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18429-15072/.minikube/certs/cert.pem
	I0318 04:31:19.765514   16915 main.go:141] libmachine: Decoding PEM data...
	I0318 04:31:19.765522   16915 main.go:141] libmachine: Parsing certificate...
	I0318 04:31:19.765910   16915 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/18429-15072/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/18429-15072/.minikube/cache/iso/arm64/minikube-v1.32.1-1710520390-17991-arm64.iso...
	I0318 04:31:19.910556   16915 main.go:141] libmachine: Creating SSH key...
	I0318 04:31:19.990486   16915 main.go:141] libmachine: Creating Disk image...
	I0318 04:31:19.990511   16915 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0318 04:31:19.990676   16915 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/18429-15072/.minikube/machines/test-preload-447000/disk.qcow2.raw /Users/jenkins/minikube-integration/18429-15072/.minikube/machines/test-preload-447000/disk.qcow2
	I0318 04:31:20.003714   16915 main.go:141] libmachine: STDOUT: 
	I0318 04:31:20.003739   16915 main.go:141] libmachine: STDERR: 
	I0318 04:31:20.003787   16915 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/18429-15072/.minikube/machines/test-preload-447000/disk.qcow2 +20000M
	I0318 04:31:20.015469   16915 main.go:141] libmachine: STDOUT: Image resized.
	
	I0318 04:31:20.015499   16915 main.go:141] libmachine: STDERR: 
	I0318 04:31:20.015518   16915 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/18429-15072/.minikube/machines/test-preload-447000/disk.qcow2.raw and /Users/jenkins/minikube-integration/18429-15072/.minikube/machines/test-preload-447000/disk.qcow2
	I0318 04:31:20.015530   16915 main.go:141] libmachine: Starting QEMU VM...
	I0318 04:31:20.015569   16915 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18429-15072/.minikube/machines/test-preload-447000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18429-15072/.minikube/machines/test-preload-447000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18429-15072/.minikube/machines/test-preload-447000/qemu.pid -device virtio-net-pci,netdev=net0,mac=9a:a2:99:27:61:62 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18429-15072/.minikube/machines/test-preload-447000/disk.qcow2
	I0318 04:31:20.017766   16915 main.go:141] libmachine: STDOUT: 
	I0318 04:31:20.017785   16915 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0318 04:31:20.017803   16915 client.go:171] duration metric: took 252.484208ms to LocalClient.Create
	I0318 04:31:21.711557   16915 cache.go:162] opening:  /Users/jenkins/minikube-integration/18429-15072/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.3-0
	I0318 04:31:21.758833   16915 cache.go:162] opening:  /Users/jenkins/minikube-integration/18429-15072/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.24.4
	I0318 04:31:21.802637   16915 cache.go:162] opening:  /Users/jenkins/minikube-integration/18429-15072/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7
	I0318 04:31:21.814768   16915 cache.go:162] opening:  /Users/jenkins/minikube-integration/18429-15072/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.24.4
	W0318 04:31:21.815362   16915 image.go:265] image registry.k8s.io/coredns/coredns:v1.8.6 arch mismatch: want arm64 got amd64. fixing
	I0318 04:31:21.815457   16915 cache.go:162] opening:  /Users/jenkins/minikube-integration/18429-15072/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6
	I0318 04:31:21.825990   16915 cache.go:162] opening:  /Users/jenkins/minikube-integration/18429-15072/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.24.4
	I0318 04:31:21.831202   16915 cache.go:162] opening:  /Users/jenkins/minikube-integration/18429-15072/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.24.4
	I0318 04:31:21.920257   16915 cache.go:157] /Users/jenkins/minikube-integration/18429-15072/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7 exists
	I0318 04:31:21.920304   16915 cache.go:96] cache image "registry.k8s.io/pause:3.7" -> "/Users/jenkins/minikube-integration/18429-15072/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7" took 2.178973667s
	I0318 04:31:21.920345   16915 cache.go:80] save to tar file registry.k8s.io/pause:3.7 -> /Users/jenkins/minikube-integration/18429-15072/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7 succeeded
	I0318 04:31:22.018160   16915 start.go:128] duration metric: took 2.276067834s to createHost
	I0318 04:31:22.018206   16915 start.go:83] releasing machines lock for "test-preload-447000", held for 2.276296708s
	W0318 04:31:22.018256   16915 start.go:713] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0318 04:31:22.036255   16915 out.go:177] * Deleting "test-preload-447000" in qemu2 ...
	W0318 04:31:22.064153   16915 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0318 04:31:22.064204   16915 start.go:728] Will try again in 5 seconds ...
	W0318 04:31:22.388827   16915 image.go:265] image gcr.io/k8s-minikube/storage-provisioner:v5 arch mismatch: want arm64 got amd64. fixing
	I0318 04:31:22.388950   16915 cache.go:162] opening:  /Users/jenkins/minikube-integration/18429-15072/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5
	I0318 04:31:23.105332   16915 cache.go:157] /Users/jenkins/minikube-integration/18429-15072/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6 exists
	I0318 04:31:23.105380   16915 cache.go:96] cache image "registry.k8s.io/coredns/coredns:v1.8.6" -> "/Users/jenkins/minikube-integration/18429-15072/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6" took 3.363897709s
	I0318 04:31:23.105407   16915 cache.go:80] save to tar file registry.k8s.io/coredns/coredns:v1.8.6 -> /Users/jenkins/minikube-integration/18429-15072/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6 succeeded
	I0318 04:31:24.026858   16915 cache.go:157] /Users/jenkins/minikube-integration/18429-15072/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.24.4 exists
	I0318 04:31:24.026906   16915 cache.go:96] cache image "registry.k8s.io/kube-controller-manager:v1.24.4" -> "/Users/jenkins/minikube-integration/18429-15072/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.24.4" took 4.285655792s
	I0318 04:31:24.026935   16915 cache.go:80] save to tar file registry.k8s.io/kube-controller-manager:v1.24.4 -> /Users/jenkins/minikube-integration/18429-15072/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.24.4 succeeded
	I0318 04:31:24.189484   16915 cache.go:157] /Users/jenkins/minikube-integration/18429-15072/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 exists
	I0318 04:31:24.189551   16915 cache.go:96] cache image "gcr.io/k8s-minikube/storage-provisioner:v5" -> "/Users/jenkins/minikube-integration/18429-15072/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5" took 4.448308334s
	I0318 04:31:24.189578   16915 cache.go:80] save to tar file gcr.io/k8s-minikube/storage-provisioner:v5 -> /Users/jenkins/minikube-integration/18429-15072/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 succeeded
	I0318 04:31:25.567062   16915 cache.go:157] /Users/jenkins/minikube-integration/18429-15072/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.24.4 exists
	I0318 04:31:25.567138   16915 cache.go:96] cache image "registry.k8s.io/kube-scheduler:v1.24.4" -> "/Users/jenkins/minikube-integration/18429-15072/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.24.4" took 5.825609708s
	I0318 04:31:25.567191   16915 cache.go:80] save to tar file registry.k8s.io/kube-scheduler:v1.24.4 -> /Users/jenkins/minikube-integration/18429-15072/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.24.4 succeeded
	I0318 04:31:25.974608   16915 cache.go:157] /Users/jenkins/minikube-integration/18429-15072/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.24.4 exists
	I0318 04:31:25.974658   16915 cache.go:96] cache image "registry.k8s.io/kube-proxy:v1.24.4" -> "/Users/jenkins/minikube-integration/18429-15072/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.24.4" took 6.233227083s
	I0318 04:31:25.974703   16915 cache.go:80] save to tar file registry.k8s.io/kube-proxy:v1.24.4 -> /Users/jenkins/minikube-integration/18429-15072/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.24.4 succeeded
	I0318 04:31:27.064319   16915 start.go:360] acquireMachinesLock for test-preload-447000: {Name:mkdef65b5c2b3344d8453e477bf2f170fbff3359 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0318 04:31:27.064748   16915 start.go:364] duration metric: took 350.917µs to acquireMachinesLock for "test-preload-447000"
	I0318 04:31:27.064908   16915 start.go:93] Provisioning new machine with config: &{Name:test-preload-447000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17991/minikube-v1.32.1-1710520390-17991-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig
:{KubernetesVersion:v1.24.4 ClusterName:test-preload-447000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.24.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOp
tions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.24.4 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0318 04:31:27.065150   16915 start.go:125] createHost starting for "" (driver="qemu2")
	I0318 04:31:27.076849   16915 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0318 04:31:27.126042   16915 start.go:159] libmachine.API.Create for "test-preload-447000" (driver="qemu2")
	I0318 04:31:27.126094   16915 client.go:168] LocalClient.Create starting
	I0318 04:31:27.126201   16915 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18429-15072/.minikube/certs/ca.pem
	I0318 04:31:27.126266   16915 main.go:141] libmachine: Decoding PEM data...
	I0318 04:31:27.126286   16915 main.go:141] libmachine: Parsing certificate...
	I0318 04:31:27.126351   16915 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18429-15072/.minikube/certs/cert.pem
	I0318 04:31:27.126392   16915 main.go:141] libmachine: Decoding PEM data...
	I0318 04:31:27.126406   16915 main.go:141] libmachine: Parsing certificate...
	I0318 04:31:27.126896   16915 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/18429-15072/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/18429-15072/.minikube/cache/iso/arm64/minikube-v1.32.1-1710520390-17991-arm64.iso...
	I0318 04:31:27.291250   16915 main.go:141] libmachine: Creating SSH key...
	I0318 04:31:27.434635   16915 main.go:141] libmachine: Creating Disk image...
	I0318 04:31:27.434641   16915 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0318 04:31:27.434828   16915 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/18429-15072/.minikube/machines/test-preload-447000/disk.qcow2.raw /Users/jenkins/minikube-integration/18429-15072/.minikube/machines/test-preload-447000/disk.qcow2
	I0318 04:31:27.447671   16915 main.go:141] libmachine: STDOUT: 
	I0318 04:31:27.447693   16915 main.go:141] libmachine: STDERR: 
	I0318 04:31:27.447767   16915 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/18429-15072/.minikube/machines/test-preload-447000/disk.qcow2 +20000M
	I0318 04:31:27.458828   16915 main.go:141] libmachine: STDOUT: Image resized.
	
	I0318 04:31:27.458847   16915 main.go:141] libmachine: STDERR: 
	I0318 04:31:27.458861   16915 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/18429-15072/.minikube/machines/test-preload-447000/disk.qcow2.raw and /Users/jenkins/minikube-integration/18429-15072/.minikube/machines/test-preload-447000/disk.qcow2
	I0318 04:31:27.458868   16915 main.go:141] libmachine: Starting QEMU VM...
	I0318 04:31:27.458913   16915 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18429-15072/.minikube/machines/test-preload-447000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18429-15072/.minikube/machines/test-preload-447000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18429-15072/.minikube/machines/test-preload-447000/qemu.pid -device virtio-net-pci,netdev=net0,mac=ae:a0:20:ac:f5:a6 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18429-15072/.minikube/machines/test-preload-447000/disk.qcow2
	I0318 04:31:27.460832   16915 main.go:141] libmachine: STDOUT: 
	I0318 04:31:27.460850   16915 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0318 04:31:27.460864   16915 client.go:171] duration metric: took 334.776917ms to LocalClient.Create
	I0318 04:31:27.691037   16915 cache.go:157] /Users/jenkins/minikube-integration/18429-15072/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.24.4 exists
	I0318 04:31:27.691078   16915 cache.go:96] cache image "registry.k8s.io/kube-apiserver:v1.24.4" -> "/Users/jenkins/minikube-integration/18429-15072/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.24.4" took 7.94991s
	I0318 04:31:27.691121   16915 cache.go:80] save to tar file registry.k8s.io/kube-apiserver:v1.24.4 -> /Users/jenkins/minikube-integration/18429-15072/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.24.4 succeeded
	I0318 04:31:29.461123   16915 start.go:128] duration metric: took 2.395997417s to createHost
	I0318 04:31:29.461185   16915 start.go:83] releasing machines lock for "test-preload-447000", held for 2.396494583s
	W0318 04:31:29.461447   16915 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p test-preload-447000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p test-preload-447000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0318 04:31:29.474957   16915 out.go:177] 
	W0318 04:31:29.481000   16915 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0318 04:31:29.481035   16915 out.go:239] * 
	* 
	W0318 04:31:29.483397   16915 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0318 04:31:29.490838   16915 out.go:177] 

                                                
                                                
** /stderr **
preload_test.go:46: out/minikube-darwin-arm64 start -p test-preload-447000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=qemu2  --kubernetes-version=v1.24.4 failed: exit status 80
panic.go:626: *** TestPreload FAILED at 2024-03-18 04:31:29.510573 -0700 PDT m=+770.389412084
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p test-preload-447000 -n test-preload-447000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p test-preload-447000 -n test-preload-447000: exit status 7 (68.232083ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "test-preload-447000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:175: Cleaning up "test-preload-447000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p test-preload-447000
--- FAIL: TestPreload (10.07s)
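
All of the failures in this block share one root cause: the qemu2 driver launches the guest through /opt/socket_vmnet/bin/socket_vmnet_client, and that client cannot reach the socket_vmnet daemon at /var/run/socket_vmnet ("Connection refused"). A minimal diagnostic sketch for the CI host follows (assuming socket_vmnet was installed via Homebrew as in the upstream minikube qemu2 setup; the launchd/service name on this agent is an assumption):

	# Does the socket exist, and who owns it? (path taken from the failing command above)
	ls -l /var/run/socket_vmnet
	# Is the daemon loaded under launchd? (Homebrew label assumed)
	sudo launchctl list | grep -i socket_vmnet
	# Restart the daemon via Homebrew services (documented invocation; adjust if installed differently)
	sudo brew services restart socket_vmnet

The same check applies to the identical socket_vmnet "Connection refused" failures in the test sections below.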

                                                
                                    
TestScheduledStopUnix (10.02s)

                                                
                                                
=== RUN   TestScheduledStopUnix
scheduled_stop_test.go:128: (dbg) Run:  out/minikube-darwin-arm64 start -p scheduled-stop-985000 --memory=2048 --driver=qemu2 
scheduled_stop_test.go:128: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p scheduled-stop-985000 --memory=2048 --driver=qemu2 : exit status 80 (9.844961417s)

                                                
                                                
-- stdout --
	* [scheduled-stop-985000] minikube v1.32.0 on Darwin 14.3.1 (arm64)
	  - MINIKUBE_LOCATION=18429
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18429-15072/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18429-15072/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "scheduled-stop-985000" primary control-plane node in "scheduled-stop-985000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "scheduled-stop-985000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p scheduled-stop-985000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
scheduled_stop_test.go:130: starting minikube: exit status 80

                                                
                                                
-- stdout --
	* [scheduled-stop-985000] minikube v1.32.0 on Darwin 14.3.1 (arm64)
	  - MINIKUBE_LOCATION=18429
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18429-15072/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18429-15072/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "scheduled-stop-985000" primary control-plane node in "scheduled-stop-985000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "scheduled-stop-985000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p scheduled-stop-985000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
panic.go:626: *** TestScheduledStopUnix FAILED at 2024-03-18 04:31:39.528272 -0700 PDT m=+780.407445542
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p scheduled-stop-985000 -n scheduled-stop-985000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p scheduled-stop-985000 -n scheduled-stop-985000: exit status 7 (69.255125ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "scheduled-stop-985000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:175: Cleaning up "scheduled-stop-985000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p scheduled-stop-985000
--- FAIL: TestScheduledStopUnix (10.02s)

                                                
                                    
TestSkaffold (16.51s)

                                                
                                                
=== RUN   TestSkaffold
skaffold_test.go:59: (dbg) Run:  /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/skaffold.exe2226560532 version
skaffold_test.go:63: skaffold version: v2.10.0
skaffold_test.go:66: (dbg) Run:  out/minikube-darwin-arm64 start -p skaffold-729000 --memory=2600 --driver=qemu2 
skaffold_test.go:66: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p skaffold-729000 --memory=2600 --driver=qemu2 : exit status 80 (9.798489167s)

                                                
                                                
-- stdout --
	* [skaffold-729000] minikube v1.32.0 on Darwin 14.3.1 (arm64)
	  - MINIKUBE_LOCATION=18429
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18429-15072/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18429-15072/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "skaffold-729000" primary control-plane node in "skaffold-729000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2600MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "skaffold-729000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2600MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p skaffold-729000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
skaffold_test.go:68: starting minikube: exit status 80

                                                
                                                
-- stdout --
	* [skaffold-729000] minikube v1.32.0 on Darwin 14.3.1 (arm64)
	  - MINIKUBE_LOCATION=18429
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18429-15072/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18429-15072/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "skaffold-729000" primary control-plane node in "skaffold-729000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2600MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "skaffold-729000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2600MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p skaffold-729000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
panic.go:626: *** TestSkaffold FAILED at 2024-03-18 04:31:56.042077 -0700 PDT m=+796.921801959
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p skaffold-729000 -n skaffold-729000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p skaffold-729000 -n skaffold-729000: exit status 7 (63.890708ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "skaffold-729000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:175: Cleaning up "skaffold-729000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p skaffold-729000
--- FAIL: TestSkaffold (16.51s)

                                                
                                    
TestRunningBinaryUpgrade (633.31s)

                                                
                                                
=== RUN   TestRunningBinaryUpgrade
=== PAUSE TestRunningBinaryUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestRunningBinaryUpgrade
version_upgrade_test.go:120: (dbg) Run:  /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/minikube-v1.26.0.1695679451 start -p running-upgrade-738000 --memory=2200 --vm-driver=qemu2 
version_upgrade_test.go:120: (dbg) Done: /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/minikube-v1.26.0.1695679451 start -p running-upgrade-738000 --memory=2200 --vm-driver=qemu2 : (1m20.364520833s)
version_upgrade_test.go:130: (dbg) Run:  out/minikube-darwin-arm64 start -p running-upgrade-738000 --memory=2200 --alsologtostderr -v=1 --driver=qemu2 
version_upgrade_test.go:130: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p running-upgrade-738000 --memory=2200 --alsologtostderr -v=1 --driver=qemu2 : exit status 80 (8m33.451222542s)

                                                
                                                
-- stdout --
	* [running-upgrade-738000] minikube v1.32.0 on Darwin 14.3.1 (arm64)
	  - MINIKUBE_LOCATION=18429
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18429-15072/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18429-15072/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Kubernetes 1.28.4 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.28.4
	* Using the qemu2 driver based on existing profile
	* Starting "running-upgrade-738000" primary control-plane node in "running-upgrade-738000" cluster
	* Updating the running qemu2 "running-upgrade-738000" VM ...
	* Preparing Kubernetes v1.24.1 on Docker 20.10.16 ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	  - Configuring RBAC rules ...
	* Configuring bridge CNI (Container Networking Interface) ...
	* Verifying Kubernetes components...
	  - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	* Enabled addons: storage-provisioner
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0318 04:34:02.188301   17322 out.go:291] Setting OutFile to fd 1 ...
	I0318 04:34:02.188437   17322 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0318 04:34:02.188441   17322 out.go:304] Setting ErrFile to fd 2...
	I0318 04:34:02.188443   17322 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0318 04:34:02.188579   17322 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18429-15072/.minikube/bin
	I0318 04:34:02.189589   17322 out.go:298] Setting JSON to false
	I0318 04:34:02.207243   17322 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":9215,"bootTime":1710752427,"procs":476,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.3.1","kernelVersion":"23.3.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0318 04:34:02.207311   17322 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0318 04:34:02.217705   17322 out.go:177] * [running-upgrade-738000] minikube v1.32.0 on Darwin 14.3.1 (arm64)
	I0318 04:34:02.221717   17322 out.go:177]   - MINIKUBE_LOCATION=18429
	I0318 04:34:02.225727   17322 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/18429-15072/kubeconfig
	I0318 04:34:02.221755   17322 notify.go:220] Checking for updates...
	I0318 04:34:02.231654   17322 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0318 04:34:02.234720   17322 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0318 04:34:02.237790   17322 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18429-15072/.minikube
	I0318 04:34:02.240717   17322 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0318 04:34:02.244118   17322 config.go:182] Loaded profile config "running-upgrade-738000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0318 04:34:02.247721   17322 out.go:177] * Kubernetes 1.28.4 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.28.4
	I0318 04:34:02.249231   17322 driver.go:392] Setting default libvirt URI to qemu:///system
	I0318 04:34:02.252769   17322 out.go:177] * Using the qemu2 driver based on existing profile
	I0318 04:34:02.259721   17322 start.go:297] selected driver: qemu2
	I0318 04:34:02.259727   17322 start.go:901] validating driver "qemu2" against &{Name:running-upgrade-738000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.26.0-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:53312 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.1 ClusterName:running-upgra
de-738000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizat
ions:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:0s}
	I0318 04:34:02.259790   17322 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0318 04:34:02.262491   17322 cni.go:84] Creating CNI manager for ""
	I0318 04:34:02.262508   17322 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0318 04:34:02.262540   17322 start.go:340] cluster config:
	{Name:running-upgrade-738000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.26.0-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:53312 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.1 ClusterName:running-upgrade-738000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerN
ames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:
SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:0s}
	I0318 04:34:02.262590   17322 iso.go:125] acquiring lock: {Name:mkb8143674083e0c7a46a3ed751b3800392bcd24 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0318 04:34:02.270737   17322 out.go:177] * Starting "running-upgrade-738000" primary control-plane node in "running-upgrade-738000" cluster
	I0318 04:34:02.274752   17322 preload.go:132] Checking if preload exists for k8s version v1.24.1 and runtime docker
	I0318 04:34:02.274772   17322 preload.go:147] Found local preload: /Users/jenkins/minikube-integration/18429-15072/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.1-docker-overlay2-arm64.tar.lz4
	I0318 04:34:02.274784   17322 cache.go:56] Caching tarball of preloaded images
	I0318 04:34:02.274830   17322 preload.go:173] Found /Users/jenkins/minikube-integration/18429-15072/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0318 04:34:02.274835   17322 cache.go:59] Finished verifying existence of preloaded tar for v1.24.1 on docker
	I0318 04:34:02.274903   17322 profile.go:142] Saving config to /Users/jenkins/minikube-integration/18429-15072/.minikube/profiles/running-upgrade-738000/config.json ...
	I0318 04:34:02.275361   17322 start.go:360] acquireMachinesLock for running-upgrade-738000: {Name:mkdef65b5c2b3344d8453e477bf2f170fbff3359 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0318 04:34:02.275385   17322 start.go:364] duration metric: took 19.042µs to acquireMachinesLock for "running-upgrade-738000"
	I0318 04:34:02.275393   17322 start.go:96] Skipping create...Using existing machine configuration
	I0318 04:34:02.275397   17322 fix.go:54] fixHost starting: 
	I0318 04:34:02.276054   17322 fix.go:112] recreateIfNeeded on running-upgrade-738000: state=Running err=<nil>
	W0318 04:34:02.276065   17322 fix.go:138] unexpected machine state, will restart: <nil>
	I0318 04:34:02.279661   17322 out.go:177] * Updating the running qemu2 "running-upgrade-738000" VM ...
	I0318 04:34:02.287722   17322 machine.go:94] provisionDockerMachine start ...
	I0318 04:34:02.287770   17322 main.go:141] libmachine: Using SSH client type: native
	I0318 04:34:02.287901   17322 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1028c1bf0] 0x1028c4450 <nil>  [] 0s} localhost 53280 <nil> <nil>}
	I0318 04:34:02.287906   17322 main.go:141] libmachine: About to run SSH command:
	hostname
	I0318 04:34:02.356257   17322 main.go:141] libmachine: SSH cmd err, output: <nil>: running-upgrade-738000
	
	I0318 04:34:02.356269   17322 buildroot.go:166] provisioning hostname "running-upgrade-738000"
	I0318 04:34:02.356319   17322 main.go:141] libmachine: Using SSH client type: native
	I0318 04:34:02.356419   17322 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1028c1bf0] 0x1028c4450 <nil>  [] 0s} localhost 53280 <nil> <nil>}
	I0318 04:34:02.356424   17322 main.go:141] libmachine: About to run SSH command:
	sudo hostname running-upgrade-738000 && echo "running-upgrade-738000" | sudo tee /etc/hostname
	I0318 04:34:02.427844   17322 main.go:141] libmachine: SSH cmd err, output: <nil>: running-upgrade-738000
	
	I0318 04:34:02.427895   17322 main.go:141] libmachine: Using SSH client type: native
	I0318 04:34:02.427985   17322 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1028c1bf0] 0x1028c4450 <nil>  [] 0s} localhost 53280 <nil> <nil>}
	I0318 04:34:02.427993   17322 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\srunning-upgrade-738000' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 running-upgrade-738000/g' /etc/hosts;
				else 
					echo '127.0.1.1 running-upgrade-738000' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0318 04:34:02.495091   17322 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0318 04:34:02.495102   17322 buildroot.go:172] set auth options {CertDir:/Users/jenkins/minikube-integration/18429-15072/.minikube CaCertPath:/Users/jenkins/minikube-integration/18429-15072/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/18429-15072/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/18429-15072/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/18429-15072/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/18429-15072/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/18429-15072/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/18429-15072/.minikube}
	I0318 04:34:02.495110   17322 buildroot.go:174] setting up certificates
	I0318 04:34:02.495118   17322 provision.go:84] configureAuth start
	I0318 04:34:02.495122   17322 provision.go:143] copyHostCerts
	I0318 04:34:02.495185   17322 exec_runner.go:144] found /Users/jenkins/minikube-integration/18429-15072/.minikube/key.pem, removing ...
	I0318 04:34:02.495191   17322 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/18429-15072/.minikube/key.pem
	I0318 04:34:02.495308   17322 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/18429-15072/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/18429-15072/.minikube/key.pem (1679 bytes)
	I0318 04:34:02.495479   17322 exec_runner.go:144] found /Users/jenkins/minikube-integration/18429-15072/.minikube/ca.pem, removing ...
	I0318 04:34:02.495483   17322 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/18429-15072/.minikube/ca.pem
	I0318 04:34:02.495538   17322 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/18429-15072/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/18429-15072/.minikube/ca.pem (1082 bytes)
	I0318 04:34:02.495647   17322 exec_runner.go:144] found /Users/jenkins/minikube-integration/18429-15072/.minikube/cert.pem, removing ...
	I0318 04:34:02.495650   17322 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/18429-15072/.minikube/cert.pem
	I0318 04:34:02.495687   17322 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/18429-15072/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/18429-15072/.minikube/cert.pem (1123 bytes)
	I0318 04:34:02.495767   17322 provision.go:117] generating server cert: /Users/jenkins/minikube-integration/18429-15072/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/18429-15072/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/18429-15072/.minikube/certs/ca-key.pem org=jenkins.running-upgrade-738000 san=[127.0.0.1 localhost minikube running-upgrade-738000]
	I0318 04:34:02.576042   17322 provision.go:177] copyRemoteCerts
	I0318 04:34:02.576075   17322 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0318 04:34:02.576083   17322 sshutil.go:53] new ssh client: &{IP:localhost Port:53280 SSHKeyPath:/Users/jenkins/minikube-integration/18429-15072/.minikube/machines/running-upgrade-738000/id_rsa Username:docker}
	I0318 04:34:02.612876   17322 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18429-15072/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0318 04:34:02.619686   17322 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18429-15072/.minikube/machines/server.pem --> /etc/docker/server.pem (1224 bytes)
	I0318 04:34:02.626023   17322 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18429-15072/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0318 04:34:02.633069   17322 provision.go:87] duration metric: took 137.947041ms to configureAuth
	I0318 04:34:02.633078   17322 buildroot.go:189] setting minikube options for container-runtime
	I0318 04:34:02.633188   17322 config.go:182] Loaded profile config "running-upgrade-738000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0318 04:34:02.633223   17322 main.go:141] libmachine: Using SSH client type: native
	I0318 04:34:02.633313   17322 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1028c1bf0] 0x1028c4450 <nil>  [] 0s} localhost 53280 <nil> <nil>}
	I0318 04:34:02.633320   17322 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0318 04:34:02.701185   17322 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0318 04:34:02.701196   17322 buildroot.go:70] root file system type: tmpfs
	I0318 04:34:02.701240   17322 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0318 04:34:02.701292   17322 main.go:141] libmachine: Using SSH client type: native
	I0318 04:34:02.701401   17322 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1028c1bf0] 0x1028c4450 <nil>  [] 0s} localhost 53280 <nil> <nil>}
	I0318 04:34:02.701434   17322 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=qemu2 --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0318 04:34:02.773980   17322 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=qemu2 --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0318 04:34:02.774033   17322 main.go:141] libmachine: Using SSH client type: native
	I0318 04:34:02.774135   17322 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1028c1bf0] 0x1028c4450 <nil>  [] 0s} localhost 53280 <nil> <nil>}
	I0318 04:34:02.774144   17322 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0318 04:34:02.842878   17322 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0318 04:34:02.842889   17322 machine.go:97] duration metric: took 555.179167ms to provisionDockerMachine
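
The unit file above is written to docker.service.new and swapped in only when it differs from the installed copy, followed by daemon-reload and a restart. A small illustrative sketch (not minikube's buildroot.go) of rendering the TLS-related ExecStart line from parameters and emitting the same conditional install command:

// Illustrative only: render a docker.service override and print the
// "install only if changed" command seen in the log above.
package main

import (
	"fmt"
	"os"
	"text/template"
)

const unit = `[Service]
ExecStart=
ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --tlsverify --tlscacert {{.CA}} --tlscert {{.Cert}} --tlskey {{.Key}} --label provider={{.Driver}}
`

func main() {
	t := template.Must(template.New("docker").Parse(unit))
	err := t.Execute(os.Stdout, map[string]string{
		"CA": "/etc/docker/ca.pem", "Cert": "/etc/docker/server.pem",
		"Key": "/etc/docker/server-key.pem", "Driver": "qemu2",
	})
	if err != nil {
		panic(err)
	}
	// Swap the unit in only when it differs, then reload and restart.
	fmt.Println(`sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f restart docker; }`)
}
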
	I0318 04:34:02.842895   17322 start.go:293] postStartSetup for "running-upgrade-738000" (driver="qemu2")
	I0318 04:34:02.842901   17322 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0318 04:34:02.842952   17322 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0318 04:34:02.842961   17322 sshutil.go:53] new ssh client: &{IP:localhost Port:53280 SSHKeyPath:/Users/jenkins/minikube-integration/18429-15072/.minikube/machines/running-upgrade-738000/id_rsa Username:docker}
	I0318 04:34:02.882867   17322 ssh_runner.go:195] Run: cat /etc/os-release
	I0318 04:34:02.884275   17322 info.go:137] Remote host: Buildroot 2021.02.12
	I0318 04:34:02.884282   17322 filesync.go:126] Scanning /Users/jenkins/minikube-integration/18429-15072/.minikube/addons for local assets ...
	I0318 04:34:02.884343   17322 filesync.go:126] Scanning /Users/jenkins/minikube-integration/18429-15072/.minikube/files for local assets ...
	I0318 04:34:02.884437   17322 filesync.go:149] local asset: /Users/jenkins/minikube-integration/18429-15072/.minikube/files/etc/ssl/certs/154812.pem -> 154812.pem in /etc/ssl/certs
	I0318 04:34:02.884527   17322 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0318 04:34:02.887139   17322 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18429-15072/.minikube/files/etc/ssl/certs/154812.pem --> /etc/ssl/certs/154812.pem (1708 bytes)
	I0318 04:34:02.894287   17322 start.go:296] duration metric: took 51.389292ms for postStartSetup
	I0318 04:34:02.894300   17322 fix.go:56] duration metric: took 618.923583ms for fixHost
	I0318 04:34:02.894327   17322 main.go:141] libmachine: Using SSH client type: native
	I0318 04:34:02.894430   17322 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1028c1bf0] 0x1028c4450 <nil>  [] 0s} localhost 53280 <nil> <nil>}
	I0318 04:34:02.894437   17322 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0318 04:34:02.960672   17322 main.go:141] libmachine: SSH cmd err, output: <nil>: 1710761643.374608182
	
	I0318 04:34:02.960679   17322 fix.go:216] guest clock: 1710761643.374608182
	I0318 04:34:02.960683   17322 fix.go:229] Guest: 2024-03-18 04:34:03.374608182 -0700 PDT Remote: 2024-03-18 04:34:02.894301 -0700 PDT m=+0.728800543 (delta=480.307182ms)
	I0318 04:34:02.960693   17322 fix.go:200] guest clock delta is within tolerance: 480.307182ms
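
The guest clock check reads `date +%s.%N` on the VM and compares it with the host clock; the ~480ms skew above is accepted. A sketch of that comparison under an assumed 2-second tolerance (the actual threshold lives in minikube's fix.go and is not shown in this log):

// Sketch of the guest-clock tolerance check: parse the guest's
// `date +%s.%N` output and compare it with the host clock.
package main

import (
	"fmt"
	"strconv"
	"strings"
	"time"
)

func parseGuestClock(out string) (time.Time, error) {
	parts := strings.SplitN(strings.TrimSpace(out), ".", 2)
	sec, err := strconv.ParseInt(parts[0], 10, 64)
	if err != nil {
		return time.Time{}, err
	}
	var nsec int64
	if len(parts) == 2 {
		// %N is zero-padded to nine digits, so it parses directly as nanoseconds.
		nsec, err = strconv.ParseInt(parts[1], 10, 64)
		if err != nil {
			return time.Time{}, err
		}
	}
	return time.Unix(sec, nsec), nil
}

func main() {
	guest, err := parseGuestClock("1710761643.374608182")
	if err != nil {
		panic(err)
	}
	delta := guest.Sub(time.Now())
	// A sub-second skew is treated as fine above; 2s is an assumed bound here.
	fmt.Printf("guest clock delta: %v (ok: %v)\n", delta, delta.Abs() < 2*time.Second)
}
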
	I0318 04:34:02.960696   17322 start.go:83] releasing machines lock for "running-upgrade-738000", held for 685.329834ms
	I0318 04:34:02.960751   17322 ssh_runner.go:195] Run: cat /version.json
	I0318 04:34:02.960752   17322 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0318 04:34:02.960760   17322 sshutil.go:53] new ssh client: &{IP:localhost Port:53280 SSHKeyPath:/Users/jenkins/minikube-integration/18429-15072/.minikube/machines/running-upgrade-738000/id_rsa Username:docker}
	I0318 04:34:02.960772   17322 sshutil.go:53] new ssh client: &{IP:localhost Port:53280 SSHKeyPath:/Users/jenkins/minikube-integration/18429-15072/.minikube/machines/running-upgrade-738000/id_rsa Username:docker}
	W0318 04:34:02.961284   17322 sshutil.go:64] dial failure (will retry): dial tcp [::1]:53280: connect: connection refused
	I0318 04:34:02.961302   17322 retry.go:31] will retry after 259.590465ms: dial tcp [::1]:53280: connect: connection refused
	W0318 04:34:03.261487   17322 start.go:419] Unable to open version.json: cat /version.json: Process exited with status 1
	stdout:
	
	stderr:
	cat: /version.json: No such file or directory
	I0318 04:34:03.261570   17322 ssh_runner.go:195] Run: systemctl --version
	I0318 04:34:03.263754   17322 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0318 04:34:03.265427   17322 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0318 04:34:03.265456   17322 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *bridge* -not -name *podman* -not -name *.mk_disabled -printf "%p, " -exec sh -c "sudo sed -i -r -e '/"dst": ".*:.*"/d' -e 's|^(.*)"dst": (.*)[,*]$|\1"dst": \2|g' -e '/"subnet": ".*:.*"/d' -e 's|^(.*)"subnet": ".*"(.*)[,*]$|\1"subnet": "10.244.0.0/16"\2|g' {}" ;
	I0318 04:34:03.268278   17322 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *podman* -not -name *.mk_disabled -printf "%p, " -exec sh -c "sudo sed -i -r -e 's|^(.*)"subnet": ".*"(.*)$|\1"subnet": "10.244.0.0/16"\2|g' -e 's|^(.*)"gateway": ".*"(.*)$|\1"gateway": "10.244.0.1"\2|g' {}" ;
	I0318 04:34:03.272431   17322 cni.go:308] configured [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0318 04:34:03.272438   17322 start.go:494] detecting cgroup driver to use...
	I0318 04:34:03.272554   17322 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0318 04:34:03.277692   17322 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.7"|' /etc/containerd/config.toml"
	I0318 04:34:03.281313   17322 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0318 04:34:03.284043   17322 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0318 04:34:03.284070   17322 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0318 04:34:03.286830   17322 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0318 04:34:03.289812   17322 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0318 04:34:03.292534   17322 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0318 04:34:03.295428   17322 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0318 04:34:03.298783   17322 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0318 04:34:03.301725   17322 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0318 04:34:03.304407   17322 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0318 04:34:03.307110   17322 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0318 04:34:03.397266   17322 ssh_runner.go:195] Run: sudo systemctl restart containerd
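
The preceding sed invocations rewrite /etc/containerd/config.toml so containerd uses the cgroupfs driver and the pause:3.7 sandbox image before the daemon is reloaded and restarted. The same two edits expressed as Go regexp replacements, purely as an illustration of what the sed expressions do:

// Hypothetical sketch: apply the cgroup-driver and sandbox-image rewrites
// from the logged sed commands to an in-memory config.toml.
package main

import (
	"fmt"
	"regexp"
)

func main() {
	conf := `[plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc.options]
  SystemdCgroup = true
sandbox_image = "k8s.gcr.io/pause:3.6"
`
	edits := []struct{ re, repl string }{
		{`(?m)^(\s*)SystemdCgroup = .*$`, `${1}SystemdCgroup = false`},
		{`(?m)^(\s*)sandbox_image = .*$`, `${1}sandbox_image = "registry.k8s.io/pause:3.7"`},
	}
	for _, e := range edits {
		conf = regexp.MustCompile(e.re).ReplaceAllString(conf, e.repl)
	}
	fmt.Print(conf)
}
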
	I0318 04:34:03.403740   17322 start.go:494] detecting cgroup driver to use...
	I0318 04:34:03.403845   17322 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0318 04:34:03.413026   17322 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0318 04:34:03.418148   17322 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0318 04:34:03.425845   17322 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0318 04:34:03.429999   17322 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0318 04:34:03.434739   17322 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0318 04:34:03.440257   17322 ssh_runner.go:195] Run: which cri-dockerd
	I0318 04:34:03.441560   17322 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0318 04:34:03.444151   17322 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I0318 04:34:03.449963   17322 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0318 04:34:03.527003   17322 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0318 04:34:03.604340   17322 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0318 04:34:03.604411   17322 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0318 04:34:03.609875   17322 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0318 04:34:03.689382   17322 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0318 04:34:07.167182   17322 ssh_runner.go:235] Completed: sudo systemctl restart docker: (3.477899875s)
	I0318 04:34:07.167252   17322 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I0318 04:34:07.172459   17322 ssh_runner.go:195] Run: sudo systemctl stop cri-docker.socket
	I0318 04:34:07.179683   17322 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0318 04:34:07.184556   17322 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0318 04:34:07.274559   17322 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0318 04:34:07.356477   17322 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0318 04:34:07.418718   17322 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0318 04:34:07.425776   17322 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0318 04:34:07.430230   17322 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0318 04:34:07.495968   17322 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I0318 04:34:07.533663   17322 start.go:541] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0318 04:34:07.533736   17322 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0318 04:34:07.536240   17322 start.go:562] Will wait 60s for crictl version
	I0318 04:34:07.536289   17322 ssh_runner.go:195] Run: which crictl
	I0318 04:34:07.537677   17322 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0318 04:34:07.549353   17322 start.go:578] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  20.10.16
	RuntimeApiVersion:  1.41.0
	I0318 04:34:07.549425   17322 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0318 04:34:07.562015   17322 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0318 04:34:07.579773   17322 out.go:204] * Preparing Kubernetes v1.24.1 on Docker 20.10.16 ...
	I0318 04:34:07.579904   17322 ssh_runner.go:195] Run: grep 10.0.2.2	host.minikube.internal$ /etc/hosts
	I0318 04:34:07.581186   17322 kubeadm.go:877] updating cluster {Name:running-upgrade-738000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17991/minikube-v1.32.1-1710520390-17991-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:53312 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.1 ClusterName:running-upgrade-738000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:0s} ...
	I0318 04:34:07.581231   17322 preload.go:132] Checking if preload exists for k8s version v1.24.1 and runtime docker
	I0318 04:34:07.581273   17322 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0318 04:34:07.591879   17322 docker.go:685] Got preloaded images: -- stdout --
	k8s.gcr.io/kube-apiserver:v1.24.1
	k8s.gcr.io/kube-proxy:v1.24.1
	k8s.gcr.io/kube-controller-manager:v1.24.1
	k8s.gcr.io/kube-scheduler:v1.24.1
	k8s.gcr.io/etcd:3.5.3-0
	k8s.gcr.io/pause:3.7
	k8s.gcr.io/coredns/coredns:v1.8.6
	k8s.gcr.io/pause:3.6
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0318 04:34:07.591888   17322 docker.go:691] registry.k8s.io/kube-apiserver:v1.24.1 wasn't preloaded
	I0318 04:34:07.591933   17322 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0318 04:34:07.595043   17322 ssh_runner.go:195] Run: which lz4
	I0318 04:34:07.596328   17322 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0318 04:34:07.597549   17322 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0318 04:34:07.597559   17322 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18429-15072/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.1-docker-overlay2-arm64.tar.lz4 --> /preloaded.tar.lz4 (359514331 bytes)
	I0318 04:34:08.319537   17322 docker.go:649] duration metric: took 723.259333ms to copy over tarball
	I0318 04:34:08.319595   17322 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0318 04:34:09.488425   17322 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (1.168855208s)
	I0318 04:34:09.488443   17322 ssh_runner.go:146] rm: /preloaded.tar.lz4
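
Because registry.k8s.io images were not preloaded, the ~360MB preloaded-images tarball is copied to /preloaded.tar.lz4, unpacked into /var with tar's lz4 filter (preserving security.capability xattrs), and removed. A minimal sketch of that unpack step, assuming the tarball is already in place:

// Sketch of the preload unpack: if /preloaded.tar.lz4 exists, extract it
// into /var keeping xattrs, then delete the tarball.
package main

import (
	"fmt"
	"os/exec"
)

func run(name string, args ...string) error {
	out, err := exec.Command(name, args...).CombinedOutput()
	fmt.Printf("$ %s %v\n%s", name, args, out)
	return err
}

func main() {
	if err := run("stat", "-c", "%s %y", "/preloaded.tar.lz4"); err != nil {
		fmt.Println("preload tarball missing; it would be copied from the local cache here")
		return
	}
	if err := run("sudo", "tar", "--xattrs", "--xattrs-include", "security.capability",
		"-I", "lz4", "-C", "/var", "-xf", "/preloaded.tar.lz4"); err != nil {
		panic(err)
	}
	_ = run("sudo", "rm", "-f", "/preloaded.tar.lz4")
}
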
	I0318 04:34:09.504396   17322 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0318 04:34:09.507389   17322 ssh_runner.go:362] scp memory --> /var/lib/docker/image/overlay2/repositories.json (2523 bytes)
	I0318 04:34:09.512190   17322 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0318 04:34:09.576279   17322 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0318 04:34:10.967385   17322 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1.391136333s)
	I0318 04:34:10.967474   17322 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0318 04:34:10.986839   17322 docker.go:685] Got preloaded images: -- stdout --
	k8s.gcr.io/kube-apiserver:v1.24.1
	k8s.gcr.io/kube-proxy:v1.24.1
	k8s.gcr.io/kube-controller-manager:v1.24.1
	k8s.gcr.io/kube-scheduler:v1.24.1
	k8s.gcr.io/etcd:3.5.3-0
	k8s.gcr.io/pause:3.7
	k8s.gcr.io/coredns/coredns:v1.8.6
	<none>:<none>
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0318 04:34:10.986861   17322 docker.go:691] registry.k8s.io/kube-apiserver:v1.24.1 wasn't preloaded
	I0318 04:34:10.986866   17322 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.24.1 registry.k8s.io/kube-controller-manager:v1.24.1 registry.k8s.io/kube-scheduler:v1.24.1 registry.k8s.io/kube-proxy:v1.24.1 registry.k8s.io/pause:3.7 registry.k8s.io/etcd:3.5.3-0 registry.k8s.io/coredns/coredns:v1.8.6 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0318 04:34:10.993530   17322 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.24.1
	I0318 04:34:10.993530   17322 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.24.1
	I0318 04:34:10.993582   17322 image.go:134] retrieving image: registry.k8s.io/coredns/coredns:v1.8.6
	I0318 04:34:10.993683   17322 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.24.1
	I0318 04:34:10.993732   17322 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.24.1
	I0318 04:34:10.993839   17322 image.go:134] retrieving image: registry.k8s.io/pause:3.7
	I0318 04:34:10.993857   17322 image.go:134] retrieving image: registry.k8s.io/etcd:3.5.3-0
	I0318 04:34:10.993895   17322 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0318 04:34:11.003633   17322 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.24.1: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.24.1
	I0318 04:34:11.003747   17322 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.24.1: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.24.1
	I0318 04:34:11.003817   17322 image.go:177] daemon lookup for registry.k8s.io/etcd:3.5.3-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.3-0
	I0318 04:34:11.004171   17322 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.24.1: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.24.1
	I0318 04:34:11.004217   17322 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.24.1: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.24.1
	I0318 04:34:11.004228   17322 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0318 04:34:11.004369   17322 image.go:177] daemon lookup for registry.k8s.io/pause:3.7: Error response from daemon: No such image: registry.k8s.io/pause:3.7
	I0318 04:34:11.004615   17322 image.go:177] daemon lookup for registry.k8s.io/coredns/coredns:v1.8.6: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.8.6
	W0318 04:34:12.903665   17322 image.go:265] image registry.k8s.io/coredns/coredns:v1.8.6 arch mismatch: want arm64 got amd64. fixing
	I0318 04:34:12.904006   17322 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/coredns/coredns:v1.8.6
	I0318 04:34:12.943556   17322 cache_images.go:116] "registry.k8s.io/coredns/coredns:v1.8.6" needs transfer: "registry.k8s.io/coredns/coredns:v1.8.6" does not exist at hash "6af7f860a8197bfa3fdb7dec2061aa33870253e87a1e91c492d55b8a4fd38d14" in container runtime
	I0318 04:34:12.943586   17322 docker.go:337] Removing image: registry.k8s.io/coredns/coredns:v1.8.6
	I0318 04:34:12.943644   17322 ssh_runner.go:195] Run: docker rmi registry.k8s.io/coredns/coredns:v1.8.6
	I0318 04:34:12.957916   17322 cache_images.go:286] Loading image from: /Users/jenkins/minikube-integration/18429-15072/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6
	I0318 04:34:12.958026   17322 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.8.6
	I0318 04:34:12.960694   17322 ssh_runner.go:352] existence check for /var/lib/minikube/images/coredns_v1.8.6: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.8.6: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/coredns_v1.8.6': No such file or directory
	I0318 04:34:12.960711   17322 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18429-15072/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6 --> /var/lib/minikube/images/coredns_v1.8.6 (12318720 bytes)
	I0318 04:34:12.973835   17322 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.24.1
	I0318 04:34:12.998907   17322 docker.go:304] Loading image: /var/lib/minikube/images/coredns_v1.8.6
	I0318 04:34:12.998919   17322 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/coredns_v1.8.6 | docker load"
	I0318 04:34:13.001296   17322 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.24.1" needs transfer: "registry.k8s.io/kube-proxy:v1.24.1" does not exist at hash "fcbd620bbac080b658602c597709b377cb2b3fec134a097a27f94cba9b2ed2fa" in container runtime
	I0318 04:34:13.001316   17322 docker.go:337] Removing image: registry.k8s.io/kube-proxy:v1.24.1
	I0318 04:34:13.001373   17322 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-proxy:v1.24.1
	I0318 04:34:13.026750   17322 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/etcd:3.5.3-0
	I0318 04:34:13.039920   17322 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.24.1
	I0318 04:34:13.042091   17322 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.24.1
	I0318 04:34:13.049268   17322 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/pause:3.7
	I0318 04:34:13.049845   17322 cache_images.go:286] Loading image from: /Users/jenkins/minikube-integration/18429-15072/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.24.1
	I0318 04:34:13.049878   17322 cache_images.go:315] Transferred and loaded /Users/jenkins/minikube-integration/18429-15072/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6 from cache
	I0318 04:34:13.049879   17322 cache_images.go:116] "registry.k8s.io/etcd:3.5.3-0" needs transfer: "registry.k8s.io/etcd:3.5.3-0" does not exist at hash "a9a710bb96df080e6b9c720eb85dc5b832ff84abf77263548d74fedec6466a5a" in container runtime
	I0318 04:34:13.049900   17322 docker.go:337] Removing image: registry.k8s.io/etcd:3.5.3-0
	I0318 04:34:13.049930   17322 ssh_runner.go:195] Run: docker rmi registry.k8s.io/etcd:3.5.3-0
	I0318 04:34:13.050824   17322 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.24.1
	I0318 04:34:13.053776   17322 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.24.1" needs transfer: "registry.k8s.io/kube-apiserver:v1.24.1" does not exist at hash "7c5896a75862a8bc252122185a929cec1393db2c525f6440137d4fbf46bbf6f9" in container runtime
	I0318 04:34:13.053793   17322 docker.go:337] Removing image: registry.k8s.io/kube-apiserver:v1.24.1
	I0318 04:34:13.053842   17322 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-apiserver:v1.24.1
	I0318 04:34:13.063420   17322 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.24.1" needs transfer: "registry.k8s.io/kube-controller-manager:v1.24.1" does not exist at hash "f61bbe9259d7caa580deb6c8e4bfd1780c7b5887efe3aa3adc7cc74f68a27c1b" in container runtime
	I0318 04:34:13.063440   17322 docker.go:337] Removing image: registry.k8s.io/kube-controller-manager:v1.24.1
	I0318 04:34:13.063501   17322 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-controller-manager:v1.24.1
	I0318 04:34:13.074976   17322 cache_images.go:116] "registry.k8s.io/pause:3.7" needs transfer: "registry.k8s.io/pause:3.7" does not exist at hash "e5a475a0380575fb5df454b2e32bdec93e1ec0094d8a61e895b41567cb884550" in container runtime
	I0318 04:34:13.074997   17322 docker.go:337] Removing image: registry.k8s.io/pause:3.7
	I0318 04:34:13.075056   17322 ssh_runner.go:195] Run: docker rmi registry.k8s.io/pause:3.7
	I0318 04:34:13.075083   17322 cache_images.go:286] Loading image from: /Users/jenkins/minikube-integration/18429-15072/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.3-0
	I0318 04:34:13.075126   17322 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.24.1" needs transfer: "registry.k8s.io/kube-scheduler:v1.24.1" does not exist at hash "000c19baf6bba51ff7ae5449f4c7a16d9190cef0263f58070dbf62cea9c4982f" in container runtime
	I0318 04:34:13.075135   17322 docker.go:337] Removing image: registry.k8s.io/kube-scheduler:v1.24.1
	I0318 04:34:13.075153   17322 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-scheduler:v1.24.1
	I0318 04:34:13.095129   17322 cache_images.go:286] Loading image from: /Users/jenkins/minikube-integration/18429-15072/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.24.1
	I0318 04:34:13.099591   17322 cache_images.go:286] Loading image from: /Users/jenkins/minikube-integration/18429-15072/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.24.1
	I0318 04:34:13.099624   17322 cache_images.go:286] Loading image from: /Users/jenkins/minikube-integration/18429-15072/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7
	I0318 04:34:13.099644   17322 cache_images.go:286] Loading image from: /Users/jenkins/minikube-integration/18429-15072/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.24.1
	I0318 04:34:13.099712   17322 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/pause_3.7
	I0318 04:34:13.101230   17322 ssh_runner.go:352] existence check for /var/lib/minikube/images/pause_3.7: stat -c "%s %y" /var/lib/minikube/images/pause_3.7: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/pause_3.7': No such file or directory
	I0318 04:34:13.101239   17322 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18429-15072/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7 --> /var/lib/minikube/images/pause_3.7 (268288 bytes)
	I0318 04:34:13.108436   17322 docker.go:304] Loading image: /var/lib/minikube/images/pause_3.7
	I0318 04:34:13.108445   17322 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/pause_3.7 | docker load"
	I0318 04:34:13.136939   17322 cache_images.go:315] Transferred and loaded /Users/jenkins/minikube-integration/18429-15072/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7 from cache
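
Each missing cached image follows the same pattern: a stat existence check, an scp of the tarball into /var/lib/minikube/images, and a `docker load` fed from that file. A small sketch of the load step with an assumed local path:

// Sketch: stream an image tarball into the daemon, equivalent to the
// logged `sudo cat <file> | docker load`.
package main

import (
	"os"
	"os/exec"
)

func main() {
	f, err := os.Open("/var/lib/minikube/images/pause_3.7")
	if err != nil {
		panic(err)
	}
	defer f.Close()

	cmd := exec.Command("docker", "load")
	cmd.Stdin = f
	cmd.Stdout = os.Stdout
	cmd.Stderr = os.Stderr
	if err := cmd.Run(); err != nil {
		panic(err)
	}
}
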
	W0318 04:34:13.578528   17322 image.go:265] image gcr.io/k8s-minikube/storage-provisioner:v5 arch mismatch: want arm64 got amd64. fixing
	I0318 04:34:13.579074   17322 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0318 04:34:13.618737   17322 cache_images.go:116] "gcr.io/k8s-minikube/storage-provisioner:v5" needs transfer: "gcr.io/k8s-minikube/storage-provisioner:v5" does not exist at hash "66749159455b3f08c8318fe0233122f54d0f5889f9c5fdfb73c3fd9d99895b51" in container runtime
	I0318 04:34:13.618784   17322 docker.go:337] Removing image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0318 04:34:13.618899   17322 ssh_runner.go:195] Run: docker rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I0318 04:34:14.897766   17322 ssh_runner.go:235] Completed: docker rmi gcr.io/k8s-minikube/storage-provisioner:v5: (1.278877375s)
	I0318 04:34:14.897804   17322 cache_images.go:286] Loading image from: /Users/jenkins/minikube-integration/18429-15072/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5
	I0318 04:34:14.898151   17322 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5
	I0318 04:34:14.903298   17322 ssh_runner.go:352] existence check for /var/lib/minikube/images/storage-provisioner_v5: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/storage-provisioner_v5': No such file or directory
	I0318 04:34:14.903331   17322 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18429-15072/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 --> /var/lib/minikube/images/storage-provisioner_v5 (8035840 bytes)
	I0318 04:34:14.955776   17322 docker.go:304] Loading image: /var/lib/minikube/images/storage-provisioner_v5
	I0318 04:34:14.955790   17322 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/storage-provisioner_v5 | docker load"
	I0318 04:34:15.197874   17322 cache_images.go:315] Transferred and loaded /Users/jenkins/minikube-integration/18429-15072/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 from cache
	I0318 04:34:15.197912   17322 cache_images.go:92] duration metric: took 4.211180208s to LoadCachedImages
	W0318 04:34:15.197945   17322 out.go:239] X Unable to load cached images: loading cached images: stat /Users/jenkins/minikube-integration/18429-15072/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.24.1: no such file or directory
	X Unable to load cached images: loading cached images: stat /Users/jenkins/minikube-integration/18429-15072/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.24.1: no such file or directory
	I0318 04:34:15.197952   17322 kubeadm.go:928] updating node { 10.0.2.15 8443 v1.24.1 docker true true} ...
	I0318 04:34:15.198011   17322 kubeadm.go:940] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.24.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/cri-dockerd.sock --hostname-override=running-upgrade-738000 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=10.0.2.15
	
	[Install]
	 config:
	{KubernetesVersion:v1.24.1 ClusterName:running-upgrade-738000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
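
The kubelet unit above is assembled from per-node settings (hostname override, node IP, CRI socket, Kubernetes version). A hypothetical sketch of building that ExecStart line; the type and function names are illustrative, not minikube's:

// Hypothetical sketch: assemble the kubelet ExecStart flags shown above
// from per-node settings.
package main

import (
	"fmt"
	"strings"
)

type node struct {
	name, ip, k8sVersion, criSocket string
}

func kubeletCmd(n node) string {
	flags := []string{
		"--bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf",
		"--config=/var/lib/kubelet/config.yaml",
		"--container-runtime-endpoint=" + n.criSocket,
		"--hostname-override=" + n.name,
		"--kubeconfig=/etc/kubernetes/kubelet.conf",
		"--node-ip=" + n.ip,
	}
	return fmt.Sprintf("/var/lib/minikube/binaries/%s/kubelet %s", n.k8sVersion, strings.Join(flags, " "))
}

func main() {
	fmt.Println(kubeletCmd(node{
		name: "running-upgrade-738000", ip: "10.0.2.15",
		k8sVersion: "v1.24.1", criSocket: "unix:///var/run/cri-dockerd.sock",
	}))
}
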
	I0318 04:34:15.198070   17322 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0318 04:34:15.211220   17322 cni.go:84] Creating CNI manager for ""
	I0318 04:34:15.211231   17322 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0318 04:34:15.211236   17322 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0318 04:34:15.211244   17322 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:10.0.2.15 APIServerPort:8443 KubernetesVersion:v1.24.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:running-upgrade-738000 NodeName:running-upgrade-738000 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "10.0.2.15"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:10.0.2.15 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0318 04:34:15.211324   17322 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 10.0.2.15
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "running-upgrade-738000"
	  kubeletExtraArgs:
	    node-ip: 10.0.2.15
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "10.0.2.15"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.24.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0318 04:34:15.211387   17322 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.24.1
	I0318 04:34:15.214769   17322 binaries.go:44] Found k8s binaries, skipping transfer
	I0318 04:34:15.214798   17322 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0318 04:34:15.217444   17322 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (380 bytes)
	I0318 04:34:15.222532   17322 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0318 04:34:15.227285   17322 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2096 bytes)
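
The kubeadm.yaml shown earlier is rendered from the cluster's settings and copied to /var/tmp/minikube/kubeadm.yaml.new. A stripped-down sketch of rendering just the InitConfiguration stanza with text/template (field set chosen for illustration):

// Minimal sketch: render a cut-down InitConfiguration from node settings.
package main

import (
	"os"
	"text/template"
)

const kubeadmTmpl = `apiVersion: kubeadm.k8s.io/v1beta3
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: {{.NodeIP}}
  bindPort: {{.Port}}
nodeRegistration:
  criSocket: {{.CRISocket}}
  name: "{{.Name}}"
  kubeletExtraArgs:
    node-ip: {{.NodeIP}}
`

func main() {
	t := template.Must(template.New("kubeadm").Parse(kubeadmTmpl))
	err := t.Execute(os.Stdout, map[string]any{
		"NodeIP": "10.0.2.15", "Port": 8443,
		"CRISocket": "unix:///var/run/cri-dockerd.sock",
		"Name":      "running-upgrade-738000",
	})
	if err != nil {
		panic(err)
	}
}
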
	I0318 04:34:15.232829   17322 ssh_runner.go:195] Run: grep 10.0.2.15	control-plane.minikube.internal$ /etc/hosts
	I0318 04:34:15.234126   17322 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0318 04:34:15.314912   17322 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0318 04:34:15.320345   17322 certs.go:68] Setting up /Users/jenkins/minikube-integration/18429-15072/.minikube/profiles/running-upgrade-738000 for IP: 10.0.2.15
	I0318 04:34:15.320351   17322 certs.go:194] generating shared ca certs ...
	I0318 04:34:15.320359   17322 certs.go:226] acquiring lock for ca certs: {Name:mk30e64e6a2f5ccd376efb026974022e10fa3463 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0318 04:34:15.320583   17322 certs.go:235] skipping valid "minikubeCA" ca cert: /Users/jenkins/minikube-integration/18429-15072/.minikube/ca.key
	I0318 04:34:15.320633   17322 certs.go:235] skipping valid "proxyClientCA" ca cert: /Users/jenkins/minikube-integration/18429-15072/.minikube/proxy-client-ca.key
	I0318 04:34:15.320638   17322 certs.go:256] generating profile certs ...
	I0318 04:34:15.320698   17322 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /Users/jenkins/minikube-integration/18429-15072/.minikube/profiles/running-upgrade-738000/client.key
	I0318 04:34:15.320715   17322 certs.go:363] generating signed profile cert for "minikube": /Users/jenkins/minikube-integration/18429-15072/.minikube/profiles/running-upgrade-738000/apiserver.key.a09f26a5
	I0318 04:34:15.320728   17322 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/18429-15072/.minikube/profiles/running-upgrade-738000/apiserver.crt.a09f26a5 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 10.0.2.15]
	I0318 04:34:15.382564   17322 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/18429-15072/.minikube/profiles/running-upgrade-738000/apiserver.crt.a09f26a5 ...
	I0318 04:34:15.382568   17322 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/18429-15072/.minikube/profiles/running-upgrade-738000/apiserver.crt.a09f26a5: {Name:mkf442238c29a653f5613f53215d1918eacc7fdc Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0318 04:34:15.382789   17322 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/18429-15072/.minikube/profiles/running-upgrade-738000/apiserver.key.a09f26a5 ...
	I0318 04:34:15.382797   17322 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/18429-15072/.minikube/profiles/running-upgrade-738000/apiserver.key.a09f26a5: {Name:mk160880fdc2a5ce3a6744c8fe5a75d8efe36fa7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0318 04:34:15.382910   17322 certs.go:381] copying /Users/jenkins/minikube-integration/18429-15072/.minikube/profiles/running-upgrade-738000/apiserver.crt.a09f26a5 -> /Users/jenkins/minikube-integration/18429-15072/.minikube/profiles/running-upgrade-738000/apiserver.crt
	I0318 04:34:15.383113   17322 certs.go:385] copying /Users/jenkins/minikube-integration/18429-15072/.minikube/profiles/running-upgrade-738000/apiserver.key.a09f26a5 -> /Users/jenkins/minikube-integration/18429-15072/.minikube/profiles/running-upgrade-738000/apiserver.key
	I0318 04:34:15.383249   17322 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /Users/jenkins/minikube-integration/18429-15072/.minikube/profiles/running-upgrade-738000/proxy-client.key
	I0318 04:34:15.383360   17322 certs.go:484] found cert: /Users/jenkins/minikube-integration/18429-15072/.minikube/certs/15481.pem (1338 bytes)
	W0318 04:34:15.383381   17322 certs.go:480] ignoring /Users/jenkins/minikube-integration/18429-15072/.minikube/certs/15481_empty.pem, impossibly tiny 0 bytes
	I0318 04:34:15.383386   17322 certs.go:484] found cert: /Users/jenkins/minikube-integration/18429-15072/.minikube/certs/ca-key.pem (1679 bytes)
	I0318 04:34:15.383402   17322 certs.go:484] found cert: /Users/jenkins/minikube-integration/18429-15072/.minikube/certs/ca.pem (1082 bytes)
	I0318 04:34:15.383419   17322 certs.go:484] found cert: /Users/jenkins/minikube-integration/18429-15072/.minikube/certs/cert.pem (1123 bytes)
	I0318 04:34:15.383437   17322 certs.go:484] found cert: /Users/jenkins/minikube-integration/18429-15072/.minikube/certs/key.pem (1679 bytes)
	I0318 04:34:15.383475   17322 certs.go:484] found cert: /Users/jenkins/minikube-integration/18429-15072/.minikube/files/etc/ssl/certs/154812.pem (1708 bytes)
	I0318 04:34:15.383783   17322 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18429-15072/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0318 04:34:15.391107   17322 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18429-15072/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0318 04:34:15.398727   17322 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18429-15072/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0318 04:34:15.406190   17322 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18429-15072/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0318 04:34:15.413289   17322 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18429-15072/.minikube/profiles/running-upgrade-738000/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I0318 04:34:15.419809   17322 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18429-15072/.minikube/profiles/running-upgrade-738000/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0318 04:34:15.426947   17322 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18429-15072/.minikube/profiles/running-upgrade-738000/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0318 04:34:15.434426   17322 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18429-15072/.minikube/profiles/running-upgrade-738000/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0318 04:34:15.441270   17322 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18429-15072/.minikube/files/etc/ssl/certs/154812.pem --> /usr/share/ca-certificates/154812.pem (1708 bytes)
	I0318 04:34:15.448185   17322 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18429-15072/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0318 04:34:15.455435   17322 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18429-15072/.minikube/certs/15481.pem --> /usr/share/ca-certificates/15481.pem (1338 bytes)
	I0318 04:34:15.462313   17322 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0318 04:34:15.467091   17322 ssh_runner.go:195] Run: openssl version
	I0318 04:34:15.468946   17322 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/15481.pem && ln -fs /usr/share/ca-certificates/15481.pem /etc/ssl/certs/15481.pem"
	I0318 04:34:15.472539   17322 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/15481.pem
	I0318 04:34:15.474123   17322 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Mar 18 11:20 /usr/share/ca-certificates/15481.pem
	I0318 04:34:15.474148   17322 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/15481.pem
	I0318 04:34:15.475936   17322 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/15481.pem /etc/ssl/certs/51391683.0"
	I0318 04:34:15.479194   17322 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/154812.pem && ln -fs /usr/share/ca-certificates/154812.pem /etc/ssl/certs/154812.pem"
	I0318 04:34:15.482066   17322 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/154812.pem
	I0318 04:34:15.483504   17322 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Mar 18 11:20 /usr/share/ca-certificates/154812.pem
	I0318 04:34:15.483526   17322 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/154812.pem
	I0318 04:34:15.485273   17322 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/154812.pem /etc/ssl/certs/3ec20f2e.0"
	I0318 04:34:15.488549   17322 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0318 04:34:15.492089   17322 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0318 04:34:15.493548   17322 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Mar 18 11:33 /usr/share/ca-certificates/minikubeCA.pem
	I0318 04:34:15.493575   17322 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0318 04:34:15.495463   17322 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
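
The three test -L || ln -fs commands give each CA in /usr/share/ca-certificates an OpenSSL subject-hash link (e.g. b5213941.0) in /etc/ssl/certs so the system trust store can find it. A sketch of computing the hash and creating the link for the minikubeCA certificate:

// Sketch of the symlink step above: OpenSSL looks CAs up by subject hash,
// so each certificate gets a <hash>.0 link in /etc/ssl/certs.
package main

import (
	"fmt"
	"os"
	"os/exec"
	"path/filepath"
	"strings"
)

func main() {
	cert := "/usr/share/ca-certificates/minikubeCA.pem"
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", cert).Output()
	if err != nil {
		panic(err)
	}
	link := filepath.Join("/etc/ssl/certs", strings.TrimSpace(string(out))+".0")
	// os.Symlink fails if the link exists; the logged shell uses `test -L || ln -fs`.
	if err := os.Symlink(cert, link); err != nil && !os.IsExist(err) {
		panic(err)
	}
	fmt.Println("linked", link, "->", cert)
}
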
	I0318 04:34:15.498050   17322 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0318 04:34:15.499580   17322 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0318 04:34:15.501321   17322 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0318 04:34:15.503266   17322 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0318 04:34:15.505147   17322 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0318 04:34:15.507241   17322 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0318 04:34:15.509029   17322 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0318 04:34:15.510925   17322 kubeadm.go:391] StartCluster: {Name:running-upgrade-738000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17991/minikube-v1.32.1-1710520390-17991-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:53312 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.1 ClusterName:running-upgrade-738000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:0s}
	I0318 04:34:15.510994   17322 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0318 04:34:15.522098   17322 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	W0318 04:34:15.526207   17322 kubeadm.go:404] apiserver tunnel failed: apiserver port not set
	I0318 04:34:15.526213   17322 kubeadm.go:407] found existing configuration files, will attempt cluster restart
	I0318 04:34:15.526216   17322 kubeadm.go:587] restartPrimaryControlPlane start ...
	I0318 04:34:15.526237   17322 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0318 04:34:15.529555   17322 kubeadm.go:129] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0318 04:34:15.529587   17322 kubeconfig.go:47] verify endpoint returned: get endpoint: "running-upgrade-738000" does not appear in /Users/jenkins/minikube-integration/18429-15072/kubeconfig
	I0318 04:34:15.529601   17322 kubeconfig.go:62] /Users/jenkins/minikube-integration/18429-15072/kubeconfig needs updating (will repair): [kubeconfig missing "running-upgrade-738000" cluster setting kubeconfig missing "running-upgrade-738000" context setting]
	I0318 04:34:15.529805   17322 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/18429-15072/kubeconfig: {Name:mkeb86e27ccdf30a065b43661cfe2af2dc198b46 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0318 04:34:15.530462   17322 kapi.go:59] client config for running-upgrade-738000: &rest.Config{Host:"https://10.0.2.15:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/18429-15072/.minikube/profiles/running-upgrade-738000/client.crt", KeyFile:"/Users/jenkins/minikube-integration/18429-15072/.minikube/profiles/running-upgrade-738000/client.key", CAFile:"/Users/jenkins/minikube-integration/18429-15072/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x103bb2a80), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0318 04:34:15.531225   17322 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0318 04:34:15.533916   17322 kubeadm.go:634] detected kubeadm config drift (will reconfigure cluster from new /var/tmp/minikube/kubeadm.yaml):
	-- stdout --
	--- /var/tmp/minikube/kubeadm.yaml
	+++ /var/tmp/minikube/kubeadm.yaml.new
	@@ -11,7 +11,7 @@
	       - signing
	       - authentication
	 nodeRegistration:
	-  criSocket: /var/run/cri-dockerd.sock
	+  criSocket: unix:///var/run/cri-dockerd.sock
	   name: "running-upgrade-738000"
	   kubeletExtraArgs:
	     node-ip: 10.0.2.15
	@@ -49,7 +49,9 @@
	 authentication:
	   x509:
	     clientCAFile: /var/lib/minikube/certs/ca.crt
	-cgroupDriver: systemd
	+cgroupDriver: cgroupfs
	+hairpinMode: hairpin-veth
	+runtimeRequestTimeout: 15m
	 clusterDomain: "cluster.local"
	 # disable disk resource management by default
	 imageGCHighThresholdPercent: 100
	
	-- /stdout --
	I0318 04:34:15.533921   17322 kubeadm.go:1154] stopping kube-system containers ...
	I0318 04:34:15.533957   17322 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0318 04:34:15.545106   17322 docker.go:483] Stopping containers: [d0b96d59edfd faec7e7579ea 643f2657216e 7c0fc8a96a29 2ed27f531543 eca65001ee00 49493fc46f02 58db8d274817 9cb3697bf097 7147d93a4ffc c52784390773 884b6b371ffa 7aff71f8a140 3ef7acd9b90a]
	I0318 04:34:15.545185   17322 ssh_runner.go:195] Run: docker stop d0b96d59edfd faec7e7579ea 643f2657216e 7c0fc8a96a29 2ed27f531543 eca65001ee00 49493fc46f02 58db8d274817 9cb3697bf097 7147d93a4ffc c52784390773 884b6b371ffa 7aff71f8a140 3ef7acd9b90a
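
Restarting the primary control plane begins by listing every k8s_*_(kube-system)_ container, running or not, and stopping the lot before kubelet itself is stopped. The same two docker invocations as a short Go sketch:

// Sketch of the "stop kube-system containers" step: list matching container
// IDs with the same filters as the log, then stop them in one call.
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	out, err := exec.Command("docker", "ps", "-a",
		"--filter=name=k8s_.*_(kube-system)_", "--format", "{{.ID}}").Output()
	if err != nil {
		panic(err)
	}
	ids := strings.Fields(string(out))
	if len(ids) == 0 {
		fmt.Println("no kube-system containers found")
		return
	}
	if err := exec.Command("docker", append([]string{"stop"}, ids...)...).Run(); err != nil {
		panic(err)
	}
	fmt.Println("stopped:", ids)
}
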
	I0318 04:34:15.556386   17322 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0318 04:34:15.658285   17322 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0318 04:34:15.662890   17322 kubeadm.go:156] found existing configuration files:
	-rw------- 1 root root 5643 Mar 18 11:33 /etc/kubernetes/admin.conf
	-rw------- 1 root root 5653 Mar 18 11:33 /etc/kubernetes/controller-manager.conf
	-rw------- 1 root root 2027 Mar 18 11:34 /etc/kubernetes/kubelet.conf
	-rw------- 1 root root 5597 Mar 18 11:33 /etc/kubernetes/scheduler.conf
	
	I0318 04:34:15.662931   17322 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:53312 /etc/kubernetes/admin.conf
	I0318 04:34:15.666724   17322 kubeadm.go:162] "https://control-plane.minikube.internal:53312" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:53312 /etc/kubernetes/admin.conf: Process exited with status 1
	stdout:
	
	stderr:
	I0318 04:34:15.666757   17322 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0318 04:34:15.670143   17322 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:53312 /etc/kubernetes/kubelet.conf
	I0318 04:34:15.673343   17322 kubeadm.go:162] "https://control-plane.minikube.internal:53312" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:53312 /etc/kubernetes/kubelet.conf: Process exited with status 1
	stdout:
	
	stderr:
	I0318 04:34:15.673365   17322 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0318 04:34:15.676283   17322 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:53312 /etc/kubernetes/controller-manager.conf
	I0318 04:34:15.679467   17322 kubeadm.go:162] "https://control-plane.minikube.internal:53312" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:53312 /etc/kubernetes/controller-manager.conf: Process exited with status 1
	stdout:
	
	stderr:
	I0318 04:34:15.679485   17322 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0318 04:34:15.682532   17322 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:53312 /etc/kubernetes/scheduler.conf
	I0318 04:34:15.685000   17322 kubeadm.go:162] "https://control-plane.minikube.internal:53312" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:53312 /etc/kubernetes/scheduler.conf: Process exited with status 1
	stdout:
	
	stderr:
	I0318 04:34:15.685022   17322 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
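	(Editor's note: the grep/rm sequence above prunes every kubeconfig that no longer references the expected control-plane endpoint before kubeadm is re-run; grep exiting with status 1 simply means the endpoint string was not found in that file. A rough, hypothetical equivalent of that check in Go, assuming direct file access rather than minikube's SSH runner:

	package main

	import (
		"fmt"
		"os"
		"strings"
	)

	// pruneStaleKubeconfigs removes any config file that does not mention the
	// expected API endpoint, mirroring the "grep <endpoint> || rm -f" pattern.
	func pruneStaleKubeconfigs(endpoint string, paths []string) {
		for _, p := range paths {
			data, err := os.ReadFile(p)
			if err != nil {
				continue // missing file: nothing to prune
			}
			if !strings.Contains(string(data), endpoint) {
				fmt.Printf("%s does not reference %s - removing\n", p, endpoint)
				os.Remove(p)
			}
		}
	}

	func main() {
		pruneStaleKubeconfigs("https://control-plane.minikube.internal:53312", []string{
			"/etc/kubernetes/admin.conf",
			"/etc/kubernetes/kubelet.conf",
			"/etc/kubernetes/controller-manager.conf",
			"/etc/kubernetes/scheduler.conf",
		})
	}

	End of editor's note; the log continues below.)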
	I0318 04:34:15.687771   17322 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0318 04:34:15.690742   17322 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0318 04:34:15.734899   17322 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0318 04:34:16.377818   17322 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0318 04:34:16.559728   17322 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0318 04:34:16.592982   17322 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0318 04:34:16.618730   17322 api_server.go:52] waiting for apiserver process to appear ...
	I0318 04:34:16.618806   17322 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 04:34:17.120995   17322 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 04:34:17.621082   17322 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 04:34:18.120843   17322 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 04:34:18.125433   17322 api_server.go:72] duration metric: took 1.5067535s to wait for apiserver process to appear ...
	I0318 04:34:18.125444   17322 api_server.go:88] waiting for apiserver healthz status ...
	I0318 04:34:18.125471   17322 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0318 04:34:23.127392   17322 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0318 04:34:23.127429   17322 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0318 04:34:28.127692   17322 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0318 04:34:28.127776   17322 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0318 04:34:33.128797   17322 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0318 04:34:33.128894   17322 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0318 04:34:38.129935   17322 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0318 04:34:38.129984   17322 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0318 04:34:43.131029   17322 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0318 04:34:43.131119   17322 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0318 04:34:48.132863   17322 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0318 04:34:48.132950   17322 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0318 04:34:53.135214   17322 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0318 04:34:53.135309   17322 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0318 04:34:58.137839   17322 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0318 04:34:58.137925   17322 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0318 04:35:03.140375   17322 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0318 04:35:03.140465   17322 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0318 04:35:08.142987   17322 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0318 04:35:08.143032   17322 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0318 04:35:13.143428   17322 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0318 04:35:13.143512   17322 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0318 04:35:18.146051   17322 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
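	(Editor's note: each healthz probe above gives up after roughly five seconds ("context deadline exceeded") and the loop keeps retrying until an overall deadline passes, at which point minikube falls back to collecting component logs for diagnosis. A minimal sketch of that polling pattern, with timeouts chosen to match the spacing seen in the log and TLS verification skipped because the probe hits the node IP with a self-signed certificate; this is an illustration, not minikube's actual implementation:

	package main

	import (
		"crypto/tls"
		"fmt"
		"net/http"
		"time"
	)

	// waitForHealthz polls the apiserver /healthz endpoint with a short
	// per-request timeout until it answers 200 OK or the overall deadline passes.
	func waitForHealthz(url string, perRequest, overall time.Duration) bool {
		client := &http.Client{
			Timeout: perRequest,
			Transport: &http.Transport{
				TLSClientConfig: &tls.Config{InsecureSkipVerify: true}, // self-signed apiserver cert
			},
		}
		deadline := time.Now().Add(overall)
		for time.Now().Before(deadline) {
			resp, err := client.Get(url)
			if err == nil {
				resp.Body.Close()
				if resp.StatusCode == http.StatusOK {
					return true
				}
			}
			time.Sleep(500 * time.Millisecond)
		}
		return false // caller falls back to gathering container logs
	}

	func main() {
		ok := waitForHealthz("https://10.0.2.15:8443/healthz", 5*time.Second, time.Minute)
		fmt.Println("apiserver healthy:", ok)
	}

	End of editor's note; the log continues below.)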
	I0318 04:35:18.146690   17322 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0318 04:35:18.192459   17322 logs.go:276] 2 containers: [8cbe0799ab57 7147d93a4ffc]
	I0318 04:35:18.192613   17322 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0318 04:35:18.212151   17322 logs.go:276] 2 containers: [3597b574be66 c52784390773]
	I0318 04:35:18.212258   17322 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0318 04:35:18.228614   17322 logs.go:276] 1 containers: [9445dc83224c]
	I0318 04:35:18.228693   17322 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0318 04:35:18.239421   17322 logs.go:276] 2 containers: [083e5435c9c9 2ed27f531543]
	I0318 04:35:18.239484   17322 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0318 04:35:18.250048   17322 logs.go:276] 1 containers: [313af99b8193]
	I0318 04:35:18.250118   17322 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0318 04:35:18.263685   17322 logs.go:276] 2 containers: [c9b5a8296878 eca65001ee00]
	I0318 04:35:18.263756   17322 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0318 04:35:18.273784   17322 logs.go:276] 0 containers: []
	W0318 04:35:18.273796   17322 logs.go:278] No container was found matching "kindnet"
	I0318 04:35:18.273847   17322 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0318 04:35:18.284020   17322 logs.go:276] 2 containers: [1fb3e049cd41 f4f9d7351a87]
	I0318 04:35:18.284040   17322 logs.go:123] Gathering logs for storage-provisioner [f4f9d7351a87] ...
	I0318 04:35:18.284045   17322 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f4f9d7351a87"
	I0318 04:35:18.295339   17322 logs.go:123] Gathering logs for dmesg ...
	I0318 04:35:18.295350   17322 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 04:35:18.299647   17322 logs.go:123] Gathering logs for kube-apiserver [8cbe0799ab57] ...
	I0318 04:35:18.299656   17322 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8cbe0799ab57"
	I0318 04:35:18.315329   17322 logs.go:123] Gathering logs for kube-controller-manager [c9b5a8296878] ...
	I0318 04:35:18.315341   17322 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c9b5a8296878"
	I0318 04:35:18.333603   17322 logs.go:123] Gathering logs for kube-controller-manager [eca65001ee00] ...
	I0318 04:35:18.333613   17322 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 eca65001ee00"
	I0318 04:35:18.345124   17322 logs.go:123] Gathering logs for coredns [9445dc83224c] ...
	I0318 04:35:18.345133   17322 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9445dc83224c"
	I0318 04:35:18.355990   17322 logs.go:123] Gathering logs for Docker ...
	I0318 04:35:18.356000   17322 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0318 04:35:18.381866   17322 logs.go:123] Gathering logs for container status ...
	I0318 04:35:18.381873   17322 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 04:35:18.394525   17322 logs.go:123] Gathering logs for kube-apiserver [7147d93a4ffc] ...
	I0318 04:35:18.394538   17322 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7147d93a4ffc"
	I0318 04:35:18.420282   17322 logs.go:123] Gathering logs for kube-scheduler [083e5435c9c9] ...
	I0318 04:35:18.420293   17322 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 083e5435c9c9"
	I0318 04:35:18.433447   17322 logs.go:123] Gathering logs for kube-scheduler [2ed27f531543] ...
	I0318 04:35:18.433458   17322 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2ed27f531543"
	I0318 04:35:18.447629   17322 logs.go:123] Gathering logs for storage-provisioner [1fb3e049cd41] ...
	I0318 04:35:18.447639   17322 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1fb3e049cd41"
	I0318 04:35:18.463785   17322 logs.go:123] Gathering logs for kube-proxy [313af99b8193] ...
	I0318 04:35:18.463799   17322 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 313af99b8193"
	I0318 04:35:18.474998   17322 logs.go:123] Gathering logs for kubelet ...
	I0318 04:35:18.475008   17322 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0318 04:35:18.511513   17322 logs.go:138] Found kubelet problem: Mar 18 11:34:35 running-upgrade-738000 kubelet[3738]: W0318 11:34:35.864044    3738 reflector.go:324] object-"kube-system"/"kube-proxy": failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:running-upgrade-738000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-738000' and this object
	W0318 04:35:18.511609   17322 logs.go:138] Found kubelet problem: Mar 18 11:34:35 running-upgrade-738000 kubelet[3738]: E0318 11:34:35.864102    3738 reflector.go:138] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:running-upgrade-738000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-738000' and this object
	I0318 04:35:18.513322   17322 logs.go:123] Gathering logs for describe nodes ...
	I0318 04:35:18.513329   17322 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0318 04:35:18.582886   17322 logs.go:123] Gathering logs for etcd [3597b574be66] ...
	I0318 04:35:18.582896   17322 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3597b574be66"
	I0318 04:35:18.597476   17322 logs.go:123] Gathering logs for etcd [c52784390773] ...
	I0318 04:35:18.597489   17322 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c52784390773"
	I0318 04:35:18.612454   17322 out.go:304] Setting ErrFile to fd 2...
	I0318 04:35:18.612465   17322 out.go:338] TERM=,COLORTERM=, which probably does not support color
	W0318 04:35:18.612489   17322 out.go:239] X Problems detected in kubelet:
	X Problems detected in kubelet:
	W0318 04:35:18.612493   17322 out.go:239]   Mar 18 11:34:35 running-upgrade-738000 kubelet[3738]: W0318 11:34:35.864044    3738 reflector.go:324] object-"kube-system"/"kube-proxy": failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:running-upgrade-738000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-738000' and this object
	  Mar 18 11:34:35 running-upgrade-738000 kubelet[3738]: W0318 11:34:35.864044    3738 reflector.go:324] object-"kube-system"/"kube-proxy": failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:running-upgrade-738000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-738000' and this object
	W0318 04:35:18.612496   17322 out.go:239]   Mar 18 11:34:35 running-upgrade-738000 kubelet[3738]: E0318 11:34:35.864102    3738 reflector.go:138] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:running-upgrade-738000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-738000' and this object
	  Mar 18 11:34:35 running-upgrade-738000 kubelet[3738]: E0318 11:34:35.864102    3738 reflector.go:138] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:running-upgrade-738000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-738000' and this object
	I0318 04:35:18.612501   17322 out.go:304] Setting ErrFile to fd 2...
	I0318 04:35:18.612503   17322 out.go:338] TERM=,COLORTERM=, which probably does not support color
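	(Editor's note: the diagnostic pass above locates each control-plane container with "docker ps -a --filter=name=k8s_<component> --format={{.ID}}" and then tails the last 400 lines of every match; the same cycle repeats below on each failed healthz round. A condensed, hypothetical sketch of that gathering loop, assuming the docker CLI is on PATH:

	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	// containerIDs returns the IDs of all containers whose name matches k8s_<component>.
	func containerIDs(component string) []string {
		out, err := exec.Command("docker", "ps", "-a",
			"--filter", "name=k8s_"+component, "--format", "{{.ID}}").Output()
		if err != nil {
			return nil
		}
		return strings.Fields(string(out))
	}

	func main() {
		for _, component := range []string{"kube-apiserver", "etcd", "kube-scheduler", "kube-controller-manager"} {
			for _, id := range containerIDs(component) {
				// Tail the last 400 lines of each container, as in the log above.
				logs, _ := exec.Command("docker", "logs", "--tail", "400", id).CombinedOutput()
				fmt.Printf("=== %s [%s] ===\n%s\n", component, id, logs)
			}
		}
	}

	End of editor's note; the log continues below.)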
	I0318 04:35:28.615356   17322 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0318 04:35:33.617762   17322 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0318 04:35:33.618206   17322 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0318 04:35:33.655593   17322 logs.go:276] 2 containers: [8cbe0799ab57 7147d93a4ffc]
	I0318 04:35:33.655728   17322 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0318 04:35:33.681442   17322 logs.go:276] 2 containers: [3597b574be66 c52784390773]
	I0318 04:35:33.681554   17322 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0318 04:35:33.700091   17322 logs.go:276] 1 containers: [9445dc83224c]
	I0318 04:35:33.700172   17322 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0318 04:35:33.713078   17322 logs.go:276] 2 containers: [083e5435c9c9 2ed27f531543]
	I0318 04:35:33.713159   17322 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0318 04:35:33.723433   17322 logs.go:276] 1 containers: [313af99b8193]
	I0318 04:35:33.723495   17322 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0318 04:35:33.733678   17322 logs.go:276] 2 containers: [c9b5a8296878 eca65001ee00]
	I0318 04:35:33.733735   17322 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0318 04:35:33.743874   17322 logs.go:276] 0 containers: []
	W0318 04:35:33.743884   17322 logs.go:278] No container was found matching "kindnet"
	I0318 04:35:33.743956   17322 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0318 04:35:33.754016   17322 logs.go:276] 2 containers: [1fb3e049cd41 f4f9d7351a87]
	I0318 04:35:33.754033   17322 logs.go:123] Gathering logs for storage-provisioner [f4f9d7351a87] ...
	I0318 04:35:33.754038   17322 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f4f9d7351a87"
	I0318 04:35:33.765306   17322 logs.go:123] Gathering logs for Docker ...
	I0318 04:35:33.765321   17322 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0318 04:35:33.791328   17322 logs.go:123] Gathering logs for etcd [3597b574be66] ...
	I0318 04:35:33.791339   17322 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3597b574be66"
	I0318 04:35:33.805195   17322 logs.go:123] Gathering logs for etcd [c52784390773] ...
	I0318 04:35:33.805208   17322 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c52784390773"
	I0318 04:35:33.819716   17322 logs.go:123] Gathering logs for coredns [9445dc83224c] ...
	I0318 04:35:33.819727   17322 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9445dc83224c"
	I0318 04:35:33.830959   17322 logs.go:123] Gathering logs for storage-provisioner [1fb3e049cd41] ...
	I0318 04:35:33.830970   17322 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1fb3e049cd41"
	I0318 04:35:33.842665   17322 logs.go:123] Gathering logs for kube-apiserver [8cbe0799ab57] ...
	I0318 04:35:33.842678   17322 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8cbe0799ab57"
	I0318 04:35:33.856842   17322 logs.go:123] Gathering logs for kube-apiserver [7147d93a4ffc] ...
	I0318 04:35:33.856854   17322 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7147d93a4ffc"
	I0318 04:35:33.880774   17322 logs.go:123] Gathering logs for kube-proxy [313af99b8193] ...
	I0318 04:35:33.880784   17322 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 313af99b8193"
	I0318 04:35:33.892453   17322 logs.go:123] Gathering logs for kube-controller-manager [eca65001ee00] ...
	I0318 04:35:33.892467   17322 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 eca65001ee00"
	I0318 04:35:33.908406   17322 logs.go:123] Gathering logs for kubelet ...
	I0318 04:35:33.908419   17322 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0318 04:35:33.945158   17322 logs.go:138] Found kubelet problem: Mar 18 11:34:35 running-upgrade-738000 kubelet[3738]: W0318 11:34:35.864044    3738 reflector.go:324] object-"kube-system"/"kube-proxy": failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:running-upgrade-738000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-738000' and this object
	W0318 04:35:33.945251   17322 logs.go:138] Found kubelet problem: Mar 18 11:34:35 running-upgrade-738000 kubelet[3738]: E0318 11:34:35.864102    3738 reflector.go:138] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:running-upgrade-738000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-738000' and this object
	I0318 04:35:33.946900   17322 logs.go:123] Gathering logs for describe nodes ...
	I0318 04:35:33.946905   17322 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0318 04:35:33.982349   17322 logs.go:123] Gathering logs for kube-scheduler [083e5435c9c9] ...
	I0318 04:35:33.982363   17322 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 083e5435c9c9"
	I0318 04:35:33.994680   17322 logs.go:123] Gathering logs for kube-controller-manager [c9b5a8296878] ...
	I0318 04:35:33.994693   17322 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c9b5a8296878"
	I0318 04:35:34.011700   17322 logs.go:123] Gathering logs for dmesg ...
	I0318 04:35:34.011710   17322 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 04:35:34.016685   17322 logs.go:123] Gathering logs for kube-scheduler [2ed27f531543] ...
	I0318 04:35:34.016691   17322 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2ed27f531543"
	I0318 04:35:34.030836   17322 logs.go:123] Gathering logs for container status ...
	I0318 04:35:34.030849   17322 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 04:35:34.042521   17322 out.go:304] Setting ErrFile to fd 2...
	I0318 04:35:34.042534   17322 out.go:338] TERM=,COLORTERM=, which probably does not support color
	W0318 04:35:34.042557   17322 out.go:239] X Problems detected in kubelet:
	X Problems detected in kubelet:
	W0318 04:35:34.042567   17322 out.go:239]   Mar 18 11:34:35 running-upgrade-738000 kubelet[3738]: W0318 11:34:35.864044    3738 reflector.go:324] object-"kube-system"/"kube-proxy": failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:running-upgrade-738000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-738000' and this object
	  Mar 18 11:34:35 running-upgrade-738000 kubelet[3738]: W0318 11:34:35.864044    3738 reflector.go:324] object-"kube-system"/"kube-proxy": failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:running-upgrade-738000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-738000' and this object
	W0318 04:35:34.042577   17322 out.go:239]   Mar 18 11:34:35 running-upgrade-738000 kubelet[3738]: E0318 11:34:35.864102    3738 reflector.go:138] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:running-upgrade-738000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-738000' and this object
	  Mar 18 11:34:35 running-upgrade-738000 kubelet[3738]: E0318 11:34:35.864102    3738 reflector.go:138] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:running-upgrade-738000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-738000' and this object
	I0318 04:35:34.042583   17322 out.go:304] Setting ErrFile to fd 2...
	I0318 04:35:34.042586   17322 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0318 04:35:44.044829   17322 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0318 04:35:49.047461   17322 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0318 04:35:49.047951   17322 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0318 04:35:49.090781   17322 logs.go:276] 2 containers: [8cbe0799ab57 7147d93a4ffc]
	I0318 04:35:49.090908   17322 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0318 04:35:49.122438   17322 logs.go:276] 2 containers: [3597b574be66 c52784390773]
	I0318 04:35:49.122521   17322 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0318 04:35:49.136154   17322 logs.go:276] 1 containers: [9445dc83224c]
	I0318 04:35:49.136226   17322 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0318 04:35:49.149061   17322 logs.go:276] 2 containers: [083e5435c9c9 2ed27f531543]
	I0318 04:35:49.149126   17322 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0318 04:35:49.159615   17322 logs.go:276] 1 containers: [313af99b8193]
	I0318 04:35:49.159674   17322 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0318 04:35:49.169949   17322 logs.go:276] 2 containers: [c9b5a8296878 eca65001ee00]
	I0318 04:35:49.170015   17322 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0318 04:35:49.180202   17322 logs.go:276] 0 containers: []
	W0318 04:35:49.180218   17322 logs.go:278] No container was found matching "kindnet"
	I0318 04:35:49.180270   17322 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0318 04:35:49.192135   17322 logs.go:276] 2 containers: [1fb3e049cd41 f4f9d7351a87]
	I0318 04:35:49.192177   17322 logs.go:123] Gathering logs for kube-apiserver [8cbe0799ab57] ...
	I0318 04:35:49.192183   17322 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8cbe0799ab57"
	I0318 04:35:49.206141   17322 logs.go:123] Gathering logs for describe nodes ...
	I0318 04:35:49.206153   17322 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0318 04:35:49.239908   17322 logs.go:123] Gathering logs for etcd [3597b574be66] ...
	I0318 04:35:49.239920   17322 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3597b574be66"
	I0318 04:35:49.253779   17322 logs.go:123] Gathering logs for kube-proxy [313af99b8193] ...
	I0318 04:35:49.253789   17322 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 313af99b8193"
	I0318 04:35:49.265275   17322 logs.go:123] Gathering logs for kube-controller-manager [c9b5a8296878] ...
	I0318 04:35:49.265284   17322 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c9b5a8296878"
	I0318 04:35:49.283357   17322 logs.go:123] Gathering logs for storage-provisioner [1fb3e049cd41] ...
	I0318 04:35:49.283366   17322 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1fb3e049cd41"
	I0318 04:35:49.294474   17322 logs.go:123] Gathering logs for storage-provisioner [f4f9d7351a87] ...
	I0318 04:35:49.294484   17322 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f4f9d7351a87"
	I0318 04:35:49.305494   17322 logs.go:123] Gathering logs for Docker ...
	I0318 04:35:49.305506   17322 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0318 04:35:49.331109   17322 logs.go:123] Gathering logs for container status ...
	I0318 04:35:49.331115   17322 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 04:35:49.344024   17322 logs.go:123] Gathering logs for kubelet ...
	I0318 04:35:49.344035   17322 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0318 04:35:49.384270   17322 logs.go:138] Found kubelet problem: Mar 18 11:34:35 running-upgrade-738000 kubelet[3738]: W0318 11:34:35.864044    3738 reflector.go:324] object-"kube-system"/"kube-proxy": failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:running-upgrade-738000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-738000' and this object
	W0318 04:35:49.384372   17322 logs.go:138] Found kubelet problem: Mar 18 11:34:35 running-upgrade-738000 kubelet[3738]: E0318 11:34:35.864102    3738 reflector.go:138] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:running-upgrade-738000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-738000' and this object
	I0318 04:35:49.386098   17322 logs.go:123] Gathering logs for kube-apiserver [7147d93a4ffc] ...
	I0318 04:35:49.386103   17322 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7147d93a4ffc"
	I0318 04:35:49.414613   17322 logs.go:123] Gathering logs for kube-controller-manager [eca65001ee00] ...
	I0318 04:35:49.414630   17322 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 eca65001ee00"
	I0318 04:35:49.426154   17322 logs.go:123] Gathering logs for dmesg ...
	I0318 04:35:49.426169   17322 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 04:35:49.430345   17322 logs.go:123] Gathering logs for etcd [c52784390773] ...
	I0318 04:35:49.430352   17322 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c52784390773"
	I0318 04:35:49.444706   17322 logs.go:123] Gathering logs for coredns [9445dc83224c] ...
	I0318 04:35:49.444723   17322 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9445dc83224c"
	I0318 04:35:49.455890   17322 logs.go:123] Gathering logs for kube-scheduler [083e5435c9c9] ...
	I0318 04:35:49.455900   17322 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 083e5435c9c9"
	I0318 04:35:49.467302   17322 logs.go:123] Gathering logs for kube-scheduler [2ed27f531543] ...
	I0318 04:35:49.467310   17322 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2ed27f531543"
	I0318 04:35:49.481072   17322 out.go:304] Setting ErrFile to fd 2...
	I0318 04:35:49.481081   17322 out.go:338] TERM=,COLORTERM=, which probably does not support color
	W0318 04:35:49.481106   17322 out.go:239] X Problems detected in kubelet:
	X Problems detected in kubelet:
	W0318 04:35:49.481112   17322 out.go:239]   Mar 18 11:34:35 running-upgrade-738000 kubelet[3738]: W0318 11:34:35.864044    3738 reflector.go:324] object-"kube-system"/"kube-proxy": failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:running-upgrade-738000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-738000' and this object
	  Mar 18 11:34:35 running-upgrade-738000 kubelet[3738]: W0318 11:34:35.864044    3738 reflector.go:324] object-"kube-system"/"kube-proxy": failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:running-upgrade-738000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-738000' and this object
	W0318 04:35:49.481117   17322 out.go:239]   Mar 18 11:34:35 running-upgrade-738000 kubelet[3738]: E0318 11:34:35.864102    3738 reflector.go:138] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:running-upgrade-738000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-738000' and this object
	  Mar 18 11:34:35 running-upgrade-738000 kubelet[3738]: E0318 11:34:35.864102    3738 reflector.go:138] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:running-upgrade-738000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-738000' and this object
	I0318 04:35:49.481121   17322 out.go:304] Setting ErrFile to fd 2...
	I0318 04:35:49.481124   17322 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0318 04:35:59.482516   17322 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0318 04:36:04.485246   17322 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0318 04:36:04.485647   17322 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0318 04:36:04.523398   17322 logs.go:276] 2 containers: [8cbe0799ab57 7147d93a4ffc]
	I0318 04:36:04.523537   17322 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0318 04:36:04.544482   17322 logs.go:276] 2 containers: [3597b574be66 c52784390773]
	I0318 04:36:04.544595   17322 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0318 04:36:04.562573   17322 logs.go:276] 1 containers: [9445dc83224c]
	I0318 04:36:04.562647   17322 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0318 04:36:04.574792   17322 logs.go:276] 2 containers: [083e5435c9c9 2ed27f531543]
	I0318 04:36:04.574864   17322 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0318 04:36:04.585612   17322 logs.go:276] 1 containers: [313af99b8193]
	I0318 04:36:04.585674   17322 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0318 04:36:04.598128   17322 logs.go:276] 2 containers: [c9b5a8296878 eca65001ee00]
	I0318 04:36:04.598199   17322 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0318 04:36:04.608551   17322 logs.go:276] 0 containers: []
	W0318 04:36:04.608565   17322 logs.go:278] No container was found matching "kindnet"
	I0318 04:36:04.608637   17322 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0318 04:36:04.618927   17322 logs.go:276] 2 containers: [1fb3e049cd41 f4f9d7351a87]
	I0318 04:36:04.618946   17322 logs.go:123] Gathering logs for dmesg ...
	I0318 04:36:04.618951   17322 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 04:36:04.623779   17322 logs.go:123] Gathering logs for etcd [3597b574be66] ...
	I0318 04:36:04.623784   17322 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3597b574be66"
	I0318 04:36:04.638037   17322 logs.go:123] Gathering logs for kube-controller-manager [c9b5a8296878] ...
	I0318 04:36:04.638047   17322 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c9b5a8296878"
	I0318 04:36:04.655178   17322 logs.go:123] Gathering logs for kube-controller-manager [eca65001ee00] ...
	I0318 04:36:04.655190   17322 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 eca65001ee00"
	I0318 04:36:04.668490   17322 logs.go:123] Gathering logs for Docker ...
	I0318 04:36:04.668499   17322 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0318 04:36:04.694318   17322 logs.go:123] Gathering logs for describe nodes ...
	I0318 04:36:04.694326   17322 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0318 04:36:04.731374   17322 logs.go:123] Gathering logs for kube-apiserver [7147d93a4ffc] ...
	I0318 04:36:04.731386   17322 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7147d93a4ffc"
	I0318 04:36:04.755264   17322 logs.go:123] Gathering logs for kube-scheduler [083e5435c9c9] ...
	I0318 04:36:04.755276   17322 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 083e5435c9c9"
	I0318 04:36:04.767294   17322 logs.go:123] Gathering logs for storage-provisioner [1fb3e049cd41] ...
	I0318 04:36:04.767308   17322 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1fb3e049cd41"
	I0318 04:36:04.778270   17322 logs.go:123] Gathering logs for storage-provisioner [f4f9d7351a87] ...
	I0318 04:36:04.778283   17322 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f4f9d7351a87"
	I0318 04:36:04.789048   17322 logs.go:123] Gathering logs for kubelet ...
	I0318 04:36:04.789059   17322 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0318 04:36:04.826912   17322 logs.go:138] Found kubelet problem: Mar 18 11:34:35 running-upgrade-738000 kubelet[3738]: W0318 11:34:35.864044    3738 reflector.go:324] object-"kube-system"/"kube-proxy": failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:running-upgrade-738000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-738000' and this object
	W0318 04:36:04.827006   17322 logs.go:138] Found kubelet problem: Mar 18 11:34:35 running-upgrade-738000 kubelet[3738]: E0318 11:34:35.864102    3738 reflector.go:138] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:running-upgrade-738000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-738000' and this object
	I0318 04:36:04.828692   17322 logs.go:123] Gathering logs for container status ...
	I0318 04:36:04.828699   17322 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 04:36:04.839997   17322 logs.go:123] Gathering logs for kube-apiserver [8cbe0799ab57] ...
	I0318 04:36:04.840009   17322 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8cbe0799ab57"
	I0318 04:36:04.853655   17322 logs.go:123] Gathering logs for etcd [c52784390773] ...
	I0318 04:36:04.853667   17322 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c52784390773"
	I0318 04:36:04.867986   17322 logs.go:123] Gathering logs for coredns [9445dc83224c] ...
	I0318 04:36:04.867998   17322 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9445dc83224c"
	I0318 04:36:04.883202   17322 logs.go:123] Gathering logs for kube-scheduler [2ed27f531543] ...
	I0318 04:36:04.883215   17322 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2ed27f531543"
	I0318 04:36:04.904571   17322 logs.go:123] Gathering logs for kube-proxy [313af99b8193] ...
	I0318 04:36:04.904581   17322 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 313af99b8193"
	I0318 04:36:04.916373   17322 out.go:304] Setting ErrFile to fd 2...
	I0318 04:36:04.916385   17322 out.go:338] TERM=,COLORTERM=, which probably does not support color
	W0318 04:36:04.916413   17322 out.go:239] X Problems detected in kubelet:
	X Problems detected in kubelet:
	W0318 04:36:04.916417   17322 out.go:239]   Mar 18 11:34:35 running-upgrade-738000 kubelet[3738]: W0318 11:34:35.864044    3738 reflector.go:324] object-"kube-system"/"kube-proxy": failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:running-upgrade-738000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-738000' and this object
	  Mar 18 11:34:35 running-upgrade-738000 kubelet[3738]: W0318 11:34:35.864044    3738 reflector.go:324] object-"kube-system"/"kube-proxy": failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:running-upgrade-738000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-738000' and this object
	W0318 04:36:04.916421   17322 out.go:239]   Mar 18 11:34:35 running-upgrade-738000 kubelet[3738]: E0318 11:34:35.864102    3738 reflector.go:138] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:running-upgrade-738000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-738000' and this object
	  Mar 18 11:34:35 running-upgrade-738000 kubelet[3738]: E0318 11:34:35.864102    3738 reflector.go:138] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:running-upgrade-738000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-738000' and this object
	I0318 04:36:04.916427   17322 out.go:304] Setting ErrFile to fd 2...
	I0318 04:36:04.916431   17322 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0318 04:36:14.920258   17322 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0318 04:36:19.922471   17322 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0318 04:36:19.922880   17322 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0318 04:36:19.962618   17322 logs.go:276] 2 containers: [8cbe0799ab57 7147d93a4ffc]
	I0318 04:36:19.962762   17322 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0318 04:36:19.984795   17322 logs.go:276] 2 containers: [3597b574be66 c52784390773]
	I0318 04:36:19.984931   17322 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0318 04:36:20.000826   17322 logs.go:276] 1 containers: [9445dc83224c]
	I0318 04:36:20.000909   17322 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0318 04:36:20.013592   17322 logs.go:276] 2 containers: [083e5435c9c9 2ed27f531543]
	I0318 04:36:20.013678   17322 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0318 04:36:20.028443   17322 logs.go:276] 1 containers: [313af99b8193]
	I0318 04:36:20.028512   17322 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0318 04:36:20.039083   17322 logs.go:276] 2 containers: [c9b5a8296878 eca65001ee00]
	I0318 04:36:20.039155   17322 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0318 04:36:20.049225   17322 logs.go:276] 0 containers: []
	W0318 04:36:20.049235   17322 logs.go:278] No container was found matching "kindnet"
	I0318 04:36:20.049290   17322 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0318 04:36:20.060142   17322 logs.go:276] 2 containers: [1fb3e049cd41 f4f9d7351a87]
	I0318 04:36:20.060162   17322 logs.go:123] Gathering logs for kube-apiserver [7147d93a4ffc] ...
	I0318 04:36:20.060181   17322 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7147d93a4ffc"
	I0318 04:36:20.084384   17322 logs.go:123] Gathering logs for etcd [3597b574be66] ...
	I0318 04:36:20.084394   17322 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3597b574be66"
	I0318 04:36:20.101751   17322 logs.go:123] Gathering logs for coredns [9445dc83224c] ...
	I0318 04:36:20.101763   17322 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9445dc83224c"
	I0318 04:36:20.113280   17322 logs.go:123] Gathering logs for describe nodes ...
	I0318 04:36:20.113291   17322 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0318 04:36:20.150061   17322 logs.go:123] Gathering logs for kube-controller-manager [c9b5a8296878] ...
	I0318 04:36:20.150075   17322 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c9b5a8296878"
	I0318 04:36:20.168066   17322 logs.go:123] Gathering logs for kube-controller-manager [eca65001ee00] ...
	I0318 04:36:20.168075   17322 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 eca65001ee00"
	I0318 04:36:20.179815   17322 logs.go:123] Gathering logs for storage-provisioner [1fb3e049cd41] ...
	I0318 04:36:20.179826   17322 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1fb3e049cd41"
	I0318 04:36:20.190977   17322 logs.go:123] Gathering logs for storage-provisioner [f4f9d7351a87] ...
	I0318 04:36:20.190988   17322 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f4f9d7351a87"
	I0318 04:36:20.202680   17322 logs.go:123] Gathering logs for dmesg ...
	I0318 04:36:20.202691   17322 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 04:36:20.207215   17322 logs.go:123] Gathering logs for kube-apiserver [8cbe0799ab57] ...
	I0318 04:36:20.207224   17322 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8cbe0799ab57"
	I0318 04:36:20.223185   17322 logs.go:123] Gathering logs for etcd [c52784390773] ...
	I0318 04:36:20.223199   17322 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c52784390773"
	I0318 04:36:20.237635   17322 logs.go:123] Gathering logs for kube-proxy [313af99b8193] ...
	I0318 04:36:20.237648   17322 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 313af99b8193"
	I0318 04:36:20.248907   17322 logs.go:123] Gathering logs for container status ...
	I0318 04:36:20.248919   17322 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 04:36:20.261860   17322 logs.go:123] Gathering logs for kubelet ...
	I0318 04:36:20.261872   17322 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0318 04:36:20.302148   17322 logs.go:138] Found kubelet problem: Mar 18 11:34:35 running-upgrade-738000 kubelet[3738]: W0318 11:34:35.864044    3738 reflector.go:324] object-"kube-system"/"kube-proxy": failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:running-upgrade-738000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-738000' and this object
	W0318 04:36:20.302243   17322 logs.go:138] Found kubelet problem: Mar 18 11:34:35 running-upgrade-738000 kubelet[3738]: E0318 11:34:35.864102    3738 reflector.go:138] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:running-upgrade-738000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-738000' and this object
	I0318 04:36:20.303896   17322 logs.go:123] Gathering logs for kube-scheduler [083e5435c9c9] ...
	I0318 04:36:20.303901   17322 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 083e5435c9c9"
	I0318 04:36:20.315335   17322 logs.go:123] Gathering logs for kube-scheduler [2ed27f531543] ...
	I0318 04:36:20.315345   17322 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2ed27f531543"
	I0318 04:36:20.330130   17322 logs.go:123] Gathering logs for Docker ...
	I0318 04:36:20.330139   17322 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0318 04:36:20.354460   17322 out.go:304] Setting ErrFile to fd 2...
	I0318 04:36:20.354467   17322 out.go:338] TERM=,COLORTERM=, which probably does not support color
	W0318 04:36:20.354489   17322 out.go:239] X Problems detected in kubelet:
	X Problems detected in kubelet:
	W0318 04:36:20.354494   17322 out.go:239]   Mar 18 11:34:35 running-upgrade-738000 kubelet[3738]: W0318 11:34:35.864044    3738 reflector.go:324] object-"kube-system"/"kube-proxy": failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:running-upgrade-738000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-738000' and this object
	  Mar 18 11:34:35 running-upgrade-738000 kubelet[3738]: W0318 11:34:35.864044    3738 reflector.go:324] object-"kube-system"/"kube-proxy": failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:running-upgrade-738000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-738000' and this object
	W0318 04:36:20.354512   17322 out.go:239]   Mar 18 11:34:35 running-upgrade-738000 kubelet[3738]: E0318 11:34:35.864102    3738 reflector.go:138] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:running-upgrade-738000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-738000' and this object
	  Mar 18 11:34:35 running-upgrade-738000 kubelet[3738]: E0318 11:34:35.864102    3738 reflector.go:138] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:running-upgrade-738000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-738000' and this object
	I0318 04:36:20.354526   17322 out.go:304] Setting ErrFile to fd 2...
	I0318 04:36:20.354532   17322 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0318 04:36:30.358308   17322 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0318 04:36:35.360447   17322 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0318 04:36:35.360839   17322 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0318 04:36:35.396130   17322 logs.go:276] 2 containers: [8cbe0799ab57 7147d93a4ffc]
	I0318 04:36:35.396272   17322 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0318 04:36:35.416069   17322 logs.go:276] 2 containers: [3597b574be66 c52784390773]
	I0318 04:36:35.416154   17322 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0318 04:36:35.430348   17322 logs.go:276] 1 containers: [9445dc83224c]
	I0318 04:36:35.430414   17322 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0318 04:36:35.442770   17322 logs.go:276] 2 containers: [083e5435c9c9 2ed27f531543]
	I0318 04:36:35.442851   17322 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0318 04:36:35.453392   17322 logs.go:276] 1 containers: [313af99b8193]
	I0318 04:36:35.453473   17322 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0318 04:36:35.463708   17322 logs.go:276] 2 containers: [c9b5a8296878 eca65001ee00]
	I0318 04:36:35.463775   17322 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0318 04:36:35.473581   17322 logs.go:276] 0 containers: []
	W0318 04:36:35.473592   17322 logs.go:278] No container was found matching "kindnet"
	I0318 04:36:35.473657   17322 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0318 04:36:35.484182   17322 logs.go:276] 2 containers: [1fb3e049cd41 f4f9d7351a87]
	I0318 04:36:35.484200   17322 logs.go:123] Gathering logs for storage-provisioner [1fb3e049cd41] ...
	I0318 04:36:35.484205   17322 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1fb3e049cd41"
	I0318 04:36:35.496143   17322 logs.go:123] Gathering logs for kube-apiserver [8cbe0799ab57] ...
	I0318 04:36:35.496155   17322 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8cbe0799ab57"
	I0318 04:36:35.509980   17322 logs.go:123] Gathering logs for describe nodes ...
	I0318 04:36:35.509993   17322 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0318 04:36:35.545013   17322 logs.go:123] Gathering logs for kube-controller-manager [eca65001ee00] ...
	I0318 04:36:35.545026   17322 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 eca65001ee00"
	I0318 04:36:35.556774   17322 logs.go:123] Gathering logs for Docker ...
	I0318 04:36:35.556784   17322 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0318 04:36:35.581587   17322 logs.go:123] Gathering logs for dmesg ...
	I0318 04:36:35.581599   17322 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 04:36:35.586228   17322 logs.go:123] Gathering logs for kube-scheduler [083e5435c9c9] ...
	I0318 04:36:35.586236   17322 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 083e5435c9c9"
	I0318 04:36:35.597779   17322 logs.go:123] Gathering logs for kube-proxy [313af99b8193] ...
	I0318 04:36:35.597792   17322 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 313af99b8193"
	I0318 04:36:35.609379   17322 logs.go:123] Gathering logs for kube-controller-manager [c9b5a8296878] ...
	I0318 04:36:35.609394   17322 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c9b5a8296878"
	I0318 04:36:35.626741   17322 logs.go:123] Gathering logs for container status ...
	I0318 04:36:35.626753   17322 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 04:36:35.638069   17322 logs.go:123] Gathering logs for coredns [9445dc83224c] ...
	I0318 04:36:35.638081   17322 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9445dc83224c"
	I0318 04:36:35.649616   17322 logs.go:123] Gathering logs for kube-apiserver [7147d93a4ffc] ...
	I0318 04:36:35.649627   17322 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7147d93a4ffc"
	I0318 04:36:35.673725   17322 logs.go:123] Gathering logs for etcd [3597b574be66] ...
	I0318 04:36:35.673743   17322 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3597b574be66"
	I0318 04:36:35.687830   17322 logs.go:123] Gathering logs for etcd [c52784390773] ...
	I0318 04:36:35.687843   17322 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c52784390773"
	I0318 04:36:35.703489   17322 logs.go:123] Gathering logs for kube-scheduler [2ed27f531543] ...
	I0318 04:36:35.703501   17322 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2ed27f531543"
	I0318 04:36:35.740125   17322 logs.go:123] Gathering logs for storage-provisioner [f4f9d7351a87] ...
	I0318 04:36:35.740137   17322 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f4f9d7351a87"
	I0318 04:36:35.760997   17322 logs.go:123] Gathering logs for kubelet ...
	I0318 04:36:35.761009   17322 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0318 04:36:35.798088   17322 logs.go:138] Found kubelet problem: Mar 18 11:34:35 running-upgrade-738000 kubelet[3738]: W0318 11:34:35.864044    3738 reflector.go:324] object-"kube-system"/"kube-proxy": failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:running-upgrade-738000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-738000' and this object
	W0318 04:36:35.798182   17322 logs.go:138] Found kubelet problem: Mar 18 11:34:35 running-upgrade-738000 kubelet[3738]: E0318 11:34:35.864102    3738 reflector.go:138] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:running-upgrade-738000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-738000' and this object
	I0318 04:36:35.799925   17322 out.go:304] Setting ErrFile to fd 2...
	I0318 04:36:35.799934   17322 out.go:338] TERM=,COLORTERM=, which probably does not support color
	W0318 04:36:35.799956   17322 out.go:239] X Problems detected in kubelet:
	X Problems detected in kubelet:
	W0318 04:36:35.799961   17322 out.go:239]   Mar 18 11:34:35 running-upgrade-738000 kubelet[3738]: W0318 11:34:35.864044    3738 reflector.go:324] object-"kube-system"/"kube-proxy": failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:running-upgrade-738000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-738000' and this object
	  Mar 18 11:34:35 running-upgrade-738000 kubelet[3738]: W0318 11:34:35.864044    3738 reflector.go:324] object-"kube-system"/"kube-proxy": failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:running-upgrade-738000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-738000' and this object
	W0318 04:36:35.799965   17322 out.go:239]   Mar 18 11:34:35 running-upgrade-738000 kubelet[3738]: E0318 11:34:35.864102    3738 reflector.go:138] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:running-upgrade-738000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-738000' and this object
	  Mar 18 11:34:35 running-upgrade-738000 kubelet[3738]: E0318 11:34:35.864102    3738 reflector.go:138] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:running-upgrade-738000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-738000' and this object
	I0318 04:36:35.799973   17322 out.go:304] Setting ErrFile to fd 2...
	I0318 04:36:35.799976   17322 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0318 04:36:45.803772   17322 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0318 04:36:50.805977   17322 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0318 04:36:50.806364   17322 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0318 04:36:50.839957   17322 logs.go:276] 2 containers: [8cbe0799ab57 7147d93a4ffc]
	I0318 04:36:50.840112   17322 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0318 04:36:50.860222   17322 logs.go:276] 2 containers: [3597b574be66 c52784390773]
	I0318 04:36:50.860324   17322 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0318 04:36:50.876483   17322 logs.go:276] 1 containers: [9445dc83224c]
	I0318 04:36:50.876565   17322 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0318 04:36:50.888857   17322 logs.go:276] 2 containers: [083e5435c9c9 2ed27f531543]
	I0318 04:36:50.888929   17322 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0318 04:36:50.899220   17322 logs.go:276] 1 containers: [313af99b8193]
	I0318 04:36:50.899300   17322 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0318 04:36:50.909784   17322 logs.go:276] 2 containers: [c9b5a8296878 eca65001ee00]
	I0318 04:36:50.909847   17322 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0318 04:36:50.920017   17322 logs.go:276] 0 containers: []
	W0318 04:36:50.920030   17322 logs.go:278] No container was found matching "kindnet"
	I0318 04:36:50.920091   17322 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0318 04:36:50.930505   17322 logs.go:276] 2 containers: [1fb3e049cd41 f4f9d7351a87]
	I0318 04:36:50.930524   17322 logs.go:123] Gathering logs for coredns [9445dc83224c] ...
	I0318 04:36:50.930530   17322 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9445dc83224c"
	I0318 04:36:50.942726   17322 logs.go:123] Gathering logs for kube-scheduler [083e5435c9c9] ...
	I0318 04:36:50.942738   17322 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 083e5435c9c9"
	I0318 04:36:50.958010   17322 logs.go:123] Gathering logs for storage-provisioner [f4f9d7351a87] ...
	I0318 04:36:50.958021   17322 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f4f9d7351a87"
	I0318 04:36:50.969592   17322 logs.go:123] Gathering logs for kube-apiserver [7147d93a4ffc] ...
	I0318 04:36:50.969603   17322 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7147d93a4ffc"
	I0318 04:36:50.995008   17322 logs.go:123] Gathering logs for kube-scheduler [2ed27f531543] ...
	I0318 04:36:50.995018   17322 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2ed27f531543"
	I0318 04:36:51.009488   17322 logs.go:123] Gathering logs for kube-proxy [313af99b8193] ...
	I0318 04:36:51.009498   17322 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 313af99b8193"
	I0318 04:36:51.021813   17322 logs.go:123] Gathering logs for Docker ...
	I0318 04:36:51.021824   17322 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0318 04:36:51.046707   17322 logs.go:123] Gathering logs for kubelet ...
	I0318 04:36:51.046715   17322 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0318 04:36:51.084443   17322 logs.go:138] Found kubelet problem: Mar 18 11:34:35 running-upgrade-738000 kubelet[3738]: W0318 11:34:35.864044    3738 reflector.go:324] object-"kube-system"/"kube-proxy": failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:running-upgrade-738000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-738000' and this object
	W0318 04:36:51.084537   17322 logs.go:138] Found kubelet problem: Mar 18 11:34:35 running-upgrade-738000 kubelet[3738]: E0318 11:34:35.864102    3738 reflector.go:138] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:running-upgrade-738000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-738000' and this object
	I0318 04:36:51.086224   17322 logs.go:123] Gathering logs for describe nodes ...
	I0318 04:36:51.086228   17322 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0318 04:36:51.120882   17322 logs.go:123] Gathering logs for etcd [3597b574be66] ...
	I0318 04:36:51.120892   17322 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3597b574be66"
	I0318 04:36:51.140574   17322 logs.go:123] Gathering logs for storage-provisioner [1fb3e049cd41] ...
	I0318 04:36:51.140584   17322 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1fb3e049cd41"
	I0318 04:36:51.152550   17322 logs.go:123] Gathering logs for dmesg ...
	I0318 04:36:51.152561   17322 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 04:36:51.157467   17322 logs.go:123] Gathering logs for kube-apiserver [8cbe0799ab57] ...
	I0318 04:36:51.157474   17322 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8cbe0799ab57"
	I0318 04:36:51.174355   17322 logs.go:123] Gathering logs for etcd [c52784390773] ...
	I0318 04:36:51.174365   17322 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c52784390773"
	I0318 04:36:51.188478   17322 logs.go:123] Gathering logs for kube-controller-manager [c9b5a8296878] ...
	I0318 04:36:51.188489   17322 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c9b5a8296878"
	I0318 04:36:51.206223   17322 logs.go:123] Gathering logs for kube-controller-manager [eca65001ee00] ...
	I0318 04:36:51.206233   17322 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 eca65001ee00"
	I0318 04:36:51.219618   17322 logs.go:123] Gathering logs for container status ...
	I0318 04:36:51.219629   17322 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 04:36:51.232047   17322 out.go:304] Setting ErrFile to fd 2...
	I0318 04:36:51.232057   17322 out.go:338] TERM=,COLORTERM=, which probably does not support color
	W0318 04:36:51.232085   17322 out.go:239] X Problems detected in kubelet:
	X Problems detected in kubelet:
	W0318 04:36:51.232089   17322 out.go:239]   Mar 18 11:34:35 running-upgrade-738000 kubelet[3738]: W0318 11:34:35.864044    3738 reflector.go:324] object-"kube-system"/"kube-proxy": failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:running-upgrade-738000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-738000' and this object
	  Mar 18 11:34:35 running-upgrade-738000 kubelet[3738]: W0318 11:34:35.864044    3738 reflector.go:324] object-"kube-system"/"kube-proxy": failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:running-upgrade-738000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-738000' and this object
	W0318 04:36:51.232093   17322 out.go:239]   Mar 18 11:34:35 running-upgrade-738000 kubelet[3738]: E0318 11:34:35.864102    3738 reflector.go:138] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:running-upgrade-738000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-738000' and this object
	  Mar 18 11:34:35 running-upgrade-738000 kubelet[3738]: E0318 11:34:35.864102    3738 reflector.go:138] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:running-upgrade-738000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-738000' and this object
	I0318 04:36:51.232098   17322 out.go:304] Setting ErrFile to fd 2...
	I0318 04:36:51.232101   17322 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0318 04:37:01.236065   17322 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0318 04:37:06.238540   17322 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0318 04:37:06.238711   17322 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0318 04:37:06.250377   17322 logs.go:276] 2 containers: [8cbe0799ab57 7147d93a4ffc]
	I0318 04:37:06.250465   17322 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0318 04:37:06.262400   17322 logs.go:276] 2 containers: [3597b574be66 c52784390773]
	I0318 04:37:06.262493   17322 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0318 04:37:06.273174   17322 logs.go:276] 1 containers: [9445dc83224c]
	I0318 04:37:06.273246   17322 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0318 04:37:06.283707   17322 logs.go:276] 2 containers: [083e5435c9c9 2ed27f531543]
	I0318 04:37:06.283786   17322 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0318 04:37:06.298365   17322 logs.go:276] 1 containers: [313af99b8193]
	I0318 04:37:06.298442   17322 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0318 04:37:06.308876   17322 logs.go:276] 2 containers: [c9b5a8296878 eca65001ee00]
	I0318 04:37:06.308942   17322 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0318 04:37:06.319551   17322 logs.go:276] 0 containers: []
	W0318 04:37:06.319562   17322 logs.go:278] No container was found matching "kindnet"
	I0318 04:37:06.319621   17322 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0318 04:37:06.330090   17322 logs.go:276] 2 containers: [1fb3e049cd41 f4f9d7351a87]
	I0318 04:37:06.330109   17322 logs.go:123] Gathering logs for kube-controller-manager [eca65001ee00] ...
	I0318 04:37:06.330115   17322 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 eca65001ee00"
	I0318 04:37:06.342039   17322 logs.go:123] Gathering logs for storage-provisioner [1fb3e049cd41] ...
	I0318 04:37:06.342048   17322 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1fb3e049cd41"
	I0318 04:37:06.353519   17322 logs.go:123] Gathering logs for storage-provisioner [f4f9d7351a87] ...
	I0318 04:37:06.353528   17322 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f4f9d7351a87"
	I0318 04:37:06.364887   17322 logs.go:123] Gathering logs for container status ...
	I0318 04:37:06.364898   17322 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 04:37:06.379888   17322 logs.go:123] Gathering logs for kube-apiserver [8cbe0799ab57] ...
	I0318 04:37:06.379900   17322 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8cbe0799ab57"
	I0318 04:37:06.394328   17322 logs.go:123] Gathering logs for etcd [3597b574be66] ...
	I0318 04:37:06.394341   17322 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3597b574be66"
	I0318 04:37:06.408456   17322 logs.go:123] Gathering logs for etcd [c52784390773] ...
	I0318 04:37:06.408467   17322 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c52784390773"
	I0318 04:37:06.426467   17322 logs.go:123] Gathering logs for coredns [9445dc83224c] ...
	I0318 04:37:06.426478   17322 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9445dc83224c"
	I0318 04:37:06.437893   17322 logs.go:123] Gathering logs for kube-scheduler [2ed27f531543] ...
	I0318 04:37:06.437904   17322 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2ed27f531543"
	I0318 04:37:06.452568   17322 logs.go:123] Gathering logs for kube-proxy [313af99b8193] ...
	I0318 04:37:06.452580   17322 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 313af99b8193"
	I0318 04:37:06.474783   17322 logs.go:123] Gathering logs for kubelet ...
	I0318 04:37:06.474795   17322 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0318 04:37:06.512699   17322 logs.go:138] Found kubelet problem: Mar 18 11:34:35 running-upgrade-738000 kubelet[3738]: W0318 11:34:35.864044    3738 reflector.go:324] object-"kube-system"/"kube-proxy": failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:running-upgrade-738000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-738000' and this object
	W0318 04:37:06.512792   17322 logs.go:138] Found kubelet problem: Mar 18 11:34:35 running-upgrade-738000 kubelet[3738]: E0318 11:34:35.864102    3738 reflector.go:138] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:running-upgrade-738000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-738000' and this object
	I0318 04:37:06.514490   17322 logs.go:123] Gathering logs for dmesg ...
	I0318 04:37:06.514499   17322 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 04:37:06.518628   17322 logs.go:123] Gathering logs for describe nodes ...
	I0318 04:37:06.518637   17322 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0318 04:37:06.554459   17322 logs.go:123] Gathering logs for kube-scheduler [083e5435c9c9] ...
	I0318 04:37:06.554468   17322 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 083e5435c9c9"
	I0318 04:37:06.570709   17322 logs.go:123] Gathering logs for kube-apiserver [7147d93a4ffc] ...
	I0318 04:37:06.570719   17322 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7147d93a4ffc"
	I0318 04:37:06.600141   17322 logs.go:123] Gathering logs for Docker ...
	I0318 04:37:06.600151   17322 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0318 04:37:06.624001   17322 logs.go:123] Gathering logs for kube-controller-manager [c9b5a8296878] ...
	I0318 04:37:06.624013   17322 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c9b5a8296878"
	I0318 04:37:06.641560   17322 out.go:304] Setting ErrFile to fd 2...
	I0318 04:37:06.641570   17322 out.go:338] TERM=,COLORTERM=, which probably does not support color
	W0318 04:37:06.641603   17322 out.go:239] X Problems detected in kubelet:
	X Problems detected in kubelet:
	W0318 04:37:06.641608   17322 out.go:239]   Mar 18 11:34:35 running-upgrade-738000 kubelet[3738]: W0318 11:34:35.864044    3738 reflector.go:324] object-"kube-system"/"kube-proxy": failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:running-upgrade-738000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-738000' and this object
	  Mar 18 11:34:35 running-upgrade-738000 kubelet[3738]: W0318 11:34:35.864044    3738 reflector.go:324] object-"kube-system"/"kube-proxy": failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:running-upgrade-738000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-738000' and this object
	W0318 04:37:06.641614   17322 out.go:239]   Mar 18 11:34:35 running-upgrade-738000 kubelet[3738]: E0318 11:34:35.864102    3738 reflector.go:138] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:running-upgrade-738000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-738000' and this object
	  Mar 18 11:34:35 running-upgrade-738000 kubelet[3738]: E0318 11:34:35.864102    3738 reflector.go:138] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:running-upgrade-738000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-738000' and this object
	I0318 04:37:06.641620   17322 out.go:304] Setting ErrFile to fd 2...
	I0318 04:37:06.641623   17322 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0318 04:37:16.645451   17322 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0318 04:37:21.647541   17322 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0318 04:37:21.647643   17322 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0318 04:37:21.659480   17322 logs.go:276] 2 containers: [8cbe0799ab57 7147d93a4ffc]
	I0318 04:37:21.659549   17322 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0318 04:37:21.671142   17322 logs.go:276] 2 containers: [3597b574be66 c52784390773]
	I0318 04:37:21.671212   17322 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0318 04:37:21.682066   17322 logs.go:276] 1 containers: [9445dc83224c]
	I0318 04:37:21.682138   17322 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0318 04:37:21.695307   17322 logs.go:276] 2 containers: [083e5435c9c9 2ed27f531543]
	I0318 04:37:21.695376   17322 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0318 04:37:21.706686   17322 logs.go:276] 1 containers: [313af99b8193]
	I0318 04:37:21.706778   17322 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0318 04:37:21.717958   17322 logs.go:276] 2 containers: [c9b5a8296878 eca65001ee00]
	I0318 04:37:21.718029   17322 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0318 04:37:21.728237   17322 logs.go:276] 0 containers: []
	W0318 04:37:21.728247   17322 logs.go:278] No container was found matching "kindnet"
	I0318 04:37:21.728303   17322 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0318 04:37:21.739135   17322 logs.go:276] 2 containers: [1fb3e049cd41 f4f9d7351a87]
	I0318 04:37:21.739179   17322 logs.go:123] Gathering logs for etcd [3597b574be66] ...
	I0318 04:37:21.739190   17322 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3597b574be66"
	I0318 04:37:21.753235   17322 logs.go:123] Gathering logs for coredns [9445dc83224c] ...
	I0318 04:37:21.753249   17322 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9445dc83224c"
	I0318 04:37:21.764401   17322 logs.go:123] Gathering logs for kube-proxy [313af99b8193] ...
	I0318 04:37:21.764410   17322 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 313af99b8193"
	I0318 04:37:21.778427   17322 logs.go:123] Gathering logs for kube-controller-manager [eca65001ee00] ...
	I0318 04:37:21.778435   17322 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 eca65001ee00"
	I0318 04:37:21.789670   17322 logs.go:123] Gathering logs for dmesg ...
	I0318 04:37:21.789680   17322 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 04:37:21.793875   17322 logs.go:123] Gathering logs for kube-apiserver [8cbe0799ab57] ...
	I0318 04:37:21.793881   17322 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8cbe0799ab57"
	I0318 04:37:21.810245   17322 logs.go:123] Gathering logs for kube-scheduler [083e5435c9c9] ...
	I0318 04:37:21.810256   17322 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 083e5435c9c9"
	I0318 04:37:21.822907   17322 logs.go:123] Gathering logs for storage-provisioner [1fb3e049cd41] ...
	I0318 04:37:21.822922   17322 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1fb3e049cd41"
	I0318 04:37:21.834394   17322 logs.go:123] Gathering logs for Docker ...
	I0318 04:37:21.834410   17322 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0318 04:37:21.857343   17322 logs.go:123] Gathering logs for container status ...
	I0318 04:37:21.857351   17322 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 04:37:21.869015   17322 logs.go:123] Gathering logs for describe nodes ...
	I0318 04:37:21.869026   17322 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0318 04:37:21.907930   17322 logs.go:123] Gathering logs for kube-apiserver [7147d93a4ffc] ...
	I0318 04:37:21.907944   17322 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7147d93a4ffc"
	I0318 04:37:21.933596   17322 logs.go:123] Gathering logs for storage-provisioner [f4f9d7351a87] ...
	I0318 04:37:21.933606   17322 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f4f9d7351a87"
	I0318 04:37:21.944424   17322 logs.go:123] Gathering logs for kubelet ...
	I0318 04:37:21.944436   17322 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0318 04:37:21.982692   17322 logs.go:138] Found kubelet problem: Mar 18 11:34:35 running-upgrade-738000 kubelet[3738]: W0318 11:34:35.864044    3738 reflector.go:324] object-"kube-system"/"kube-proxy": failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:running-upgrade-738000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-738000' and this object
	W0318 04:37:21.982792   17322 logs.go:138] Found kubelet problem: Mar 18 11:34:35 running-upgrade-738000 kubelet[3738]: E0318 11:34:35.864102    3738 reflector.go:138] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:running-upgrade-738000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-738000' and this object
	I0318 04:37:21.984440   17322 logs.go:123] Gathering logs for kube-controller-manager [c9b5a8296878] ...
	I0318 04:37:21.984446   17322 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c9b5a8296878"
	I0318 04:37:22.004525   17322 logs.go:123] Gathering logs for etcd [c52784390773] ...
	I0318 04:37:22.004537   17322 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c52784390773"
	I0318 04:37:22.018670   17322 logs.go:123] Gathering logs for kube-scheduler [2ed27f531543] ...
	I0318 04:37:22.018679   17322 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2ed27f531543"
	I0318 04:37:22.032847   17322 out.go:304] Setting ErrFile to fd 2...
	I0318 04:37:22.032860   17322 out.go:338] TERM=,COLORTERM=, which probably does not support color
	W0318 04:37:22.032886   17322 out.go:239] X Problems detected in kubelet:
	X Problems detected in kubelet:
	W0318 04:37:22.032889   17322 out.go:239]   Mar 18 11:34:35 running-upgrade-738000 kubelet[3738]: W0318 11:34:35.864044    3738 reflector.go:324] object-"kube-system"/"kube-proxy": failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:running-upgrade-738000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-738000' and this object
	  Mar 18 11:34:35 running-upgrade-738000 kubelet[3738]: W0318 11:34:35.864044    3738 reflector.go:324] object-"kube-system"/"kube-proxy": failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:running-upgrade-738000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-738000' and this object
	W0318 04:37:22.032894   17322 out.go:239]   Mar 18 11:34:35 running-upgrade-738000 kubelet[3738]: E0318 11:34:35.864102    3738 reflector.go:138] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:running-upgrade-738000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-738000' and this object
	  Mar 18 11:34:35 running-upgrade-738000 kubelet[3738]: E0318 11:34:35.864102    3738 reflector.go:138] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:running-upgrade-738000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-738000' and this object
	I0318 04:37:22.032940   17322 out.go:304] Setting ErrFile to fd 2...
	I0318 04:37:22.032946   17322 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0318 04:37:32.035317   17322 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0318 04:37:37.037357   17322 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0318 04:37:37.037446   17322 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0318 04:37:37.048903   17322 logs.go:276] 2 containers: [8cbe0799ab57 7147d93a4ffc]
	I0318 04:37:37.048971   17322 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0318 04:37:37.058802   17322 logs.go:276] 2 containers: [3597b574be66 c52784390773]
	I0318 04:37:37.058875   17322 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0318 04:37:37.070571   17322 logs.go:276] 1 containers: [9445dc83224c]
	I0318 04:37:37.070648   17322 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0318 04:37:37.084792   17322 logs.go:276] 2 containers: [083e5435c9c9 2ed27f531543]
	I0318 04:37:37.084867   17322 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0318 04:37:37.095303   17322 logs.go:276] 1 containers: [313af99b8193]
	I0318 04:37:37.095382   17322 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0318 04:37:37.106158   17322 logs.go:276] 2 containers: [c9b5a8296878 eca65001ee00]
	I0318 04:37:37.106230   17322 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0318 04:37:37.117755   17322 logs.go:276] 0 containers: []
	W0318 04:37:37.117769   17322 logs.go:278] No container was found matching "kindnet"
	I0318 04:37:37.117830   17322 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0318 04:37:37.131150   17322 logs.go:276] 2 containers: [1fb3e049cd41 f4f9d7351a87]
	I0318 04:37:37.131169   17322 logs.go:123] Gathering logs for storage-provisioner [1fb3e049cd41] ...
	I0318 04:37:37.131174   17322 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1fb3e049cd41"
	I0318 04:37:37.148569   17322 logs.go:123] Gathering logs for kube-apiserver [8cbe0799ab57] ...
	I0318 04:37:37.148579   17322 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8cbe0799ab57"
	I0318 04:37:37.163086   17322 logs.go:123] Gathering logs for etcd [c52784390773] ...
	I0318 04:37:37.163098   17322 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c52784390773"
	I0318 04:37:37.177809   17322 logs.go:123] Gathering logs for etcd [3597b574be66] ...
	I0318 04:37:37.177819   17322 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3597b574be66"
	I0318 04:37:37.191415   17322 logs.go:123] Gathering logs for coredns [9445dc83224c] ...
	I0318 04:37:37.191425   17322 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9445dc83224c"
	I0318 04:37:37.203275   17322 logs.go:123] Gathering logs for dmesg ...
	I0318 04:37:37.203285   17322 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 04:37:37.210530   17322 logs.go:123] Gathering logs for kube-apiserver [7147d93a4ffc] ...
	I0318 04:37:37.210541   17322 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7147d93a4ffc"
	I0318 04:37:37.234522   17322 logs.go:123] Gathering logs for kube-scheduler [083e5435c9c9] ...
	I0318 04:37:37.234533   17322 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 083e5435c9c9"
	I0318 04:37:37.246368   17322 logs.go:123] Gathering logs for kube-controller-manager [eca65001ee00] ...
	I0318 04:37:37.246381   17322 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 eca65001ee00"
	I0318 04:37:37.257867   17322 logs.go:123] Gathering logs for Docker ...
	I0318 04:37:37.257879   17322 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0318 04:37:37.282486   17322 logs.go:123] Gathering logs for container status ...
	I0318 04:37:37.282498   17322 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 04:37:37.300718   17322 logs.go:123] Gathering logs for kubelet ...
	I0318 04:37:37.300728   17322 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0318 04:37:37.337727   17322 logs.go:138] Found kubelet problem: Mar 18 11:34:35 running-upgrade-738000 kubelet[3738]: W0318 11:34:35.864044    3738 reflector.go:324] object-"kube-system"/"kube-proxy": failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:running-upgrade-738000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-738000' and this object
	W0318 04:37:37.337819   17322 logs.go:138] Found kubelet problem: Mar 18 11:34:35 running-upgrade-738000 kubelet[3738]: E0318 11:34:35.864102    3738 reflector.go:138] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:running-upgrade-738000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-738000' and this object
	I0318 04:37:37.339484   17322 logs.go:123] Gathering logs for describe nodes ...
	I0318 04:37:37.339490   17322 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0318 04:37:37.376993   17322 logs.go:123] Gathering logs for kube-controller-manager [c9b5a8296878] ...
	I0318 04:37:37.377006   17322 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c9b5a8296878"
	I0318 04:37:37.394337   17322 logs.go:123] Gathering logs for storage-provisioner [f4f9d7351a87] ...
	I0318 04:37:37.394352   17322 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f4f9d7351a87"
	I0318 04:37:37.405539   17322 logs.go:123] Gathering logs for kube-scheduler [2ed27f531543] ...
	I0318 04:37:37.405552   17322 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2ed27f531543"
	I0318 04:37:37.419769   17322 logs.go:123] Gathering logs for kube-proxy [313af99b8193] ...
	I0318 04:37:37.419779   17322 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 313af99b8193"
	I0318 04:37:37.431499   17322 out.go:304] Setting ErrFile to fd 2...
	I0318 04:37:37.431510   17322 out.go:338] TERM=,COLORTERM=, which probably does not support color
	W0318 04:37:37.431536   17322 out.go:239] X Problems detected in kubelet:
	X Problems detected in kubelet:
	W0318 04:37:37.431542   17322 out.go:239]   Mar 18 11:34:35 running-upgrade-738000 kubelet[3738]: W0318 11:34:35.864044    3738 reflector.go:324] object-"kube-system"/"kube-proxy": failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:running-upgrade-738000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-738000' and this object
	  Mar 18 11:34:35 running-upgrade-738000 kubelet[3738]: W0318 11:34:35.864044    3738 reflector.go:324] object-"kube-system"/"kube-proxy": failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:running-upgrade-738000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-738000' and this object
	W0318 04:37:37.431554   17322 out.go:239]   Mar 18 11:34:35 running-upgrade-738000 kubelet[3738]: E0318 11:34:35.864102    3738 reflector.go:138] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:running-upgrade-738000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-738000' and this object
	  Mar 18 11:34:35 running-upgrade-738000 kubelet[3738]: E0318 11:34:35.864102    3738 reflector.go:138] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:running-upgrade-738000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-738000' and this object
	I0318 04:37:37.431558   17322 out.go:304] Setting ErrFile to fd 2...
	I0318 04:37:37.431561   17322 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0318 04:37:47.435393   17322 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0318 04:37:52.437541   17322 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0318 04:37:52.437723   17322 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0318 04:37:52.456775   17322 logs.go:276] 2 containers: [8cbe0799ab57 7147d93a4ffc]
	I0318 04:37:52.456875   17322 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0318 04:37:52.499627   17322 logs.go:276] 2 containers: [3597b574be66 c52784390773]
	I0318 04:37:52.499700   17322 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0318 04:37:52.510707   17322 logs.go:276] 1 containers: [9445dc83224c]
	I0318 04:37:52.510783   17322 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0318 04:37:52.522483   17322 logs.go:276] 2 containers: [083e5435c9c9 2ed27f531543]
	I0318 04:37:52.522560   17322 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0318 04:37:52.532755   17322 logs.go:276] 1 containers: [313af99b8193]
	I0318 04:37:52.532824   17322 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0318 04:37:52.543008   17322 logs.go:276] 2 containers: [c9b5a8296878 eca65001ee00]
	I0318 04:37:52.543078   17322 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0318 04:37:52.553423   17322 logs.go:276] 0 containers: []
	W0318 04:37:52.553434   17322 logs.go:278] No container was found matching "kindnet"
	I0318 04:37:52.553493   17322 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0318 04:37:52.565597   17322 logs.go:276] 2 containers: [1fb3e049cd41 f4f9d7351a87]
	I0318 04:37:52.565611   17322 logs.go:123] Gathering logs for describe nodes ...
	I0318 04:37:52.565616   17322 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0318 04:37:52.602402   17322 logs.go:123] Gathering logs for kube-apiserver [7147d93a4ffc] ...
	I0318 04:37:52.602413   17322 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7147d93a4ffc"
	I0318 04:37:52.626705   17322 logs.go:123] Gathering logs for etcd [c52784390773] ...
	I0318 04:37:52.626715   17322 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c52784390773"
	I0318 04:37:52.641824   17322 logs.go:123] Gathering logs for kubelet ...
	I0318 04:37:52.641838   17322 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0318 04:37:52.681526   17322 logs.go:138] Found kubelet problem: Mar 18 11:34:35 running-upgrade-738000 kubelet[3738]: W0318 11:34:35.864044    3738 reflector.go:324] object-"kube-system"/"kube-proxy": failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:running-upgrade-738000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-738000' and this object
	W0318 04:37:52.681630   17322 logs.go:138] Found kubelet problem: Mar 18 11:34:35 running-upgrade-738000 kubelet[3738]: E0318 11:34:35.864102    3738 reflector.go:138] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:running-upgrade-738000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-738000' and this object
	I0318 04:37:52.683370   17322 logs.go:123] Gathering logs for dmesg ...
	I0318 04:37:52.683377   17322 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 04:37:52.687820   17322 logs.go:123] Gathering logs for kube-apiserver [8cbe0799ab57] ...
	I0318 04:37:52.687829   17322 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8cbe0799ab57"
	I0318 04:37:52.702150   17322 logs.go:123] Gathering logs for coredns [9445dc83224c] ...
	I0318 04:37:52.702159   17322 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9445dc83224c"
	I0318 04:37:52.717290   17322 logs.go:123] Gathering logs for storage-provisioner [f4f9d7351a87] ...
	I0318 04:37:52.717300   17322 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f4f9d7351a87"
	I0318 04:37:52.728582   17322 logs.go:123] Gathering logs for etcd [3597b574be66] ...
	I0318 04:37:52.728596   17322 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3597b574be66"
	I0318 04:37:52.742638   17322 logs.go:123] Gathering logs for kube-scheduler [2ed27f531543] ...
	I0318 04:37:52.742651   17322 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2ed27f531543"
	I0318 04:37:52.757828   17322 logs.go:123] Gathering logs for kube-proxy [313af99b8193] ...
	I0318 04:37:52.757841   17322 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 313af99b8193"
	I0318 04:37:52.769569   17322 logs.go:123] Gathering logs for kube-controller-manager [c9b5a8296878] ...
	I0318 04:37:52.769586   17322 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c9b5a8296878"
	I0318 04:37:52.786986   17322 logs.go:123] Gathering logs for storage-provisioner [1fb3e049cd41] ...
	I0318 04:37:52.786997   17322 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1fb3e049cd41"
	I0318 04:37:52.798903   17322 logs.go:123] Gathering logs for kube-scheduler [083e5435c9c9] ...
	I0318 04:37:52.798915   17322 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 083e5435c9c9"
	I0318 04:37:52.809955   17322 logs.go:123] Gathering logs for kube-controller-manager [eca65001ee00] ...
	I0318 04:37:52.809967   17322 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 eca65001ee00"
	I0318 04:37:52.822173   17322 logs.go:123] Gathering logs for Docker ...
	I0318 04:37:52.822184   17322 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0318 04:37:52.850786   17322 logs.go:123] Gathering logs for container status ...
	I0318 04:37:52.850811   17322 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 04:37:52.868968   17322 out.go:304] Setting ErrFile to fd 2...
	I0318 04:37:52.868980   17322 out.go:338] TERM=,COLORTERM=, which probably does not support color
	W0318 04:37:52.869010   17322 out.go:239] X Problems detected in kubelet:
	X Problems detected in kubelet:
	W0318 04:37:52.869014   17322 out.go:239]   Mar 18 11:34:35 running-upgrade-738000 kubelet[3738]: W0318 11:34:35.864044    3738 reflector.go:324] object-"kube-system"/"kube-proxy": failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:running-upgrade-738000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-738000' and this object
	  Mar 18 11:34:35 running-upgrade-738000 kubelet[3738]: W0318 11:34:35.864044    3738 reflector.go:324] object-"kube-system"/"kube-proxy": failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:running-upgrade-738000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-738000' and this object
	W0318 04:37:52.869018   17322 out.go:239]   Mar 18 11:34:35 running-upgrade-738000 kubelet[3738]: E0318 11:34:35.864102    3738 reflector.go:138] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:running-upgrade-738000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-738000' and this object
	  Mar 18 11:34:35 running-upgrade-738000 kubelet[3738]: E0318 11:34:35.864102    3738 reflector.go:138] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:running-upgrade-738000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-738000' and this object
	I0318 04:37:52.869047   17322 out.go:304] Setting ErrFile to fd 2...
	I0318 04:37:52.869051   17322 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0318 04:38:02.871289   17322 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0318 04:38:07.873548   17322 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0318 04:38:07.873930   17322 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0318 04:38:07.913114   17322 logs.go:276] 2 containers: [8cbe0799ab57 7147d93a4ffc]
	I0318 04:38:07.913257   17322 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0318 04:38:07.934320   17322 logs.go:276] 2 containers: [3597b574be66 c52784390773]
	I0318 04:38:07.934420   17322 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0318 04:38:07.949273   17322 logs.go:276] 1 containers: [9445dc83224c]
	I0318 04:38:07.949354   17322 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0318 04:38:07.961743   17322 logs.go:276] 2 containers: [083e5435c9c9 2ed27f531543]
	I0318 04:38:07.961821   17322 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0318 04:38:07.972723   17322 logs.go:276] 1 containers: [313af99b8193]
	I0318 04:38:07.972813   17322 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0318 04:38:07.983344   17322 logs.go:276] 2 containers: [c9b5a8296878 eca65001ee00]
	I0318 04:38:07.983407   17322 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0318 04:38:07.993803   17322 logs.go:276] 0 containers: []
	W0318 04:38:07.993815   17322 logs.go:278] No container was found matching "kindnet"
	I0318 04:38:07.993877   17322 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0318 04:38:08.004794   17322 logs.go:276] 2 containers: [1fb3e049cd41 f4f9d7351a87]
	I0318 04:38:08.004814   17322 logs.go:123] Gathering logs for etcd [3597b574be66] ...
	I0318 04:38:08.004820   17322 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3597b574be66"
	I0318 04:38:08.022234   17322 logs.go:123] Gathering logs for coredns [9445dc83224c] ...
	I0318 04:38:08.022246   17322 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9445dc83224c"
	I0318 04:38:08.033949   17322 logs.go:123] Gathering logs for kube-scheduler [2ed27f531543] ...
	I0318 04:38:08.033959   17322 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2ed27f531543"
	I0318 04:38:08.048477   17322 logs.go:123] Gathering logs for kube-controller-manager [c9b5a8296878] ...
	I0318 04:38:08.048489   17322 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c9b5a8296878"
	I0318 04:38:08.065231   17322 logs.go:123] Gathering logs for Docker ...
	I0318 04:38:08.065243   17322 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0318 04:38:08.088089   17322 logs.go:123] Gathering logs for kubelet ...
	I0318 04:38:08.088097   17322 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0318 04:38:08.124300   17322 logs.go:138] Found kubelet problem: Mar 18 11:34:35 running-upgrade-738000 kubelet[3738]: W0318 11:34:35.864044    3738 reflector.go:324] object-"kube-system"/"kube-proxy": failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:running-upgrade-738000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-738000' and this object
	W0318 04:38:08.124393   17322 logs.go:138] Found kubelet problem: Mar 18 11:34:35 running-upgrade-738000 kubelet[3738]: E0318 11:34:35.864102    3738 reflector.go:138] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:running-upgrade-738000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-738000' and this object
	I0318 04:38:08.126041   17322 logs.go:123] Gathering logs for dmesg ...
	I0318 04:38:08.126046   17322 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 04:38:08.130285   17322 logs.go:123] Gathering logs for kube-apiserver [7147d93a4ffc] ...
	I0318 04:38:08.130293   17322 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7147d93a4ffc"
	I0318 04:38:08.155591   17322 logs.go:123] Gathering logs for container status ...
	I0318 04:38:08.155602   17322 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 04:38:08.168224   17322 logs.go:123] Gathering logs for kube-apiserver [8cbe0799ab57] ...
	I0318 04:38:08.168235   17322 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8cbe0799ab57"
	I0318 04:38:08.182358   17322 logs.go:123] Gathering logs for etcd [c52784390773] ...
	I0318 04:38:08.182369   17322 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c52784390773"
	I0318 04:38:08.196857   17322 logs.go:123] Gathering logs for storage-provisioner [1fb3e049cd41] ...
	I0318 04:38:08.196867   17322 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1fb3e049cd41"
	I0318 04:38:08.214086   17322 logs.go:123] Gathering logs for kube-controller-manager [eca65001ee00] ...
	I0318 04:38:08.214096   17322 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 eca65001ee00"
	I0318 04:38:08.225416   17322 logs.go:123] Gathering logs for storage-provisioner [f4f9d7351a87] ...
	I0318 04:38:08.225426   17322 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f4f9d7351a87"
	I0318 04:38:08.237007   17322 logs.go:123] Gathering logs for describe nodes ...
	I0318 04:38:08.237018   17322 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0318 04:38:08.277089   17322 logs.go:123] Gathering logs for kube-scheduler [083e5435c9c9] ...
	I0318 04:38:08.277103   17322 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 083e5435c9c9"
	I0318 04:38:08.289241   17322 logs.go:123] Gathering logs for kube-proxy [313af99b8193] ...
	I0318 04:38:08.289252   17322 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 313af99b8193"
	I0318 04:38:08.301181   17322 out.go:304] Setting ErrFile to fd 2...
	I0318 04:38:08.301191   17322 out.go:338] TERM=,COLORTERM=, which probably does not support color
	W0318 04:38:08.301220   17322 out.go:239] X Problems detected in kubelet:
	X Problems detected in kubelet:
	W0318 04:38:08.301225   17322 out.go:239]   Mar 18 11:34:35 running-upgrade-738000 kubelet[3738]: W0318 11:34:35.864044    3738 reflector.go:324] object-"kube-system"/"kube-proxy": failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:running-upgrade-738000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-738000' and this object
	  Mar 18 11:34:35 running-upgrade-738000 kubelet[3738]: W0318 11:34:35.864044    3738 reflector.go:324] object-"kube-system"/"kube-proxy": failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:running-upgrade-738000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-738000' and this object
	W0318 04:38:08.301228   17322 out.go:239]   Mar 18 11:34:35 running-upgrade-738000 kubelet[3738]: E0318 11:34:35.864102    3738 reflector.go:138] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:running-upgrade-738000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-738000' and this object
	  Mar 18 11:34:35 running-upgrade-738000 kubelet[3738]: E0318 11:34:35.864102    3738 reflector.go:138] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:running-upgrade-738000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-738000' and this object
	I0318 04:38:08.301233   17322 out.go:304] Setting ErrFile to fd 2...
	I0318 04:38:08.301236   17322 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0318 04:38:18.305062   17322 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0318 04:38:23.305969   17322 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0318 04:38:23.306068   17322 kubeadm.go:591] duration metric: took 4m7.788113917s to restartPrimaryControlPlane
	W0318 04:38:23.306137   17322 out.go:239] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0318 04:38:23.306168   17322 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm reset --cri-socket /var/run/cri-dockerd.sock --force"
	I0318 04:38:24.309637   17322 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm reset --cri-socket /var/run/cri-dockerd.sock --force": (1.0034915s)
	I0318 04:38:24.309704   17322 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0318 04:38:24.314619   17322 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0318 04:38:24.317434   17322 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0318 04:38:24.320096   17322 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0318 04:38:24.320103   17322 kubeadm.go:156] found existing configuration files:
	
	I0318 04:38:24.320127   17322 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:53312 /etc/kubernetes/admin.conf
	I0318 04:38:24.322850   17322 kubeadm.go:162] "https://control-plane.minikube.internal:53312" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:53312 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0318 04:38:24.322875   17322 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0318 04:38:24.325279   17322 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:53312 /etc/kubernetes/kubelet.conf
	I0318 04:38:24.328001   17322 kubeadm.go:162] "https://control-plane.minikube.internal:53312" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:53312 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0318 04:38:24.328020   17322 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0318 04:38:24.331161   17322 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:53312 /etc/kubernetes/controller-manager.conf
	I0318 04:38:24.333778   17322 kubeadm.go:162] "https://control-plane.minikube.internal:53312" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:53312 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0318 04:38:24.333798   17322 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0318 04:38:24.336494   17322 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:53312 /etc/kubernetes/scheduler.conf
	I0318 04:38:24.339576   17322 kubeadm.go:162] "https://control-plane.minikube.internal:53312" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:53312 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0318 04:38:24.339597   17322 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0318 04:38:24.342251   17322 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0318 04:38:24.358093   17322 kubeadm.go:309] [init] Using Kubernetes version: v1.24.1
	I0318 04:38:24.358134   17322 kubeadm.go:309] [preflight] Running pre-flight checks
	I0318 04:38:24.409299   17322 kubeadm.go:309] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0318 04:38:24.409372   17322 kubeadm.go:309] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0318 04:38:24.409417   17322 kubeadm.go:309] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0318 04:38:24.458776   17322 kubeadm.go:309] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0318 04:38:24.465665   17322 out.go:204]   - Generating certificates and keys ...
	I0318 04:38:24.465701   17322 kubeadm.go:309] [certs] Using existing ca certificate authority
	I0318 04:38:24.465735   17322 kubeadm.go:309] [certs] Using existing apiserver certificate and key on disk
	I0318 04:38:24.465777   17322 kubeadm.go:309] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0318 04:38:24.465809   17322 kubeadm.go:309] [certs] Using existing front-proxy-ca certificate authority
	I0318 04:38:24.465841   17322 kubeadm.go:309] [certs] Using existing front-proxy-client certificate and key on disk
	I0318 04:38:24.465878   17322 kubeadm.go:309] [certs] Using existing etcd/ca certificate authority
	I0318 04:38:24.465911   17322 kubeadm.go:309] [certs] Using existing etcd/server certificate and key on disk
	I0318 04:38:24.465948   17322 kubeadm.go:309] [certs] Using existing etcd/peer certificate and key on disk
	I0318 04:38:24.465981   17322 kubeadm.go:309] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0318 04:38:24.466024   17322 kubeadm.go:309] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0318 04:38:24.466048   17322 kubeadm.go:309] [certs] Using the existing "sa" key
	I0318 04:38:24.466075   17322 kubeadm.go:309] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0318 04:38:24.606292   17322 kubeadm.go:309] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0318 04:38:24.696706   17322 kubeadm.go:309] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0318 04:38:24.891031   17322 kubeadm.go:309] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0318 04:38:25.001632   17322 kubeadm.go:309] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0318 04:38:25.031467   17322 kubeadm.go:309] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0318 04:38:25.031857   17322 kubeadm.go:309] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0318 04:38:25.031881   17322 kubeadm.go:309] [kubelet-start] Starting the kubelet
	I0318 04:38:25.108078   17322 kubeadm.go:309] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0318 04:38:25.111424   17322 out.go:204]   - Booting up control plane ...
	I0318 04:38:25.111466   17322 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0318 04:38:25.111504   17322 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0318 04:38:25.111535   17322 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0318 04:38:25.113822   17322 kubeadm.go:309] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0318 04:38:25.114593   17322 kubeadm.go:309] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0318 04:38:29.619871   17322 kubeadm.go:309] [apiclient] All control plane components are healthy after 4.505105 seconds
	I0318 04:38:29.620047   17322 kubeadm.go:309] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0318 04:38:29.629674   17322 kubeadm.go:309] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0318 04:38:30.141842   17322 kubeadm.go:309] [upload-certs] Skipping phase. Please see --upload-certs
	I0318 04:38:30.142073   17322 kubeadm.go:309] [mark-control-plane] Marking the node running-upgrade-738000 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0318 04:38:30.646676   17322 kubeadm.go:309] [bootstrap-token] Using token: 23utxv.u7ge82ksglucw1qd
	I0318 04:38:30.653522   17322 out.go:204]   - Configuring RBAC rules ...
	I0318 04:38:30.653580   17322 kubeadm.go:309] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0318 04:38:30.653621   17322 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0318 04:38:30.657489   17322 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0318 04:38:30.658419   17322 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0318 04:38:30.659158   17322 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0318 04:38:30.660121   17322 kubeadm.go:309] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0318 04:38:30.663232   17322 kubeadm.go:309] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0318 04:38:30.833329   17322 kubeadm.go:309] [addons] Applied essential addon: CoreDNS
	I0318 04:38:31.051614   17322 kubeadm.go:309] [addons] Applied essential addon: kube-proxy
	I0318 04:38:31.052139   17322 kubeadm.go:309] 
	I0318 04:38:31.052172   17322 kubeadm.go:309] Your Kubernetes control-plane has initialized successfully!
	I0318 04:38:31.052176   17322 kubeadm.go:309] 
	I0318 04:38:31.052215   17322 kubeadm.go:309] To start using your cluster, you need to run the following as a regular user:
	I0318 04:38:31.052219   17322 kubeadm.go:309] 
	I0318 04:38:31.052231   17322 kubeadm.go:309]   mkdir -p $HOME/.kube
	I0318 04:38:31.052284   17322 kubeadm.go:309]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0318 04:38:31.052311   17322 kubeadm.go:309]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0318 04:38:31.052315   17322 kubeadm.go:309] 
	I0318 04:38:31.052345   17322 kubeadm.go:309] Alternatively, if you are the root user, you can run:
	I0318 04:38:31.052349   17322 kubeadm.go:309] 
	I0318 04:38:31.052379   17322 kubeadm.go:309]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0318 04:38:31.052383   17322 kubeadm.go:309] 
	I0318 04:38:31.052408   17322 kubeadm.go:309] You should now deploy a pod network to the cluster.
	I0318 04:38:31.052444   17322 kubeadm.go:309] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0318 04:38:31.052479   17322 kubeadm.go:309]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0318 04:38:31.052483   17322 kubeadm.go:309] 
	I0318 04:38:31.052524   17322 kubeadm.go:309] You can now join any number of control-plane nodes by copying certificate authorities
	I0318 04:38:31.052561   17322 kubeadm.go:309] and service account keys on each node and then running the following as root:
	I0318 04:38:31.052564   17322 kubeadm.go:309] 
	I0318 04:38:31.052626   17322 kubeadm.go:309]   kubeadm join control-plane.minikube.internal:8443 --token 23utxv.u7ge82ksglucw1qd \
	I0318 04:38:31.052679   17322 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:2762dffea2ede86231df0e7bc748eefca9b65ca5bd96e5f605bd5b60ef0281dd \
	I0318 04:38:31.052692   17322 kubeadm.go:309] 	--control-plane 
	I0318 04:38:31.052696   17322 kubeadm.go:309] 
	I0318 04:38:31.052738   17322 kubeadm.go:309] Then you can join any number of worker nodes by running the following on each as root:
	I0318 04:38:31.052744   17322 kubeadm.go:309] 
	I0318 04:38:31.052779   17322 kubeadm.go:309] kubeadm join control-plane.minikube.internal:8443 --token 23utxv.u7ge82ksglucw1qd \
	I0318 04:38:31.052827   17322 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:2762dffea2ede86231df0e7bc748eefca9b65ca5bd96e5f605bd5b60ef0281dd 
	I0318 04:38:31.052882   17322 kubeadm.go:309] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0318 04:38:31.053003   17322 cni.go:84] Creating CNI manager for ""
	I0318 04:38:31.053012   17322 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0318 04:38:31.057479   17322 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0318 04:38:31.069439   17322 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0318 04:38:31.073061   17322 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
	I0318 04:38:31.077650   17322 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0318 04:38:31.077698   17322 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 04:38:31.077698   17322 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes running-upgrade-738000 minikube.k8s.io/updated_at=2024_03_18T04_38_31_0700 minikube.k8s.io/version=v1.32.0 minikube.k8s.io/commit=16bdcbec856cf730004e5bed78d1b7625f13388a minikube.k8s.io/name=running-upgrade-738000 minikube.k8s.io/primary=true
	I0318 04:38:31.080695   17322 ops.go:34] apiserver oom_adj: -16
	I0318 04:38:31.125629   17322 kubeadm.go:1107] duration metric: took 47.973666ms to wait for elevateKubeSystemPrivileges
	W0318 04:38:31.125721   17322 kubeadm.go:286] apiserver tunnel failed: apiserver port not set
	I0318 04:38:31.125727   17322 kubeadm.go:393] duration metric: took 4m15.623333917s to StartCluster
	I0318 04:38:31.125738   17322 settings.go:142] acquiring lock: {Name:mk8634ba9e118796c1213288fbf27edefcbb67ad Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0318 04:38:31.125890   17322 settings.go:150] Updating kubeconfig:  /Users/jenkins/minikube-integration/18429-15072/kubeconfig
	I0318 04:38:31.126306   17322 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/18429-15072/kubeconfig: {Name:mkeb86e27ccdf30a065b43661cfe2af2dc198b46 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0318 04:38:31.126482   17322 start.go:234] Will wait 6m0s for node &{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0318 04:38:31.131434   17322 out.go:177] * Verifying Kubernetes components...
	I0318 04:38:31.126567   17322 addons.go:502] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false]
	I0318 04:38:31.126675   17322 config.go:182] Loaded profile config "running-upgrade-738000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0318 04:38:31.139380   17322 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0318 04:38:31.139385   17322 addons.go:69] Setting default-storageclass=true in profile "running-upgrade-738000"
	I0318 04:38:31.139388   17322 addons.go:69] Setting storage-provisioner=true in profile "running-upgrade-738000"
	I0318 04:38:31.139395   17322 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "running-upgrade-738000"
	I0318 04:38:31.139448   17322 addons.go:234] Setting addon storage-provisioner=true in "running-upgrade-738000"
	W0318 04:38:31.139452   17322 addons.go:243] addon storage-provisioner should already be in state true
	I0318 04:38:31.139467   17322 host.go:66] Checking if "running-upgrade-738000" exists ...
	I0318 04:38:31.140980   17322 kapi.go:59] client config for running-upgrade-738000: &rest.Config{Host:"https://10.0.2.15:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/18429-15072/.minikube/profiles/running-upgrade-738000/client.crt", KeyFile:"/Users/jenkins/minikube-integration/18429-15072/.minikube/profiles/running-upgrade-738000/client.key", CAFile:"/Users/jenkins/minikube-integration/18429-15072/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x103bb2a80), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0318 04:38:31.141776   17322 addons.go:234] Setting addon default-storageclass=true in "running-upgrade-738000"
	W0318 04:38:31.141784   17322 addons.go:243] addon default-storageclass should already be in state true
	I0318 04:38:31.141794   17322 host.go:66] Checking if "running-upgrade-738000" exists ...
	I0318 04:38:31.146585   17322 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0318 04:38:31.150422   17322 addons.go:426] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0318 04:38:31.150431   17322 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0318 04:38:31.150440   17322 sshutil.go:53] new ssh client: &{IP:localhost Port:53280 SSHKeyPath:/Users/jenkins/minikube-integration/18429-15072/.minikube/machines/running-upgrade-738000/id_rsa Username:docker}
	I0318 04:38:31.151284   17322 addons.go:426] installing /etc/kubernetes/addons/storageclass.yaml
	I0318 04:38:31.151292   17322 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0318 04:38:31.151297   17322 sshutil.go:53] new ssh client: &{IP:localhost Port:53280 SSHKeyPath:/Users/jenkins/minikube-integration/18429-15072/.minikube/machines/running-upgrade-738000/id_rsa Username:docker}
	I0318 04:38:31.216516   17322 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0318 04:38:31.221767   17322 api_server.go:52] waiting for apiserver process to appear ...
	I0318 04:38:31.221811   17322 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 04:38:31.225549   17322 api_server.go:72] duration metric: took 99.055334ms to wait for apiserver process to appear ...
	I0318 04:38:31.225558   17322 api_server.go:88] waiting for apiserver healthz status ...
	I0318 04:38:31.225564   17322 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0318 04:38:31.249463   17322 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0318 04:38:31.254708   17322 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0318 04:38:36.227557   17322 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0318 04:38:36.227593   17322 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0318 04:38:41.227741   17322 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0318 04:38:41.227761   17322 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0318 04:38:46.228407   17322 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0318 04:38:46.228430   17322 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0318 04:38:51.228811   17322 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0318 04:38:51.228853   17322 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0318 04:38:56.229441   17322 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0318 04:38:56.229470   17322 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0318 04:39:01.230631   17322 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0318 04:39:01.230685   17322 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	W0318 04:39:01.609657   17322 out.go:239] ! Enabling 'default-storageclass' returned an error: running callbacks: [Error making standard the default storage class: Error listing StorageClasses: Get "https://10.0.2.15:8443/apis/storage.k8s.io/v1/storageclasses": dial tcp 10.0.2.15:8443: i/o timeout]
	! Enabling 'default-storageclass' returned an error: running callbacks: [Error making standard the default storage class: Error listing StorageClasses: Get "https://10.0.2.15:8443/apis/storage.k8s.io/v1/storageclasses": dial tcp 10.0.2.15:8443: i/o timeout]
	I0318 04:39:01.618853   17322 out.go:177] * Enabled addons: storage-provisioner
	I0318 04:39:01.626831   17322 addons.go:505] duration metric: took 30.501336917s for enable addons: enabled=[storage-provisioner]
	I0318 04:39:06.232228   17322 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0318 04:39:06.232273   17322 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0318 04:39:11.234180   17322 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0318 04:39:11.234242   17322 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0318 04:39:16.234465   17322 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0318 04:39:16.234561   17322 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0318 04:39:21.236949   17322 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0318 04:39:21.236981   17322 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0318 04:39:26.239074   17322 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0318 04:39:26.239125   17322 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0318 04:39:31.241256   17322 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0318 04:39:31.241437   17322 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0318 04:39:31.252029   17322 logs.go:276] 1 containers: [d454e6154049]
	I0318 04:39:31.252092   17322 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0318 04:39:31.262171   17322 logs.go:276] 1 containers: [8046e42578d2]
	I0318 04:39:31.262250   17322 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0318 04:39:31.272772   17322 logs.go:276] 2 containers: [367d0316359f 3a24458b86a4]
	I0318 04:39:31.272846   17322 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0318 04:39:31.283853   17322 logs.go:276] 1 containers: [894b6a0a0702]
	I0318 04:39:31.283924   17322 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0318 04:39:31.294042   17322 logs.go:276] 1 containers: [04d6cdf60161]
	I0318 04:39:31.294142   17322 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0318 04:39:31.304914   17322 logs.go:276] 1 containers: [22a920f51952]
	I0318 04:39:31.304986   17322 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0318 04:39:31.315434   17322 logs.go:276] 0 containers: []
	W0318 04:39:31.315446   17322 logs.go:278] No container was found matching "kindnet"
	I0318 04:39:31.315505   17322 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0318 04:39:31.325872   17322 logs.go:276] 1 containers: [5bfb08f2c96a]
	I0318 04:39:31.325885   17322 logs.go:123] Gathering logs for kubelet ...
	I0318 04:39:31.325890   17322 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0318 04:39:31.342277   17322 logs.go:138] Found kubelet problem: Mar 18 11:34:35 running-upgrade-738000 kubelet[3738]: W0318 11:34:35.864044    3738 reflector.go:324] object-"kube-system"/"kube-proxy": failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:running-upgrade-738000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-738000' and this object
	W0318 04:39:31.342373   17322 logs.go:138] Found kubelet problem: Mar 18 11:34:35 running-upgrade-738000 kubelet[3738]: E0318 11:34:35.864102    3738 reflector.go:138] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:running-upgrade-738000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-738000' and this object
	I0318 04:39:31.359926   17322 logs.go:123] Gathering logs for dmesg ...
	I0318 04:39:31.359935   17322 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 04:39:31.364133   17322 logs.go:123] Gathering logs for etcd [8046e42578d2] ...
	I0318 04:39:31.364142   17322 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8046e42578d2"
	I0318 04:39:31.379242   17322 logs.go:123] Gathering logs for coredns [367d0316359f] ...
	I0318 04:39:31.379252   17322 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 367d0316359f"
	I0318 04:39:31.391199   17322 logs.go:123] Gathering logs for kube-scheduler [894b6a0a0702] ...
	I0318 04:39:31.391209   17322 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 894b6a0a0702"
	I0318 04:39:31.405854   17322 logs.go:123] Gathering logs for kube-controller-manager [22a920f51952] ...
	I0318 04:39:31.405868   17322 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 22a920f51952"
	I0318 04:39:31.423515   17322 logs.go:123] Gathering logs for Docker ...
	I0318 04:39:31.423525   17322 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0318 04:39:31.446939   17322 logs.go:123] Gathering logs for describe nodes ...
	I0318 04:39:31.446948   17322 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0318 04:39:31.482387   17322 logs.go:123] Gathering logs for kube-apiserver [d454e6154049] ...
	I0318 04:39:31.482401   17322 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d454e6154049"
	I0318 04:39:31.496896   17322 logs.go:123] Gathering logs for coredns [3a24458b86a4] ...
	I0318 04:39:31.496910   17322 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3a24458b86a4"
	I0318 04:39:31.508442   17322 logs.go:123] Gathering logs for kube-proxy [04d6cdf60161] ...
	I0318 04:39:31.508451   17322 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 04d6cdf60161"
	I0318 04:39:31.519806   17322 logs.go:123] Gathering logs for storage-provisioner [5bfb08f2c96a] ...
	I0318 04:39:31.519815   17322 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5bfb08f2c96a"
	I0318 04:39:31.531224   17322 logs.go:123] Gathering logs for container status ...
	I0318 04:39:31.531238   17322 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 04:39:31.542466   17322 out.go:304] Setting ErrFile to fd 2...
	I0318 04:39:31.542478   17322 out.go:338] TERM=,COLORTERM=, which probably does not support color
	W0318 04:39:31.542507   17322 out.go:239] X Problems detected in kubelet:
	X Problems detected in kubelet:
	W0318 04:39:31.542511   17322 out.go:239]   Mar 18 11:34:35 running-upgrade-738000 kubelet[3738]: W0318 11:34:35.864044    3738 reflector.go:324] object-"kube-system"/"kube-proxy": failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:running-upgrade-738000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-738000' and this object
	  Mar 18 11:34:35 running-upgrade-738000 kubelet[3738]: W0318 11:34:35.864044    3738 reflector.go:324] object-"kube-system"/"kube-proxy": failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:running-upgrade-738000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-738000' and this object
	W0318 04:39:31.542515   17322 out.go:239]   Mar 18 11:34:35 running-upgrade-738000 kubelet[3738]: E0318 11:34:35.864102    3738 reflector.go:138] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:running-upgrade-738000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-738000' and this object
	  Mar 18 11:34:35 running-upgrade-738000 kubelet[3738]: E0318 11:34:35.864102    3738 reflector.go:138] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:running-upgrade-738000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-738000' and this object
	I0318 04:39:31.542521   17322 out.go:304] Setting ErrFile to fd 2...
	I0318 04:39:31.542525   17322 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0318 04:39:41.545720   17322 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0318 04:39:46.547859   17322 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0318 04:39:46.548115   17322 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0318 04:39:46.572573   17322 logs.go:276] 1 containers: [d454e6154049]
	I0318 04:39:46.572676   17322 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0318 04:39:46.588419   17322 logs.go:276] 1 containers: [8046e42578d2]
	I0318 04:39:46.588489   17322 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0318 04:39:46.601092   17322 logs.go:276] 2 containers: [367d0316359f 3a24458b86a4]
	I0318 04:39:46.601166   17322 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0318 04:39:46.611805   17322 logs.go:276] 1 containers: [894b6a0a0702]
	I0318 04:39:46.611881   17322 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0318 04:39:46.622026   17322 logs.go:276] 1 containers: [04d6cdf60161]
	I0318 04:39:46.622104   17322 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0318 04:39:46.633297   17322 logs.go:276] 1 containers: [22a920f51952]
	I0318 04:39:46.633366   17322 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0318 04:39:46.643450   17322 logs.go:276] 0 containers: []
	W0318 04:39:46.643461   17322 logs.go:278] No container was found matching "kindnet"
	I0318 04:39:46.643521   17322 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0318 04:39:46.653669   17322 logs.go:276] 1 containers: [5bfb08f2c96a]
	I0318 04:39:46.653685   17322 logs.go:123] Gathering logs for coredns [367d0316359f] ...
	I0318 04:39:46.653692   17322 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 367d0316359f"
	I0318 04:39:46.665217   17322 logs.go:123] Gathering logs for coredns [3a24458b86a4] ...
	I0318 04:39:46.665228   17322 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3a24458b86a4"
	I0318 04:39:46.677135   17322 logs.go:123] Gathering logs for storage-provisioner [5bfb08f2c96a] ...
	I0318 04:39:46.677145   17322 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5bfb08f2c96a"
	I0318 04:39:46.688738   17322 logs.go:123] Gathering logs for Docker ...
	I0318 04:39:46.688748   17322 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0318 04:39:46.712317   17322 logs.go:123] Gathering logs for container status ...
	I0318 04:39:46.712324   17322 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 04:39:46.723081   17322 logs.go:123] Gathering logs for describe nodes ...
	I0318 04:39:46.723091   17322 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0318 04:39:46.764672   17322 logs.go:123] Gathering logs for etcd [8046e42578d2] ...
	I0318 04:39:46.764682   17322 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8046e42578d2"
	I0318 04:39:46.778975   17322 logs.go:123] Gathering logs for kube-apiserver [d454e6154049] ...
	I0318 04:39:46.778984   17322 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d454e6154049"
	I0318 04:39:46.793092   17322 logs.go:123] Gathering logs for kube-scheduler [894b6a0a0702] ...
	I0318 04:39:46.793102   17322 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 894b6a0a0702"
	I0318 04:39:46.807546   17322 logs.go:123] Gathering logs for kube-proxy [04d6cdf60161] ...
	I0318 04:39:46.807557   17322 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 04d6cdf60161"
	I0318 04:39:46.823580   17322 logs.go:123] Gathering logs for kube-controller-manager [22a920f51952] ...
	I0318 04:39:46.823592   17322 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 22a920f51952"
	I0318 04:39:46.841446   17322 logs.go:123] Gathering logs for kubelet ...
	I0318 04:39:46.841457   17322 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0318 04:39:46.859220   17322 logs.go:138] Found kubelet problem: Mar 18 11:34:35 running-upgrade-738000 kubelet[3738]: W0318 11:34:35.864044    3738 reflector.go:324] object-"kube-system"/"kube-proxy": failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:running-upgrade-738000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-738000' and this object
	W0318 04:39:46.859314   17322 logs.go:138] Found kubelet problem: Mar 18 11:34:35 running-upgrade-738000 kubelet[3738]: E0318 11:34:35.864102    3738 reflector.go:138] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:running-upgrade-738000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-738000' and this object
	I0318 04:39:46.877128   17322 logs.go:123] Gathering logs for dmesg ...
	I0318 04:39:46.877137   17322 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 04:39:46.883531   17322 out.go:304] Setting ErrFile to fd 2...
	I0318 04:39:46.883542   17322 out.go:338] TERM=,COLORTERM=, which probably does not support color
	W0318 04:39:46.883570   17322 out.go:239] X Problems detected in kubelet:
	X Problems detected in kubelet:
	W0318 04:39:46.883574   17322 out.go:239]   Mar 18 11:34:35 running-upgrade-738000 kubelet[3738]: W0318 11:34:35.864044    3738 reflector.go:324] object-"kube-system"/"kube-proxy": failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:running-upgrade-738000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-738000' and this object
	  Mar 18 11:34:35 running-upgrade-738000 kubelet[3738]: W0318 11:34:35.864044    3738 reflector.go:324] object-"kube-system"/"kube-proxy": failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:running-upgrade-738000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-738000' and this object
	W0318 04:39:46.883577   17322 out.go:239]   Mar 18 11:34:35 running-upgrade-738000 kubelet[3738]: E0318 11:34:35.864102    3738 reflector.go:138] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:running-upgrade-738000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-738000' and this object
	  Mar 18 11:34:35 running-upgrade-738000 kubelet[3738]: E0318 11:34:35.864102    3738 reflector.go:138] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:running-upgrade-738000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-738000' and this object
	I0318 04:39:46.883581   17322 out.go:304] Setting ErrFile to fd 2...
	I0318 04:39:46.883585   17322 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0318 04:39:56.887404   17322 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0318 04:40:01.889598   17322 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0318 04:40:01.889844   17322 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0318 04:40:01.913917   17322 logs.go:276] 1 containers: [d454e6154049]
	I0318 04:40:01.914024   17322 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0318 04:40:01.930634   17322 logs.go:276] 1 containers: [8046e42578d2]
	I0318 04:40:01.930717   17322 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0318 04:40:01.943447   17322 logs.go:276] 2 containers: [367d0316359f 3a24458b86a4]
	I0318 04:40:01.943522   17322 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0318 04:40:01.954675   17322 logs.go:276] 1 containers: [894b6a0a0702]
	I0318 04:40:01.954741   17322 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0318 04:40:01.965030   17322 logs.go:276] 1 containers: [04d6cdf60161]
	I0318 04:40:01.965106   17322 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0318 04:40:01.975254   17322 logs.go:276] 1 containers: [22a920f51952]
	I0318 04:40:01.975322   17322 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0318 04:40:01.985083   17322 logs.go:276] 0 containers: []
	W0318 04:40:01.985095   17322 logs.go:278] No container was found matching "kindnet"
	I0318 04:40:01.985155   17322 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0318 04:40:01.995400   17322 logs.go:276] 1 containers: [5bfb08f2c96a]
	I0318 04:40:01.995415   17322 logs.go:123] Gathering logs for kubelet ...
	I0318 04:40:01.995420   17322 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0318 04:40:02.011919   17322 logs.go:138] Found kubelet problem: Mar 18 11:34:35 running-upgrade-738000 kubelet[3738]: W0318 11:34:35.864044    3738 reflector.go:324] object-"kube-system"/"kube-proxy": failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:running-upgrade-738000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-738000' and this object
	W0318 04:40:02.012019   17322 logs.go:138] Found kubelet problem: Mar 18 11:34:35 running-upgrade-738000 kubelet[3738]: E0318 11:34:35.864102    3738 reflector.go:138] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:running-upgrade-738000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-738000' and this object
	I0318 04:40:02.029326   17322 logs.go:123] Gathering logs for describe nodes ...
	I0318 04:40:02.029334   17322 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0318 04:40:02.062811   17322 logs.go:123] Gathering logs for coredns [367d0316359f] ...
	I0318 04:40:02.062825   17322 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 367d0316359f"
	I0318 04:40:02.073875   17322 logs.go:123] Gathering logs for kube-scheduler [894b6a0a0702] ...
	I0318 04:40:02.073888   17322 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 894b6a0a0702"
	I0318 04:40:02.088793   17322 logs.go:123] Gathering logs for kube-controller-manager [22a920f51952] ...
	I0318 04:40:02.088803   17322 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 22a920f51952"
	I0318 04:40:02.108110   17322 logs.go:123] Gathering logs for Docker ...
	I0318 04:40:02.108121   17322 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0318 04:40:02.132105   17322 logs.go:123] Gathering logs for container status ...
	I0318 04:40:02.132114   17322 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 04:40:02.144546   17322 logs.go:123] Gathering logs for dmesg ...
	I0318 04:40:02.144560   17322 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 04:40:02.149369   17322 logs.go:123] Gathering logs for kube-apiserver [d454e6154049] ...
	I0318 04:40:02.149379   17322 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d454e6154049"
	I0318 04:40:02.163282   17322 logs.go:123] Gathering logs for etcd [8046e42578d2] ...
	I0318 04:40:02.163293   17322 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8046e42578d2"
	I0318 04:40:02.177012   17322 logs.go:123] Gathering logs for coredns [3a24458b86a4] ...
	I0318 04:40:02.180945   17322 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3a24458b86a4"
	I0318 04:40:02.192241   17322 logs.go:123] Gathering logs for kube-proxy [04d6cdf60161] ...
	I0318 04:40:02.192252   17322 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 04d6cdf60161"
	I0318 04:40:02.203922   17322 logs.go:123] Gathering logs for storage-provisioner [5bfb08f2c96a] ...
	I0318 04:40:02.203933   17322 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5bfb08f2c96a"
	I0318 04:40:02.215592   17322 out.go:304] Setting ErrFile to fd 2...
	I0318 04:40:02.215604   17322 out.go:338] TERM=,COLORTERM=, which probably does not support color
	W0318 04:40:02.215629   17322 out.go:239] X Problems detected in kubelet:
	X Problems detected in kubelet:
	W0318 04:40:02.215633   17322 out.go:239]   Mar 18 11:34:35 running-upgrade-738000 kubelet[3738]: W0318 11:34:35.864044    3738 reflector.go:324] object-"kube-system"/"kube-proxy": failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:running-upgrade-738000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-738000' and this object
	  Mar 18 11:34:35 running-upgrade-738000 kubelet[3738]: W0318 11:34:35.864044    3738 reflector.go:324] object-"kube-system"/"kube-proxy": failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:running-upgrade-738000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-738000' and this object
	W0318 04:40:02.215668   17322 out.go:239]   Mar 18 11:34:35 running-upgrade-738000 kubelet[3738]: E0318 11:34:35.864102    3738 reflector.go:138] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:running-upgrade-738000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-738000' and this object
	  Mar 18 11:34:35 running-upgrade-738000 kubelet[3738]: E0318 11:34:35.864102    3738 reflector.go:138] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:running-upgrade-738000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-738000' and this object
	I0318 04:40:02.215675   17322 out.go:304] Setting ErrFile to fd 2...
	I0318 04:40:02.215680   17322 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0318 04:40:12.218358   17322 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0318 04:40:17.220813   17322 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0318 04:40:17.221017   17322 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0318 04:40:17.235323   17322 logs.go:276] 1 containers: [d454e6154049]
	I0318 04:40:17.235404   17322 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0318 04:40:17.251235   17322 logs.go:276] 1 containers: [8046e42578d2]
	I0318 04:40:17.251307   17322 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0318 04:40:17.262340   17322 logs.go:276] 2 containers: [367d0316359f 3a24458b86a4]
	I0318 04:40:17.262409   17322 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0318 04:40:17.272659   17322 logs.go:276] 1 containers: [894b6a0a0702]
	I0318 04:40:17.272731   17322 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0318 04:40:17.286373   17322 logs.go:276] 1 containers: [04d6cdf60161]
	I0318 04:40:17.286444   17322 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0318 04:40:17.296848   17322 logs.go:276] 1 containers: [22a920f51952]
	I0318 04:40:17.296914   17322 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0318 04:40:17.306799   17322 logs.go:276] 0 containers: []
	W0318 04:40:17.306809   17322 logs.go:278] No container was found matching "kindnet"
	I0318 04:40:17.306869   17322 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0318 04:40:17.317130   17322 logs.go:276] 1 containers: [5bfb08f2c96a]
	I0318 04:40:17.317145   17322 logs.go:123] Gathering logs for Docker ...
	I0318 04:40:17.317150   17322 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0318 04:40:17.341295   17322 logs.go:123] Gathering logs for kubelet ...
	I0318 04:40:17.341305   17322 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0318 04:40:17.358657   17322 logs.go:138] Found kubelet problem: Mar 18 11:34:35 running-upgrade-738000 kubelet[3738]: W0318 11:34:35.864044    3738 reflector.go:324] object-"kube-system"/"kube-proxy": failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:running-upgrade-738000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-738000' and this object
	W0318 04:40:17.358752   17322 logs.go:138] Found kubelet problem: Mar 18 11:34:35 running-upgrade-738000 kubelet[3738]: E0318 11:34:35.864102    3738 reflector.go:138] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:running-upgrade-738000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-738000' and this object
	I0318 04:40:17.376375   17322 logs.go:123] Gathering logs for describe nodes ...
	I0318 04:40:17.376383   17322 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0318 04:40:17.410087   17322 logs.go:123] Gathering logs for coredns [3a24458b86a4] ...
	I0318 04:40:17.410097   17322 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3a24458b86a4"
	I0318 04:40:17.421570   17322 logs.go:123] Gathering logs for kube-controller-manager [22a920f51952] ...
	I0318 04:40:17.421584   17322 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 22a920f51952"
	I0318 04:40:17.442847   17322 logs.go:123] Gathering logs for storage-provisioner [5bfb08f2c96a] ...
	I0318 04:40:17.442859   17322 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5bfb08f2c96a"
	I0318 04:40:17.457037   17322 logs.go:123] Gathering logs for kube-proxy [04d6cdf60161] ...
	I0318 04:40:17.457049   17322 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 04d6cdf60161"
	I0318 04:40:17.468715   17322 logs.go:123] Gathering logs for container status ...
	I0318 04:40:17.468728   17322 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 04:40:17.485232   17322 logs.go:123] Gathering logs for dmesg ...
	I0318 04:40:17.485241   17322 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 04:40:17.489755   17322 logs.go:123] Gathering logs for kube-apiserver [d454e6154049] ...
	I0318 04:40:17.489763   17322 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d454e6154049"
	I0318 04:40:17.503520   17322 logs.go:123] Gathering logs for etcd [8046e42578d2] ...
	I0318 04:40:17.503529   17322 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8046e42578d2"
	I0318 04:40:17.518167   17322 logs.go:123] Gathering logs for coredns [367d0316359f] ...
	I0318 04:40:17.518177   17322 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 367d0316359f"
	I0318 04:40:17.529610   17322 logs.go:123] Gathering logs for kube-scheduler [894b6a0a0702] ...
	I0318 04:40:17.529622   17322 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 894b6a0a0702"
	I0318 04:40:17.548129   17322 out.go:304] Setting ErrFile to fd 2...
	I0318 04:40:17.548140   17322 out.go:338] TERM=,COLORTERM=, which probably does not support color
	W0318 04:40:17.548165   17322 out.go:239] X Problems detected in kubelet:
	X Problems detected in kubelet:
	W0318 04:40:17.548169   17322 out.go:239]   Mar 18 11:34:35 running-upgrade-738000 kubelet[3738]: W0318 11:34:35.864044    3738 reflector.go:324] object-"kube-system"/"kube-proxy": failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:running-upgrade-738000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-738000' and this object
	  Mar 18 11:34:35 running-upgrade-738000 kubelet[3738]: W0318 11:34:35.864044    3738 reflector.go:324] object-"kube-system"/"kube-proxy": failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:running-upgrade-738000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-738000' and this object
	W0318 04:40:17.548174   17322 out.go:239]   Mar 18 11:34:35 running-upgrade-738000 kubelet[3738]: E0318 11:34:35.864102    3738 reflector.go:138] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:running-upgrade-738000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-738000' and this object
	  Mar 18 11:34:35 running-upgrade-738000 kubelet[3738]: E0318 11:34:35.864102    3738 reflector.go:138] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:running-upgrade-738000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-738000' and this object
	I0318 04:40:17.548177   17322 out.go:304] Setting ErrFile to fd 2...
	I0318 04:40:17.548198   17322 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0318 04:40:27.550715   17322 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0318 04:40:32.553217   17322 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0318 04:40:32.553524   17322 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0318 04:40:32.589462   17322 logs.go:276] 1 containers: [d454e6154049]
	I0318 04:40:32.589594   17322 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0318 04:40:32.606174   17322 logs.go:276] 1 containers: [8046e42578d2]
	I0318 04:40:32.606255   17322 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0318 04:40:32.619397   17322 logs.go:276] 4 containers: [0086537aa016 0a040eebb706 367d0316359f 3a24458b86a4]
	I0318 04:40:32.619479   17322 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0318 04:40:32.630191   17322 logs.go:276] 1 containers: [894b6a0a0702]
	I0318 04:40:32.630262   17322 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0318 04:40:32.640689   17322 logs.go:276] 1 containers: [04d6cdf60161]
	I0318 04:40:32.640756   17322 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0318 04:40:32.650713   17322 logs.go:276] 1 containers: [22a920f51952]
	I0318 04:40:32.650782   17322 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0318 04:40:32.661815   17322 logs.go:276] 0 containers: []
	W0318 04:40:32.661828   17322 logs.go:278] No container was found matching "kindnet"
	I0318 04:40:32.661886   17322 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0318 04:40:32.673249   17322 logs.go:276] 1 containers: [5bfb08f2c96a]
	I0318 04:40:32.673267   17322 logs.go:123] Gathering logs for etcd [8046e42578d2] ...
	I0318 04:40:32.673272   17322 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8046e42578d2"
	I0318 04:40:32.687168   17322 logs.go:123] Gathering logs for coredns [0086537aa016] ...
	I0318 04:40:32.687181   17322 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0086537aa016"
	I0318 04:40:32.701311   17322 logs.go:123] Gathering logs for coredns [0a040eebb706] ...
	I0318 04:40:32.701323   17322 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0a040eebb706"
	I0318 04:40:32.712254   17322 logs.go:123] Gathering logs for coredns [3a24458b86a4] ...
	I0318 04:40:32.712264   17322 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3a24458b86a4"
	I0318 04:40:32.726593   17322 logs.go:123] Gathering logs for Docker ...
	I0318 04:40:32.726604   17322 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0318 04:40:32.752244   17322 logs.go:123] Gathering logs for kubelet ...
	I0318 04:40:32.752253   17322 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0318 04:40:32.771893   17322 logs.go:138] Found kubelet problem: Mar 18 11:34:35 running-upgrade-738000 kubelet[3738]: W0318 11:34:35.864044    3738 reflector.go:324] object-"kube-system"/"kube-proxy": failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:running-upgrade-738000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-738000' and this object
	W0318 04:40:32.771992   17322 logs.go:138] Found kubelet problem: Mar 18 11:34:35 running-upgrade-738000 kubelet[3738]: E0318 11:34:35.864102    3738 reflector.go:138] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:running-upgrade-738000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-738000' and this object
	I0318 04:40:32.789712   17322 logs.go:123] Gathering logs for kube-apiserver [d454e6154049] ...
	I0318 04:40:32.789731   17322 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d454e6154049"
	I0318 04:40:32.804336   17322 logs.go:123] Gathering logs for kube-controller-manager [22a920f51952] ...
	I0318 04:40:32.804347   17322 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 22a920f51952"
	I0318 04:40:32.822545   17322 logs.go:123] Gathering logs for storage-provisioner [5bfb08f2c96a] ...
	I0318 04:40:32.822556   17322 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5bfb08f2c96a"
	I0318 04:40:32.834058   17322 logs.go:123] Gathering logs for dmesg ...
	I0318 04:40:32.834069   17322 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 04:40:32.839222   17322 logs.go:123] Gathering logs for kube-scheduler [894b6a0a0702] ...
	I0318 04:40:32.839229   17322 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 894b6a0a0702"
	I0318 04:40:32.853547   17322 logs.go:123] Gathering logs for describe nodes ...
	I0318 04:40:32.853556   17322 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0318 04:40:32.891455   17322 logs.go:123] Gathering logs for coredns [367d0316359f] ...
	I0318 04:40:32.891469   17322 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 367d0316359f"
	I0318 04:40:32.903348   17322 logs.go:123] Gathering logs for kube-proxy [04d6cdf60161] ...
	I0318 04:40:32.903359   17322 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 04d6cdf60161"
	I0318 04:40:32.915413   17322 logs.go:123] Gathering logs for container status ...
	I0318 04:40:32.915424   17322 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 04:40:32.926837   17322 out.go:304] Setting ErrFile to fd 2...
	I0318 04:40:32.926847   17322 out.go:338] TERM=,COLORTERM=, which probably does not support color
	W0318 04:40:32.926873   17322 out.go:239] X Problems detected in kubelet:
	X Problems detected in kubelet:
	W0318 04:40:32.926877   17322 out.go:239]   Mar 18 11:34:35 running-upgrade-738000 kubelet[3738]: W0318 11:34:35.864044    3738 reflector.go:324] object-"kube-system"/"kube-proxy": failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:running-upgrade-738000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-738000' and this object
	  Mar 18 11:34:35 running-upgrade-738000 kubelet[3738]: W0318 11:34:35.864044    3738 reflector.go:324] object-"kube-system"/"kube-proxy": failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:running-upgrade-738000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-738000' and this object
	W0318 04:40:32.926893   17322 out.go:239]   Mar 18 11:34:35 running-upgrade-738000 kubelet[3738]: E0318 11:34:35.864102    3738 reflector.go:138] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:running-upgrade-738000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-738000' and this object
	  Mar 18 11:34:35 running-upgrade-738000 kubelet[3738]: E0318 11:34:35.864102    3738 reflector.go:138] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:running-upgrade-738000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-738000' and this object
	I0318 04:40:32.926900   17322 out.go:304] Setting ErrFile to fd 2...
	I0318 04:40:32.926906   17322 out.go:338] TERM=,COLORTERM=, which probably does not support color
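	(Reference sketch, not part of the captured trace: the probe-and-collect cycle above can be reproduced by hand inside the guest. The healthz URL, container filter, container ID, and log command below are taken verbatim from the trace; the use of curl is an assumption, since the harness probes the endpoint with its own HTTP client rather than a shell command.)
		# probe the apiserver health endpoint the trace keeps checking (curl is assumed)
		curl -k https://10.0.2.15:8443/healthz
		# locate the kube-apiserver container and dump its recent logs, exactly as the trace does
		docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
		docker logs --tail 400 d454e6154049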
	I0318 04:40:42.930709   17322 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0318 04:40:47.932970   17322 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0318 04:40:47.933324   17322 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0318 04:40:47.966677   17322 logs.go:276] 1 containers: [d454e6154049]
	I0318 04:40:47.966806   17322 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0318 04:40:47.987532   17322 logs.go:276] 1 containers: [8046e42578d2]
	I0318 04:40:47.987633   17322 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0318 04:40:48.002381   17322 logs.go:276] 4 containers: [0086537aa016 0a040eebb706 367d0316359f 3a24458b86a4]
	I0318 04:40:48.002463   17322 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0318 04:40:48.014793   17322 logs.go:276] 1 containers: [894b6a0a0702]
	I0318 04:40:48.014866   17322 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0318 04:40:48.031467   17322 logs.go:276] 1 containers: [04d6cdf60161]
	I0318 04:40:48.031536   17322 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0318 04:40:48.042748   17322 logs.go:276] 1 containers: [22a920f51952]
	I0318 04:40:48.042825   17322 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0318 04:40:48.052822   17322 logs.go:276] 0 containers: []
	W0318 04:40:48.052833   17322 logs.go:278] No container was found matching "kindnet"
	I0318 04:40:48.052894   17322 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0318 04:40:48.063286   17322 logs.go:276] 1 containers: [5bfb08f2c96a]
	I0318 04:40:48.063309   17322 logs.go:123] Gathering logs for coredns [0086537aa016] ...
	I0318 04:40:48.063315   17322 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0086537aa016"
	I0318 04:40:48.075406   17322 logs.go:123] Gathering logs for coredns [0a040eebb706] ...
	I0318 04:40:48.075416   17322 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0a040eebb706"
	I0318 04:40:48.086782   17322 logs.go:123] Gathering logs for coredns [367d0316359f] ...
	I0318 04:40:48.086791   17322 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 367d0316359f"
	I0318 04:40:48.099958   17322 logs.go:123] Gathering logs for kube-controller-manager [22a920f51952] ...
	I0318 04:40:48.099971   17322 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 22a920f51952"
	I0318 04:40:48.118191   17322 logs.go:123] Gathering logs for storage-provisioner [5bfb08f2c96a] ...
	I0318 04:40:48.118201   17322 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5bfb08f2c96a"
	I0318 04:40:48.130217   17322 logs.go:123] Gathering logs for etcd [8046e42578d2] ...
	I0318 04:40:48.130226   17322 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8046e42578d2"
	I0318 04:40:48.145260   17322 logs.go:123] Gathering logs for Docker ...
	I0318 04:40:48.145269   17322 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0318 04:40:48.169652   17322 logs.go:123] Gathering logs for container status ...
	I0318 04:40:48.169659   17322 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 04:40:48.181721   17322 logs.go:123] Gathering logs for describe nodes ...
	I0318 04:40:48.181731   17322 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0318 04:40:48.217836   17322 logs.go:123] Gathering logs for kube-apiserver [d454e6154049] ...
	I0318 04:40:48.217847   17322 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d454e6154049"
	I0318 04:40:48.232343   17322 logs.go:123] Gathering logs for kube-scheduler [894b6a0a0702] ...
	I0318 04:40:48.232352   17322 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 894b6a0a0702"
	I0318 04:40:48.248530   17322 logs.go:123] Gathering logs for kube-proxy [04d6cdf60161] ...
	I0318 04:40:48.248538   17322 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 04d6cdf60161"
	I0318 04:40:48.261552   17322 logs.go:123] Gathering logs for kubelet ...
	I0318 04:40:48.261563   17322 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0318 04:40:48.279133   17322 logs.go:138] Found kubelet problem: Mar 18 11:34:35 running-upgrade-738000 kubelet[3738]: W0318 11:34:35.864044    3738 reflector.go:324] object-"kube-system"/"kube-proxy": failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:running-upgrade-738000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-738000' and this object
	W0318 04:40:48.279226   17322 logs.go:138] Found kubelet problem: Mar 18 11:34:35 running-upgrade-738000 kubelet[3738]: E0318 11:34:35.864102    3738 reflector.go:138] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:running-upgrade-738000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-738000' and this object
	I0318 04:40:48.296366   17322 logs.go:123] Gathering logs for dmesg ...
	I0318 04:40:48.296373   17322 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 04:40:48.301115   17322 logs.go:123] Gathering logs for coredns [3a24458b86a4] ...
	I0318 04:40:48.301122   17322 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3a24458b86a4"
	I0318 04:40:48.313389   17322 out.go:304] Setting ErrFile to fd 2...
	I0318 04:40:48.313400   17322 out.go:338] TERM=,COLORTERM=, which probably does not support color
	W0318 04:40:48.313425   17322 out.go:239] X Problems detected in kubelet:
	X Problems detected in kubelet:
	W0318 04:40:48.313430   17322 out.go:239]   Mar 18 11:34:35 running-upgrade-738000 kubelet[3738]: W0318 11:34:35.864044    3738 reflector.go:324] object-"kube-system"/"kube-proxy": failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:running-upgrade-738000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-738000' and this object
	  Mar 18 11:34:35 running-upgrade-738000 kubelet[3738]: W0318 11:34:35.864044    3738 reflector.go:324] object-"kube-system"/"kube-proxy": failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:running-upgrade-738000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-738000' and this object
	W0318 04:40:48.313434   17322 out.go:239]   Mar 18 11:34:35 running-upgrade-738000 kubelet[3738]: E0318 11:34:35.864102    3738 reflector.go:138] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:running-upgrade-738000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-738000' and this object
	  Mar 18 11:34:35 running-upgrade-738000 kubelet[3738]: E0318 11:34:35.864102    3738 reflector.go:138] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:running-upgrade-738000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-738000' and this object
	I0318 04:40:48.313507   17322 out.go:304] Setting ErrFile to fd 2...
	I0318 04:40:48.313524   17322 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0318 04:40:58.317365   17322 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0318 04:41:03.319522   17322 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0318 04:41:03.319711   17322 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0318 04:41:03.333138   17322 logs.go:276] 1 containers: [d454e6154049]
	I0318 04:41:03.333212   17322 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0318 04:41:03.344395   17322 logs.go:276] 1 containers: [8046e42578d2]
	I0318 04:41:03.344475   17322 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0318 04:41:03.355500   17322 logs.go:276] 4 containers: [0086537aa016 0a040eebb706 367d0316359f 3a24458b86a4]
	I0318 04:41:03.355575   17322 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0318 04:41:03.365951   17322 logs.go:276] 1 containers: [894b6a0a0702]
	I0318 04:41:03.366022   17322 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0318 04:41:03.380045   17322 logs.go:276] 1 containers: [04d6cdf60161]
	I0318 04:41:03.380114   17322 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0318 04:41:03.391824   17322 logs.go:276] 1 containers: [22a920f51952]
	I0318 04:41:03.391906   17322 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0318 04:41:03.401941   17322 logs.go:276] 0 containers: []
	W0318 04:41:03.401952   17322 logs.go:278] No container was found matching "kindnet"
	I0318 04:41:03.402010   17322 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0318 04:41:03.412937   17322 logs.go:276] 1 containers: [5bfb08f2c96a]
	I0318 04:41:03.412952   17322 logs.go:123] Gathering logs for Docker ...
	I0318 04:41:03.412957   17322 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0318 04:41:03.436911   17322 logs.go:123] Gathering logs for dmesg ...
	I0318 04:41:03.436919   17322 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 04:41:03.441094   17322 logs.go:123] Gathering logs for describe nodes ...
	I0318 04:41:03.441104   17322 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0318 04:41:03.477829   17322 logs.go:123] Gathering logs for kube-apiserver [d454e6154049] ...
	I0318 04:41:03.477841   17322 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d454e6154049"
	I0318 04:41:03.494518   17322 logs.go:123] Gathering logs for etcd [8046e42578d2] ...
	I0318 04:41:03.494531   17322 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8046e42578d2"
	I0318 04:41:03.510374   17322 logs.go:123] Gathering logs for coredns [0086537aa016] ...
	I0318 04:41:03.510385   17322 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0086537aa016"
	I0318 04:41:03.521842   17322 logs.go:123] Gathering logs for coredns [0a040eebb706] ...
	I0318 04:41:03.521853   17322 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0a040eebb706"
	I0318 04:41:03.533929   17322 logs.go:123] Gathering logs for coredns [367d0316359f] ...
	I0318 04:41:03.533944   17322 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 367d0316359f"
	I0318 04:41:03.546045   17322 logs.go:123] Gathering logs for kube-proxy [04d6cdf60161] ...
	I0318 04:41:03.546056   17322 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 04d6cdf60161"
	I0318 04:41:03.558666   17322 logs.go:123] Gathering logs for kube-controller-manager [22a920f51952] ...
	I0318 04:41:03.558677   17322 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 22a920f51952"
	I0318 04:41:03.576808   17322 logs.go:123] Gathering logs for container status ...
	I0318 04:41:03.576818   17322 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 04:41:03.588548   17322 logs.go:123] Gathering logs for kubelet ...
	I0318 04:41:03.588559   17322 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0318 04:41:03.606669   17322 logs.go:138] Found kubelet problem: Mar 18 11:34:35 running-upgrade-738000 kubelet[3738]: W0318 11:34:35.864044    3738 reflector.go:324] object-"kube-system"/"kube-proxy": failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:running-upgrade-738000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-738000' and this object
	W0318 04:41:03.606763   17322 logs.go:138] Found kubelet problem: Mar 18 11:34:35 running-upgrade-738000 kubelet[3738]: E0318 11:34:35.864102    3738 reflector.go:138] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:running-upgrade-738000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-738000' and this object
	I0318 04:41:03.624223   17322 logs.go:123] Gathering logs for coredns [3a24458b86a4] ...
	I0318 04:41:03.624230   17322 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3a24458b86a4"
	I0318 04:41:03.635980   17322 logs.go:123] Gathering logs for kube-scheduler [894b6a0a0702] ...
	I0318 04:41:03.635991   17322 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 894b6a0a0702"
	I0318 04:41:03.650622   17322 logs.go:123] Gathering logs for storage-provisioner [5bfb08f2c96a] ...
	I0318 04:41:03.650631   17322 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5bfb08f2c96a"
	I0318 04:41:03.662656   17322 out.go:304] Setting ErrFile to fd 2...
	I0318 04:41:03.662665   17322 out.go:338] TERM=,COLORTERM=, which probably does not support color
	W0318 04:41:03.662693   17322 out.go:239] X Problems detected in kubelet:
	X Problems detected in kubelet:
	W0318 04:41:03.662697   17322 out.go:239]   Mar 18 11:34:35 running-upgrade-738000 kubelet[3738]: W0318 11:34:35.864044    3738 reflector.go:324] object-"kube-system"/"kube-proxy": failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:running-upgrade-738000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-738000' and this object
	  Mar 18 11:34:35 running-upgrade-738000 kubelet[3738]: W0318 11:34:35.864044    3738 reflector.go:324] object-"kube-system"/"kube-proxy": failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:running-upgrade-738000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-738000' and this object
	W0318 04:41:03.662702   17322 out.go:239]   Mar 18 11:34:35 running-upgrade-738000 kubelet[3738]: E0318 11:34:35.864102    3738 reflector.go:138] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:running-upgrade-738000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-738000' and this object
	  Mar 18 11:34:35 running-upgrade-738000 kubelet[3738]: E0318 11:34:35.864102    3738 reflector.go:138] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:running-upgrade-738000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-738000' and this object
	I0318 04:41:03.662706   17322 out.go:304] Setting ErrFile to fd 2...
	I0318 04:41:03.662709   17322 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0318 04:41:13.666628   17322 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0318 04:41:18.668744   17322 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0318 04:41:18.668823   17322 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0318 04:41:18.679008   17322 logs.go:276] 1 containers: [d454e6154049]
	I0318 04:41:18.679086   17322 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0318 04:41:18.690075   17322 logs.go:276] 1 containers: [8046e42578d2]
	I0318 04:41:18.690143   17322 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0318 04:41:18.700372   17322 logs.go:276] 4 containers: [0086537aa016 0a040eebb706 367d0316359f 3a24458b86a4]
	I0318 04:41:18.700453   17322 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0318 04:41:18.712143   17322 logs.go:276] 1 containers: [894b6a0a0702]
	I0318 04:41:18.712219   17322 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0318 04:41:18.722789   17322 logs.go:276] 1 containers: [04d6cdf60161]
	I0318 04:41:18.722867   17322 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0318 04:41:18.732850   17322 logs.go:276] 1 containers: [22a920f51952]
	I0318 04:41:18.732920   17322 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0318 04:41:18.742759   17322 logs.go:276] 0 containers: []
	W0318 04:41:18.742770   17322 logs.go:278] No container was found matching "kindnet"
	I0318 04:41:18.742829   17322 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0318 04:41:18.755277   17322 logs.go:276] 1 containers: [5bfb08f2c96a]
	I0318 04:41:18.755295   17322 logs.go:123] Gathering logs for kube-scheduler [894b6a0a0702] ...
	I0318 04:41:18.755300   17322 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 894b6a0a0702"
	I0318 04:41:18.770233   17322 logs.go:123] Gathering logs for kube-apiserver [d454e6154049] ...
	I0318 04:41:18.770246   17322 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d454e6154049"
	I0318 04:41:18.784476   17322 logs.go:123] Gathering logs for coredns [0a040eebb706] ...
	I0318 04:41:18.784488   17322 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0a040eebb706"
	I0318 04:41:18.796201   17322 logs.go:123] Gathering logs for Docker ...
	I0318 04:41:18.796211   17322 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0318 04:41:18.821894   17322 logs.go:123] Gathering logs for describe nodes ...
	I0318 04:41:18.821909   17322 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0318 04:41:18.856326   17322 logs.go:123] Gathering logs for coredns [367d0316359f] ...
	I0318 04:41:18.856338   17322 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 367d0316359f"
	I0318 04:41:18.868188   17322 logs.go:123] Gathering logs for etcd [8046e42578d2] ...
	I0318 04:41:18.868201   17322 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8046e42578d2"
	I0318 04:41:18.886779   17322 logs.go:123] Gathering logs for coredns [0086537aa016] ...
	I0318 04:41:18.886789   17322 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0086537aa016"
	I0318 04:41:18.902502   17322 logs.go:123] Gathering logs for coredns [3a24458b86a4] ...
	I0318 04:41:18.902518   17322 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3a24458b86a4"
	I0318 04:41:18.913849   17322 logs.go:123] Gathering logs for kube-proxy [04d6cdf60161] ...
	I0318 04:41:18.913864   17322 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 04d6cdf60161"
	I0318 04:41:18.925487   17322 logs.go:123] Gathering logs for kube-controller-manager [22a920f51952] ...
	I0318 04:41:18.925501   17322 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 22a920f51952"
	I0318 04:41:18.943169   17322 logs.go:123] Gathering logs for storage-provisioner [5bfb08f2c96a] ...
	I0318 04:41:18.943182   17322 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5bfb08f2c96a"
	I0318 04:41:18.974399   17322 logs.go:123] Gathering logs for kubelet ...
	I0318 04:41:18.974409   17322 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0318 04:41:18.990687   17322 logs.go:138] Found kubelet problem: Mar 18 11:34:35 running-upgrade-738000 kubelet[3738]: W0318 11:34:35.864044    3738 reflector.go:324] object-"kube-system"/"kube-proxy": failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:running-upgrade-738000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-738000' and this object
	W0318 04:41:18.990780   17322 logs.go:138] Found kubelet problem: Mar 18 11:34:35 running-upgrade-738000 kubelet[3738]: E0318 11:34:35.864102    3738 reflector.go:138] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:running-upgrade-738000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-738000' and this object
	I0318 04:41:19.008322   17322 logs.go:123] Gathering logs for dmesg ...
	I0318 04:41:19.008328   17322 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 04:41:19.013088   17322 logs.go:123] Gathering logs for container status ...
	I0318 04:41:19.013100   17322 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 04:41:19.024142   17322 out.go:304] Setting ErrFile to fd 2...
	I0318 04:41:19.024154   17322 out.go:338] TERM=,COLORTERM=, which probably does not support color
	W0318 04:41:19.024182   17322 out.go:239] X Problems detected in kubelet:
	X Problems detected in kubelet:
	W0318 04:41:19.024186   17322 out.go:239]   Mar 18 11:34:35 running-upgrade-738000 kubelet[3738]: W0318 11:34:35.864044    3738 reflector.go:324] object-"kube-system"/"kube-proxy": failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:running-upgrade-738000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-738000' and this object
	  Mar 18 11:34:35 running-upgrade-738000 kubelet[3738]: W0318 11:34:35.864044    3738 reflector.go:324] object-"kube-system"/"kube-proxy": failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:running-upgrade-738000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-738000' and this object
	W0318 04:41:19.024190   17322 out.go:239]   Mar 18 11:34:35 running-upgrade-738000 kubelet[3738]: E0318 11:34:35.864102    3738 reflector.go:138] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:running-upgrade-738000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-738000' and this object
	  Mar 18 11:34:35 running-upgrade-738000 kubelet[3738]: E0318 11:34:35.864102    3738 reflector.go:138] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:running-upgrade-738000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-738000' and this object
	I0318 04:41:19.024195   17322 out.go:304] Setting ErrFile to fd 2...
	I0318 04:41:19.024197   17322 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0318 04:41:29.028091   17322 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0318 04:41:34.030250   17322 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0318 04:41:34.030743   17322 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0318 04:41:34.076759   17322 logs.go:276] 1 containers: [d454e6154049]
	I0318 04:41:34.076893   17322 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0318 04:41:34.099859   17322 logs.go:276] 1 containers: [8046e42578d2]
	I0318 04:41:34.099952   17322 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0318 04:41:34.115763   17322 logs.go:276] 4 containers: [0086537aa016 0a040eebb706 367d0316359f 3a24458b86a4]
	I0318 04:41:34.115852   17322 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0318 04:41:34.133753   17322 logs.go:276] 1 containers: [894b6a0a0702]
	I0318 04:41:34.133832   17322 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0318 04:41:34.146109   17322 logs.go:276] 1 containers: [04d6cdf60161]
	I0318 04:41:34.146180   17322 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0318 04:41:34.157510   17322 logs.go:276] 1 containers: [22a920f51952]
	I0318 04:41:34.157587   17322 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0318 04:41:34.168961   17322 logs.go:276] 0 containers: []
	W0318 04:41:34.168972   17322 logs.go:278] No container was found matching "kindnet"
	I0318 04:41:34.169031   17322 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0318 04:41:34.180810   17322 logs.go:276] 1 containers: [5bfb08f2c96a]
	I0318 04:41:34.180829   17322 logs.go:123] Gathering logs for dmesg ...
	I0318 04:41:34.180835   17322 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 04:41:34.185350   17322 logs.go:123] Gathering logs for coredns [0a040eebb706] ...
	I0318 04:41:34.185358   17322 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0a040eebb706"
	I0318 04:41:34.197437   17322 logs.go:123] Gathering logs for kube-proxy [04d6cdf60161] ...
	I0318 04:41:34.197446   17322 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 04d6cdf60161"
	I0318 04:41:34.221843   17322 logs.go:123] Gathering logs for coredns [0086537aa016] ...
	I0318 04:41:34.221857   17322 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0086537aa016"
	I0318 04:41:34.235659   17322 logs.go:123] Gathering logs for coredns [3a24458b86a4] ...
	I0318 04:41:34.235670   17322 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3a24458b86a4"
	I0318 04:41:34.251102   17322 logs.go:123] Gathering logs for kube-scheduler [894b6a0a0702] ...
	I0318 04:41:34.251114   17322 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 894b6a0a0702"
	I0318 04:41:34.267266   17322 logs.go:123] Gathering logs for storage-provisioner [5bfb08f2c96a] ...
	I0318 04:41:34.267277   17322 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5bfb08f2c96a"
	I0318 04:41:34.281431   17322 logs.go:123] Gathering logs for kube-controller-manager [22a920f51952] ...
	I0318 04:41:34.281443   17322 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 22a920f51952"
	I0318 04:41:34.299877   17322 logs.go:123] Gathering logs for Docker ...
	I0318 04:41:34.299895   17322 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0318 04:41:34.326637   17322 logs.go:123] Gathering logs for container status ...
	I0318 04:41:34.326652   17322 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 04:41:34.339937   17322 logs.go:123] Gathering logs for kubelet ...
	I0318 04:41:34.339950   17322 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0318 04:41:34.359367   17322 logs.go:138] Found kubelet problem: Mar 18 11:34:35 running-upgrade-738000 kubelet[3738]: W0318 11:34:35.864044    3738 reflector.go:324] object-"kube-system"/"kube-proxy": failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:running-upgrade-738000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-738000' and this object
	W0318 04:41:34.359468   17322 logs.go:138] Found kubelet problem: Mar 18 11:34:35 running-upgrade-738000 kubelet[3738]: E0318 11:34:35.864102    3738 reflector.go:138] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:running-upgrade-738000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-738000' and this object
	I0318 04:41:34.377511   17322 logs.go:123] Gathering logs for describe nodes ...
	I0318 04:41:34.377534   17322 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0318 04:41:34.414308   17322 logs.go:123] Gathering logs for kube-apiserver [d454e6154049] ...
	I0318 04:41:34.414318   17322 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d454e6154049"
	I0318 04:41:34.432250   17322 logs.go:123] Gathering logs for etcd [8046e42578d2] ...
	I0318 04:41:34.432262   17322 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8046e42578d2"
	I0318 04:41:34.448196   17322 logs.go:123] Gathering logs for coredns [367d0316359f] ...
	I0318 04:41:34.448209   17322 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 367d0316359f"
	I0318 04:41:34.461418   17322 out.go:304] Setting ErrFile to fd 2...
	I0318 04:41:34.461429   17322 out.go:338] TERM=,COLORTERM=, which probably does not support color
	W0318 04:41:34.461456   17322 out.go:239] X Problems detected in kubelet:
	X Problems detected in kubelet:
	W0318 04:41:34.461460   17322 out.go:239]   Mar 18 11:34:35 running-upgrade-738000 kubelet[3738]: W0318 11:34:35.864044    3738 reflector.go:324] object-"kube-system"/"kube-proxy": failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:running-upgrade-738000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-738000' and this object
	  Mar 18 11:34:35 running-upgrade-738000 kubelet[3738]: W0318 11:34:35.864044    3738 reflector.go:324] object-"kube-system"/"kube-proxy": failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:running-upgrade-738000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-738000' and this object
	W0318 04:41:34.461464   17322 out.go:239]   Mar 18 11:34:35 running-upgrade-738000 kubelet[3738]: E0318 11:34:35.864102    3738 reflector.go:138] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:running-upgrade-738000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-738000' and this object
	  Mar 18 11:34:35 running-upgrade-738000 kubelet[3738]: E0318 11:34:35.864102    3738 reflector.go:138] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:running-upgrade-738000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-738000' and this object
	I0318 04:41:34.461470   17322 out.go:304] Setting ErrFile to fd 2...
	I0318 04:41:34.461473   17322 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0318 04:41:44.464826   17322 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0318 04:41:49.466167   17322 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0318 04:41:49.466302   17322 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0318 04:41:49.479359   17322 logs.go:276] 1 containers: [d454e6154049]
	I0318 04:41:49.479439   17322 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0318 04:41:49.490107   17322 logs.go:276] 1 containers: [8046e42578d2]
	I0318 04:41:49.490172   17322 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0318 04:41:49.500590   17322 logs.go:276] 4 containers: [0086537aa016 0a040eebb706 367d0316359f 3a24458b86a4]
	I0318 04:41:49.500665   17322 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0318 04:41:49.512833   17322 logs.go:276] 1 containers: [894b6a0a0702]
	I0318 04:41:49.512902   17322 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0318 04:41:49.523016   17322 logs.go:276] 1 containers: [04d6cdf60161]
	I0318 04:41:49.523083   17322 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0318 04:41:49.533746   17322 logs.go:276] 1 containers: [22a920f51952]
	I0318 04:41:49.533806   17322 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0318 04:41:49.543622   17322 logs.go:276] 0 containers: []
	W0318 04:41:49.543635   17322 logs.go:278] No container was found matching "kindnet"
	I0318 04:41:49.543695   17322 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0318 04:41:49.554550   17322 logs.go:276] 1 containers: [5bfb08f2c96a]
	I0318 04:41:49.554568   17322 logs.go:123] Gathering logs for kube-apiserver [d454e6154049] ...
	I0318 04:41:49.554574   17322 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d454e6154049"
	I0318 04:41:49.568491   17322 logs.go:123] Gathering logs for etcd [8046e42578d2] ...
	I0318 04:41:49.568502   17322 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8046e42578d2"
	I0318 04:41:49.582393   17322 logs.go:123] Gathering logs for coredns [0a040eebb706] ...
	I0318 04:41:49.582404   17322 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0a040eebb706"
	I0318 04:41:49.594344   17322 logs.go:123] Gathering logs for container status ...
	I0318 04:41:49.594354   17322 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 04:41:49.607291   17322 logs.go:123] Gathering logs for kubelet ...
	I0318 04:41:49.607301   17322 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0318 04:41:49.625131   17322 logs.go:138] Found kubelet problem: Mar 18 11:34:35 running-upgrade-738000 kubelet[3738]: W0318 11:34:35.864044    3738 reflector.go:324] object-"kube-system"/"kube-proxy": failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:running-upgrade-738000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-738000' and this object
	W0318 04:41:49.625224   17322 logs.go:138] Found kubelet problem: Mar 18 11:34:35 running-upgrade-738000 kubelet[3738]: E0318 11:34:35.864102    3738 reflector.go:138] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:running-upgrade-738000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-738000' and this object
	I0318 04:41:49.642395   17322 logs.go:123] Gathering logs for coredns [0086537aa016] ...
	I0318 04:41:49.642402   17322 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0086537aa016"
	I0318 04:41:49.653576   17322 logs.go:123] Gathering logs for kube-scheduler [894b6a0a0702] ...
	I0318 04:41:49.653586   17322 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 894b6a0a0702"
	I0318 04:41:49.667879   17322 logs.go:123] Gathering logs for describe nodes ...
	I0318 04:41:49.667889   17322 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0318 04:41:49.701806   17322 logs.go:123] Gathering logs for kube-proxy [04d6cdf60161] ...
	I0318 04:41:49.701817   17322 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 04d6cdf60161"
	I0318 04:41:49.713960   17322 logs.go:123] Gathering logs for kube-controller-manager [22a920f51952] ...
	I0318 04:41:49.713969   17322 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 22a920f51952"
	I0318 04:41:49.731558   17322 logs.go:123] Gathering logs for storage-provisioner [5bfb08f2c96a] ...
	I0318 04:41:49.731569   17322 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5bfb08f2c96a"
	I0318 04:41:49.743144   17322 logs.go:123] Gathering logs for dmesg ...
	I0318 04:41:49.743153   17322 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 04:41:49.747898   17322 logs.go:123] Gathering logs for coredns [367d0316359f] ...
	I0318 04:41:49.747907   17322 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 367d0316359f"
	I0318 04:41:49.759600   17322 logs.go:123] Gathering logs for coredns [3a24458b86a4] ...
	I0318 04:41:49.759609   17322 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3a24458b86a4"
	I0318 04:41:49.772049   17322 logs.go:123] Gathering logs for Docker ...
	I0318 04:41:49.772060   17322 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0318 04:41:49.795615   17322 out.go:304] Setting ErrFile to fd 2...
	I0318 04:41:49.795623   17322 out.go:338] TERM=,COLORTERM=, which probably does not support color
	W0318 04:41:49.795656   17322 out.go:239] X Problems detected in kubelet:
	X Problems detected in kubelet:
	W0318 04:41:49.795665   17322 out.go:239]   Mar 18 11:34:35 running-upgrade-738000 kubelet[3738]: W0318 11:34:35.864044    3738 reflector.go:324] object-"kube-system"/"kube-proxy": failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:running-upgrade-738000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-738000' and this object
	  Mar 18 11:34:35 running-upgrade-738000 kubelet[3738]: W0318 11:34:35.864044    3738 reflector.go:324] object-"kube-system"/"kube-proxy": failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:running-upgrade-738000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-738000' and this object
	W0318 04:41:49.795670   17322 out.go:239]   Mar 18 11:34:35 running-upgrade-738000 kubelet[3738]: E0318 11:34:35.864102    3738 reflector.go:138] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:running-upgrade-738000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-738000' and this object
	  Mar 18 11:34:35 running-upgrade-738000 kubelet[3738]: E0318 11:34:35.864102    3738 reflector.go:138] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:running-upgrade-738000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-738000' and this object
	I0318 04:41:49.795678   17322 out.go:304] Setting ErrFile to fd 2...
	I0318 04:41:49.795682   17322 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0318 04:41:59.798278   17322 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0318 04:42:04.800519   17322 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0318 04:42:04.801017   17322 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0318 04:42:04.844485   17322 logs.go:276] 1 containers: [d454e6154049]
	I0318 04:42:04.844634   17322 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0318 04:42:04.865373   17322 logs.go:276] 1 containers: [8046e42578d2]
	I0318 04:42:04.865469   17322 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0318 04:42:04.883869   17322 logs.go:276] 4 containers: [0086537aa016 0a040eebb706 367d0316359f 3a24458b86a4]
	I0318 04:42:04.883955   17322 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0318 04:42:04.895280   17322 logs.go:276] 1 containers: [894b6a0a0702]
	I0318 04:42:04.895344   17322 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0318 04:42:04.906265   17322 logs.go:276] 1 containers: [04d6cdf60161]
	I0318 04:42:04.906332   17322 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0318 04:42:04.917447   17322 logs.go:276] 1 containers: [22a920f51952]
	I0318 04:42:04.917507   17322 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0318 04:42:04.928538   17322 logs.go:276] 0 containers: []
	W0318 04:42:04.928549   17322 logs.go:278] No container was found matching "kindnet"
	I0318 04:42:04.928609   17322 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0318 04:42:04.939543   17322 logs.go:276] 1 containers: [5bfb08f2c96a]
	I0318 04:42:04.939561   17322 logs.go:123] Gathering logs for container status ...
	I0318 04:42:04.939567   17322 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 04:42:04.950985   17322 logs.go:123] Gathering logs for kube-apiserver [d454e6154049] ...
	I0318 04:42:04.950997   17322 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d454e6154049"
	I0318 04:42:04.966394   17322 logs.go:123] Gathering logs for storage-provisioner [5bfb08f2c96a] ...
	I0318 04:42:04.966406   17322 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5bfb08f2c96a"
	I0318 04:42:04.977962   17322 logs.go:123] Gathering logs for Docker ...
	I0318 04:42:04.977972   17322 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0318 04:42:05.001285   17322 logs.go:123] Gathering logs for dmesg ...
	I0318 04:42:05.001307   17322 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 04:42:05.006121   17322 logs.go:123] Gathering logs for kube-controller-manager [22a920f51952] ...
	I0318 04:42:05.006129   17322 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 22a920f51952"
	I0318 04:42:05.023719   17322 logs.go:123] Gathering logs for kube-proxy [04d6cdf60161] ...
	I0318 04:42:05.023729   17322 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 04d6cdf60161"
	I0318 04:42:05.035788   17322 logs.go:123] Gathering logs for describe nodes ...
	I0318 04:42:05.035798   17322 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0318 04:42:05.072806   17322 logs.go:123] Gathering logs for etcd [8046e42578d2] ...
	I0318 04:42:05.072816   17322 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8046e42578d2"
	I0318 04:42:05.087401   17322 logs.go:123] Gathering logs for coredns [0086537aa016] ...
	I0318 04:42:05.087411   17322 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0086537aa016"
	I0318 04:42:05.100172   17322 logs.go:123] Gathering logs for coredns [3a24458b86a4] ...
	I0318 04:42:05.100184   17322 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3a24458b86a4"
	I0318 04:42:05.116341   17322 logs.go:123] Gathering logs for kube-scheduler [894b6a0a0702] ...
	I0318 04:42:05.116352   17322 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 894b6a0a0702"
	I0318 04:42:05.130935   17322 logs.go:123] Gathering logs for kubelet ...
	I0318 04:42:05.130945   17322 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0318 04:42:05.148882   17322 logs.go:138] Found kubelet problem: Mar 18 11:34:35 running-upgrade-738000 kubelet[3738]: W0318 11:34:35.864044    3738 reflector.go:324] object-"kube-system"/"kube-proxy": failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:running-upgrade-738000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-738000' and this object
	W0318 04:42:05.148975   17322 logs.go:138] Found kubelet problem: Mar 18 11:34:35 running-upgrade-738000 kubelet[3738]: E0318 11:34:35.864102    3738 reflector.go:138] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:running-upgrade-738000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-738000' and this object
	I0318 04:42:05.166489   17322 logs.go:123] Gathering logs for coredns [0a040eebb706] ...
	I0318 04:42:05.166494   17322 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0a040eebb706"
	I0318 04:42:05.178660   17322 logs.go:123] Gathering logs for coredns [367d0316359f] ...
	I0318 04:42:05.178671   17322 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 367d0316359f"
	I0318 04:42:05.190930   17322 out.go:304] Setting ErrFile to fd 2...
	I0318 04:42:05.190941   17322 out.go:338] TERM=,COLORTERM=, which probably does not support color
	W0318 04:42:05.190970   17322 out.go:239] X Problems detected in kubelet:
	X Problems detected in kubelet:
	W0318 04:42:05.190974   17322 out.go:239]   Mar 18 11:34:35 running-upgrade-738000 kubelet[3738]: W0318 11:34:35.864044    3738 reflector.go:324] object-"kube-system"/"kube-proxy": failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:running-upgrade-738000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-738000' and this object
	  Mar 18 11:34:35 running-upgrade-738000 kubelet[3738]: W0318 11:34:35.864044    3738 reflector.go:324] object-"kube-system"/"kube-proxy": failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:running-upgrade-738000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-738000' and this object
	W0318 04:42:05.190978   17322 out.go:239]   Mar 18 11:34:35 running-upgrade-738000 kubelet[3738]: E0318 11:34:35.864102    3738 reflector.go:138] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:running-upgrade-738000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-738000' and this object
	  Mar 18 11:34:35 running-upgrade-738000 kubelet[3738]: E0318 11:34:35.864102    3738 reflector.go:138] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:running-upgrade-738000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-738000' and this object
	I0318 04:42:05.190982   17322 out.go:304] Setting ErrFile to fd 2...
	I0318 04:42:05.190985   17322 out.go:338] TERM=,COLORTERM=, which probably does not support color
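	(Illustrative only, not run by the harness: the kubelet problem repeated in every cycle above is an authorization denial for the kube-proxy ConfigMap, reported against the node identity system:node:running-upgrade-738000. If the cluster were reachable, the object and the denial could be inspected with standard kubectl commands such as the following; the node name and namespace are taken from the trace, everything else is an assumed manual check.)
		# inspect the ConfigMap the kubelet is being denied access to
		kubectl -n kube-system get configmap kube-proxy
		# ask the authorizer the same question the kubelet's reflector is failing on
		kubectl auth can-i list configmaps -n kube-system --as=system:node:running-upgrade-738000 --as-group=system:nodes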
	I0318 04:42:15.194796   17322 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0318 04:42:20.196962   17322 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0318 04:42:20.197122   17322 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0318 04:42:20.214918   17322 logs.go:276] 1 containers: [d454e6154049]
	I0318 04:42:20.215000   17322 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0318 04:42:20.227157   17322 logs.go:276] 1 containers: [8046e42578d2]
	I0318 04:42:20.227232   17322 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0318 04:42:20.237931   17322 logs.go:276] 4 containers: [0086537aa016 0a040eebb706 367d0316359f 3a24458b86a4]
	I0318 04:42:20.238006   17322 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0318 04:42:20.248398   17322 logs.go:276] 1 containers: [894b6a0a0702]
	I0318 04:42:20.248467   17322 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0318 04:42:20.259204   17322 logs.go:276] 1 containers: [04d6cdf60161]
	I0318 04:42:20.259272   17322 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0318 04:42:20.270402   17322 logs.go:276] 1 containers: [22a920f51952]
	I0318 04:42:20.270477   17322 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0318 04:42:20.281136   17322 logs.go:276] 0 containers: []
	W0318 04:42:20.281147   17322 logs.go:278] No container was found matching "kindnet"
	I0318 04:42:20.281205   17322 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0318 04:42:20.291395   17322 logs.go:276] 1 containers: [5bfb08f2c96a]
	I0318 04:42:20.291418   17322 logs.go:123] Gathering logs for kubelet ...
	I0318 04:42:20.291423   17322 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0318 04:42:20.307370   17322 logs.go:138] Found kubelet problem: Mar 18 11:34:35 running-upgrade-738000 kubelet[3738]: W0318 11:34:35.864044    3738 reflector.go:324] object-"kube-system"/"kube-proxy": failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:running-upgrade-738000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-738000' and this object
	W0318 04:42:20.307463   17322 logs.go:138] Found kubelet problem: Mar 18 11:34:35 running-upgrade-738000 kubelet[3738]: E0318 11:34:35.864102    3738 reflector.go:138] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:running-upgrade-738000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-738000' and this object
	I0318 04:42:20.324758   17322 logs.go:123] Gathering logs for kube-apiserver [d454e6154049] ...
	I0318 04:42:20.324764   17322 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d454e6154049"
	I0318 04:42:20.339437   17322 logs.go:123] Gathering logs for etcd [8046e42578d2] ...
	I0318 04:42:20.339448   17322 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8046e42578d2"
	I0318 04:42:20.352921   17322 logs.go:123] Gathering logs for coredns [367d0316359f] ...
	I0318 04:42:20.352931   17322 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 367d0316359f"
	I0318 04:42:20.366570   17322 logs.go:123] Gathering logs for storage-provisioner [5bfb08f2c96a] ...
	I0318 04:42:20.366584   17322 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5bfb08f2c96a"
	I0318 04:42:20.378923   17322 logs.go:123] Gathering logs for Docker ...
	I0318 04:42:20.378935   17322 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0318 04:42:20.402166   17322 logs.go:123] Gathering logs for describe nodes ...
	I0318 04:42:20.402183   17322 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0318 04:42:20.457772   17322 logs.go:123] Gathering logs for kube-scheduler [894b6a0a0702] ...
	I0318 04:42:20.457785   17322 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 894b6a0a0702"
	I0318 04:42:20.473081   17322 logs.go:123] Gathering logs for kube-proxy [04d6cdf60161] ...
	I0318 04:42:20.473093   17322 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 04d6cdf60161"
	I0318 04:42:20.485392   17322 logs.go:123] Gathering logs for coredns [3a24458b86a4] ...
	I0318 04:42:20.485404   17322 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3a24458b86a4"
	I0318 04:42:20.496966   17322 logs.go:123] Gathering logs for container status ...
	I0318 04:42:20.496976   17322 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 04:42:20.508242   17322 logs.go:123] Gathering logs for dmesg ...
	I0318 04:42:20.508251   17322 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 04:42:20.512820   17322 logs.go:123] Gathering logs for coredns [0086537aa016] ...
	I0318 04:42:20.512829   17322 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0086537aa016"
	I0318 04:42:20.525034   17322 logs.go:123] Gathering logs for coredns [0a040eebb706] ...
	I0318 04:42:20.525044   17322 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0a040eebb706"
	I0318 04:42:20.536579   17322 logs.go:123] Gathering logs for kube-controller-manager [22a920f51952] ...
	I0318 04:42:20.536588   17322 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 22a920f51952"
	I0318 04:42:20.553618   17322 out.go:304] Setting ErrFile to fd 2...
	I0318 04:42:20.553628   17322 out.go:338] TERM=,COLORTERM=, which probably does not support color
	W0318 04:42:20.553652   17322 out.go:239] X Problems detected in kubelet:
	W0318 04:42:20.553656   17322 out.go:239]   Mar 18 11:34:35 running-upgrade-738000 kubelet[3738]: W0318 11:34:35.864044    3738 reflector.go:324] object-"kube-system"/"kube-proxy": failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:running-upgrade-738000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-738000' and this object
	W0318 04:42:20.553660   17322 out.go:239]   Mar 18 11:34:35 running-upgrade-738000 kubelet[3738]: E0318 11:34:35.864102    3738 reflector.go:138] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:running-upgrade-738000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-738000' and this object
	I0318 04:42:20.553664   17322 out.go:304] Setting ErrFile to fd 2...
	I0318 04:42:20.553667   17322 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0318 04:42:30.556389   17322 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0318 04:42:35.558471   17322 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0318 04:42:35.562153   17322 out.go:177] 
	W0318 04:42:35.567783   17322 out.go:239] X Exiting due to GUEST_START: failed to start node: wait 6m0s for node: wait for healthy API server: apiserver healthz never reported healthy: context deadline exceeded
	W0318 04:42:35.567799   17322 out.go:239] * 
	W0318 04:42:35.568924   17322 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0318 04:42:35.579872   17322 out.go:177] 

** /stderr **
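For reference, the failing sequence should be reproducible by hand with the same two start invocations this test ran (the flags and profile name below are taken from this run; the local path of the v1.26.0 binary is an assumption, since the test downloads it to a temporary location). This is a sketch, not part of the captured test output:

	# sketch: replay TestRunningBinaryUpgrade's sequence manually
	# 1) start the profile with the old (v1.26.0) minikube binary -- path is assumed
	./minikube-v1.26.0-darwin-arm64 start -p running-upgrade-738000 --memory=2200 --vm-driver=qemu2
	# 2) re-start the same running profile with the binary under test (this is the step that exits 80)
	out/minikube-darwin-arm64 start -p running-upgrade-738000 --memory=2200 --alsologtostderr -v=1 --driver=qemu2
	# 3) clean up the profile afterwards
	out/minikube-darwin-arm64 delete -p running-upgrade-738000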
version_upgrade_test.go:132: upgrade from v1.26.0 to HEAD failed: out/minikube-darwin-arm64 start -p running-upgrade-738000 --memory=2200 --alsologtostderr -v=1 --driver=qemu2 : exit status 80
panic.go:626: *** TestRunningBinaryUpgrade FAILED at 2024-03-18 04:42:35.655109 -0700 PDT m=+1436.556176584
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p running-upgrade-738000 -n running-upgrade-738000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p running-upgrade-738000 -n running-upgrade-738000: exit status 2 (15.636470417s)

-- stdout --
	Running

-- /stdout --
helpers_test.go:239: status error: exit status 2 (may be ok)
helpers_test.go:244: <<< TestRunningBinaryUpgrade FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestRunningBinaryUpgrade]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-darwin-arm64 -p running-upgrade-738000 logs -n 25
helpers_test.go:252: TestRunningBinaryUpgrade logs: 
-- stdout --
	
	==> Audit <==
	|---------|---------------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	| Command |                 Args                  |          Profile          |  User   | Version |     Start Time      |      End Time       |
	|---------|---------------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	| start   | -p force-systemd-flag-517000          | force-systemd-flag-517000 | jenkins | v1.32.0 | 18 Mar 24 04:32 PDT |                     |
	|         | --memory=2048 --force-systemd         |                           |         |         |                     |                     |
	|         | --alsologtostderr -v=5                |                           |         |         |                     |                     |
	|         | --driver=qemu2                        |                           |         |         |                     |                     |
	| ssh     | force-systemd-env-191000              | force-systemd-env-191000  | jenkins | v1.32.0 | 18 Mar 24 04:32 PDT |                     |
	|         | ssh docker info --format              |                           |         |         |                     |                     |
	|         | {{.CgroupDriver}}                     |                           |         |         |                     |                     |
	| delete  | -p force-systemd-env-191000           | force-systemd-env-191000  | jenkins | v1.32.0 | 18 Mar 24 04:32 PDT | 18 Mar 24 04:32 PDT |
	| start   | -p docker-flags-569000                | docker-flags-569000       | jenkins | v1.32.0 | 18 Mar 24 04:32 PDT |                     |
	|         | --cache-images=false                  |                           |         |         |                     |                     |
	|         | --memory=2048                         |                           |         |         |                     |                     |
	|         | --install-addons=false                |                           |         |         |                     |                     |
	|         | --wait=false                          |                           |         |         |                     |                     |
	|         | --docker-env=FOO=BAR                  |                           |         |         |                     |                     |
	|         | --docker-env=BAZ=BAT                  |                           |         |         |                     |                     |
	|         | --docker-opt=debug                    |                           |         |         |                     |                     |
	|         | --docker-opt=icc=true                 |                           |         |         |                     |                     |
	|         | --alsologtostderr -v=5                |                           |         |         |                     |                     |
	|         | --driver=qemu2                        |                           |         |         |                     |                     |
	| ssh     | force-systemd-flag-517000             | force-systemd-flag-517000 | jenkins | v1.32.0 | 18 Mar 24 04:32 PDT |                     |
	|         | ssh docker info --format              |                           |         |         |                     |                     |
	|         | {{.CgroupDriver}}                     |                           |         |         |                     |                     |
	| delete  | -p force-systemd-flag-517000          | force-systemd-flag-517000 | jenkins | v1.32.0 | 18 Mar 24 04:32 PDT | 18 Mar 24 04:32 PDT |
	| start   | -p cert-expiration-548000             | cert-expiration-548000    | jenkins | v1.32.0 | 18 Mar 24 04:32 PDT |                     |
	|         | --memory=2048                         |                           |         |         |                     |                     |
	|         | --cert-expiration=3m                  |                           |         |         |                     |                     |
	|         | --driver=qemu2                        |                           |         |         |                     |                     |
	| ssh     | docker-flags-569000 ssh               | docker-flags-569000       | jenkins | v1.32.0 | 18 Mar 24 04:32 PDT |                     |
	|         | sudo systemctl show docker            |                           |         |         |                     |                     |
	|         | --property=Environment                |                           |         |         |                     |                     |
	|         | --no-pager                            |                           |         |         |                     |                     |
	| ssh     | docker-flags-569000 ssh               | docker-flags-569000       | jenkins | v1.32.0 | 18 Mar 24 04:32 PDT |                     |
	|         | sudo systemctl show docker            |                           |         |         |                     |                     |
	|         | --property=ExecStart                  |                           |         |         |                     |                     |
	|         | --no-pager                            |                           |         |         |                     |                     |
	| delete  | -p docker-flags-569000                | docker-flags-569000       | jenkins | v1.32.0 | 18 Mar 24 04:32 PDT | 18 Mar 24 04:32 PDT |
	| start   | -p cert-options-834000                | cert-options-834000       | jenkins | v1.32.0 | 18 Mar 24 04:32 PDT |                     |
	|         | --memory=2048                         |                           |         |         |                     |                     |
	|         | --apiserver-ips=127.0.0.1             |                           |         |         |                     |                     |
	|         | --apiserver-ips=192.168.15.15         |                           |         |         |                     |                     |
	|         | --apiserver-names=localhost           |                           |         |         |                     |                     |
	|         | --apiserver-names=www.google.com      |                           |         |         |                     |                     |
	|         | --apiserver-port=8555                 |                           |         |         |                     |                     |
	|         | --driver=qemu2                        |                           |         |         |                     |                     |
	| ssh     | cert-options-834000 ssh               | cert-options-834000       | jenkins | v1.32.0 | 18 Mar 24 04:32 PDT |                     |
	|         | openssl x509 -text -noout -in         |                           |         |         |                     |                     |
	|         | /var/lib/minikube/certs/apiserver.crt |                           |         |         |                     |                     |
	| ssh     | -p cert-options-834000 -- sudo        | cert-options-834000       | jenkins | v1.32.0 | 18 Mar 24 04:32 PDT |                     |
	|         | cat /etc/kubernetes/admin.conf        |                           |         |         |                     |                     |
	| delete  | -p cert-options-834000                | cert-options-834000       | jenkins | v1.32.0 | 18 Mar 24 04:32 PDT | 18 Mar 24 04:32 PDT |
	| start   | -p running-upgrade-738000             | minikube                  | jenkins | v1.26.0 | 18 Mar 24 04:32 PDT | 18 Mar 24 04:34 PDT |
	|         | --memory=2200                         |                           |         |         |                     |                     |
	|         | --vm-driver=qemu2                     |                           |         |         |                     |                     |
	| start   | -p running-upgrade-738000             | running-upgrade-738000    | jenkins | v1.32.0 | 18 Mar 24 04:34 PDT |                     |
	|         | --memory=2200                         |                           |         |         |                     |                     |
	|         | --alsologtostderr -v=1                |                           |         |         |                     |                     |
	|         | --driver=qemu2                        |                           |         |         |                     |                     |
	| start   | -p cert-expiration-548000             | cert-expiration-548000    | jenkins | v1.32.0 | 18 Mar 24 04:35 PDT |                     |
	|         | --memory=2048                         |                           |         |         |                     |                     |
	|         | --cert-expiration=8760h               |                           |         |         |                     |                     |
	|         | --driver=qemu2                        |                           |         |         |                     |                     |
	| delete  | -p cert-expiration-548000             | cert-expiration-548000    | jenkins | v1.32.0 | 18 Mar 24 04:35 PDT | 18 Mar 24 04:35 PDT |
	| start   | -p kubernetes-upgrade-311000          | kubernetes-upgrade-311000 | jenkins | v1.32.0 | 18 Mar 24 04:35 PDT |                     |
	|         | --memory=2200                         |                           |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0          |                           |         |         |                     |                     |
	|         | --alsologtostderr -v=1                |                           |         |         |                     |                     |
	|         | --driver=qemu2                        |                           |         |         |                     |                     |
	| stop    | -p kubernetes-upgrade-311000          | kubernetes-upgrade-311000 | jenkins | v1.32.0 | 18 Mar 24 04:35 PDT | 18 Mar 24 04:35 PDT |
	| start   | -p kubernetes-upgrade-311000          | kubernetes-upgrade-311000 | jenkins | v1.32.0 | 18 Mar 24 04:35 PDT |                     |
	|         | --memory=2200                         |                           |         |         |                     |                     |
	|         | --kubernetes-version=v1.29.0-rc.2     |                           |         |         |                     |                     |
	|         | --alsologtostderr -v=1                |                           |         |         |                     |                     |
	|         | --driver=qemu2                        |                           |         |         |                     |                     |
	| delete  | -p kubernetes-upgrade-311000          | kubernetes-upgrade-311000 | jenkins | v1.32.0 | 18 Mar 24 04:35 PDT | 18 Mar 24 04:35 PDT |
	| start   | -p stopped-upgrade-126000             | minikube                  | jenkins | v1.26.0 | 18 Mar 24 04:36 PDT | 18 Mar 24 04:36 PDT |
	|         | --memory=2200                         |                           |         |         |                     |                     |
	|         | --vm-driver=qemu2                     |                           |         |         |                     |                     |
	| stop    | stopped-upgrade-126000 stop           | minikube                  | jenkins | v1.26.0 | 18 Mar 24 04:36 PDT | 18 Mar 24 04:36 PDT |
	| start   | -p stopped-upgrade-126000             | stopped-upgrade-126000    | jenkins | v1.32.0 | 18 Mar 24 04:36 PDT |                     |
	|         | --memory=2200                         |                           |         |         |                     |                     |
	|         | --alsologtostderr -v=1                |                           |         |         |                     |                     |
	|         | --driver=qemu2                        |                           |         |         |                     |                     |
	|---------|---------------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/03/18 04:36:59
	Running on machine: MacOS-M1-Agent-2
	Binary: Built with gc go1.22.1 for darwin/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0318 04:36:59.712926   17465 out.go:291] Setting OutFile to fd 1 ...
	I0318 04:36:59.713080   17465 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0318 04:36:59.713085   17465 out.go:304] Setting ErrFile to fd 2...
	I0318 04:36:59.713088   17465 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0318 04:36:59.713246   17465 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18429-15072/.minikube/bin
	I0318 04:36:59.714483   17465 out.go:298] Setting JSON to false
	I0318 04:36:59.734356   17465 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":9392,"bootTime":1710752427,"procs":480,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.3.1","kernelVersion":"23.3.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0318 04:36:59.734436   17465 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0318 04:36:59.739513   17465 out.go:177] * [stopped-upgrade-126000] minikube v1.32.0 on Darwin 14.3.1 (arm64)
	I0318 04:36:59.751626   17465 out.go:177]   - MINIKUBE_LOCATION=18429
	I0318 04:36:59.747589   17465 notify.go:220] Checking for updates...
	I0318 04:36:59.759471   17465 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/18429-15072/kubeconfig
	I0318 04:36:59.765014   17465 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0318 04:36:59.768522   17465 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0318 04:36:59.771554   17465 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18429-15072/.minikube
	I0318 04:36:59.774523   17465 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0318 04:36:59.777834   17465 config.go:182] Loaded profile config "stopped-upgrade-126000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0318 04:36:59.782540   17465 out.go:177] * Kubernetes 1.28.4 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.28.4
	I0318 04:36:59.785425   17465 driver.go:392] Setting default libvirt URI to qemu:///system
	I0318 04:36:59.789484   17465 out.go:177] * Using the qemu2 driver based on existing profile
	I0318 04:36:59.795471   17465 start.go:297] selected driver: qemu2
	I0318 04:36:59.795477   17465 start.go:901] validating driver "qemu2" against &{Name:stopped-upgrade-126000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.26.0-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:53534 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.1 ClusterName:stopped-upgra
de-126000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizat
ions:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:0s}
	I0318 04:36:59.795534   17465 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0318 04:36:59.798309   17465 cni.go:84] Creating CNI manager for ""
	I0318 04:36:59.798329   17465 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0318 04:36:59.798369   17465 start.go:340] cluster config:
	{Name:stopped-upgrade-126000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.26.0-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:53534 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.1 ClusterName:stopped-upgrade-126000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerN
ames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:
SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:0s}
	I0318 04:36:59.798441   17465 iso.go:125] acquiring lock: {Name:mkb8143674083e0c7a46a3ed751b3800392bcd24 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0318 04:36:59.807515   17465 out.go:177] * Starting "stopped-upgrade-126000" primary control-plane node in "stopped-upgrade-126000" cluster
	I0318 04:36:59.811506   17465 preload.go:132] Checking if preload exists for k8s version v1.24.1 and runtime docker
	I0318 04:36:59.811521   17465 preload.go:147] Found local preload: /Users/jenkins/minikube-integration/18429-15072/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.1-docker-overlay2-arm64.tar.lz4
	I0318 04:36:59.811531   17465 cache.go:56] Caching tarball of preloaded images
	I0318 04:36:59.811584   17465 preload.go:173] Found /Users/jenkins/minikube-integration/18429-15072/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0318 04:36:59.811590   17465 cache.go:59] Finished verifying existence of preloaded tar for v1.24.1 on docker
	I0318 04:36:59.811643   17465 profile.go:142] Saving config to /Users/jenkins/minikube-integration/18429-15072/.minikube/profiles/stopped-upgrade-126000/config.json ...
	I0318 04:36:59.812177   17465 start.go:360] acquireMachinesLock for stopped-upgrade-126000: {Name:mkdef65b5c2b3344d8453e477bf2f170fbff3359 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0318 04:36:59.812213   17465 start.go:364] duration metric: took 29.791µs to acquireMachinesLock for "stopped-upgrade-126000"
	I0318 04:36:59.812224   17465 start.go:96] Skipping create...Using existing machine configuration
	I0318 04:36:59.812229   17465 fix.go:54] fixHost starting: 
	I0318 04:36:59.812345   17465 fix.go:112] recreateIfNeeded on stopped-upgrade-126000: state=Stopped err=<nil>
	W0318 04:36:59.812354   17465 fix.go:138] unexpected machine state, will restart: <nil>
	I0318 04:36:59.820509   17465 out.go:177] * Restarting existing qemu2 VM for "stopped-upgrade-126000" ...
	I0318 04:37:01.236065   17322 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0318 04:36:59.824582   17465 main.go:141] libmachine: executing: qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/Cellar/qemu/8.2.1/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18429-15072/.minikube/machines/stopped-upgrade-126000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18429-15072/.minikube/machines/stopped-upgrade-126000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18429-15072/.minikube/machines/stopped-upgrade-126000/qemu.pid -nic user,model=virtio,hostfwd=tcp::53501-:22,hostfwd=tcp::53502-:2376,hostname=stopped-upgrade-126000 -daemonize /Users/jenkins/minikube-integration/18429-15072/.minikube/machines/stopped-upgrade-126000/disk.qcow2
	I0318 04:36:59.872693   17465 main.go:141] libmachine: STDOUT: 
	I0318 04:36:59.872723   17465 main.go:141] libmachine: STDERR: 
	I0318 04:36:59.872731   17465 main.go:141] libmachine: Waiting for VM to start (ssh -p 53501 docker@127.0.0.1)...
	I0318 04:37:06.238540   17322 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0318 04:37:06.238711   17322 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0318 04:37:06.250377   17322 logs.go:276] 2 containers: [8cbe0799ab57 7147d93a4ffc]
	I0318 04:37:06.250465   17322 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0318 04:37:06.262400   17322 logs.go:276] 2 containers: [3597b574be66 c52784390773]
	I0318 04:37:06.262493   17322 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0318 04:37:06.273174   17322 logs.go:276] 1 containers: [9445dc83224c]
	I0318 04:37:06.273246   17322 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0318 04:37:06.283707   17322 logs.go:276] 2 containers: [083e5435c9c9 2ed27f531543]
	I0318 04:37:06.283786   17322 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0318 04:37:06.298365   17322 logs.go:276] 1 containers: [313af99b8193]
	I0318 04:37:06.298442   17322 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0318 04:37:06.308876   17322 logs.go:276] 2 containers: [c9b5a8296878 eca65001ee00]
	I0318 04:37:06.308942   17322 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0318 04:37:06.319551   17322 logs.go:276] 0 containers: []
	W0318 04:37:06.319562   17322 logs.go:278] No container was found matching "kindnet"
	I0318 04:37:06.319621   17322 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0318 04:37:06.330090   17322 logs.go:276] 2 containers: [1fb3e049cd41 f4f9d7351a87]
	I0318 04:37:06.330109   17322 logs.go:123] Gathering logs for kube-controller-manager [eca65001ee00] ...
	I0318 04:37:06.330115   17322 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 eca65001ee00"
	I0318 04:37:06.342039   17322 logs.go:123] Gathering logs for storage-provisioner [1fb3e049cd41] ...
	I0318 04:37:06.342048   17322 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1fb3e049cd41"
	I0318 04:37:06.353519   17322 logs.go:123] Gathering logs for storage-provisioner [f4f9d7351a87] ...
	I0318 04:37:06.353528   17322 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f4f9d7351a87"
	I0318 04:37:06.364887   17322 logs.go:123] Gathering logs for container status ...
	I0318 04:37:06.364898   17322 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 04:37:06.379888   17322 logs.go:123] Gathering logs for kube-apiserver [8cbe0799ab57] ...
	I0318 04:37:06.379900   17322 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8cbe0799ab57"
	I0318 04:37:06.394328   17322 logs.go:123] Gathering logs for etcd [3597b574be66] ...
	I0318 04:37:06.394341   17322 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3597b574be66"
	I0318 04:37:06.408456   17322 logs.go:123] Gathering logs for etcd [c52784390773] ...
	I0318 04:37:06.408467   17322 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c52784390773"
	I0318 04:37:06.426467   17322 logs.go:123] Gathering logs for coredns [9445dc83224c] ...
	I0318 04:37:06.426478   17322 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9445dc83224c"
	I0318 04:37:06.437893   17322 logs.go:123] Gathering logs for kube-scheduler [2ed27f531543] ...
	I0318 04:37:06.437904   17322 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2ed27f531543"
	I0318 04:37:06.452568   17322 logs.go:123] Gathering logs for kube-proxy [313af99b8193] ...
	I0318 04:37:06.452580   17322 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 313af99b8193"
	I0318 04:37:06.474783   17322 logs.go:123] Gathering logs for kubelet ...
	I0318 04:37:06.474795   17322 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0318 04:37:06.512699   17322 logs.go:138] Found kubelet problem: Mar 18 11:34:35 running-upgrade-738000 kubelet[3738]: W0318 11:34:35.864044    3738 reflector.go:324] object-"kube-system"/"kube-proxy": failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:running-upgrade-738000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-738000' and this object
	W0318 04:37:06.512792   17322 logs.go:138] Found kubelet problem: Mar 18 11:34:35 running-upgrade-738000 kubelet[3738]: E0318 11:34:35.864102    3738 reflector.go:138] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:running-upgrade-738000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-738000' and this object
	I0318 04:37:06.514490   17322 logs.go:123] Gathering logs for dmesg ...
	I0318 04:37:06.514499   17322 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 04:37:06.518628   17322 logs.go:123] Gathering logs for describe nodes ...
	I0318 04:37:06.518637   17322 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0318 04:37:06.554459   17322 logs.go:123] Gathering logs for kube-scheduler [083e5435c9c9] ...
	I0318 04:37:06.554468   17322 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 083e5435c9c9"
	I0318 04:37:06.570709   17322 logs.go:123] Gathering logs for kube-apiserver [7147d93a4ffc] ...
	I0318 04:37:06.570719   17322 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7147d93a4ffc"
	I0318 04:37:06.600141   17322 logs.go:123] Gathering logs for Docker ...
	I0318 04:37:06.600151   17322 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0318 04:37:06.624001   17322 logs.go:123] Gathering logs for kube-controller-manager [c9b5a8296878] ...
	I0318 04:37:06.624013   17322 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c9b5a8296878"
	I0318 04:37:06.641560   17322 out.go:304] Setting ErrFile to fd 2...
	I0318 04:37:06.641570   17322 out.go:338] TERM=,COLORTERM=, which probably does not support color
	W0318 04:37:06.641603   17322 out.go:239] X Problems detected in kubelet:
	W0318 04:37:06.641608   17322 out.go:239]   Mar 18 11:34:35 running-upgrade-738000 kubelet[3738]: W0318 11:34:35.864044    3738 reflector.go:324] object-"kube-system"/"kube-proxy": failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:running-upgrade-738000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-738000' and this object
	W0318 04:37:06.641614   17322 out.go:239]   Mar 18 11:34:35 running-upgrade-738000 kubelet[3738]: E0318 11:34:35.864102    3738 reflector.go:138] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:running-upgrade-738000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-738000' and this object
	I0318 04:37:06.641620   17322 out.go:304] Setting ErrFile to fd 2...
	I0318 04:37:06.641623   17322 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0318 04:37:16.645451   17322 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0318 04:37:21.647541   17322 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0318 04:37:21.647643   17322 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0318 04:37:21.659480   17322 logs.go:276] 2 containers: [8cbe0799ab57 7147d93a4ffc]
	I0318 04:37:21.659549   17322 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0318 04:37:21.671142   17322 logs.go:276] 2 containers: [3597b574be66 c52784390773]
	I0318 04:37:21.671212   17322 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0318 04:37:21.682066   17322 logs.go:276] 1 containers: [9445dc83224c]
	I0318 04:37:21.682138   17322 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0318 04:37:21.695307   17322 logs.go:276] 2 containers: [083e5435c9c9 2ed27f531543]
	I0318 04:37:21.695376   17322 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0318 04:37:21.706686   17322 logs.go:276] 1 containers: [313af99b8193]
	I0318 04:37:21.706778   17322 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0318 04:37:21.717958   17322 logs.go:276] 2 containers: [c9b5a8296878 eca65001ee00]
	I0318 04:37:21.718029   17322 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0318 04:37:21.728237   17322 logs.go:276] 0 containers: []
	W0318 04:37:21.728247   17322 logs.go:278] No container was found matching "kindnet"
	I0318 04:37:21.728303   17322 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0318 04:37:21.739135   17322 logs.go:276] 2 containers: [1fb3e049cd41 f4f9d7351a87]
	I0318 04:37:21.739179   17322 logs.go:123] Gathering logs for etcd [3597b574be66] ...
	I0318 04:37:21.739190   17322 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3597b574be66"
	I0318 04:37:21.753235   17322 logs.go:123] Gathering logs for coredns [9445dc83224c] ...
	I0318 04:37:21.753249   17322 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9445dc83224c"
	I0318 04:37:21.764401   17322 logs.go:123] Gathering logs for kube-proxy [313af99b8193] ...
	I0318 04:37:21.764410   17322 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 313af99b8193"
	I0318 04:37:21.778427   17322 logs.go:123] Gathering logs for kube-controller-manager [eca65001ee00] ...
	I0318 04:37:21.778435   17322 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 eca65001ee00"
	I0318 04:37:21.789670   17322 logs.go:123] Gathering logs for dmesg ...
	I0318 04:37:21.789680   17322 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 04:37:21.793875   17322 logs.go:123] Gathering logs for kube-apiserver [8cbe0799ab57] ...
	I0318 04:37:21.793881   17322 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8cbe0799ab57"
	I0318 04:37:21.810245   17322 logs.go:123] Gathering logs for kube-scheduler [083e5435c9c9] ...
	I0318 04:37:21.810256   17322 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 083e5435c9c9"
	I0318 04:37:21.822907   17322 logs.go:123] Gathering logs for storage-provisioner [1fb3e049cd41] ...
	I0318 04:37:21.822922   17322 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1fb3e049cd41"
	I0318 04:37:21.834394   17322 logs.go:123] Gathering logs for Docker ...
	I0318 04:37:21.834410   17322 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0318 04:37:21.857343   17322 logs.go:123] Gathering logs for container status ...
	I0318 04:37:21.857351   17322 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 04:37:21.869015   17322 logs.go:123] Gathering logs for describe nodes ...
	I0318 04:37:21.869026   17322 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0318 04:37:21.907930   17322 logs.go:123] Gathering logs for kube-apiserver [7147d93a4ffc] ...
	I0318 04:37:21.907944   17322 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7147d93a4ffc"
	I0318 04:37:21.933596   17322 logs.go:123] Gathering logs for storage-provisioner [f4f9d7351a87] ...
	I0318 04:37:21.933606   17322 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f4f9d7351a87"
	I0318 04:37:21.944424   17322 logs.go:123] Gathering logs for kubelet ...
	I0318 04:37:21.944436   17322 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0318 04:37:21.982692   17322 logs.go:138] Found kubelet problem: Mar 18 11:34:35 running-upgrade-738000 kubelet[3738]: W0318 11:34:35.864044    3738 reflector.go:324] object-"kube-system"/"kube-proxy": failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:running-upgrade-738000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-738000' and this object
	W0318 04:37:21.982792   17322 logs.go:138] Found kubelet problem: Mar 18 11:34:35 running-upgrade-738000 kubelet[3738]: E0318 11:34:35.864102    3738 reflector.go:138] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:running-upgrade-738000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-738000' and this object
	I0318 04:37:21.984440   17322 logs.go:123] Gathering logs for kube-controller-manager [c9b5a8296878] ...
	I0318 04:37:21.984446   17322 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c9b5a8296878"
	I0318 04:37:22.004525   17322 logs.go:123] Gathering logs for etcd [c52784390773] ...
	I0318 04:37:22.004537   17322 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c52784390773"
	I0318 04:37:22.018670   17322 logs.go:123] Gathering logs for kube-scheduler [2ed27f531543] ...
	I0318 04:37:22.018679   17322 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2ed27f531543"
	I0318 04:37:22.032847   17322 out.go:304] Setting ErrFile to fd 2...
	I0318 04:37:22.032860   17322 out.go:338] TERM=,COLORTERM=, which probably does not support color
	W0318 04:37:22.032886   17322 out.go:239] X Problems detected in kubelet:
	W0318 04:37:22.032889   17322 out.go:239]   Mar 18 11:34:35 running-upgrade-738000 kubelet[3738]: W0318 11:34:35.864044    3738 reflector.go:324] object-"kube-system"/"kube-proxy": failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:running-upgrade-738000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-738000' and this object
	W0318 04:37:22.032894   17322 out.go:239]   Mar 18 11:34:35 running-upgrade-738000 kubelet[3738]: E0318 11:34:35.864102    3738 reflector.go:138] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:running-upgrade-738000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-738000' and this object
	I0318 04:37:22.032940   17322 out.go:304] Setting ErrFile to fd 2...
	I0318 04:37:22.032946   17322 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0318 04:37:20.214907   17465 profile.go:142] Saving config to /Users/jenkins/minikube-integration/18429-15072/.minikube/profiles/stopped-upgrade-126000/config.json ...
	I0318 04:37:20.215739   17465 machine.go:94] provisionDockerMachine start ...
	I0318 04:37:20.215976   17465 main.go:141] libmachine: Using SSH client type: native
	I0318 04:37:20.216470   17465 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x102a71bf0] 0x102a74450 <nil>  [] 0s} localhost 53501 <nil> <nil>}
	I0318 04:37:20.216486   17465 main.go:141] libmachine: About to run SSH command:
	hostname
	I0318 04:37:20.293213   17465 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0318 04:37:20.293246   17465 buildroot.go:166] provisioning hostname "stopped-upgrade-126000"
	I0318 04:37:20.293439   17465 main.go:141] libmachine: Using SSH client type: native
	I0318 04:37:20.293695   17465 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x102a71bf0] 0x102a74450 <nil>  [] 0s} localhost 53501 <nil> <nil>}
	I0318 04:37:20.293706   17465 main.go:141] libmachine: About to run SSH command:
	sudo hostname stopped-upgrade-126000 && echo "stopped-upgrade-126000" | sudo tee /etc/hostname
	I0318 04:37:20.366969   17465 main.go:141] libmachine: SSH cmd err, output: <nil>: stopped-upgrade-126000
	
	I0318 04:37:20.367100   17465 main.go:141] libmachine: Using SSH client type: native
	I0318 04:37:20.367274   17465 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x102a71bf0] 0x102a74450 <nil>  [] 0s} localhost 53501 <nil> <nil>}
	I0318 04:37:20.367289   17465 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sstopped-upgrade-126000' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 stopped-upgrade-126000/g' /etc/hosts;
				else 
					echo '127.0.1.1 stopped-upgrade-126000' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0318 04:37:20.430960   17465 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0318 04:37:20.430975   17465 buildroot.go:172] set auth options {CertDir:/Users/jenkins/minikube-integration/18429-15072/.minikube CaCertPath:/Users/jenkins/minikube-integration/18429-15072/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/18429-15072/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/18429-15072/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/18429-15072/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/18429-15072/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/18429-15072/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/18429-15072/.minikube}
	I0318 04:37:20.430983   17465 buildroot.go:174] setting up certificates
	I0318 04:37:20.430995   17465 provision.go:84] configureAuth start
	I0318 04:37:20.431000   17465 provision.go:143] copyHostCerts
	I0318 04:37:20.431077   17465 exec_runner.go:144] found /Users/jenkins/minikube-integration/18429-15072/.minikube/ca.pem, removing ...
	I0318 04:37:20.431086   17465 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/18429-15072/.minikube/ca.pem
	I0318 04:37:20.431195   17465 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/18429-15072/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/18429-15072/.minikube/ca.pem (1082 bytes)
	I0318 04:37:20.431396   17465 exec_runner.go:144] found /Users/jenkins/minikube-integration/18429-15072/.minikube/cert.pem, removing ...
	I0318 04:37:20.431400   17465 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/18429-15072/.minikube/cert.pem
	I0318 04:37:20.431457   17465 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/18429-15072/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/18429-15072/.minikube/cert.pem (1123 bytes)
	I0318 04:37:20.431584   17465 exec_runner.go:144] found /Users/jenkins/minikube-integration/18429-15072/.minikube/key.pem, removing ...
	I0318 04:37:20.431588   17465 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/18429-15072/.minikube/key.pem
	I0318 04:37:20.431643   17465 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/18429-15072/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/18429-15072/.minikube/key.pem (1679 bytes)
	I0318 04:37:20.431756   17465 provision.go:117] generating server cert: /Users/jenkins/minikube-integration/18429-15072/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/18429-15072/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/18429-15072/.minikube/certs/ca-key.pem org=jenkins.stopped-upgrade-126000 san=[127.0.0.1 localhost minikube stopped-upgrade-126000]
	I0318 04:37:20.614100   17465 provision.go:177] copyRemoteCerts
	I0318 04:37:20.614145   17465 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0318 04:37:20.614155   17465 sshutil.go:53] new ssh client: &{IP:localhost Port:53501 SSHKeyPath:/Users/jenkins/minikube-integration/18429-15072/.minikube/machines/stopped-upgrade-126000/id_rsa Username:docker}
	I0318 04:37:20.646386   17465 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18429-15072/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0318 04:37:20.653178   17465 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18429-15072/.minikube/machines/server.pem --> /etc/docker/server.pem (1224 bytes)
	I0318 04:37:20.660927   17465 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18429-15072/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0318 04:37:20.667912   17465 provision.go:87] duration metric: took 236.916041ms to configureAuth
	I0318 04:37:20.667921   17465 buildroot.go:189] setting minikube options for container-runtime
	I0318 04:37:20.668013   17465 config.go:182] Loaded profile config "stopped-upgrade-126000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0318 04:37:20.668049   17465 main.go:141] libmachine: Using SSH client type: native
	I0318 04:37:20.668143   17465 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x102a71bf0] 0x102a74450 <nil>  [] 0s} localhost 53501 <nil> <nil>}
	I0318 04:37:20.668148   17465 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0318 04:37:20.722073   17465 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0318 04:37:20.722082   17465 buildroot.go:70] root file system type: tmpfs
	I0318 04:37:20.722135   17465 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0318 04:37:20.722174   17465 main.go:141] libmachine: Using SSH client type: native
	I0318 04:37:20.722277   17465 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x102a71bf0] 0x102a74450 <nil>  [] 0s} localhost 53501 <nil> <nil>}
	I0318 04:37:20.722309   17465 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %!s(MISSING) "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=qemu2 --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0318 04:37:20.781393   17465 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=qemu2 --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0318 04:37:20.781453   17465 main.go:141] libmachine: Using SSH client type: native
	I0318 04:37:20.781620   17465 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x102a71bf0] 0x102a74450 <nil>  [] 0s} localhost 53501 <nil> <nil>}
	I0318 04:37:20.781628   17465 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0318 04:37:21.144902   17465 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
	I0318 04:37:21.144917   17465 machine.go:97] duration metric: took 929.192708ms to provisionDockerMachine
	I0318 04:37:21.144923   17465 start.go:293] postStartSetup for "stopped-upgrade-126000" (driver="qemu2")
	I0318 04:37:21.144930   17465 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0318 04:37:21.144997   17465 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0318 04:37:21.145006   17465 sshutil.go:53] new ssh client: &{IP:localhost Port:53501 SSHKeyPath:/Users/jenkins/minikube-integration/18429-15072/.minikube/machines/stopped-upgrade-126000/id_rsa Username:docker}
	I0318 04:37:21.174361   17465 ssh_runner.go:195] Run: cat /etc/os-release
	I0318 04:37:21.175659   17465 info.go:137] Remote host: Buildroot 2021.02.12
	I0318 04:37:21.175667   17465 filesync.go:126] Scanning /Users/jenkins/minikube-integration/18429-15072/.minikube/addons for local assets ...
	I0318 04:37:21.175738   17465 filesync.go:126] Scanning /Users/jenkins/minikube-integration/18429-15072/.minikube/files for local assets ...
	I0318 04:37:21.175851   17465 filesync.go:149] local asset: /Users/jenkins/minikube-integration/18429-15072/.minikube/files/etc/ssl/certs/154812.pem -> 154812.pem in /etc/ssl/certs
	I0318 04:37:21.175979   17465 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0318 04:37:21.178558   17465 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18429-15072/.minikube/files/etc/ssl/certs/154812.pem --> /etc/ssl/certs/154812.pem (1708 bytes)
	I0318 04:37:21.185530   17465 start.go:296] duration metric: took 40.603042ms for postStartSetup
	I0318 04:37:21.185544   17465 fix.go:56] duration metric: took 21.37402925s for fixHost
	I0318 04:37:21.185580   17465 main.go:141] libmachine: Using SSH client type: native
	I0318 04:37:21.185677   17465 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x102a71bf0] 0x102a74450 <nil>  [] 0s} localhost 53501 <nil> <nil>}
	I0318 04:37:21.185682   17465 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0318 04:37:21.239639   17465 main.go:141] libmachine: SSH cmd err, output: <nil>: 1710761841.298580463
	
	I0318 04:37:21.239650   17465 fix.go:216] guest clock: 1710761841.298580463
	I0318 04:37:21.239654   17465 fix.go:229] Guest: 2024-03-18 04:37:21.298580463 -0700 PDT Remote: 2024-03-18 04:37:21.185546 -0700 PDT m=+21.507085043 (delta=113.034463ms)
	I0318 04:37:21.239665   17465 fix.go:200] guest clock delta is within tolerance: 113.034463ms
	I0318 04:37:21.239668   17465 start.go:83] releasing machines lock for "stopped-upgrade-126000", held for 21.428164666s
	I0318 04:37:21.239739   17465 ssh_runner.go:195] Run: cat /version.json
	I0318 04:37:21.239751   17465 sshutil.go:53] new ssh client: &{IP:localhost Port:53501 SSHKeyPath:/Users/jenkins/minikube-integration/18429-15072/.minikube/machines/stopped-upgrade-126000/id_rsa Username:docker}
	I0318 04:37:21.239739   17465 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0318 04:37:21.239782   17465 sshutil.go:53] new ssh client: &{IP:localhost Port:53501 SSHKeyPath:/Users/jenkins/minikube-integration/18429-15072/.minikube/machines/stopped-upgrade-126000/id_rsa Username:docker}
	W0318 04:37:21.240414   17465 sshutil.go:64] dial failure (will retry): dial tcp [::1]:53501: connect: connection refused
	I0318 04:37:21.240440   17465 retry.go:31] will retry after 333.505157ms: dial tcp [::1]:53501: connect: connection refused
	W0318 04:37:21.267535   17465 start.go:419] Unable to open version.json: cat /version.json: Process exited with status 1
	stdout:
	
	stderr:
	cat: /version.json: No such file or directory
	I0318 04:37:21.267586   17465 ssh_runner.go:195] Run: systemctl --version
	I0318 04:37:21.269257   17465 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0318 04:37:21.270887   17465 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0318 04:37:21.270915   17465 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *bridge* -not -name *podman* -not -name *.mk_disabled -printf "%!p(MISSING), " -exec sh -c "sudo sed -i -r -e '/"dst": ".*:.*"/d' -e 's|^(.*)"dst": (.*)[,*]$|\1"dst": \2|g' -e '/"subnet": ".*:.*"/d' -e 's|^(.*)"subnet": ".*"(.*)[,*]$|\1"subnet": "10.244.0.0/16"\2|g' {}" ;
	I0318 04:37:21.273655   17465 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *podman* -not -name *.mk_disabled -printf "%!p(MISSING), " -exec sh -c "sudo sed -i -r -e 's|^(.*)"subnet": ".*"(.*)$|\1"subnet": "10.244.0.0/16"\2|g' -e 's|^(.*)"gateway": ".*"(.*)$|\1"gateway": "10.244.0.1"\2|g' {}" ;
	I0318 04:37:21.278454   17465 cni.go:308] configured [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0318 04:37:21.278462   17465 start.go:494] detecting cgroup driver to use...
	I0318 04:37:21.278541   17465 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0318 04:37:21.284584   17465 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.7"|' /etc/containerd/config.toml"
	I0318 04:37:21.287954   17465 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0318 04:37:21.290693   17465 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0318 04:37:21.290714   17465 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0318 04:37:21.293671   17465 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0318 04:37:21.297050   17465 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0318 04:37:21.300306   17465 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0318 04:37:21.303135   17465 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0318 04:37:21.306001   17465 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0318 04:37:21.309253   17465 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0318 04:37:21.312232   17465 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0318 04:37:21.314948   17465 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0318 04:37:21.389860   17465 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0318 04:37:21.399182   17465 start.go:494] detecting cgroup driver to use...
	I0318 04:37:21.399248   17465 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0318 04:37:21.404783   17465 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0318 04:37:21.409566   17465 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0318 04:37:21.416402   17465 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0318 04:37:21.420772   17465 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0318 04:37:21.425054   17465 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0318 04:37:21.487086   17465 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0318 04:37:21.492270   17465 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0318 04:37:21.498056   17465 ssh_runner.go:195] Run: which cri-dockerd
	I0318 04:37:21.499420   17465 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0318 04:37:21.502218   17465 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I0318 04:37:21.507186   17465 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0318 04:37:21.593020   17465 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0318 04:37:21.655283   17465 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0318 04:37:21.655434   17465 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0318 04:37:21.661804   17465 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0318 04:37:21.812270   17465 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0318 04:37:22.926258   17465 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1.114009458s)
	I0318 04:37:22.926333   17465 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I0318 04:37:22.930815   17465 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0318 04:37:22.935584   17465 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0318 04:37:23.011102   17465 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0318 04:37:23.081664   17465 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0318 04:37:23.159223   17465 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0318 04:37:23.165062   17465 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0318 04:37:23.169852   17465 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0318 04:37:23.247275   17465 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I0318 04:37:23.287056   17465 start.go:541] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0318 04:37:23.287145   17465 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0318 04:37:23.290567   17465 start.go:562] Will wait 60s for crictl version
	I0318 04:37:23.290621   17465 ssh_runner.go:195] Run: which crictl
	I0318 04:37:23.291898   17465 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0318 04:37:23.306926   17465 start.go:578] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  20.10.16
	RuntimeApiVersion:  1.41.0
	I0318 04:37:23.307015   17465 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0318 04:37:23.323847   17465 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0318 04:37:23.344262   17465 out.go:204] * Preparing Kubernetes v1.24.1 on Docker 20.10.16 ...
	I0318 04:37:23.344340   17465 ssh_runner.go:195] Run: grep 10.0.2.2	host.minikube.internal$ /etc/hosts
	I0318 04:37:23.345603   17465 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "10.0.2.2	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0318 04:37:23.349481   17465 kubeadm.go:877] updating cluster {Name:stopped-upgrade-126000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17991/minikube-v1.32.1-1710520390-17991-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:53534 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.1 ClusterName:stopped-upgrade-126000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:0s} ...
	I0318 04:37:23.349524   17465 preload.go:132] Checking if preload exists for k8s version v1.24.1 and runtime docker
	I0318 04:37:23.349568   17465 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0318 04:37:23.359942   17465 docker.go:685] Got preloaded images: -- stdout --
	k8s.gcr.io/kube-apiserver:v1.24.1
	k8s.gcr.io/kube-proxy:v1.24.1
	k8s.gcr.io/kube-controller-manager:v1.24.1
	k8s.gcr.io/kube-scheduler:v1.24.1
	k8s.gcr.io/etcd:3.5.3-0
	k8s.gcr.io/pause:3.7
	k8s.gcr.io/coredns/coredns:v1.8.6
	k8s.gcr.io/pause:3.6
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0318 04:37:23.359957   17465 docker.go:691] registry.k8s.io/kube-apiserver:v1.24.1 wasn't preloaded
	I0318 04:37:23.360005   17465 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0318 04:37:23.362828   17465 ssh_runner.go:195] Run: which lz4
	I0318 04:37:23.363990   17465 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4
	I0318 04:37:23.365199   17465 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0318 04:37:23.365207   17465 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18429-15072/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.1-docker-overlay2-arm64.tar.lz4 --> /preloaded.tar.lz4 (359514331 bytes)
	I0318 04:37:24.098665   17465 docker.go:649] duration metric: took 734.727084ms to copy over tarball
	I0318 04:37:24.098738   17465 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0318 04:37:25.272747   17465 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (1.174024958s)
	I0318 04:37:25.272769   17465 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0318 04:37:25.288847   17465 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0318 04:37:25.292395   17465 ssh_runner.go:362] scp memory --> /var/lib/docker/image/overlay2/repositories.json (2523 bytes)
	I0318 04:37:25.297477   17465 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0318 04:37:25.374093   17465 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0318 04:37:26.903609   17465 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1.529549458s)
	I0318 04:37:26.903719   17465 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0318 04:37:26.914375   17465 docker.go:685] Got preloaded images: -- stdout --
	k8s.gcr.io/kube-apiserver:v1.24.1
	k8s.gcr.io/kube-proxy:v1.24.1
	k8s.gcr.io/kube-controller-manager:v1.24.1
	k8s.gcr.io/kube-scheduler:v1.24.1
	k8s.gcr.io/etcd:3.5.3-0
	k8s.gcr.io/pause:3.7
	k8s.gcr.io/coredns/coredns:v1.8.6
	<none>:<none>
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0318 04:37:26.914387   17465 docker.go:691] registry.k8s.io/kube-apiserver:v1.24.1 wasn't preloaded
	I0318 04:37:26.914392   17465 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.24.1 registry.k8s.io/kube-controller-manager:v1.24.1 registry.k8s.io/kube-scheduler:v1.24.1 registry.k8s.io/kube-proxy:v1.24.1 registry.k8s.io/pause:3.7 registry.k8s.io/etcd:3.5.3-0 registry.k8s.io/coredns/coredns:v1.8.6 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0318 04:37:26.922788   17465 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.24.1
	I0318 04:37:26.922856   17465 image.go:134] retrieving image: registry.k8s.io/pause:3.7
	I0318 04:37:26.922956   17465 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.24.1
	I0318 04:37:26.923009   17465 image.go:134] retrieving image: registry.k8s.io/coredns/coredns:v1.8.6
	I0318 04:37:26.923067   17465 image.go:134] retrieving image: registry.k8s.io/etcd:3.5.3-0
	I0318 04:37:26.923107   17465 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0318 04:37:26.923173   17465 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.24.1
	I0318 04:37:26.923310   17465 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.24.1
	I0318 04:37:26.932001   17465 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0318 04:37:26.932096   17465 image.go:177] daemon lookup for registry.k8s.io/etcd:3.5.3-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.3-0
	I0318 04:37:26.932117   17465 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.24.1: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.24.1
	I0318 04:37:26.932159   17465 image.go:177] daemon lookup for registry.k8s.io/coredns/coredns:v1.8.6: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.8.6
	I0318 04:37:26.932375   17465 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.24.1: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.24.1
	I0318 04:37:26.932622   17465 image.go:177] daemon lookup for registry.k8s.io/pause:3.7: Error response from daemon: No such image: registry.k8s.io/pause:3.7
	I0318 04:37:26.932753   17465 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.24.1: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.24.1
	I0318 04:37:26.932623   17465 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.24.1: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.24.1
	W0318 04:37:28.903872   17465 image.go:265] image registry.k8s.io/coredns/coredns:v1.8.6 arch mismatch: want arm64 got amd64. fixing
	I0318 04:37:28.904370   17465 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/coredns/coredns:v1.8.6
	I0318 04:37:28.934757   17465 cache_images.go:116] "registry.k8s.io/coredns/coredns:v1.8.6" needs transfer: "registry.k8s.io/coredns/coredns:v1.8.6" does not exist at hash "6af7f860a8197bfa3fdb7dec2061aa33870253e87a1e91c492d55b8a4fd38d14" in container runtime
	I0318 04:37:28.934800   17465 docker.go:337] Removing image: registry.k8s.io/coredns/coredns:v1.8.6
	I0318 04:37:28.934901   17465 ssh_runner.go:195] Run: docker rmi registry.k8s.io/coredns/coredns:v1.8.6
	I0318 04:37:28.953519   17465 cache_images.go:286] Loading image from: /Users/jenkins/minikube-integration/18429-15072/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6
	I0318 04:37:28.953658   17465 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/coredns_v1.8.6
	I0318 04:37:28.956135   17465 ssh_runner.go:352] existence check for /var/lib/minikube/images/coredns_v1.8.6: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/coredns_v1.8.6: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/coredns_v1.8.6': No such file or directory
	I0318 04:37:28.956153   17465 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18429-15072/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6 --> /var/lib/minikube/images/coredns_v1.8.6 (12318720 bytes)
	I0318 04:37:28.991740   17465 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.24.1
	I0318 04:37:28.994623   17465 docker.go:304] Loading image: /var/lib/minikube/images/coredns_v1.8.6
	I0318 04:37:28.994633   17465 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/coredns_v1.8.6 | docker load"
	I0318 04:37:29.002936   17465 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.24.1" needs transfer: "registry.k8s.io/kube-apiserver:v1.24.1" does not exist at hash "7c5896a75862a8bc252122185a929cec1393db2c525f6440137d4fbf46bbf6f9" in container runtime
	I0318 04:37:29.002955   17465 docker.go:337] Removing image: registry.k8s.io/kube-apiserver:v1.24.1
	I0318 04:37:29.003006   17465 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-apiserver:v1.24.1
	I0318 04:37:29.034339   17465 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.24.1
	I0318 04:37:29.041410   17465 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/pause:3.7
	I0318 04:37:29.046398   17465 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/etcd:3.5.3-0
	I0318 04:37:29.058471   17465 cache_images.go:315] Transferred and loaded /Users/jenkins/minikube-integration/18429-15072/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6 from cache
	I0318 04:37:29.058562   17465 cache_images.go:286] Loading image from: /Users/jenkins/minikube-integration/18429-15072/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.24.1
	I0318 04:37:29.058636   17465 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.24.1" needs transfer: "registry.k8s.io/kube-controller-manager:v1.24.1" does not exist at hash "f61bbe9259d7caa580deb6c8e4bfd1780c7b5887efe3aa3adc7cc74f68a27c1b" in container runtime
	I0318 04:37:29.058657   17465 docker.go:337] Removing image: registry.k8s.io/kube-controller-manager:v1.24.1
	I0318 04:37:29.058702   17465 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-controller-manager:v1.24.1
	I0318 04:37:29.061505   17465 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.24.1
	I0318 04:37:29.066598   17465 cache_images.go:116] "registry.k8s.io/pause:3.7" needs transfer: "registry.k8s.io/pause:3.7" does not exist at hash "e5a475a0380575fb5df454b2e32bdec93e1ec0094d8a61e895b41567cb884550" in container runtime
	I0318 04:37:29.066618   17465 docker.go:337] Removing image: registry.k8s.io/pause:3.7
	I0318 04:37:29.066674   17465 ssh_runner.go:195] Run: docker rmi registry.k8s.io/pause:3.7
	I0318 04:37:29.067290   17465 cache_images.go:116] "registry.k8s.io/etcd:3.5.3-0" needs transfer: "registry.k8s.io/etcd:3.5.3-0" does not exist at hash "a9a710bb96df080e6b9c720eb85dc5b832ff84abf77263548d74fedec6466a5a" in container runtime
	I0318 04:37:29.067299   17465 docker.go:337] Removing image: registry.k8s.io/etcd:3.5.3-0
	I0318 04:37:29.067320   17465 ssh_runner.go:195] Run: docker rmi registry.k8s.io/etcd:3.5.3-0
	I0318 04:37:29.074654   17465 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.24.1
	I0318 04:37:29.088426   17465 cache_images.go:286] Loading image from: /Users/jenkins/minikube-integration/18429-15072/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.24.1
	I0318 04:37:29.098014   17465 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.24.1" needs transfer: "registry.k8s.io/kube-scheduler:v1.24.1" does not exist at hash "000c19baf6bba51ff7ae5449f4c7a16d9190cef0263f58070dbf62cea9c4982f" in container runtime
	I0318 04:37:29.098022   17465 cache_images.go:286] Loading image from: /Users/jenkins/minikube-integration/18429-15072/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7
	I0318 04:37:29.098034   17465 docker.go:337] Removing image: registry.k8s.io/kube-scheduler:v1.24.1
	I0318 04:37:29.098066   17465 cache_images.go:286] Loading image from: /Users/jenkins/minikube-integration/18429-15072/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.3-0
	I0318 04:37:29.098078   17465 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-scheduler:v1.24.1
	I0318 04:37:29.098118   17465 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/pause_3.7
	I0318 04:37:29.102634   17465 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.24.1" needs transfer: "registry.k8s.io/kube-proxy:v1.24.1" does not exist at hash "fcbd620bbac080b658602c597709b377cb2b3fec134a097a27f94cba9b2ed2fa" in container runtime
	I0318 04:37:29.102651   17465 docker.go:337] Removing image: registry.k8s.io/kube-proxy:v1.24.1
	I0318 04:37:29.102703   17465 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-proxy:v1.24.1
	I0318 04:37:29.103688   17465 ssh_runner.go:352] existence check for /var/lib/minikube/images/pause_3.7: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/pause_3.7: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/pause_3.7': No such file or directory
	I0318 04:37:29.103705   17465 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18429-15072/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7 --> /var/lib/minikube/images/pause_3.7 (268288 bytes)
	I0318 04:37:29.110065   17465 cache_images.go:286] Loading image from: /Users/jenkins/minikube-integration/18429-15072/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.24.1
	I0318 04:37:29.115573   17465 cache_images.go:286] Loading image from: /Users/jenkins/minikube-integration/18429-15072/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.24.1
	I0318 04:37:29.117219   17465 docker.go:304] Loading image: /var/lib/minikube/images/pause_3.7
	I0318 04:37:29.117228   17465 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/pause_3.7 | docker load"
	I0318 04:37:29.143237   17465 cache_images.go:315] Transferred and loaded /Users/jenkins/minikube-integration/18429-15072/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7 from cache
	W0318 04:37:29.475553   17465 image.go:265] image gcr.io/k8s-minikube/storage-provisioner:v5 arch mismatch: want arm64 got amd64. fixing
	I0318 04:37:29.476171   17465 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0318 04:37:29.514530   17465 cache_images.go:116] "gcr.io/k8s-minikube/storage-provisioner:v5" needs transfer: "gcr.io/k8s-minikube/storage-provisioner:v5" does not exist at hash "66749159455b3f08c8318fe0233122f54d0f5889f9c5fdfb73c3fd9d99895b51" in container runtime
	I0318 04:37:29.514586   17465 docker.go:337] Removing image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0318 04:37:29.514693   17465 ssh_runner.go:195] Run: docker rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I0318 04:37:29.541310   17465 cache_images.go:286] Loading image from: /Users/jenkins/minikube-integration/18429-15072/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5
	I0318 04:37:29.541466   17465 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/storage-provisioner_v5
	I0318 04:37:29.543795   17465 ssh_runner.go:352] existence check for /var/lib/minikube/images/storage-provisioner_v5: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/storage-provisioner_v5: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/storage-provisioner_v5': No such file or directory
	I0318 04:37:29.543830   17465 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18429-15072/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 --> /var/lib/minikube/images/storage-provisioner_v5 (8035840 bytes)
	I0318 04:37:29.573515   17465 docker.go:304] Loading image: /var/lib/minikube/images/storage-provisioner_v5
	I0318 04:37:29.573534   17465 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/storage-provisioner_v5 | docker load"
	I0318 04:37:32.035317   17322 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0318 04:37:29.816377   17465 cache_images.go:315] Transferred and loaded /Users/jenkins/minikube-integration/18429-15072/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 from cache
	I0318 04:37:29.816417   17465 cache_images.go:92] duration metric: took 2.902115s to LoadCachedImages
	W0318 04:37:29.816454   17465 out.go:239] X Unable to load cached images: loading cached images: stat /Users/jenkins/minikube-integration/18429-15072/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.24.1: no such file or directory
	I0318 04:37:29.816461   17465 kubeadm.go:928] updating node { 10.0.2.15 8443 v1.24.1 docker true true} ...
	I0318 04:37:29.816515   17465 kubeadm.go:940] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.24.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/cri-dockerd.sock --hostname-override=stopped-upgrade-126000 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=10.0.2.15
	
	[Install]
	 config:
	{KubernetesVersion:v1.24.1 ClusterName:stopped-upgrade-126000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0318 04:37:29.816589   17465 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0318 04:37:29.830224   17465 cni.go:84] Creating CNI manager for ""
	I0318 04:37:29.830236   17465 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0318 04:37:29.830240   17465 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0318 04:37:29.830248   17465 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:10.0.2.15 APIServerPort:8443 KubernetesVersion:v1.24.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:stopped-upgrade-126000 NodeName:stopped-upgrade-126000 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "10.0.2.15"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:10.0.2.15 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0318 04:37:29.830314   17465 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 10.0.2.15
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "stopped-upgrade-126000"
	  kubeletExtraArgs:
	    node-ip: 10.0.2.15
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "10.0.2.15"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.24.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0318 04:37:29.830362   17465 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.24.1
	I0318 04:37:29.833490   17465 binaries.go:44] Found k8s binaries, skipping transfer
	I0318 04:37:29.833515   17465 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0318 04:37:29.836579   17465 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (380 bytes)
	I0318 04:37:29.841605   17465 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0318 04:37:29.846586   17465 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2096 bytes)
	I0318 04:37:29.851844   17465 ssh_runner.go:195] Run: grep 10.0.2.15	control-plane.minikube.internal$ /etc/hosts
	I0318 04:37:29.853119   17465 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "10.0.2.15	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0318 04:37:29.856848   17465 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0318 04:37:29.934762   17465 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0318 04:37:29.946873   17465 certs.go:68] Setting up /Users/jenkins/minikube-integration/18429-15072/.minikube/profiles/stopped-upgrade-126000 for IP: 10.0.2.15
	I0318 04:37:29.946885   17465 certs.go:194] generating shared ca certs ...
	I0318 04:37:29.946894   17465 certs.go:226] acquiring lock for ca certs: {Name:mk30e64e6a2f5ccd376efb026974022e10fa3463 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0318 04:37:29.947064   17465 certs.go:235] skipping valid "minikubeCA" ca cert: /Users/jenkins/minikube-integration/18429-15072/.minikube/ca.key
	I0318 04:37:29.947112   17465 certs.go:235] skipping valid "proxyClientCA" ca cert: /Users/jenkins/minikube-integration/18429-15072/.minikube/proxy-client-ca.key
	I0318 04:37:29.947118   17465 certs.go:256] generating profile certs ...
	I0318 04:37:29.947192   17465 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /Users/jenkins/minikube-integration/18429-15072/.minikube/profiles/stopped-upgrade-126000/client.key
	I0318 04:37:29.947210   17465 certs.go:363] generating signed profile cert for "minikube": /Users/jenkins/minikube-integration/18429-15072/.minikube/profiles/stopped-upgrade-126000/apiserver.key.d0815522
	I0318 04:37:29.947220   17465 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/18429-15072/.minikube/profiles/stopped-upgrade-126000/apiserver.crt.d0815522 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 10.0.2.15]
	I0318 04:37:30.029798   17465 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/18429-15072/.minikube/profiles/stopped-upgrade-126000/apiserver.crt.d0815522 ...
	I0318 04:37:30.029813   17465 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/18429-15072/.minikube/profiles/stopped-upgrade-126000/apiserver.crt.d0815522: {Name:mk847418b6cee3fea3538d3f49f23aaf8cc83511 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0318 04:37:30.030102   17465 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/18429-15072/.minikube/profiles/stopped-upgrade-126000/apiserver.key.d0815522 ...
	I0318 04:37:30.030109   17465 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/18429-15072/.minikube/profiles/stopped-upgrade-126000/apiserver.key.d0815522: {Name:mk9618f09b3b800abe737fa4c492492ed007f7b8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0318 04:37:30.030240   17465 certs.go:381] copying /Users/jenkins/minikube-integration/18429-15072/.minikube/profiles/stopped-upgrade-126000/apiserver.crt.d0815522 -> /Users/jenkins/minikube-integration/18429-15072/.minikube/profiles/stopped-upgrade-126000/apiserver.crt
	I0318 04:37:30.030375   17465 certs.go:385] copying /Users/jenkins/minikube-integration/18429-15072/.minikube/profiles/stopped-upgrade-126000/apiserver.key.d0815522 -> /Users/jenkins/minikube-integration/18429-15072/.minikube/profiles/stopped-upgrade-126000/apiserver.key
	I0318 04:37:30.030515   17465 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /Users/jenkins/minikube-integration/18429-15072/.minikube/profiles/stopped-upgrade-126000/proxy-client.key
	I0318 04:37:30.030641   17465 certs.go:484] found cert: /Users/jenkins/minikube-integration/18429-15072/.minikube/certs/15481.pem (1338 bytes)
	W0318 04:37:30.030670   17465 certs.go:480] ignoring /Users/jenkins/minikube-integration/18429-15072/.minikube/certs/15481_empty.pem, impossibly tiny 0 bytes
	I0318 04:37:30.030675   17465 certs.go:484] found cert: /Users/jenkins/minikube-integration/18429-15072/.minikube/certs/ca-key.pem (1679 bytes)
	I0318 04:37:30.030691   17465 certs.go:484] found cert: /Users/jenkins/minikube-integration/18429-15072/.minikube/certs/ca.pem (1082 bytes)
	I0318 04:37:30.030706   17465 certs.go:484] found cert: /Users/jenkins/minikube-integration/18429-15072/.minikube/certs/cert.pem (1123 bytes)
	I0318 04:37:30.030722   17465 certs.go:484] found cert: /Users/jenkins/minikube-integration/18429-15072/.minikube/certs/key.pem (1679 bytes)
	I0318 04:37:30.030759   17465 certs.go:484] found cert: /Users/jenkins/minikube-integration/18429-15072/.minikube/files/etc/ssl/certs/154812.pem (1708 bytes)
	I0318 04:37:30.031063   17465 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18429-15072/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0318 04:37:30.037879   17465 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18429-15072/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0318 04:37:30.044875   17465 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18429-15072/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0318 04:37:30.052235   17465 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18429-15072/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0318 04:37:30.059129   17465 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18429-15072/.minikube/profiles/stopped-upgrade-126000/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I0318 04:37:30.065622   17465 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18429-15072/.minikube/profiles/stopped-upgrade-126000/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0318 04:37:30.072942   17465 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18429-15072/.minikube/profiles/stopped-upgrade-126000/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0318 04:37:30.080402   17465 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18429-15072/.minikube/profiles/stopped-upgrade-126000/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0318 04:37:30.087706   17465 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18429-15072/.minikube/files/etc/ssl/certs/154812.pem --> /usr/share/ca-certificates/154812.pem (1708 bytes)
	I0318 04:37:30.094457   17465 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18429-15072/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0318 04:37:30.101256   17465 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18429-15072/.minikube/certs/15481.pem --> /usr/share/ca-certificates/15481.pem (1338 bytes)
	I0318 04:37:30.108567   17465 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0318 04:37:30.113850   17465 ssh_runner.go:195] Run: openssl version
	I0318 04:37:30.115812   17465 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/154812.pem && ln -fs /usr/share/ca-certificates/154812.pem /etc/ssl/certs/154812.pem"
	I0318 04:37:30.118729   17465 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/154812.pem
	I0318 04:37:30.120156   17465 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Mar 18 11:20 /usr/share/ca-certificates/154812.pem
	I0318 04:37:30.120176   17465 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/154812.pem
	I0318 04:37:30.122022   17465 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/154812.pem /etc/ssl/certs/3ec20f2e.0"
	I0318 04:37:30.125348   17465 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0318 04:37:30.128673   17465 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0318 04:37:30.130273   17465 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Mar 18 11:33 /usr/share/ca-certificates/minikubeCA.pem
	I0318 04:37:30.130291   17465 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0318 04:37:30.131985   17465 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0318 04:37:30.134819   17465 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/15481.pem && ln -fs /usr/share/ca-certificates/15481.pem /etc/ssl/certs/15481.pem"
	I0318 04:37:30.137671   17465 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/15481.pem
	I0318 04:37:30.139291   17465 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Mar 18 11:20 /usr/share/ca-certificates/15481.pem
	I0318 04:37:30.139315   17465 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/15481.pem
	I0318 04:37:30.141018   17465 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/15481.pem /etc/ssl/certs/51391683.0"
	I0318 04:37:30.144476   17465 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0318 04:37:30.146062   17465 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0318 04:37:30.148491   17465 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0318 04:37:30.150639   17465 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0318 04:37:30.152557   17465 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0318 04:37:30.154357   17465 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0318 04:37:30.156177   17465 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0318 04:37:30.158092   17465 kubeadm.go:391] StartCluster: {Name:stopped-upgrade-126000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17991/minikube-v1.32.1-1710520390-17991-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:53534 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.1 ClusterName:stopped-upgrade-126000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:0s}
	I0318 04:37:30.158167   17465 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0318 04:37:30.169084   17465 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	W0318 04:37:30.172272   17465 kubeadm.go:404] apiserver tunnel failed: apiserver port not set
	I0318 04:37:30.172280   17465 kubeadm.go:407] found existing configuration files, will attempt cluster restart
	I0318 04:37:30.172282   17465 kubeadm.go:587] restartPrimaryControlPlane start ...
	I0318 04:37:30.172309   17465 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0318 04:37:30.175188   17465 kubeadm.go:129] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0318 04:37:30.175464   17465 kubeconfig.go:47] verify endpoint returned: get endpoint: "stopped-upgrade-126000" does not appear in /Users/jenkins/minikube-integration/18429-15072/kubeconfig
	I0318 04:37:30.175565   17465 kubeconfig.go:62] /Users/jenkins/minikube-integration/18429-15072/kubeconfig needs updating (will repair): [kubeconfig missing "stopped-upgrade-126000" cluster setting kubeconfig missing "stopped-upgrade-126000" context setting]
	I0318 04:37:30.175748   17465 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/18429-15072/kubeconfig: {Name:mkeb86e27ccdf30a065b43661cfe2af2dc198b46 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0318 04:37:30.176161   17465 kapi.go:59] client config for stopped-upgrade-126000: &rest.Config{Host:"https://10.0.2.15:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/18429-15072/.minikube/profiles/stopped-upgrade-126000/client.crt", KeyFile:"/Users/jenkins/minikube-integration/18429-15072/.minikube/profiles/stopped-upgrade-126000/client.key", CAFile:"/Users/jenkins/minikube-integration/18429-15072/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x103d62a80), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0318 04:37:30.176467   17465 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0318 04:37:30.179094   17465 kubeadm.go:634] detected kubeadm config drift (will reconfigure cluster from new /var/tmp/minikube/kubeadm.yaml):
	-- stdout --
	--- /var/tmp/minikube/kubeadm.yaml
	+++ /var/tmp/minikube/kubeadm.yaml.new
	@@ -11,7 +11,7 @@
	       - signing
	       - authentication
	 nodeRegistration:
	-  criSocket: /var/run/cri-dockerd.sock
	+  criSocket: unix:///var/run/cri-dockerd.sock
	   name: "stopped-upgrade-126000"
	   kubeletExtraArgs:
	     node-ip: 10.0.2.15
	@@ -49,7 +49,9 @@
	 authentication:
	   x509:
	     clientCAFile: /var/lib/minikube/certs/ca.crt
	-cgroupDriver: systemd
	+cgroupDriver: cgroupfs
	+hairpinMode: hairpin-veth
	+runtimeRequestTimeout: 15m
	 clusterDomain: "cluster.local"
	 # disable disk resource management by default
	 imageGCHighThresholdPercent: 100
	
	-- /stdout --
	I0318 04:37:30.179100   17465 kubeadm.go:1154] stopping kube-system containers ...
	I0318 04:37:30.179141   17465 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0318 04:37:30.189510   17465 docker.go:483] Stopping containers: [7dacaac7f891 9b8ffa5f8458 d579e22e148e 8b75879fc7bf fb25a67bf414 03620a2d9297 a64bfd63de1d eee08746d061]
	I0318 04:37:30.189593   17465 ssh_runner.go:195] Run: docker stop 7dacaac7f891 9b8ffa5f8458 d579e22e148e 8b75879fc7bf fb25a67bf414 03620a2d9297 a64bfd63de1d eee08746d061
	I0318 04:37:30.200301   17465 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0318 04:37:30.206149   17465 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0318 04:37:30.209308   17465 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0318 04:37:30.209314   17465 kubeadm.go:156] found existing configuration files:
	
	I0318 04:37:30.209347   17465 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:53534 /etc/kubernetes/admin.conf
	I0318 04:37:30.212181   17465 kubeadm.go:162] "https://control-plane.minikube.internal:53534" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:53534 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0318 04:37:30.212203   17465 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0318 04:37:30.214747   17465 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:53534 /etc/kubernetes/kubelet.conf
	I0318 04:37:30.217353   17465 kubeadm.go:162] "https://control-plane.minikube.internal:53534" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:53534 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0318 04:37:30.217375   17465 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0318 04:37:30.220342   17465 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:53534 /etc/kubernetes/controller-manager.conf
	I0318 04:37:30.222939   17465 kubeadm.go:162] "https://control-plane.minikube.internal:53534" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:53534 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0318 04:37:30.222958   17465 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0318 04:37:30.225583   17465 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:53534 /etc/kubernetes/scheduler.conf
	I0318 04:37:30.228387   17465 kubeadm.go:162] "https://control-plane.minikube.internal:53534" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:53534 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0318 04:37:30.228408   17465 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
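The four grep/rm pairs above are minikube's stale-config check: each kubeconfig under /etc/kubernetes is kept only if it already references the expected control-plane endpoint, otherwise it is removed so kubeadm can regenerate it. A minimal bash sketch of that pattern, using the endpoint shown in the log (the loop itself is illustrative, not minikube's actual code):

  # Illustrative only: drop kubeconfigs that do not reference the expected endpoint.
  endpoint="https://control-plane.minikube.internal:53534"
  for f in admin.conf kubelet.conf controller-manager.conf scheduler.conf; do
    if ! sudo grep -q "$endpoint" "/etc/kubernetes/$f"; then
      sudo rm -f "/etc/kubernetes/$f"   # missing or stale -> regenerated by kubeadm below
    fi
  done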
	I0318 04:37:30.231077   17465 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0318 04:37:30.233721   17465 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0318 04:37:30.255583   17465 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0318 04:37:30.673176   17465 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0318 04:37:30.806427   17465 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0318 04:37:30.829387   17465 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0318 04:37:30.851740   17465 api_server.go:52] waiting for apiserver process to appear ...
	I0318 04:37:30.851827   17465 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 04:37:31.353951   17465 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 04:37:31.853650   17465 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 04:37:31.858129   17465 api_server.go:72] duration metric: took 1.006424s to wait for apiserver process to appear ...
	I0318 04:37:31.858139   17465 api_server.go:88] waiting for apiserver healthz status ...
	I0318 04:37:31.858153   17465 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
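From here api_server.go repeatedly probes the /healthz endpoint until the API server answers; every "context deadline exceeded" line below means a probe got no response within its per-request timeout. A rough shell equivalent of that wait loop (address and the -k flag for the self-signed cert are taken from the log; the loop is a sketch, not minikube's implementation):

  # Illustrative wait loop: poll the apiserver healthz endpoint until it reports "ok".
  until curl -sk --max-time 5 https://10.0.2.15:8443/healthz | grep -q ok; do
    echo "apiserver not healthy yet, retrying..."
    sleep 5
  done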
	I0318 04:37:37.037357   17322 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0318 04:37:37.037446   17322 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0318 04:37:37.048903   17322 logs.go:276] 2 containers: [8cbe0799ab57 7147d93a4ffc]
	I0318 04:37:37.048971   17322 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0318 04:37:37.058802   17322 logs.go:276] 2 containers: [3597b574be66 c52784390773]
	I0318 04:37:37.058875   17322 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0318 04:37:37.070571   17322 logs.go:276] 1 containers: [9445dc83224c]
	I0318 04:37:37.070648   17322 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0318 04:37:37.084792   17322 logs.go:276] 2 containers: [083e5435c9c9 2ed27f531543]
	I0318 04:37:37.084867   17322 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0318 04:37:37.095303   17322 logs.go:276] 1 containers: [313af99b8193]
	I0318 04:37:37.095382   17322 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0318 04:37:37.106158   17322 logs.go:276] 2 containers: [c9b5a8296878 eca65001ee00]
	I0318 04:37:37.106230   17322 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0318 04:37:37.117755   17322 logs.go:276] 0 containers: []
	W0318 04:37:37.117769   17322 logs.go:278] No container was found matching "kindnet"
	I0318 04:37:37.117830   17322 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0318 04:37:37.131150   17322 logs.go:276] 2 containers: [1fb3e049cd41 f4f9d7351a87]
	I0318 04:37:37.131169   17322 logs.go:123] Gathering logs for storage-provisioner [1fb3e049cd41] ...
	I0318 04:37:37.131174   17322 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1fb3e049cd41"
	I0318 04:37:37.148569   17322 logs.go:123] Gathering logs for kube-apiserver [8cbe0799ab57] ...
	I0318 04:37:37.148579   17322 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8cbe0799ab57"
	I0318 04:37:37.163086   17322 logs.go:123] Gathering logs for etcd [c52784390773] ...
	I0318 04:37:37.163098   17322 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c52784390773"
	I0318 04:37:37.177809   17322 logs.go:123] Gathering logs for etcd [3597b574be66] ...
	I0318 04:37:37.177819   17322 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3597b574be66"
	I0318 04:37:36.860111   17465 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0318 04:37:36.860141   17465 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0318 04:37:37.191415   17322 logs.go:123] Gathering logs for coredns [9445dc83224c] ...
	I0318 04:37:37.191425   17322 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9445dc83224c"
	I0318 04:37:37.203275   17322 logs.go:123] Gathering logs for dmesg ...
	I0318 04:37:37.203285   17322 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 04:37:37.210530   17322 logs.go:123] Gathering logs for kube-apiserver [7147d93a4ffc] ...
	I0318 04:37:37.210541   17322 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7147d93a4ffc"
	I0318 04:37:37.234522   17322 logs.go:123] Gathering logs for kube-scheduler [083e5435c9c9] ...
	I0318 04:37:37.234533   17322 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 083e5435c9c9"
	I0318 04:37:37.246368   17322 logs.go:123] Gathering logs for kube-controller-manager [eca65001ee00] ...
	I0318 04:37:37.246381   17322 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 eca65001ee00"
	I0318 04:37:37.257867   17322 logs.go:123] Gathering logs for Docker ...
	I0318 04:37:37.257879   17322 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0318 04:37:37.282486   17322 logs.go:123] Gathering logs for container status ...
	I0318 04:37:37.282498   17322 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 04:37:37.300718   17322 logs.go:123] Gathering logs for kubelet ...
	I0318 04:37:37.300728   17322 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0318 04:37:37.337727   17322 logs.go:138] Found kubelet problem: Mar 18 11:34:35 running-upgrade-738000 kubelet[3738]: W0318 11:34:35.864044    3738 reflector.go:324] object-"kube-system"/"kube-proxy": failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:running-upgrade-738000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-738000' and this object
	W0318 04:37:37.337819   17322 logs.go:138] Found kubelet problem: Mar 18 11:34:35 running-upgrade-738000 kubelet[3738]: E0318 11:34:35.864102    3738 reflector.go:138] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:running-upgrade-738000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-738000' and this object
	I0318 04:37:37.339484   17322 logs.go:123] Gathering logs for describe nodes ...
	I0318 04:37:37.339490   17322 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0318 04:37:37.376993   17322 logs.go:123] Gathering logs for kube-controller-manager [c9b5a8296878] ...
	I0318 04:37:37.377006   17322 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c9b5a8296878"
	I0318 04:37:37.394337   17322 logs.go:123] Gathering logs for storage-provisioner [f4f9d7351a87] ...
	I0318 04:37:37.394352   17322 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f4f9d7351a87"
	I0318 04:37:37.405539   17322 logs.go:123] Gathering logs for kube-scheduler [2ed27f531543] ...
	I0318 04:37:37.405552   17322 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2ed27f531543"
	I0318 04:37:37.419769   17322 logs.go:123] Gathering logs for kube-proxy [313af99b8193] ...
	I0318 04:37:37.419779   17322 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 313af99b8193"
	I0318 04:37:37.431499   17322 out.go:304] Setting ErrFile to fd 2...
	I0318 04:37:37.431510   17322 out.go:338] TERM=,COLORTERM=, which probably does not support color
	W0318 04:37:37.431536   17322 out.go:239] X Problems detected in kubelet:
	W0318 04:37:37.431542   17322 out.go:239]   Mar 18 11:34:35 running-upgrade-738000 kubelet[3738]: W0318 11:34:35.864044    3738 reflector.go:324] object-"kube-system"/"kube-proxy": failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:running-upgrade-738000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-738000' and this object
	W0318 04:37:37.431554   17322 out.go:239]   Mar 18 11:34:35 running-upgrade-738000 kubelet[3738]: E0318 11:34:35.864102    3738 reflector.go:138] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:running-upgrade-738000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-738000' and this object
	I0318 04:37:37.431558   17322 out.go:304] Setting ErrFile to fd 2...
	I0318 04:37:37.431561   17322 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0318 04:37:41.860201   17465 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0318 04:37:41.860230   17465 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0318 04:37:46.860746   17465 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0318 04:37:46.860778   17465 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0318 04:37:47.435393   17322 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0318 04:37:51.861167   17465 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0318 04:37:51.861232   17465 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0318 04:37:52.437541   17322 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0318 04:37:52.437723   17322 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0318 04:37:52.456775   17322 logs.go:276] 2 containers: [8cbe0799ab57 7147d93a4ffc]
	I0318 04:37:52.456875   17322 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0318 04:37:52.499627   17322 logs.go:276] 2 containers: [3597b574be66 c52784390773]
	I0318 04:37:52.499700   17322 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0318 04:37:52.510707   17322 logs.go:276] 1 containers: [9445dc83224c]
	I0318 04:37:52.510783   17322 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0318 04:37:52.522483   17322 logs.go:276] 2 containers: [083e5435c9c9 2ed27f531543]
	I0318 04:37:52.522560   17322 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0318 04:37:52.532755   17322 logs.go:276] 1 containers: [313af99b8193]
	I0318 04:37:52.532824   17322 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0318 04:37:52.543008   17322 logs.go:276] 2 containers: [c9b5a8296878 eca65001ee00]
	I0318 04:37:52.543078   17322 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0318 04:37:52.553423   17322 logs.go:276] 0 containers: []
	W0318 04:37:52.553434   17322 logs.go:278] No container was found matching "kindnet"
	I0318 04:37:52.553493   17322 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0318 04:37:52.565597   17322 logs.go:276] 2 containers: [1fb3e049cd41 f4f9d7351a87]
	I0318 04:37:52.565611   17322 logs.go:123] Gathering logs for describe nodes ...
	I0318 04:37:52.565616   17322 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0318 04:37:52.602402   17322 logs.go:123] Gathering logs for kube-apiserver [7147d93a4ffc] ...
	I0318 04:37:52.602413   17322 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7147d93a4ffc"
	I0318 04:37:52.626705   17322 logs.go:123] Gathering logs for etcd [c52784390773] ...
	I0318 04:37:52.626715   17322 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c52784390773"
	I0318 04:37:52.641824   17322 logs.go:123] Gathering logs for kubelet ...
	I0318 04:37:52.641838   17322 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0318 04:37:52.681526   17322 logs.go:138] Found kubelet problem: Mar 18 11:34:35 running-upgrade-738000 kubelet[3738]: W0318 11:34:35.864044    3738 reflector.go:324] object-"kube-system"/"kube-proxy": failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:running-upgrade-738000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-738000' and this object
	W0318 04:37:52.681630   17322 logs.go:138] Found kubelet problem: Mar 18 11:34:35 running-upgrade-738000 kubelet[3738]: E0318 11:34:35.864102    3738 reflector.go:138] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:running-upgrade-738000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-738000' and this object
	I0318 04:37:52.683370   17322 logs.go:123] Gathering logs for dmesg ...
	I0318 04:37:52.683377   17322 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 04:37:52.687820   17322 logs.go:123] Gathering logs for kube-apiserver [8cbe0799ab57] ...
	I0318 04:37:52.687829   17322 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8cbe0799ab57"
	I0318 04:37:52.702150   17322 logs.go:123] Gathering logs for coredns [9445dc83224c] ...
	I0318 04:37:52.702159   17322 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9445dc83224c"
	I0318 04:37:52.717290   17322 logs.go:123] Gathering logs for storage-provisioner [f4f9d7351a87] ...
	I0318 04:37:52.717300   17322 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f4f9d7351a87"
	I0318 04:37:52.728582   17322 logs.go:123] Gathering logs for etcd [3597b574be66] ...
	I0318 04:37:52.728596   17322 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3597b574be66"
	I0318 04:37:52.742638   17322 logs.go:123] Gathering logs for kube-scheduler [2ed27f531543] ...
	I0318 04:37:52.742651   17322 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2ed27f531543"
	I0318 04:37:52.757828   17322 logs.go:123] Gathering logs for kube-proxy [313af99b8193] ...
	I0318 04:37:52.757841   17322 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 313af99b8193"
	I0318 04:37:52.769569   17322 logs.go:123] Gathering logs for kube-controller-manager [c9b5a8296878] ...
	I0318 04:37:52.769586   17322 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c9b5a8296878"
	I0318 04:37:52.786986   17322 logs.go:123] Gathering logs for storage-provisioner [1fb3e049cd41] ...
	I0318 04:37:52.786997   17322 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1fb3e049cd41"
	I0318 04:37:52.798903   17322 logs.go:123] Gathering logs for kube-scheduler [083e5435c9c9] ...
	I0318 04:37:52.798915   17322 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 083e5435c9c9"
	I0318 04:37:52.809955   17322 logs.go:123] Gathering logs for kube-controller-manager [eca65001ee00] ...
	I0318 04:37:52.809967   17322 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 eca65001ee00"
	I0318 04:37:52.822173   17322 logs.go:123] Gathering logs for Docker ...
	I0318 04:37:52.822184   17322 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0318 04:37:52.850786   17322 logs.go:123] Gathering logs for container status ...
	I0318 04:37:52.850811   17322 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 04:37:52.868968   17322 out.go:304] Setting ErrFile to fd 2...
	I0318 04:37:52.868980   17322 out.go:338] TERM=,COLORTERM=, which probably does not support color
	W0318 04:37:52.869010   17322 out.go:239] X Problems detected in kubelet:
	W0318 04:37:52.869014   17322 out.go:239]   Mar 18 11:34:35 running-upgrade-738000 kubelet[3738]: W0318 11:34:35.864044    3738 reflector.go:324] object-"kube-system"/"kube-proxy": failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:running-upgrade-738000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-738000' and this object
	W0318 04:37:52.869018   17322 out.go:239]   Mar 18 11:34:35 running-upgrade-738000 kubelet[3738]: E0318 11:34:35.864102    3738 reflector.go:138] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:running-upgrade-738000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-738000' and this object
	I0318 04:37:52.869047   17322 out.go:304] Setting ErrFile to fd 2...
	I0318 04:37:52.869051   17322 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0318 04:37:56.861901   17465 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0318 04:37:56.861972   17465 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0318 04:38:01.862981   17465 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0318 04:38:01.863032   17465 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0318 04:38:02.871289   17322 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0318 04:38:06.863830   17465 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0318 04:38:06.863889   17465 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0318 04:38:07.873548   17322 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0318 04:38:07.873930   17322 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0318 04:38:07.913114   17322 logs.go:276] 2 containers: [8cbe0799ab57 7147d93a4ffc]
	I0318 04:38:07.913257   17322 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0318 04:38:07.934320   17322 logs.go:276] 2 containers: [3597b574be66 c52784390773]
	I0318 04:38:07.934420   17322 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0318 04:38:07.949273   17322 logs.go:276] 1 containers: [9445dc83224c]
	I0318 04:38:07.949354   17322 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0318 04:38:07.961743   17322 logs.go:276] 2 containers: [083e5435c9c9 2ed27f531543]
	I0318 04:38:07.961821   17322 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0318 04:38:07.972723   17322 logs.go:276] 1 containers: [313af99b8193]
	I0318 04:38:07.972813   17322 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0318 04:38:07.983344   17322 logs.go:276] 2 containers: [c9b5a8296878 eca65001ee00]
	I0318 04:38:07.983407   17322 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0318 04:38:07.993803   17322 logs.go:276] 0 containers: []
	W0318 04:38:07.993815   17322 logs.go:278] No container was found matching "kindnet"
	I0318 04:38:07.993877   17322 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0318 04:38:08.004794   17322 logs.go:276] 2 containers: [1fb3e049cd41 f4f9d7351a87]
	I0318 04:38:08.004814   17322 logs.go:123] Gathering logs for etcd [3597b574be66] ...
	I0318 04:38:08.004820   17322 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3597b574be66"
	I0318 04:38:08.022234   17322 logs.go:123] Gathering logs for coredns [9445dc83224c] ...
	I0318 04:38:08.022246   17322 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9445dc83224c"
	I0318 04:38:08.033949   17322 logs.go:123] Gathering logs for kube-scheduler [2ed27f531543] ...
	I0318 04:38:08.033959   17322 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2ed27f531543"
	I0318 04:38:08.048477   17322 logs.go:123] Gathering logs for kube-controller-manager [c9b5a8296878] ...
	I0318 04:38:08.048489   17322 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c9b5a8296878"
	I0318 04:38:08.065231   17322 logs.go:123] Gathering logs for Docker ...
	I0318 04:38:08.065243   17322 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0318 04:38:08.088089   17322 logs.go:123] Gathering logs for kubelet ...
	I0318 04:38:08.088097   17322 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0318 04:38:08.124300   17322 logs.go:138] Found kubelet problem: Mar 18 11:34:35 running-upgrade-738000 kubelet[3738]: W0318 11:34:35.864044    3738 reflector.go:324] object-"kube-system"/"kube-proxy": failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:running-upgrade-738000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-738000' and this object
	W0318 04:38:08.124393   17322 logs.go:138] Found kubelet problem: Mar 18 11:34:35 running-upgrade-738000 kubelet[3738]: E0318 11:34:35.864102    3738 reflector.go:138] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:running-upgrade-738000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-738000' and this object
	I0318 04:38:08.126041   17322 logs.go:123] Gathering logs for dmesg ...
	I0318 04:38:08.126046   17322 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 04:38:08.130285   17322 logs.go:123] Gathering logs for kube-apiserver [7147d93a4ffc] ...
	I0318 04:38:08.130293   17322 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7147d93a4ffc"
	I0318 04:38:08.155591   17322 logs.go:123] Gathering logs for container status ...
	I0318 04:38:08.155602   17322 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 04:38:08.168224   17322 logs.go:123] Gathering logs for kube-apiserver [8cbe0799ab57] ...
	I0318 04:38:08.168235   17322 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8cbe0799ab57"
	I0318 04:38:08.182358   17322 logs.go:123] Gathering logs for etcd [c52784390773] ...
	I0318 04:38:08.182369   17322 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c52784390773"
	I0318 04:38:08.196857   17322 logs.go:123] Gathering logs for storage-provisioner [1fb3e049cd41] ...
	I0318 04:38:08.196867   17322 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1fb3e049cd41"
	I0318 04:38:08.214086   17322 logs.go:123] Gathering logs for kube-controller-manager [eca65001ee00] ...
	I0318 04:38:08.214096   17322 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 eca65001ee00"
	I0318 04:38:08.225416   17322 logs.go:123] Gathering logs for storage-provisioner [f4f9d7351a87] ...
	I0318 04:38:08.225426   17322 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f4f9d7351a87"
	I0318 04:38:08.237007   17322 logs.go:123] Gathering logs for describe nodes ...
	I0318 04:38:08.237018   17322 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0318 04:38:08.277089   17322 logs.go:123] Gathering logs for kube-scheduler [083e5435c9c9] ...
	I0318 04:38:08.277103   17322 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 083e5435c9c9"
	I0318 04:38:08.289241   17322 logs.go:123] Gathering logs for kube-proxy [313af99b8193] ...
	I0318 04:38:08.289252   17322 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 313af99b8193"
	I0318 04:38:08.301181   17322 out.go:304] Setting ErrFile to fd 2...
	I0318 04:38:08.301191   17322 out.go:338] TERM=,COLORTERM=, which probably does not support color
	W0318 04:38:08.301220   17322 out.go:239] X Problems detected in kubelet:
	W0318 04:38:08.301225   17322 out.go:239]   Mar 18 11:34:35 running-upgrade-738000 kubelet[3738]: W0318 11:34:35.864044    3738 reflector.go:324] object-"kube-system"/"kube-proxy": failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:running-upgrade-738000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-738000' and this object
	W0318 04:38:08.301228   17322 out.go:239]   Mar 18 11:34:35 running-upgrade-738000 kubelet[3738]: E0318 11:34:35.864102    3738 reflector.go:138] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:running-upgrade-738000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-738000' and this object
	I0318 04:38:08.301233   17322 out.go:304] Setting ErrFile to fd 2...
	I0318 04:38:08.301236   17322 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0318 04:38:11.864549   17465 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0318 04:38:11.864647   17465 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0318 04:38:16.866866   17465 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0318 04:38:16.866912   17465 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0318 04:38:18.305062   17322 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0318 04:38:23.305969   17322 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0318 04:38:23.306068   17322 kubeadm.go:591] duration metric: took 4m7.788113917s to restartPrimaryControlPlane
	W0318 04:38:23.306137   17322 out.go:239] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0318 04:38:23.306168   17322 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm reset --cri-socket /var/run/cri-dockerd.sock --force"
	I0318 04:38:24.309637   17322 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm reset --cri-socket /var/run/cri-dockerd.sock --force": (1.0034915s)
	I0318 04:38:24.309704   17322 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0318 04:38:24.314619   17322 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0318 04:38:24.317434   17322 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0318 04:38:24.320096   17322 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0318 04:38:24.320103   17322 kubeadm.go:156] found existing configuration files:
	
	I0318 04:38:24.320127   17322 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:53312 /etc/kubernetes/admin.conf
	I0318 04:38:24.322850   17322 kubeadm.go:162] "https://control-plane.minikube.internal:53312" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:53312 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0318 04:38:24.322875   17322 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0318 04:38:24.325279   17322 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:53312 /etc/kubernetes/kubelet.conf
	I0318 04:38:24.328001   17322 kubeadm.go:162] "https://control-plane.minikube.internal:53312" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:53312 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0318 04:38:24.328020   17322 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0318 04:38:24.331161   17322 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:53312 /etc/kubernetes/controller-manager.conf
	I0318 04:38:24.333778   17322 kubeadm.go:162] "https://control-plane.minikube.internal:53312" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:53312 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0318 04:38:24.333798   17322 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0318 04:38:24.336494   17322 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:53312 /etc/kubernetes/scheduler.conf
	I0318 04:38:24.339576   17322 kubeadm.go:162] "https://control-plane.minikube.internal:53312" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:53312 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0318 04:38:24.339597   17322 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0318 04:38:24.342251   17322 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0318 04:38:24.358093   17322 kubeadm.go:309] [init] Using Kubernetes version: v1.24.1
	I0318 04:38:24.358134   17322 kubeadm.go:309] [preflight] Running pre-flight checks
	I0318 04:38:24.409299   17322 kubeadm.go:309] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0318 04:38:24.409372   17322 kubeadm.go:309] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0318 04:38:24.409417   17322 kubeadm.go:309] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0318 04:38:24.458776   17322 kubeadm.go:309] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0318 04:38:21.868994   17465 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0318 04:38:21.869076   17465 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0318 04:38:24.465665   17322 out.go:204]   - Generating certificates and keys ...
	I0318 04:38:24.465701   17322 kubeadm.go:309] [certs] Using existing ca certificate authority
	I0318 04:38:24.465735   17322 kubeadm.go:309] [certs] Using existing apiserver certificate and key on disk
	I0318 04:38:24.465777   17322 kubeadm.go:309] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0318 04:38:24.465809   17322 kubeadm.go:309] [certs] Using existing front-proxy-ca certificate authority
	I0318 04:38:24.465841   17322 kubeadm.go:309] [certs] Using existing front-proxy-client certificate and key on disk
	I0318 04:38:24.465878   17322 kubeadm.go:309] [certs] Using existing etcd/ca certificate authority
	I0318 04:38:24.465911   17322 kubeadm.go:309] [certs] Using existing etcd/server certificate and key on disk
	I0318 04:38:24.465948   17322 kubeadm.go:309] [certs] Using existing etcd/peer certificate and key on disk
	I0318 04:38:24.465981   17322 kubeadm.go:309] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0318 04:38:24.466024   17322 kubeadm.go:309] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0318 04:38:24.466048   17322 kubeadm.go:309] [certs] Using the existing "sa" key
	I0318 04:38:24.466075   17322 kubeadm.go:309] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0318 04:38:24.606292   17322 kubeadm.go:309] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0318 04:38:24.696706   17322 kubeadm.go:309] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0318 04:38:24.891031   17322 kubeadm.go:309] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0318 04:38:25.001632   17322 kubeadm.go:309] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0318 04:38:25.031467   17322 kubeadm.go:309] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0318 04:38:25.031857   17322 kubeadm.go:309] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0318 04:38:25.031881   17322 kubeadm.go:309] [kubelet-start] Starting the kubelet
	I0318 04:38:25.108078   17322 kubeadm.go:309] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0318 04:38:25.111424   17322 out.go:204]   - Booting up control plane ...
	I0318 04:38:25.111466   17322 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0318 04:38:25.111504   17322 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0318 04:38:25.111535   17322 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0318 04:38:25.113822   17322 kubeadm.go:309] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0318 04:38:25.114593   17322 kubeadm.go:309] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0318 04:38:26.871210   17465 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0318 04:38:26.871233   17465 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0318 04:38:29.619871   17322 kubeadm.go:309] [apiclient] All control plane components are healthy after 4.505105 seconds
	I0318 04:38:29.620047   17322 kubeadm.go:309] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0318 04:38:29.629674   17322 kubeadm.go:309] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0318 04:38:30.141842   17322 kubeadm.go:309] [upload-certs] Skipping phase. Please see --upload-certs
	I0318 04:38:30.142073   17322 kubeadm.go:309] [mark-control-plane] Marking the node running-upgrade-738000 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0318 04:38:30.646676   17322 kubeadm.go:309] [bootstrap-token] Using token: 23utxv.u7ge82ksglucw1qd
	I0318 04:38:30.653522   17322 out.go:204]   - Configuring RBAC rules ...
	I0318 04:38:30.653580   17322 kubeadm.go:309] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0318 04:38:30.653621   17322 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0318 04:38:30.657489   17322 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0318 04:38:30.658419   17322 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0318 04:38:30.659158   17322 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0318 04:38:30.660121   17322 kubeadm.go:309] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0318 04:38:30.663232   17322 kubeadm.go:309] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0318 04:38:30.833329   17322 kubeadm.go:309] [addons] Applied essential addon: CoreDNS
	I0318 04:38:31.051614   17322 kubeadm.go:309] [addons] Applied essential addon: kube-proxy
	I0318 04:38:31.052139   17322 kubeadm.go:309] 
	I0318 04:38:31.052172   17322 kubeadm.go:309] Your Kubernetes control-plane has initialized successfully!
	I0318 04:38:31.052176   17322 kubeadm.go:309] 
	I0318 04:38:31.052215   17322 kubeadm.go:309] To start using your cluster, you need to run the following as a regular user:
	I0318 04:38:31.052219   17322 kubeadm.go:309] 
	I0318 04:38:31.052231   17322 kubeadm.go:309]   mkdir -p $HOME/.kube
	I0318 04:38:31.052284   17322 kubeadm.go:309]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0318 04:38:31.052311   17322 kubeadm.go:309]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0318 04:38:31.052315   17322 kubeadm.go:309] 
	I0318 04:38:31.052345   17322 kubeadm.go:309] Alternatively, if you are the root user, you can run:
	I0318 04:38:31.052349   17322 kubeadm.go:309] 
	I0318 04:38:31.052379   17322 kubeadm.go:309]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0318 04:38:31.052383   17322 kubeadm.go:309] 
	I0318 04:38:31.052408   17322 kubeadm.go:309] You should now deploy a pod network to the cluster.
	I0318 04:38:31.052444   17322 kubeadm.go:309] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0318 04:38:31.052479   17322 kubeadm.go:309]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0318 04:38:31.052483   17322 kubeadm.go:309] 
	I0318 04:38:31.052524   17322 kubeadm.go:309] You can now join any number of control-plane nodes by copying certificate authorities
	I0318 04:38:31.052561   17322 kubeadm.go:309] and service account keys on each node and then running the following as root:
	I0318 04:38:31.052564   17322 kubeadm.go:309] 
	I0318 04:38:31.052626   17322 kubeadm.go:309]   kubeadm join control-plane.minikube.internal:8443 --token 23utxv.u7ge82ksglucw1qd \
	I0318 04:38:31.052679   17322 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:2762dffea2ede86231df0e7bc748eefca9b65ca5bd96e5f605bd5b60ef0281dd \
	I0318 04:38:31.052692   17322 kubeadm.go:309] 	--control-plane 
	I0318 04:38:31.052696   17322 kubeadm.go:309] 
	I0318 04:38:31.052738   17322 kubeadm.go:309] Then you can join any number of worker nodes by running the following on each as root:
	I0318 04:38:31.052744   17322 kubeadm.go:309] 
	I0318 04:38:31.052779   17322 kubeadm.go:309] kubeadm join control-plane.minikube.internal:8443 --token 23utxv.u7ge82ksglucw1qd \
	I0318 04:38:31.052827   17322 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:2762dffea2ede86231df0e7bc748eefca9b65ca5bd96e5f605bd5b60ef0281dd 
	I0318 04:38:31.052882   17322 kubeadm.go:309] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0318 04:38:31.053003   17322 cni.go:84] Creating CNI manager for ""
	I0318 04:38:31.053012   17322 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0318 04:38:31.057479   17322 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0318 04:38:31.069439   17322 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0318 04:38:31.073061   17322 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
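The 457-byte payload copied to /etc/cni/net.d/1-k8s.conflist is minikube's bridge CNI configuration for this profile. A representative bridge conflist written via a heredoc; the field values here are illustrative assumptions, not the exact file minikube generates:

  # Illustrative bridge CNI config; the exact contents minikube writes may differ.
  sudo tee /etc/cni/net.d/1-k8s.conflist >/dev/null <<'EOF'
  {
    "cniVersion": "0.3.1",
    "name": "bridge",
    "plugins": [
      { "type": "bridge", "bridge": "bridge",
        "isDefaultGateway": true, "ipMasq": true, "hairpinMode": true,
        "ipam": { "type": "host-local", "subnet": "10.244.0.0/16" } },
      { "type": "portmap", "capabilities": { "portMappings": true } }
    ]
  }
  EOF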
	I0318 04:38:31.077650   17322 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0318 04:38:31.077698   17322 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 04:38:31.077698   17322 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes running-upgrade-738000 minikube.k8s.io/updated_at=2024_03_18T04_38_31_0700 minikube.k8s.io/version=v1.32.0 minikube.k8s.io/commit=16bdcbec856cf730004e5bed78d1b7625f13388a minikube.k8s.io/name=running-upgrade-738000 minikube.k8s.io/primary=true
	I0318 04:38:31.080695   17322 ops.go:34] apiserver oom_adj: -16
	I0318 04:38:31.125629   17322 kubeadm.go:1107] duration metric: took 47.973666ms to wait for elevateKubeSystemPrivileges
	W0318 04:38:31.125721   17322 kubeadm.go:286] apiserver tunnel failed: apiserver port not set
	I0318 04:38:31.125727   17322 kubeadm.go:393] duration metric: took 4m15.623333917s to StartCluster
	I0318 04:38:31.125738   17322 settings.go:142] acquiring lock: {Name:mk8634ba9e118796c1213288fbf27edefcbb67ad Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0318 04:38:31.125890   17322 settings.go:150] Updating kubeconfig:  /Users/jenkins/minikube-integration/18429-15072/kubeconfig
	I0318 04:38:31.126306   17322 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/18429-15072/kubeconfig: {Name:mkeb86e27ccdf30a065b43661cfe2af2dc198b46 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0318 04:38:31.126482   17322 start.go:234] Will wait 6m0s for node &{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0318 04:38:31.131434   17322 out.go:177] * Verifying Kubernetes components...
	I0318 04:38:31.126567   17322 addons.go:502] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false]
	I0318 04:38:31.126675   17322 config.go:182] Loaded profile config "running-upgrade-738000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0318 04:38:31.139380   17322 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0318 04:38:31.139385   17322 addons.go:69] Setting default-storageclass=true in profile "running-upgrade-738000"
	I0318 04:38:31.139388   17322 addons.go:69] Setting storage-provisioner=true in profile "running-upgrade-738000"
	I0318 04:38:31.139395   17322 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "running-upgrade-738000"
	I0318 04:38:31.139448   17322 addons.go:234] Setting addon storage-provisioner=true in "running-upgrade-738000"
	W0318 04:38:31.139452   17322 addons.go:243] addon storage-provisioner should already be in state true
	I0318 04:38:31.139467   17322 host.go:66] Checking if "running-upgrade-738000" exists ...
	I0318 04:38:31.140980   17322 kapi.go:59] client config for running-upgrade-738000: &rest.Config{Host:"https://10.0.2.15:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/18429-15072/.minikube/profiles/running-upgrade-738000/client.crt", KeyFile:"/Users/jenkins/minikube-integration/18429-15072/.minikube/profiles/running-upgrade-738000/client.key", CAFile:"/Users/jenkins/minikube-integration/18429-15072/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x103bb2a80), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0318 04:38:31.141776   17322 addons.go:234] Setting addon default-storageclass=true in "running-upgrade-738000"
	W0318 04:38:31.141784   17322 addons.go:243] addon default-storageclass should already be in state true
	I0318 04:38:31.141794   17322 host.go:66] Checking if "running-upgrade-738000" exists ...
	I0318 04:38:31.146585   17322 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0318 04:38:31.150422   17322 addons.go:426] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0318 04:38:31.150431   17322 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0318 04:38:31.150440   17322 sshutil.go:53] new ssh client: &{IP:localhost Port:53280 SSHKeyPath:/Users/jenkins/minikube-integration/18429-15072/.minikube/machines/running-upgrade-738000/id_rsa Username:docker}
	I0318 04:38:31.151284   17322 addons.go:426] installing /etc/kubernetes/addons/storageclass.yaml
	I0318 04:38:31.151292   17322 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0318 04:38:31.151297   17322 sshutil.go:53] new ssh client: &{IP:localhost Port:53280 SSHKeyPath:/Users/jenkins/minikube-integration/18429-15072/.minikube/machines/running-upgrade-738000/id_rsa Username:docker}
	I0318 04:38:31.216516   17322 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0318 04:38:31.221767   17322 api_server.go:52] waiting for apiserver process to appear ...
	I0318 04:38:31.221811   17322 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 04:38:31.225549   17322 api_server.go:72] duration metric: took 99.055334ms to wait for apiserver process to appear ...
	I0318 04:38:31.225558   17322 api_server.go:88] waiting for apiserver healthz status ...
	I0318 04:38:31.225564   17322 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0318 04:38:31.249463   17322 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0318 04:38:31.254708   17322 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0318 04:38:31.871821   17465 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0318 04:38:31.871935   17465 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0318 04:38:31.884850   17465 logs.go:276] 2 containers: [32dc048c4476 7dacaac7f891]
	I0318 04:38:31.884926   17465 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0318 04:38:31.895440   17465 logs.go:276] 2 containers: [8c963566b500 8b75879fc7bf]
	I0318 04:38:31.895518   17465 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0318 04:38:31.905552   17465 logs.go:276] 1 containers: [b66f543335d1]
	I0318 04:38:31.905634   17465 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0318 04:38:31.916042   17465 logs.go:276] 2 containers: [e8b1c1b2cd19 d579e22e148e]
	I0318 04:38:31.916114   17465 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0318 04:38:31.926937   17465 logs.go:276] 1 containers: [372e2774400e]
	I0318 04:38:31.927010   17465 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0318 04:38:31.943917   17465 logs.go:276] 2 containers: [0be8d71b93fd fb25a67bf414]
	I0318 04:38:31.943999   17465 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0318 04:38:31.954367   17465 logs.go:276] 0 containers: []
	W0318 04:38:31.954378   17465 logs.go:278] No container was found matching "kindnet"
	I0318 04:38:31.954441   17465 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0318 04:38:31.965358   17465 logs.go:276] 2 containers: [81ae28fc685b 9e2192628e0f]
	I0318 04:38:31.965378   17465 logs.go:123] Gathering logs for dmesg ...
	I0318 04:38:31.965384   17465 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 04:38:31.969950   17465 logs.go:123] Gathering logs for describe nodes ...
	I0318 04:38:31.969960   17465 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0318 04:38:32.084431   17465 logs.go:123] Gathering logs for kube-apiserver [7dacaac7f891] ...
	I0318 04:38:32.084445   17465 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7dacaac7f891"
	I0318 04:38:32.125163   17465 logs.go:123] Gathering logs for kube-scheduler [e8b1c1b2cd19] ...
	I0318 04:38:32.125179   17465 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e8b1c1b2cd19"
	I0318 04:38:32.137388   17465 logs.go:123] Gathering logs for kube-controller-manager [fb25a67bf414] ...
	I0318 04:38:32.137398   17465 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fb25a67bf414"
	I0318 04:38:32.150034   17465 logs.go:123] Gathering logs for storage-provisioner [81ae28fc685b] ...
	I0318 04:38:32.150047   17465 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 81ae28fc685b"
	I0318 04:38:32.162200   17465 logs.go:123] Gathering logs for container status ...
	I0318 04:38:32.162212   17465 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 04:38:32.175259   17465 logs.go:123] Gathering logs for kubelet ...
	I0318 04:38:32.175276   17465 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 04:38:32.212513   17465 logs.go:123] Gathering logs for kube-apiserver [32dc048c4476] ...
	I0318 04:38:32.212526   17465 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 32dc048c4476"
	I0318 04:38:32.226855   17465 logs.go:123] Gathering logs for etcd [8b75879fc7bf] ...
	I0318 04:38:32.226867   17465 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8b75879fc7bf"
	I0318 04:38:32.241525   17465 logs.go:123] Gathering logs for coredns [b66f543335d1] ...
	I0318 04:38:32.241535   17465 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b66f543335d1"
	I0318 04:38:32.253227   17465 logs.go:123] Gathering logs for kube-proxy [372e2774400e] ...
	I0318 04:38:32.253239   17465 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 372e2774400e"
	I0318 04:38:32.265619   17465 logs.go:123] Gathering logs for kube-controller-manager [0be8d71b93fd] ...
	I0318 04:38:32.265628   17465 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0be8d71b93fd"
	I0318 04:38:32.283360   17465 logs.go:123] Gathering logs for storage-provisioner [9e2192628e0f] ...
	I0318 04:38:32.283370   17465 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9e2192628e0f"
	I0318 04:38:32.294919   17465 logs.go:123] Gathering logs for etcd [8c963566b500] ...
	I0318 04:38:32.294939   17465 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8c963566b500"
	I0318 04:38:32.309204   17465 logs.go:123] Gathering logs for kube-scheduler [d579e22e148e] ...
	I0318 04:38:32.309216   17465 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d579e22e148e"
	I0318 04:38:32.325002   17465 logs.go:123] Gathering logs for Docker ...
	I0318 04:38:32.325012   17465 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0318 04:38:36.227557   17322 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0318 04:38:36.227593   17322 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0318 04:38:34.852520   17465 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0318 04:38:41.227741   17322 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0318 04:38:41.227761   17322 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0318 04:38:39.854773   17465 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0318 04:38:39.855253   17465 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0318 04:38:39.895623   17465 logs.go:276] 2 containers: [32dc048c4476 7dacaac7f891]
	I0318 04:38:39.895773   17465 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0318 04:38:39.916781   17465 logs.go:276] 2 containers: [8c963566b500 8b75879fc7bf]
	I0318 04:38:39.916910   17465 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0318 04:38:39.931610   17465 logs.go:276] 1 containers: [b66f543335d1]
	I0318 04:38:39.931703   17465 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0318 04:38:39.944865   17465 logs.go:276] 2 containers: [e8b1c1b2cd19 d579e22e148e]
	I0318 04:38:39.944956   17465 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0318 04:38:39.960346   17465 logs.go:276] 1 containers: [372e2774400e]
	I0318 04:38:39.960425   17465 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0318 04:38:39.971783   17465 logs.go:276] 2 containers: [0be8d71b93fd fb25a67bf414]
	I0318 04:38:39.971867   17465 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0318 04:38:39.986806   17465 logs.go:276] 0 containers: []
	W0318 04:38:39.986818   17465 logs.go:278] No container was found matching "kindnet"
	I0318 04:38:39.986882   17465 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0318 04:38:39.997161   17465 logs.go:276] 2 containers: [81ae28fc685b 9e2192628e0f]
	I0318 04:38:39.997181   17465 logs.go:123] Gathering logs for describe nodes ...
	I0318 04:38:39.997187   17465 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0318 04:38:40.032353   17465 logs.go:123] Gathering logs for kube-apiserver [7dacaac7f891] ...
	I0318 04:38:40.032366   17465 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7dacaac7f891"
	I0318 04:38:40.072187   17465 logs.go:123] Gathering logs for kube-scheduler [e8b1c1b2cd19] ...
	I0318 04:38:40.072200   17465 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e8b1c1b2cd19"
	I0318 04:38:40.084310   17465 logs.go:123] Gathering logs for kube-proxy [372e2774400e] ...
	I0318 04:38:40.084323   17465 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 372e2774400e"
	I0318 04:38:40.095944   17465 logs.go:123] Gathering logs for dmesg ...
	I0318 04:38:40.095955   17465 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 04:38:40.100240   17465 logs.go:123] Gathering logs for kube-scheduler [d579e22e148e] ...
	I0318 04:38:40.100246   17465 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d579e22e148e"
	I0318 04:38:40.115392   17465 logs.go:123] Gathering logs for kube-controller-manager [0be8d71b93fd] ...
	I0318 04:38:40.115403   17465 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0be8d71b93fd"
	I0318 04:38:40.133954   17465 logs.go:123] Gathering logs for kube-controller-manager [fb25a67bf414] ...
	I0318 04:38:40.133964   17465 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fb25a67bf414"
	I0318 04:38:40.146194   17465 logs.go:123] Gathering logs for etcd [8c963566b500] ...
	I0318 04:38:40.146205   17465 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8c963566b500"
	I0318 04:38:40.160556   17465 logs.go:123] Gathering logs for etcd [8b75879fc7bf] ...
	I0318 04:38:40.160566   17465 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8b75879fc7bf"
	I0318 04:38:40.174919   17465 logs.go:123] Gathering logs for storage-provisioner [81ae28fc685b] ...
	I0318 04:38:40.174929   17465 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 81ae28fc685b"
	I0318 04:38:40.186178   17465 logs.go:123] Gathering logs for storage-provisioner [9e2192628e0f] ...
	I0318 04:38:40.186190   17465 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9e2192628e0f"
	I0318 04:38:40.197676   17465 logs.go:123] Gathering logs for container status ...
	I0318 04:38:40.197691   17465 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 04:38:40.209748   17465 logs.go:123] Gathering logs for kubelet ...
	I0318 04:38:40.209764   17465 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 04:38:40.248506   17465 logs.go:123] Gathering logs for kube-apiserver [32dc048c4476] ...
	I0318 04:38:40.248514   17465 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 32dc048c4476"
	I0318 04:38:40.262526   17465 logs.go:123] Gathering logs for coredns [b66f543335d1] ...
	I0318 04:38:40.262537   17465 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b66f543335d1"
	I0318 04:38:40.273792   17465 logs.go:123] Gathering logs for Docker ...
	I0318 04:38:40.273801   17465 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0318 04:38:42.800542   17465 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0318 04:38:46.228407   17322 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0318 04:38:46.228430   17322 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0318 04:38:47.802803   17465 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0318 04:38:47.803155   17465 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0318 04:38:47.843477   17465 logs.go:276] 2 containers: [32dc048c4476 7dacaac7f891]
	I0318 04:38:47.843618   17465 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0318 04:38:47.864138   17465 logs.go:276] 2 containers: [8c963566b500 8b75879fc7bf]
	I0318 04:38:47.864225   17465 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0318 04:38:47.879045   17465 logs.go:276] 1 containers: [b66f543335d1]
	I0318 04:38:47.879122   17465 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0318 04:38:47.891595   17465 logs.go:276] 2 containers: [e8b1c1b2cd19 d579e22e148e]
	I0318 04:38:47.891684   17465 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0318 04:38:47.902132   17465 logs.go:276] 1 containers: [372e2774400e]
	I0318 04:38:47.902199   17465 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0318 04:38:47.912951   17465 logs.go:276] 2 containers: [0be8d71b93fd fb25a67bf414]
	I0318 04:38:47.913013   17465 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0318 04:38:47.923114   17465 logs.go:276] 0 containers: []
	W0318 04:38:47.923123   17465 logs.go:278] No container was found matching "kindnet"
	I0318 04:38:47.923174   17465 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0318 04:38:47.937845   17465 logs.go:276] 2 containers: [81ae28fc685b 9e2192628e0f]
	I0318 04:38:47.937865   17465 logs.go:123] Gathering logs for etcd [8b75879fc7bf] ...
	I0318 04:38:47.937872   17465 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8b75879fc7bf"
	I0318 04:38:47.952257   17465 logs.go:123] Gathering logs for storage-provisioner [81ae28fc685b] ...
	I0318 04:38:47.952268   17465 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 81ae28fc685b"
	I0318 04:38:47.963898   17465 logs.go:123] Gathering logs for Docker ...
	I0318 04:38:47.963909   17465 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0318 04:38:47.988868   17465 logs.go:123] Gathering logs for kube-apiserver [7dacaac7f891] ...
	I0318 04:38:47.988877   17465 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7dacaac7f891"
	I0318 04:38:48.026252   17465 logs.go:123] Gathering logs for etcd [8c963566b500] ...
	I0318 04:38:48.026263   17465 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8c963566b500"
	I0318 04:38:48.039866   17465 logs.go:123] Gathering logs for kube-proxy [372e2774400e] ...
	I0318 04:38:48.039876   17465 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 372e2774400e"
	I0318 04:38:48.052017   17465 logs.go:123] Gathering logs for storage-provisioner [9e2192628e0f] ...
	I0318 04:38:48.052028   17465 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9e2192628e0f"
	I0318 04:38:48.063548   17465 logs.go:123] Gathering logs for kube-controller-manager [0be8d71b93fd] ...
	I0318 04:38:48.063566   17465 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0be8d71b93fd"
	I0318 04:38:48.081243   17465 logs.go:123] Gathering logs for kubelet ...
	I0318 04:38:48.081254   17465 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 04:38:48.120127   17465 logs.go:123] Gathering logs for dmesg ...
	I0318 04:38:48.120146   17465 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 04:38:48.124592   17465 logs.go:123] Gathering logs for describe nodes ...
	I0318 04:38:48.124602   17465 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0318 04:38:48.160946   17465 logs.go:123] Gathering logs for kube-scheduler [e8b1c1b2cd19] ...
	I0318 04:38:48.160958   17465 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e8b1c1b2cd19"
	I0318 04:38:48.173661   17465 logs.go:123] Gathering logs for kube-scheduler [d579e22e148e] ...
	I0318 04:38:48.173672   17465 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d579e22e148e"
	I0318 04:38:48.188570   17465 logs.go:123] Gathering logs for kube-apiserver [32dc048c4476] ...
	I0318 04:38:48.188580   17465 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 32dc048c4476"
	I0318 04:38:48.202059   17465 logs.go:123] Gathering logs for coredns [b66f543335d1] ...
	I0318 04:38:48.202069   17465 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b66f543335d1"
	I0318 04:38:48.212939   17465 logs.go:123] Gathering logs for kube-controller-manager [fb25a67bf414] ...
	I0318 04:38:48.212953   17465 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fb25a67bf414"
	I0318 04:38:48.224903   17465 logs.go:123] Gathering logs for container status ...
	I0318 04:38:48.224917   17465 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 04:38:51.228811   17322 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0318 04:38:51.228853   17322 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0318 04:38:50.738937   17465 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0318 04:38:56.229441   17322 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0318 04:38:56.229470   17322 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0318 04:38:55.741477   17465 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0318 04:38:55.741674   17465 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0318 04:38:55.760073   17465 logs.go:276] 2 containers: [32dc048c4476 7dacaac7f891]
	I0318 04:38:55.760179   17465 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0318 04:38:55.773988   17465 logs.go:276] 2 containers: [8c963566b500 8b75879fc7bf]
	I0318 04:38:55.774067   17465 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0318 04:38:55.785524   17465 logs.go:276] 1 containers: [b66f543335d1]
	I0318 04:38:55.785597   17465 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0318 04:38:55.795854   17465 logs.go:276] 2 containers: [e8b1c1b2cd19 d579e22e148e]
	I0318 04:38:55.795944   17465 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0318 04:38:55.805782   17465 logs.go:276] 1 containers: [372e2774400e]
	I0318 04:38:55.805849   17465 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0318 04:38:55.816435   17465 logs.go:276] 2 containers: [0be8d71b93fd fb25a67bf414]
	I0318 04:38:55.816523   17465 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0318 04:38:55.827052   17465 logs.go:276] 0 containers: []
	W0318 04:38:55.827067   17465 logs.go:278] No container was found matching "kindnet"
	I0318 04:38:55.827131   17465 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0318 04:38:55.837195   17465 logs.go:276] 2 containers: [81ae28fc685b 9e2192628e0f]
	I0318 04:38:55.837213   17465 logs.go:123] Gathering logs for describe nodes ...
	I0318 04:38:55.837219   17465 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0318 04:38:55.874726   17465 logs.go:123] Gathering logs for container status ...
	I0318 04:38:55.874740   17465 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 04:38:55.887110   17465 logs.go:123] Gathering logs for kube-scheduler [e8b1c1b2cd19] ...
	I0318 04:38:55.887125   17465 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e8b1c1b2cd19"
	I0318 04:38:55.902767   17465 logs.go:123] Gathering logs for kube-scheduler [d579e22e148e] ...
	I0318 04:38:55.902780   17465 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d579e22e148e"
	I0318 04:38:55.917608   17465 logs.go:123] Gathering logs for storage-provisioner [9e2192628e0f] ...
	I0318 04:38:55.917617   17465 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9e2192628e0f"
	I0318 04:38:55.929019   17465 logs.go:123] Gathering logs for Docker ...
	I0318 04:38:55.929030   17465 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0318 04:38:55.954297   17465 logs.go:123] Gathering logs for kube-apiserver [7dacaac7f891] ...
	I0318 04:38:55.954320   17465 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7dacaac7f891"
	I0318 04:38:55.995047   17465 logs.go:123] Gathering logs for etcd [8b75879fc7bf] ...
	I0318 04:38:55.995058   17465 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8b75879fc7bf"
	I0318 04:38:56.009954   17465 logs.go:123] Gathering logs for kube-apiserver [32dc048c4476] ...
	I0318 04:38:56.009969   17465 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 32dc048c4476"
	I0318 04:38:56.023840   17465 logs.go:123] Gathering logs for etcd [8c963566b500] ...
	I0318 04:38:56.023850   17465 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8c963566b500"
	I0318 04:38:56.038244   17465 logs.go:123] Gathering logs for coredns [b66f543335d1] ...
	I0318 04:38:56.038259   17465 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b66f543335d1"
	I0318 04:38:56.050309   17465 logs.go:123] Gathering logs for kube-controller-manager [0be8d71b93fd] ...
	I0318 04:38:56.050321   17465 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0be8d71b93fd"
	I0318 04:38:56.068053   17465 logs.go:123] Gathering logs for storage-provisioner [81ae28fc685b] ...
	I0318 04:38:56.068065   17465 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 81ae28fc685b"
	I0318 04:38:56.080053   17465 logs.go:123] Gathering logs for kubelet ...
	I0318 04:38:56.080063   17465 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 04:38:56.117791   17465 logs.go:123] Gathering logs for dmesg ...
	I0318 04:38:56.117802   17465 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 04:38:56.121959   17465 logs.go:123] Gathering logs for kube-proxy [372e2774400e] ...
	I0318 04:38:56.121966   17465 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 372e2774400e"
	I0318 04:38:56.133713   17465 logs.go:123] Gathering logs for kube-controller-manager [fb25a67bf414] ...
	I0318 04:38:56.133727   17465 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fb25a67bf414"
	I0318 04:38:58.647882   17465 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0318 04:39:01.230631   17322 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0318 04:39:01.230685   17322 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	W0318 04:39:01.609657   17322 out.go:239] ! Enabling 'default-storageclass' returned an error: running callbacks: [Error making standard the default storage class: Error listing StorageClasses: Get "https://10.0.2.15:8443/apis/storage.k8s.io/v1/storageclasses": dial tcp 10.0.2.15:8443: i/o timeout]
	I0318 04:39:01.618853   17322 out.go:177] * Enabled addons: storage-provisioner
	I0318 04:39:01.626831   17322 addons.go:505] duration metric: took 30.501336917s for enable addons: enabled=[storage-provisioner]
	I0318 04:39:03.649089   17465 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0318 04:39:03.649302   17465 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0318 04:39:03.671318   17465 logs.go:276] 2 containers: [32dc048c4476 7dacaac7f891]
	I0318 04:39:03.671409   17465 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0318 04:39:03.683689   17465 logs.go:276] 2 containers: [8c963566b500 8b75879fc7bf]
	I0318 04:39:03.683776   17465 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0318 04:39:03.697846   17465 logs.go:276] 1 containers: [b66f543335d1]
	I0318 04:39:03.697913   17465 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0318 04:39:03.708142   17465 logs.go:276] 2 containers: [e8b1c1b2cd19 d579e22e148e]
	I0318 04:39:03.708218   17465 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0318 04:39:03.718631   17465 logs.go:276] 1 containers: [372e2774400e]
	I0318 04:39:03.718698   17465 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0318 04:39:03.729300   17465 logs.go:276] 2 containers: [0be8d71b93fd fb25a67bf414]
	I0318 04:39:03.729368   17465 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0318 04:39:03.739114   17465 logs.go:276] 0 containers: []
	W0318 04:39:03.739126   17465 logs.go:278] No container was found matching "kindnet"
	I0318 04:39:03.739183   17465 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0318 04:39:03.749422   17465 logs.go:276] 2 containers: [81ae28fc685b 9e2192628e0f]
	I0318 04:39:03.749440   17465 logs.go:123] Gathering logs for kube-proxy [372e2774400e] ...
	I0318 04:39:03.749448   17465 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 372e2774400e"
	I0318 04:39:03.761519   17465 logs.go:123] Gathering logs for kube-controller-manager [0be8d71b93fd] ...
	I0318 04:39:03.761530   17465 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0be8d71b93fd"
	I0318 04:39:03.778709   17465 logs.go:123] Gathering logs for container status ...
	I0318 04:39:03.778719   17465 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 04:39:03.790749   17465 logs.go:123] Gathering logs for etcd [8b75879fc7bf] ...
	I0318 04:39:03.790762   17465 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8b75879fc7bf"
	I0318 04:39:03.805123   17465 logs.go:123] Gathering logs for kube-scheduler [e8b1c1b2cd19] ...
	I0318 04:39:03.805134   17465 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e8b1c1b2cd19"
	I0318 04:39:03.816851   17465 logs.go:123] Gathering logs for kube-apiserver [32dc048c4476] ...
	I0318 04:39:03.816861   17465 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 32dc048c4476"
	I0318 04:39:03.830767   17465 logs.go:123] Gathering logs for kube-scheduler [d579e22e148e] ...
	I0318 04:39:03.830776   17465 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d579e22e148e"
	I0318 04:39:03.846050   17465 logs.go:123] Gathering logs for storage-provisioner [9e2192628e0f] ...
	I0318 04:39:03.846061   17465 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9e2192628e0f"
	I0318 04:39:03.857647   17465 logs.go:123] Gathering logs for Docker ...
	I0318 04:39:03.857657   17465 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0318 04:39:03.882769   17465 logs.go:123] Gathering logs for dmesg ...
	I0318 04:39:03.882777   17465 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 04:39:03.887002   17465 logs.go:123] Gathering logs for storage-provisioner [81ae28fc685b] ...
	I0318 04:39:03.887008   17465 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 81ae28fc685b"
	I0318 04:39:03.898699   17465 logs.go:123] Gathering logs for kube-apiserver [7dacaac7f891] ...
	I0318 04:39:03.898709   17465 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7dacaac7f891"
	I0318 04:39:03.935578   17465 logs.go:123] Gathering logs for etcd [8c963566b500] ...
	I0318 04:39:03.935589   17465 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8c963566b500"
	I0318 04:39:03.949298   17465 logs.go:123] Gathering logs for coredns [b66f543335d1] ...
	I0318 04:39:03.949308   17465 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b66f543335d1"
	I0318 04:39:03.960688   17465 logs.go:123] Gathering logs for kube-controller-manager [fb25a67bf414] ...
	I0318 04:39:03.960699   17465 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fb25a67bf414"
	I0318 04:39:03.973483   17465 logs.go:123] Gathering logs for kubelet ...
	I0318 04:39:03.973496   17465 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 04:39:04.012104   17465 logs.go:123] Gathering logs for describe nodes ...
	I0318 04:39:04.012113   17465 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0318 04:39:06.232228   17322 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0318 04:39:06.232273   17322 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0318 04:39:06.550169   17465 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0318 04:39:11.234180   17322 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0318 04:39:11.234242   17322 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0318 04:39:11.551313   17465 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0318 04:39:11.551481   17465 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0318 04:39:11.570865   17465 logs.go:276] 2 containers: [32dc048c4476 7dacaac7f891]
	I0318 04:39:11.570967   17465 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0318 04:39:11.584298   17465 logs.go:276] 2 containers: [8c963566b500 8b75879fc7bf]
	I0318 04:39:11.584373   17465 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0318 04:39:11.595453   17465 logs.go:276] 1 containers: [b66f543335d1]
	I0318 04:39:11.595527   17465 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0318 04:39:11.606342   17465 logs.go:276] 2 containers: [e8b1c1b2cd19 d579e22e148e]
	I0318 04:39:11.606414   17465 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0318 04:39:11.616834   17465 logs.go:276] 1 containers: [372e2774400e]
	I0318 04:39:11.616911   17465 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0318 04:39:11.628434   17465 logs.go:276] 2 containers: [0be8d71b93fd fb25a67bf414]
	I0318 04:39:11.628505   17465 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0318 04:39:11.638616   17465 logs.go:276] 0 containers: []
	W0318 04:39:11.638625   17465 logs.go:278] No container was found matching "kindnet"
	I0318 04:39:11.638682   17465 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0318 04:39:11.648976   17465 logs.go:276] 2 containers: [81ae28fc685b 9e2192628e0f]
	I0318 04:39:11.648993   17465 logs.go:123] Gathering logs for kubelet ...
	I0318 04:39:11.648997   17465 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 04:39:11.685876   17465 logs.go:123] Gathering logs for dmesg ...
	I0318 04:39:11.685887   17465 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 04:39:11.689845   17465 logs.go:123] Gathering logs for coredns [b66f543335d1] ...
	I0318 04:39:11.689853   17465 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b66f543335d1"
	I0318 04:39:11.700562   17465 logs.go:123] Gathering logs for kube-controller-manager [0be8d71b93fd] ...
	I0318 04:39:11.700573   17465 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0be8d71b93fd"
	I0318 04:39:11.717738   17465 logs.go:123] Gathering logs for storage-provisioner [9e2192628e0f] ...
	I0318 04:39:11.717754   17465 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9e2192628e0f"
	I0318 04:39:11.728831   17465 logs.go:123] Gathering logs for etcd [8b75879fc7bf] ...
	I0318 04:39:11.728843   17465 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8b75879fc7bf"
	I0318 04:39:11.743824   17465 logs.go:123] Gathering logs for kube-scheduler [e8b1c1b2cd19] ...
	I0318 04:39:11.743834   17465 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e8b1c1b2cd19"
	I0318 04:39:11.755658   17465 logs.go:123] Gathering logs for kube-scheduler [d579e22e148e] ...
	I0318 04:39:11.755668   17465 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d579e22e148e"
	I0318 04:39:11.771197   17465 logs.go:123] Gathering logs for kube-controller-manager [fb25a67bf414] ...
	I0318 04:39:11.771212   17465 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fb25a67bf414"
	I0318 04:39:11.783369   17465 logs.go:123] Gathering logs for storage-provisioner [81ae28fc685b] ...
	I0318 04:39:11.783383   17465 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 81ae28fc685b"
	I0318 04:39:11.794652   17465 logs.go:123] Gathering logs for container status ...
	I0318 04:39:11.794665   17465 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 04:39:11.807200   17465 logs.go:123] Gathering logs for describe nodes ...
	I0318 04:39:11.807210   17465 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0318 04:39:11.842761   17465 logs.go:123] Gathering logs for kube-apiserver [32dc048c4476] ...
	I0318 04:39:11.842773   17465 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 32dc048c4476"
	I0318 04:39:11.864962   17465 logs.go:123] Gathering logs for kube-apiserver [7dacaac7f891] ...
	I0318 04:39:11.864974   17465 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7dacaac7f891"
	I0318 04:39:11.902372   17465 logs.go:123] Gathering logs for etcd [8c963566b500] ...
	I0318 04:39:11.902384   17465 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8c963566b500"
	I0318 04:39:11.916696   17465 logs.go:123] Gathering logs for kube-proxy [372e2774400e] ...
	I0318 04:39:11.916707   17465 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 372e2774400e"
	I0318 04:39:11.932703   17465 logs.go:123] Gathering logs for Docker ...
	I0318 04:39:11.932719   17465 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0318 04:39:14.458173   17465 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0318 04:39:16.234465   17322 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0318 04:39:16.234561   17322 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0318 04:39:19.460282   17465 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0318 04:39:19.460453   17465 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0318 04:39:19.475172   17465 logs.go:276] 2 containers: [32dc048c4476 7dacaac7f891]
	I0318 04:39:19.475268   17465 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0318 04:39:19.487061   17465 logs.go:276] 2 containers: [8c963566b500 8b75879fc7bf]
	I0318 04:39:19.487135   17465 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0318 04:39:19.497666   17465 logs.go:276] 1 containers: [b66f543335d1]
	I0318 04:39:19.497739   17465 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0318 04:39:19.508296   17465 logs.go:276] 2 containers: [e8b1c1b2cd19 d579e22e148e]
	I0318 04:39:19.508375   17465 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0318 04:39:19.518706   17465 logs.go:276] 1 containers: [372e2774400e]
	I0318 04:39:19.518778   17465 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0318 04:39:19.529436   17465 logs.go:276] 2 containers: [0be8d71b93fd fb25a67bf414]
	I0318 04:39:19.529504   17465 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0318 04:39:19.549331   17465 logs.go:276] 0 containers: []
	W0318 04:39:19.549343   17465 logs.go:278] No container was found matching "kindnet"
	I0318 04:39:19.549407   17465 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0318 04:39:19.561322   17465 logs.go:276] 2 containers: [81ae28fc685b 9e2192628e0f]
	I0318 04:39:19.561338   17465 logs.go:123] Gathering logs for describe nodes ...
	I0318 04:39:19.561344   17465 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0318 04:39:19.596609   17465 logs.go:123] Gathering logs for kube-scheduler [e8b1c1b2cd19] ...
	I0318 04:39:19.596624   17465 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e8b1c1b2cd19"
	I0318 04:39:19.610550   17465 logs.go:123] Gathering logs for etcd [8c963566b500] ...
	I0318 04:39:19.610562   17465 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8c963566b500"
	I0318 04:39:19.624489   17465 logs.go:123] Gathering logs for etcd [8b75879fc7bf] ...
	I0318 04:39:19.624501   17465 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8b75879fc7bf"
	I0318 04:39:19.639603   17465 logs.go:123] Gathering logs for kube-controller-manager [0be8d71b93fd] ...
	I0318 04:39:19.639615   17465 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0be8d71b93fd"
	I0318 04:39:19.657353   17465 logs.go:123] Gathering logs for storage-provisioner [81ae28fc685b] ...
	I0318 04:39:19.657364   17465 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 81ae28fc685b"
	I0318 04:39:19.668928   17465 logs.go:123] Gathering logs for storage-provisioner [9e2192628e0f] ...
	I0318 04:39:19.668938   17465 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9e2192628e0f"
	I0318 04:39:19.680302   17465 logs.go:123] Gathering logs for dmesg ...
	I0318 04:39:19.680312   17465 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 04:39:19.684401   17465 logs.go:123] Gathering logs for kube-apiserver [32dc048c4476] ...
	I0318 04:39:19.684408   17465 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 32dc048c4476"
	I0318 04:39:19.701785   17465 logs.go:123] Gathering logs for kube-apiserver [7dacaac7f891] ...
	I0318 04:39:19.701795   17465 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7dacaac7f891"
	I0318 04:39:21.236949   17322 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0318 04:39:21.236981   17322 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0318 04:39:19.739906   17465 logs.go:123] Gathering logs for kube-proxy [372e2774400e] ...
	I0318 04:39:19.739917   17465 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 372e2774400e"
	I0318 04:39:19.752306   17465 logs.go:123] Gathering logs for kubelet ...
	I0318 04:39:19.752319   17465 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 04:39:19.788915   17465 logs.go:123] Gathering logs for coredns [b66f543335d1] ...
	I0318 04:39:19.788923   17465 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b66f543335d1"
	I0318 04:39:19.802085   17465 logs.go:123] Gathering logs for kube-scheduler [d579e22e148e] ...
	I0318 04:39:19.802096   17465 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d579e22e148e"
	I0318 04:39:19.816633   17465 logs.go:123] Gathering logs for kube-controller-manager [fb25a67bf414] ...
	I0318 04:39:19.816644   17465 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fb25a67bf414"
	I0318 04:39:19.828248   17465 logs.go:123] Gathering logs for Docker ...
	I0318 04:39:19.828258   17465 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0318 04:39:19.852941   17465 logs.go:123] Gathering logs for container status ...
	I0318 04:39:19.852949   17465 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 04:39:22.367597   17465 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0318 04:39:26.239074   17322 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0318 04:39:26.239125   17322 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0318 04:39:27.369734   17465 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0318 04:39:27.369857   17465 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0318 04:39:27.382884   17465 logs.go:276] 2 containers: [32dc048c4476 7dacaac7f891]
	I0318 04:39:27.382966   17465 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0318 04:39:27.393456   17465 logs.go:276] 2 containers: [8c963566b500 8b75879fc7bf]
	I0318 04:39:27.393530   17465 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0318 04:39:27.403454   17465 logs.go:276] 1 containers: [b66f543335d1]
	I0318 04:39:27.403526   17465 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0318 04:39:27.414001   17465 logs.go:276] 2 containers: [e8b1c1b2cd19 d579e22e148e]
	I0318 04:39:27.414071   17465 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0318 04:39:27.424261   17465 logs.go:276] 1 containers: [372e2774400e]
	I0318 04:39:27.424327   17465 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0318 04:39:27.440663   17465 logs.go:276] 2 containers: [0be8d71b93fd fb25a67bf414]
	I0318 04:39:27.440734   17465 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0318 04:39:27.450937   17465 logs.go:276] 0 containers: []
	W0318 04:39:27.450956   17465 logs.go:278] No container was found matching "kindnet"
	I0318 04:39:27.451020   17465 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0318 04:39:27.466540   17465 logs.go:276] 2 containers: [81ae28fc685b 9e2192628e0f]
	I0318 04:39:27.466560   17465 logs.go:123] Gathering logs for kube-apiserver [7dacaac7f891] ...
	I0318 04:39:27.466565   17465 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7dacaac7f891"
	I0318 04:39:27.503474   17465 logs.go:123] Gathering logs for kube-controller-manager [0be8d71b93fd] ...
	I0318 04:39:27.503485   17465 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0be8d71b93fd"
	I0318 04:39:27.521206   17465 logs.go:123] Gathering logs for kube-scheduler [d579e22e148e] ...
	I0318 04:39:27.521214   17465 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d579e22e148e"
	I0318 04:39:27.536923   17465 logs.go:123] Gathering logs for container status ...
	I0318 04:39:27.536936   17465 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 04:39:27.549369   17465 logs.go:123] Gathering logs for dmesg ...
	I0318 04:39:27.549379   17465 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 04:39:27.553488   17465 logs.go:123] Gathering logs for kube-apiserver [32dc048c4476] ...
	I0318 04:39:27.553501   17465 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 32dc048c4476"
	I0318 04:39:27.568491   17465 logs.go:123] Gathering logs for etcd [8c963566b500] ...
	I0318 04:39:27.568503   17465 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8c963566b500"
	I0318 04:39:27.582158   17465 logs.go:123] Gathering logs for kubelet ...
	I0318 04:39:27.582170   17465 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 04:39:27.619666   17465 logs.go:123] Gathering logs for describe nodes ...
	I0318 04:39:27.619677   17465 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0318 04:39:27.656374   17465 logs.go:123] Gathering logs for kube-controller-manager [fb25a67bf414] ...
	I0318 04:39:27.656385   17465 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fb25a67bf414"
	I0318 04:39:27.672436   17465 logs.go:123] Gathering logs for kube-proxy [372e2774400e] ...
	I0318 04:39:27.672448   17465 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 372e2774400e"
	I0318 04:39:27.683613   17465 logs.go:123] Gathering logs for storage-provisioner [81ae28fc685b] ...
	I0318 04:39:27.683624   17465 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 81ae28fc685b"
	I0318 04:39:27.695476   17465 logs.go:123] Gathering logs for storage-provisioner [9e2192628e0f] ...
	I0318 04:39:27.695488   17465 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9e2192628e0f"
	I0318 04:39:27.707169   17465 logs.go:123] Gathering logs for Docker ...
	I0318 04:39:27.707182   17465 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0318 04:39:27.731404   17465 logs.go:123] Gathering logs for etcd [8b75879fc7bf] ...
	I0318 04:39:27.731413   17465 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8b75879fc7bf"
	I0318 04:39:27.745856   17465 logs.go:123] Gathering logs for coredns [b66f543335d1] ...
	I0318 04:39:27.745867   17465 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b66f543335d1"
	I0318 04:39:27.757258   17465 logs.go:123] Gathering logs for kube-scheduler [e8b1c1b2cd19] ...
	I0318 04:39:27.757269   17465 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e8b1c1b2cd19"
	I0318 04:39:31.241256   17322 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0318 04:39:31.241437   17322 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0318 04:39:31.252029   17322 logs.go:276] 1 containers: [d454e6154049]
	I0318 04:39:31.252092   17322 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0318 04:39:31.262171   17322 logs.go:276] 1 containers: [8046e42578d2]
	I0318 04:39:31.262250   17322 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0318 04:39:31.272772   17322 logs.go:276] 2 containers: [367d0316359f 3a24458b86a4]
	I0318 04:39:31.272846   17322 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0318 04:39:31.283853   17322 logs.go:276] 1 containers: [894b6a0a0702]
	I0318 04:39:31.283924   17322 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0318 04:39:31.294042   17322 logs.go:276] 1 containers: [04d6cdf60161]
	I0318 04:39:31.294142   17322 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0318 04:39:31.304914   17322 logs.go:276] 1 containers: [22a920f51952]
	I0318 04:39:31.304986   17322 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0318 04:39:31.315434   17322 logs.go:276] 0 containers: []
	W0318 04:39:31.315446   17322 logs.go:278] No container was found matching "kindnet"
	I0318 04:39:31.315505   17322 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0318 04:39:31.325872   17322 logs.go:276] 1 containers: [5bfb08f2c96a]
	I0318 04:39:31.325885   17322 logs.go:123] Gathering logs for kubelet ...
	I0318 04:39:31.325890   17322 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0318 04:39:31.342277   17322 logs.go:138] Found kubelet problem: Mar 18 11:34:35 running-upgrade-738000 kubelet[3738]: W0318 11:34:35.864044    3738 reflector.go:324] object-"kube-system"/"kube-proxy": failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:running-upgrade-738000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-738000' and this object
	W0318 04:39:31.342373   17322 logs.go:138] Found kubelet problem: Mar 18 11:34:35 running-upgrade-738000 kubelet[3738]: E0318 11:34:35.864102    3738 reflector.go:138] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:running-upgrade-738000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-738000' and this object
	I0318 04:39:31.359926   17322 logs.go:123] Gathering logs for dmesg ...
	I0318 04:39:31.359935   17322 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 04:39:31.364133   17322 logs.go:123] Gathering logs for etcd [8046e42578d2] ...
	I0318 04:39:31.364142   17322 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8046e42578d2"
	I0318 04:39:31.379242   17322 logs.go:123] Gathering logs for coredns [367d0316359f] ...
	I0318 04:39:31.379252   17322 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 367d0316359f"
	I0318 04:39:31.391199   17322 logs.go:123] Gathering logs for kube-scheduler [894b6a0a0702] ...
	I0318 04:39:31.391209   17322 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 894b6a0a0702"
	I0318 04:39:31.405854   17322 logs.go:123] Gathering logs for kube-controller-manager [22a920f51952] ...
	I0318 04:39:31.405868   17322 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 22a920f51952"
	I0318 04:39:31.423515   17322 logs.go:123] Gathering logs for Docker ...
	I0318 04:39:31.423525   17322 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0318 04:39:31.446939   17322 logs.go:123] Gathering logs for describe nodes ...
	I0318 04:39:31.446948   17322 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0318 04:39:31.482387   17322 logs.go:123] Gathering logs for kube-apiserver [d454e6154049] ...
	I0318 04:39:31.482401   17322 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d454e6154049"
	I0318 04:39:31.496896   17322 logs.go:123] Gathering logs for coredns [3a24458b86a4] ...
	I0318 04:39:31.496910   17322 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3a24458b86a4"
	I0318 04:39:31.508442   17322 logs.go:123] Gathering logs for kube-proxy [04d6cdf60161] ...
	I0318 04:39:31.508451   17322 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 04d6cdf60161"
	I0318 04:39:31.519806   17322 logs.go:123] Gathering logs for storage-provisioner [5bfb08f2c96a] ...
	I0318 04:39:31.519815   17322 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5bfb08f2c96a"
	I0318 04:39:31.531224   17322 logs.go:123] Gathering logs for container status ...
	I0318 04:39:31.531238   17322 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 04:39:31.542466   17322 out.go:304] Setting ErrFile to fd 2...
	I0318 04:39:31.542478   17322 out.go:338] TERM=,COLORTERM=, which probably does not support color
	W0318 04:39:31.542507   17322 out.go:239] X Problems detected in kubelet:
	W0318 04:39:31.542511   17322 out.go:239]   Mar 18 11:34:35 running-upgrade-738000 kubelet[3738]: W0318 11:34:35.864044    3738 reflector.go:324] object-"kube-system"/"kube-proxy": failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:running-upgrade-738000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-738000' and this object
	W0318 04:39:31.542515   17322 out.go:239]   Mar 18 11:34:35 running-upgrade-738000 kubelet[3738]: E0318 11:34:35.864102    3738 reflector.go:138] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:running-upgrade-738000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-738000' and this object
	I0318 04:39:31.542521   17322 out.go:304] Setting ErrFile to fd 2...
	I0318 04:39:31.542525   17322 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0318 04:39:30.269837   17465 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0318 04:39:35.272113   17465 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0318 04:39:35.272306   17465 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0318 04:39:35.285228   17465 logs.go:276] 2 containers: [32dc048c4476 7dacaac7f891]
	I0318 04:39:35.285303   17465 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0318 04:39:35.295713   17465 logs.go:276] 2 containers: [8c963566b500 8b75879fc7bf]
	I0318 04:39:35.295788   17465 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0318 04:39:35.306133   17465 logs.go:276] 1 containers: [b66f543335d1]
	I0318 04:39:35.306210   17465 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0318 04:39:35.316702   17465 logs.go:276] 2 containers: [e8b1c1b2cd19 d579e22e148e]
	I0318 04:39:35.316768   17465 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0318 04:39:35.327094   17465 logs.go:276] 1 containers: [372e2774400e]
	I0318 04:39:35.327159   17465 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0318 04:39:35.337923   17465 logs.go:276] 2 containers: [0be8d71b93fd fb25a67bf414]
	I0318 04:39:35.337997   17465 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0318 04:39:35.348206   17465 logs.go:276] 0 containers: []
	W0318 04:39:35.348219   17465 logs.go:278] No container was found matching "kindnet"
	I0318 04:39:35.348278   17465 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0318 04:39:35.359073   17465 logs.go:276] 2 containers: [81ae28fc685b 9e2192628e0f]
	I0318 04:39:35.359089   17465 logs.go:123] Gathering logs for kubelet ...
	I0318 04:39:35.359095   17465 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 04:39:35.396681   17465 logs.go:123] Gathering logs for kube-scheduler [d579e22e148e] ...
	I0318 04:39:35.396694   17465 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d579e22e148e"
	I0318 04:39:35.411560   17465 logs.go:123] Gathering logs for kube-proxy [372e2774400e] ...
	I0318 04:39:35.411571   17465 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 372e2774400e"
	I0318 04:39:35.422963   17465 logs.go:123] Gathering logs for kube-controller-manager [fb25a67bf414] ...
	I0318 04:39:35.422975   17465 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fb25a67bf414"
	I0318 04:39:35.434798   17465 logs.go:123] Gathering logs for storage-provisioner [81ae28fc685b] ...
	I0318 04:39:35.434809   17465 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 81ae28fc685b"
	I0318 04:39:35.446887   17465 logs.go:123] Gathering logs for kube-apiserver [7dacaac7f891] ...
	I0318 04:39:35.446898   17465 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7dacaac7f891"
	I0318 04:39:35.484181   17465 logs.go:123] Gathering logs for etcd [8b75879fc7bf] ...
	I0318 04:39:35.484195   17465 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8b75879fc7bf"
	I0318 04:39:35.498358   17465 logs.go:123] Gathering logs for coredns [b66f543335d1] ...
	I0318 04:39:35.498372   17465 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b66f543335d1"
	I0318 04:39:35.509528   17465 logs.go:123] Gathering logs for storage-provisioner [9e2192628e0f] ...
	I0318 04:39:35.509540   17465 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9e2192628e0f"
	I0318 04:39:35.525531   17465 logs.go:123] Gathering logs for container status ...
	I0318 04:39:35.525542   17465 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 04:39:35.537555   17465 logs.go:123] Gathering logs for describe nodes ...
	I0318 04:39:35.537569   17465 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0318 04:39:35.579150   17465 logs.go:123] Gathering logs for Docker ...
	I0318 04:39:35.579164   17465 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0318 04:39:35.604306   17465 logs.go:123] Gathering logs for dmesg ...
	I0318 04:39:35.604317   17465 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 04:39:35.608432   17465 logs.go:123] Gathering logs for kube-apiserver [32dc048c4476] ...
	I0318 04:39:35.608440   17465 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 32dc048c4476"
	I0318 04:39:35.626678   17465 logs.go:123] Gathering logs for etcd [8c963566b500] ...
	I0318 04:39:35.626693   17465 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8c963566b500"
	I0318 04:39:35.640899   17465 logs.go:123] Gathering logs for kube-scheduler [e8b1c1b2cd19] ...
	I0318 04:39:35.640910   17465 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e8b1c1b2cd19"
	I0318 04:39:35.659958   17465 logs.go:123] Gathering logs for kube-controller-manager [0be8d71b93fd] ...
	I0318 04:39:35.659968   17465 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0be8d71b93fd"
	I0318 04:39:38.182936   17465 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0318 04:39:41.545720   17322 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0318 04:39:43.185249   17465 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0318 04:39:43.185439   17465 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0318 04:39:43.203136   17465 logs.go:276] 2 containers: [32dc048c4476 7dacaac7f891]
	I0318 04:39:43.203218   17465 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0318 04:39:43.215614   17465 logs.go:276] 2 containers: [8c963566b500 8b75879fc7bf]
	I0318 04:39:43.215690   17465 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0318 04:39:43.226937   17465 logs.go:276] 1 containers: [b66f543335d1]
	I0318 04:39:43.227007   17465 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0318 04:39:43.240990   17465 logs.go:276] 2 containers: [e8b1c1b2cd19 d579e22e148e]
	I0318 04:39:43.241060   17465 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0318 04:39:43.251810   17465 logs.go:276] 1 containers: [372e2774400e]
	I0318 04:39:43.251876   17465 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0318 04:39:43.262757   17465 logs.go:276] 2 containers: [0be8d71b93fd fb25a67bf414]
	I0318 04:39:43.262829   17465 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0318 04:39:43.273918   17465 logs.go:276] 0 containers: []
	W0318 04:39:43.273932   17465 logs.go:278] No container was found matching "kindnet"
	I0318 04:39:43.273999   17465 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0318 04:39:43.286288   17465 logs.go:276] 2 containers: [81ae28fc685b 9e2192628e0f]
	I0318 04:39:43.286309   17465 logs.go:123] Gathering logs for kubelet ...
	I0318 04:39:43.286314   17465 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 04:39:43.323893   17465 logs.go:123] Gathering logs for kube-scheduler [e8b1c1b2cd19] ...
	I0318 04:39:43.323904   17465 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e8b1c1b2cd19"
	I0318 04:39:43.336354   17465 logs.go:123] Gathering logs for kube-proxy [372e2774400e] ...
	I0318 04:39:43.336365   17465 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 372e2774400e"
	I0318 04:39:43.348502   17465 logs.go:123] Gathering logs for container status ...
	I0318 04:39:43.348516   17465 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 04:39:43.360522   17465 logs.go:123] Gathering logs for describe nodes ...
	I0318 04:39:43.360533   17465 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0318 04:39:43.397769   17465 logs.go:123] Gathering logs for etcd [8b75879fc7bf] ...
	I0318 04:39:43.397781   17465 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8b75879fc7bf"
	I0318 04:39:43.412598   17465 logs.go:123] Gathering logs for storage-provisioner [9e2192628e0f] ...
	I0318 04:39:43.412608   17465 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9e2192628e0f"
	I0318 04:39:43.424563   17465 logs.go:123] Gathering logs for kube-controller-manager [fb25a67bf414] ...
	I0318 04:39:43.424575   17465 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fb25a67bf414"
	I0318 04:39:43.436292   17465 logs.go:123] Gathering logs for storage-provisioner [81ae28fc685b] ...
	I0318 04:39:43.436304   17465 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 81ae28fc685b"
	I0318 04:39:43.448963   17465 logs.go:123] Gathering logs for Docker ...
	I0318 04:39:43.448977   17465 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0318 04:39:43.473348   17465 logs.go:123] Gathering logs for kube-apiserver [32dc048c4476] ...
	I0318 04:39:43.473357   17465 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 32dc048c4476"
	I0318 04:39:43.487562   17465 logs.go:123] Gathering logs for kube-apiserver [7dacaac7f891] ...
	I0318 04:39:43.487572   17465 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7dacaac7f891"
	I0318 04:39:43.529542   17465 logs.go:123] Gathering logs for etcd [8c963566b500] ...
	I0318 04:39:43.529556   17465 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8c963566b500"
	I0318 04:39:43.544022   17465 logs.go:123] Gathering logs for coredns [b66f543335d1] ...
	I0318 04:39:43.544047   17465 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b66f543335d1"
	I0318 04:39:43.559265   17465 logs.go:123] Gathering logs for kube-controller-manager [0be8d71b93fd] ...
	I0318 04:39:43.559276   17465 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0be8d71b93fd"
	I0318 04:39:43.577321   17465 logs.go:123] Gathering logs for dmesg ...
	I0318 04:39:43.577330   17465 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 04:39:43.581538   17465 logs.go:123] Gathering logs for kube-scheduler [d579e22e148e] ...
	I0318 04:39:43.581548   17465 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d579e22e148e"
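	[editor's note] The repeated "Checking apiserver healthz ... / stopped: ... context deadline exceeded" pairs above are a simple poll of the apiserver's /healthz endpoint from inside the guest. A minimal Go sketch of such a probe follows; the URL and the roughly 5-second per-request timeout are taken from the log, while the TLS handling, retry count, and back-off are assumptions made purely for illustration and are not minikube's actual api_server.go code.

```go
// Sketch of a /healthz poll loop (assumed details; not minikube's implementation).
package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
	"time"
)

func waitForHealthz(url string, attempts int) error {
	client := &http.Client{
		Timeout: 5 * time.Second, // matches the ~5 s gap between "Checking" and "stopped" in the log
		Transport: &http.Transport{
			// The apiserver inside the VM serves a self-signed certificate;
			// skipping verification here is an assumption for the sketch only.
			TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
		},
	}
	for i := 0; i < attempts; i++ {
		resp, err := client.Get(url)
		if err == nil {
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				return nil // /healthz answered: apiserver is up
			}
		}
		fmt.Printf("healthz attempt %d failed: %v\n", i+1, err)
		time.Sleep(3 * time.Second) // assumed back-off before the next probe
	}
	return fmt.Errorf("apiserver never became healthy at %s", url)
}

func main() {
	if err := waitForHealthz("https://10.0.2.15:8443/healthz", 10); err != nil {
		fmt.Println(err)
	}
}
```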
	I0318 04:39:46.547859   17322 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0318 04:39:46.548115   17322 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0318 04:39:46.572573   17322 logs.go:276] 1 containers: [d454e6154049]
	I0318 04:39:46.572676   17322 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0318 04:39:46.588419   17322 logs.go:276] 1 containers: [8046e42578d2]
	I0318 04:39:46.588489   17322 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0318 04:39:46.601092   17322 logs.go:276] 2 containers: [367d0316359f 3a24458b86a4]
	I0318 04:39:46.601166   17322 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0318 04:39:46.611805   17322 logs.go:276] 1 containers: [894b6a0a0702]
	I0318 04:39:46.611881   17322 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0318 04:39:46.622026   17322 logs.go:276] 1 containers: [04d6cdf60161]
	I0318 04:39:46.622104   17322 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0318 04:39:46.633297   17322 logs.go:276] 1 containers: [22a920f51952]
	I0318 04:39:46.633366   17322 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0318 04:39:46.643450   17322 logs.go:276] 0 containers: []
	W0318 04:39:46.643461   17322 logs.go:278] No container was found matching "kindnet"
	I0318 04:39:46.643521   17322 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0318 04:39:46.653669   17322 logs.go:276] 1 containers: [5bfb08f2c96a]
	I0318 04:39:46.653685   17322 logs.go:123] Gathering logs for coredns [367d0316359f] ...
	I0318 04:39:46.653692   17322 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 367d0316359f"
	I0318 04:39:46.665217   17322 logs.go:123] Gathering logs for coredns [3a24458b86a4] ...
	I0318 04:39:46.665228   17322 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3a24458b86a4"
	I0318 04:39:46.677135   17322 logs.go:123] Gathering logs for storage-provisioner [5bfb08f2c96a] ...
	I0318 04:39:46.677145   17322 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5bfb08f2c96a"
	I0318 04:39:46.688738   17322 logs.go:123] Gathering logs for Docker ...
	I0318 04:39:46.688748   17322 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0318 04:39:46.712317   17322 logs.go:123] Gathering logs for container status ...
	I0318 04:39:46.712324   17322 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 04:39:46.723081   17322 logs.go:123] Gathering logs for describe nodes ...
	I0318 04:39:46.723091   17322 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0318 04:39:46.764672   17322 logs.go:123] Gathering logs for etcd [8046e42578d2] ...
	I0318 04:39:46.764682   17322 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8046e42578d2"
	I0318 04:39:46.778975   17322 logs.go:123] Gathering logs for kube-apiserver [d454e6154049] ...
	I0318 04:39:46.778984   17322 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d454e6154049"
	I0318 04:39:46.793092   17322 logs.go:123] Gathering logs for kube-scheduler [894b6a0a0702] ...
	I0318 04:39:46.793102   17322 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 894b6a0a0702"
	I0318 04:39:46.807546   17322 logs.go:123] Gathering logs for kube-proxy [04d6cdf60161] ...
	I0318 04:39:46.807557   17322 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 04d6cdf60161"
	I0318 04:39:46.823580   17322 logs.go:123] Gathering logs for kube-controller-manager [22a920f51952] ...
	I0318 04:39:46.823592   17322 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 22a920f51952"
	I0318 04:39:46.841446   17322 logs.go:123] Gathering logs for kubelet ...
	I0318 04:39:46.841457   17322 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0318 04:39:46.859220   17322 logs.go:138] Found kubelet problem: Mar 18 11:34:35 running-upgrade-738000 kubelet[3738]: W0318 11:34:35.864044    3738 reflector.go:324] object-"kube-system"/"kube-proxy": failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:running-upgrade-738000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-738000' and this object
	W0318 04:39:46.859314   17322 logs.go:138] Found kubelet problem: Mar 18 11:34:35 running-upgrade-738000 kubelet[3738]: E0318 11:34:35.864102    3738 reflector.go:138] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:running-upgrade-738000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-738000' and this object
	I0318 04:39:46.877128   17322 logs.go:123] Gathering logs for dmesg ...
	I0318 04:39:46.877137   17322 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 04:39:46.883531   17322 out.go:304] Setting ErrFile to fd 2...
	I0318 04:39:46.883542   17322 out.go:338] TERM=,COLORTERM=, which probably does not support color
	W0318 04:39:46.883570   17322 out.go:239] X Problems detected in kubelet:
	W0318 04:39:46.883574   17322 out.go:239]   Mar 18 11:34:35 running-upgrade-738000 kubelet[3738]: W0318 11:34:35.864044    3738 reflector.go:324] object-"kube-system"/"kube-proxy": failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:running-upgrade-738000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-738000' and this object
	W0318 04:39:46.883577   17322 out.go:239]   Mar 18 11:34:35 running-upgrade-738000 kubelet[3738]: E0318 11:34:35.864102    3738 reflector.go:138] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:running-upgrade-738000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-738000' and this object
	I0318 04:39:46.883581   17322 out.go:304] Setting ErrFile to fd 2...
	I0318 04:39:46.883585   17322 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0318 04:39:46.096172   17465 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0318 04:39:51.098337   17465 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0318 04:39:51.098493   17465 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0318 04:39:51.115252   17465 logs.go:276] 2 containers: [32dc048c4476 7dacaac7f891]
	I0318 04:39:51.115324   17465 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0318 04:39:51.126410   17465 logs.go:276] 2 containers: [8c963566b500 8b75879fc7bf]
	I0318 04:39:51.126486   17465 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0318 04:39:51.138756   17465 logs.go:276] 1 containers: [b66f543335d1]
	I0318 04:39:51.138823   17465 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0318 04:39:51.149260   17465 logs.go:276] 2 containers: [e8b1c1b2cd19 d579e22e148e]
	I0318 04:39:51.149329   17465 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0318 04:39:51.159232   17465 logs.go:276] 1 containers: [372e2774400e]
	I0318 04:39:51.159324   17465 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0318 04:39:51.170721   17465 logs.go:276] 2 containers: [0be8d71b93fd fb25a67bf414]
	I0318 04:39:51.170800   17465 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0318 04:39:51.180721   17465 logs.go:276] 0 containers: []
	W0318 04:39:51.180733   17465 logs.go:278] No container was found matching "kindnet"
	I0318 04:39:51.180795   17465 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0318 04:39:51.196628   17465 logs.go:276] 2 containers: [81ae28fc685b 9e2192628e0f]
	I0318 04:39:51.196647   17465 logs.go:123] Gathering logs for dmesg ...
	I0318 04:39:51.196652   17465 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 04:39:51.200698   17465 logs.go:123] Gathering logs for storage-provisioner [81ae28fc685b] ...
	I0318 04:39:51.200708   17465 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 81ae28fc685b"
	I0318 04:39:51.211992   17465 logs.go:123] Gathering logs for coredns [b66f543335d1] ...
	I0318 04:39:51.212007   17465 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b66f543335d1"
	I0318 04:39:51.223619   17465 logs.go:123] Gathering logs for kube-scheduler [d579e22e148e] ...
	I0318 04:39:51.223630   17465 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d579e22e148e"
	I0318 04:39:51.240464   17465 logs.go:123] Gathering logs for kube-apiserver [7dacaac7f891] ...
	I0318 04:39:51.240474   17465 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7dacaac7f891"
	I0318 04:39:51.277309   17465 logs.go:123] Gathering logs for etcd [8b75879fc7bf] ...
	I0318 04:39:51.277321   17465 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8b75879fc7bf"
	I0318 04:39:51.291757   17465 logs.go:123] Gathering logs for kube-scheduler [e8b1c1b2cd19] ...
	I0318 04:39:51.291769   17465 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e8b1c1b2cd19"
	I0318 04:39:51.305129   17465 logs.go:123] Gathering logs for kube-controller-manager [0be8d71b93fd] ...
	I0318 04:39:51.305139   17465 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0be8d71b93fd"
	I0318 04:39:51.322767   17465 logs.go:123] Gathering logs for describe nodes ...
	I0318 04:39:51.322776   17465 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0318 04:39:51.358778   17465 logs.go:123] Gathering logs for kube-apiserver [32dc048c4476] ...
	I0318 04:39:51.358793   17465 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 32dc048c4476"
	I0318 04:39:51.379374   17465 logs.go:123] Gathering logs for kube-proxy [372e2774400e] ...
	I0318 04:39:51.379389   17465 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 372e2774400e"
	I0318 04:39:51.395136   17465 logs.go:123] Gathering logs for kube-controller-manager [fb25a67bf414] ...
	I0318 04:39:51.395146   17465 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fb25a67bf414"
	I0318 04:39:51.407413   17465 logs.go:123] Gathering logs for storage-provisioner [9e2192628e0f] ...
	I0318 04:39:51.407427   17465 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9e2192628e0f"
	I0318 04:39:51.419998   17465 logs.go:123] Gathering logs for Docker ...
	I0318 04:39:51.420007   17465 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0318 04:39:51.442734   17465 logs.go:123] Gathering logs for container status ...
	I0318 04:39:51.442742   17465 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 04:39:51.454532   17465 logs.go:123] Gathering logs for kubelet ...
	I0318 04:39:51.454546   17465 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 04:39:51.491175   17465 logs.go:123] Gathering logs for etcd [8c963566b500] ...
	I0318 04:39:51.491185   17465 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8c963566b500"
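	[editor's note] Each failed health check above is followed by a container-enumeration pass: one "docker ps -a --filter=name=k8s_<component> --format={{.ID}}" per control-plane component, whose IDs feed the subsequent log gathering. The sketch below reproduces that step; running the command locally via os/exec (rather than over SSH as minikube's ssh_runner does) is an assumption of the sketch.

```go
// Sketch of the container-enumeration step seen in the log (assumed local execution).
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// containerIDs lists IDs of containers whose name matches k8s_<component>.
func containerIDs(component string) ([]string, error) {
	out, err := exec.Command("docker", "ps", "-a",
		"--filter", "name=k8s_"+component,
		"--format", "{{.ID}}").Output()
	if err != nil {
		return nil, err
	}
	return strings.Fields(string(out)), nil // one ID per output line; may be empty
}

func main() {
	components := []string{
		"kube-apiserver", "etcd", "coredns", "kube-scheduler",
		"kube-proxy", "kube-controller-manager", "kindnet", "storage-provisioner",
	}
	for _, c := range components {
		ids, err := containerIDs(c)
		if err != nil {
			fmt.Printf("%s: %v\n", c, err)
			continue
		}
		fmt.Printf("%d containers for %q: %v\n", len(ids), c, ids)
	}
}
```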
	I0318 04:39:54.006841   17465 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0318 04:39:56.887404   17322 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0318 04:39:59.008982   17465 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0318 04:39:59.009131   17465 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0318 04:39:59.021325   17465 logs.go:276] 2 containers: [32dc048c4476 7dacaac7f891]
	I0318 04:39:59.021400   17465 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0318 04:39:59.033687   17465 logs.go:276] 2 containers: [8c963566b500 8b75879fc7bf]
	I0318 04:39:59.033756   17465 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0318 04:39:59.044231   17465 logs.go:276] 1 containers: [b66f543335d1]
	I0318 04:39:59.044303   17465 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0318 04:39:59.055000   17465 logs.go:276] 2 containers: [e8b1c1b2cd19 d579e22e148e]
	I0318 04:39:59.055075   17465 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0318 04:39:59.065498   17465 logs.go:276] 1 containers: [372e2774400e]
	I0318 04:39:59.065567   17465 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0318 04:39:59.076232   17465 logs.go:276] 2 containers: [0be8d71b93fd fb25a67bf414]
	I0318 04:39:59.076302   17465 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0318 04:39:59.086769   17465 logs.go:276] 0 containers: []
	W0318 04:39:59.086780   17465 logs.go:278] No container was found matching "kindnet"
	I0318 04:39:59.086839   17465 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0318 04:39:59.097564   17465 logs.go:276] 2 containers: [81ae28fc685b 9e2192628e0f]
	I0318 04:39:59.097583   17465 logs.go:123] Gathering logs for kube-scheduler [e8b1c1b2cd19] ...
	I0318 04:39:59.097588   17465 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e8b1c1b2cd19"
	I0318 04:39:59.109704   17465 logs.go:123] Gathering logs for kube-controller-manager [fb25a67bf414] ...
	I0318 04:39:59.109717   17465 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fb25a67bf414"
	I0318 04:39:59.122009   17465 logs.go:123] Gathering logs for coredns [b66f543335d1] ...
	I0318 04:39:59.122020   17465 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b66f543335d1"
	I0318 04:39:59.134980   17465 logs.go:123] Gathering logs for kube-proxy [372e2774400e] ...
	I0318 04:39:59.134991   17465 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 372e2774400e"
	I0318 04:39:59.146729   17465 logs.go:123] Gathering logs for kube-controller-manager [0be8d71b93fd] ...
	I0318 04:39:59.146740   17465 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0be8d71b93fd"
	I0318 04:39:59.174307   17465 logs.go:123] Gathering logs for storage-provisioner [81ae28fc685b] ...
	I0318 04:39:59.174317   17465 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 81ae28fc685b"
	I0318 04:39:59.185631   17465 logs.go:123] Gathering logs for storage-provisioner [9e2192628e0f] ...
	I0318 04:39:59.185641   17465 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9e2192628e0f"
	I0318 04:39:59.196880   17465 logs.go:123] Gathering logs for container status ...
	I0318 04:39:59.196891   17465 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 04:39:59.208386   17465 logs.go:123] Gathering logs for dmesg ...
	I0318 04:39:59.208399   17465 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 04:39:59.212431   17465 logs.go:123] Gathering logs for kube-apiserver [32dc048c4476] ...
	I0318 04:39:59.212442   17465 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 32dc048c4476"
	I0318 04:39:59.226127   17465 logs.go:123] Gathering logs for kube-apiserver [7dacaac7f891] ...
	I0318 04:39:59.226137   17465 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7dacaac7f891"
	I0318 04:39:59.263280   17465 logs.go:123] Gathering logs for etcd [8b75879fc7bf] ...
	I0318 04:39:59.263295   17465 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8b75879fc7bf"
	I0318 04:39:59.277757   17465 logs.go:123] Gathering logs for kube-scheduler [d579e22e148e] ...
	I0318 04:39:59.277767   17465 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d579e22e148e"
	I0318 04:39:59.292493   17465 logs.go:123] Gathering logs for kubelet ...
	I0318 04:39:59.292503   17465 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 04:39:59.328968   17465 logs.go:123] Gathering logs for describe nodes ...
	I0318 04:39:59.328977   17465 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0318 04:39:59.367128   17465 logs.go:123] Gathering logs for etcd [8c963566b500] ...
	I0318 04:39:59.367140   17465 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8c963566b500"
	I0318 04:39:59.385153   17465 logs.go:123] Gathering logs for Docker ...
	I0318 04:39:59.385168   17465 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0318 04:40:01.889598   17322 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0318 04:40:01.889844   17322 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0318 04:40:01.913917   17322 logs.go:276] 1 containers: [d454e6154049]
	I0318 04:40:01.914024   17322 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0318 04:40:01.930634   17322 logs.go:276] 1 containers: [8046e42578d2]
	I0318 04:40:01.930717   17322 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0318 04:40:01.943447   17322 logs.go:276] 2 containers: [367d0316359f 3a24458b86a4]
	I0318 04:40:01.943522   17322 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0318 04:40:01.954675   17322 logs.go:276] 1 containers: [894b6a0a0702]
	I0318 04:40:01.954741   17322 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0318 04:40:01.965030   17322 logs.go:276] 1 containers: [04d6cdf60161]
	I0318 04:40:01.965106   17322 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0318 04:40:01.975254   17322 logs.go:276] 1 containers: [22a920f51952]
	I0318 04:40:01.975322   17322 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0318 04:40:01.985083   17322 logs.go:276] 0 containers: []
	W0318 04:40:01.985095   17322 logs.go:278] No container was found matching "kindnet"
	I0318 04:40:01.985155   17322 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0318 04:40:01.995400   17322 logs.go:276] 1 containers: [5bfb08f2c96a]
	I0318 04:40:01.995415   17322 logs.go:123] Gathering logs for kubelet ...
	I0318 04:40:01.995420   17322 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0318 04:40:02.011919   17322 logs.go:138] Found kubelet problem: Mar 18 11:34:35 running-upgrade-738000 kubelet[3738]: W0318 11:34:35.864044    3738 reflector.go:324] object-"kube-system"/"kube-proxy": failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:running-upgrade-738000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-738000' and this object
	W0318 04:40:02.012019   17322 logs.go:138] Found kubelet problem: Mar 18 11:34:35 running-upgrade-738000 kubelet[3738]: E0318 11:34:35.864102    3738 reflector.go:138] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:running-upgrade-738000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-738000' and this object
	I0318 04:40:02.029326   17322 logs.go:123] Gathering logs for describe nodes ...
	I0318 04:40:02.029334   17322 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0318 04:40:02.062811   17322 logs.go:123] Gathering logs for coredns [367d0316359f] ...
	I0318 04:40:02.062825   17322 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 367d0316359f"
	I0318 04:40:02.073875   17322 logs.go:123] Gathering logs for kube-scheduler [894b6a0a0702] ...
	I0318 04:40:02.073888   17322 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 894b6a0a0702"
	I0318 04:40:02.088793   17322 logs.go:123] Gathering logs for kube-controller-manager [22a920f51952] ...
	I0318 04:40:02.088803   17322 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 22a920f51952"
	I0318 04:40:02.108110   17322 logs.go:123] Gathering logs for Docker ...
	I0318 04:40:02.108121   17322 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0318 04:40:02.132105   17322 logs.go:123] Gathering logs for container status ...
	I0318 04:40:02.132114   17322 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 04:40:02.144546   17322 logs.go:123] Gathering logs for dmesg ...
	I0318 04:40:02.144560   17322 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 04:40:02.149369   17322 logs.go:123] Gathering logs for kube-apiserver [d454e6154049] ...
	I0318 04:40:02.149379   17322 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d454e6154049"
	I0318 04:40:02.163282   17322 logs.go:123] Gathering logs for etcd [8046e42578d2] ...
	I0318 04:40:02.163293   17322 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8046e42578d2"
	I0318 04:40:01.910622   17465 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0318 04:40:02.177012   17322 logs.go:123] Gathering logs for coredns [3a24458b86a4] ...
	I0318 04:40:02.180945   17322 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3a24458b86a4"
	I0318 04:40:02.192241   17322 logs.go:123] Gathering logs for kube-proxy [04d6cdf60161] ...
	I0318 04:40:02.192252   17322 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 04d6cdf60161"
	I0318 04:40:02.203922   17322 logs.go:123] Gathering logs for storage-provisioner [5bfb08f2c96a] ...
	I0318 04:40:02.203933   17322 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5bfb08f2c96a"
	I0318 04:40:02.215592   17322 out.go:304] Setting ErrFile to fd 2...
	I0318 04:40:02.215604   17322 out.go:338] TERM=,COLORTERM=, which probably does not support color
	W0318 04:40:02.215629   17322 out.go:239] X Problems detected in kubelet:
	W0318 04:40:02.215633   17322 out.go:239]   Mar 18 11:34:35 running-upgrade-738000 kubelet[3738]: W0318 11:34:35.864044    3738 reflector.go:324] object-"kube-system"/"kube-proxy": failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:running-upgrade-738000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-738000' and this object
	W0318 04:40:02.215668   17322 out.go:239]   Mar 18 11:34:35 running-upgrade-738000 kubelet[3738]: E0318 11:34:35.864102    3738 reflector.go:138] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:running-upgrade-738000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-738000' and this object
	I0318 04:40:02.215675   17322 out.go:304] Setting ErrFile to fd 2...
	I0318 04:40:02.215680   17322 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0318 04:40:06.912781   17465 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0318 04:40:06.913064   17465 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0318 04:40:06.937019   17465 logs.go:276] 2 containers: [32dc048c4476 7dacaac7f891]
	I0318 04:40:06.937116   17465 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0318 04:40:06.951678   17465 logs.go:276] 2 containers: [8c963566b500 8b75879fc7bf]
	I0318 04:40:06.951765   17465 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0318 04:40:06.963661   17465 logs.go:276] 1 containers: [b66f543335d1]
	I0318 04:40:06.963733   17465 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0318 04:40:06.974819   17465 logs.go:276] 2 containers: [e8b1c1b2cd19 d579e22e148e]
	I0318 04:40:06.974883   17465 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0318 04:40:06.985310   17465 logs.go:276] 1 containers: [372e2774400e]
	I0318 04:40:06.985383   17465 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0318 04:40:06.995916   17465 logs.go:276] 2 containers: [0be8d71b93fd fb25a67bf414]
	I0318 04:40:06.995980   17465 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0318 04:40:07.006395   17465 logs.go:276] 0 containers: []
	W0318 04:40:07.006406   17465 logs.go:278] No container was found matching "kindnet"
	I0318 04:40:07.006458   17465 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0318 04:40:07.020942   17465 logs.go:276] 2 containers: [81ae28fc685b 9e2192628e0f]
	I0318 04:40:07.020963   17465 logs.go:123] Gathering logs for storage-provisioner [81ae28fc685b] ...
	I0318 04:40:07.020970   17465 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 81ae28fc685b"
	I0318 04:40:07.032296   17465 logs.go:123] Gathering logs for storage-provisioner [9e2192628e0f] ...
	I0318 04:40:07.032308   17465 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9e2192628e0f"
	I0318 04:40:07.046063   17465 logs.go:123] Gathering logs for container status ...
	I0318 04:40:07.046077   17465 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 04:40:07.060341   17465 logs.go:123] Gathering logs for kube-apiserver [32dc048c4476] ...
	I0318 04:40:07.060352   17465 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 32dc048c4476"
	I0318 04:40:07.074459   17465 logs.go:123] Gathering logs for kube-apiserver [7dacaac7f891] ...
	I0318 04:40:07.074469   17465 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7dacaac7f891"
	I0318 04:40:07.112179   17465 logs.go:123] Gathering logs for kube-controller-manager [fb25a67bf414] ...
	I0318 04:40:07.112194   17465 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fb25a67bf414"
	I0318 04:40:07.124398   17465 logs.go:123] Gathering logs for Docker ...
	I0318 04:40:07.124411   17465 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0318 04:40:07.147584   17465 logs.go:123] Gathering logs for kubelet ...
	I0318 04:40:07.147594   17465 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 04:40:07.183885   17465 logs.go:123] Gathering logs for dmesg ...
	I0318 04:40:07.183894   17465 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 04:40:07.187745   17465 logs.go:123] Gathering logs for kube-scheduler [e8b1c1b2cd19] ...
	I0318 04:40:07.187754   17465 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e8b1c1b2cd19"
	I0318 04:40:07.199173   17465 logs.go:123] Gathering logs for describe nodes ...
	I0318 04:40:07.199187   17465 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0318 04:40:07.236936   17465 logs.go:123] Gathering logs for coredns [b66f543335d1] ...
	I0318 04:40:07.236946   17465 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b66f543335d1"
	I0318 04:40:07.248537   17465 logs.go:123] Gathering logs for kube-proxy [372e2774400e] ...
	I0318 04:40:07.248549   17465 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 372e2774400e"
	I0318 04:40:07.260163   17465 logs.go:123] Gathering logs for kube-controller-manager [0be8d71b93fd] ...
	I0318 04:40:07.260173   17465 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0be8d71b93fd"
	I0318 04:40:07.283437   17465 logs.go:123] Gathering logs for etcd [8c963566b500] ...
	I0318 04:40:07.283446   17465 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8c963566b500"
	I0318 04:40:07.297712   17465 logs.go:123] Gathering logs for etcd [8b75879fc7bf] ...
	I0318 04:40:07.297722   17465 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8b75879fc7bf"
	I0318 04:40:07.312506   17465 logs.go:123] Gathering logs for kube-scheduler [d579e22e148e] ...
	I0318 04:40:07.312517   17465 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d579e22e148e"
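	[editor's note] The "Gathering logs for ..." lines then tail the last 400 lines of every container found in the previous step, plus the kubelet and docker/cri-docker journals, via "docker logs --tail 400 <id>" and "journalctl -u <unit> -n 400". A minimal sketch follows; the example container IDs are copied from the log above, and local (non-SSH) execution is again an assumption.

```go
// Sketch of the log-gathering step seen above (assumed local execution).
package main

import (
	"fmt"
	"os/exec"
)

// tailContainer returns the last 400 lines of a container's logs.
func tailContainer(id string) (string, error) {
	out, err := exec.Command("docker", "logs", "--tail", "400", id).CombinedOutput()
	return string(out), err
}

// tailJournal returns the last 400 lines of a systemd unit's journal.
func tailJournal(unit string) (string, error) {
	out, err := exec.Command("sudo", "journalctl", "-u", unit, "-n", "400").CombinedOutput()
	return string(out), err
}

func main() {
	// IDs taken from the log above, used here only as example input.
	for _, id := range []string{"32dc048c4476", "8c963566b500", "b66f543335d1"} {
		logs, err := tailContainer(id)
		if err != nil {
			fmt.Printf("docker logs %s: %v\n", id, err)
			continue
		}
		fmt.Printf("=== container %s ===\n%s", id, logs)
	}
	for _, unit := range []string{"kubelet", "docker"} {
		logs, err := tailJournal(unit)
		if err != nil {
			fmt.Printf("journalctl -u %s: %v\n", unit, err)
			continue
		}
		fmt.Printf("=== unit %s ===\n%s", unit, logs)
	}
}
```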
	I0318 04:40:09.829401   17465 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0318 04:40:12.218358   17322 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0318 04:40:14.829651   17465 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0318 04:40:14.829828   17465 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0318 04:40:14.847239   17465 logs.go:276] 2 containers: [32dc048c4476 7dacaac7f891]
	I0318 04:40:14.847328   17465 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0318 04:40:14.860184   17465 logs.go:276] 2 containers: [8c963566b500 8b75879fc7bf]
	I0318 04:40:14.860255   17465 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0318 04:40:14.871428   17465 logs.go:276] 1 containers: [b66f543335d1]
	I0318 04:40:14.871494   17465 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0318 04:40:14.881741   17465 logs.go:276] 2 containers: [e8b1c1b2cd19 d579e22e148e]
	I0318 04:40:14.881820   17465 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0318 04:40:14.892462   17465 logs.go:276] 1 containers: [372e2774400e]
	I0318 04:40:14.892534   17465 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0318 04:40:14.904570   17465 logs.go:276] 2 containers: [0be8d71b93fd fb25a67bf414]
	I0318 04:40:14.904639   17465 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0318 04:40:14.914839   17465 logs.go:276] 0 containers: []
	W0318 04:40:14.914849   17465 logs.go:278] No container was found matching "kindnet"
	I0318 04:40:14.914903   17465 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0318 04:40:14.932533   17465 logs.go:276] 2 containers: [81ae28fc685b 9e2192628e0f]
	I0318 04:40:14.932552   17465 logs.go:123] Gathering logs for kube-scheduler [d579e22e148e] ...
	I0318 04:40:14.932557   17465 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d579e22e148e"
	I0318 04:40:14.951923   17465 logs.go:123] Gathering logs for kube-controller-manager [0be8d71b93fd] ...
	I0318 04:40:14.951933   17465 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0be8d71b93fd"
	I0318 04:40:14.969899   17465 logs.go:123] Gathering logs for container status ...
	I0318 04:40:14.969911   17465 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 04:40:14.982310   17465 logs.go:123] Gathering logs for describe nodes ...
	I0318 04:40:14.982325   17465 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0318 04:40:15.015567   17465 logs.go:123] Gathering logs for kube-scheduler [e8b1c1b2cd19] ...
	I0318 04:40:15.015582   17465 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e8b1c1b2cd19"
	I0318 04:40:15.027819   17465 logs.go:123] Gathering logs for etcd [8c963566b500] ...
	I0318 04:40:15.027829   17465 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8c963566b500"
	I0318 04:40:15.041656   17465 logs.go:123] Gathering logs for etcd [8b75879fc7bf] ...
	I0318 04:40:15.041666   17465 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8b75879fc7bf"
	I0318 04:40:15.056214   17465 logs.go:123] Gathering logs for coredns [b66f543335d1] ...
	I0318 04:40:15.056225   17465 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b66f543335d1"
	I0318 04:40:15.069192   17465 logs.go:123] Gathering logs for kube-proxy [372e2774400e] ...
	I0318 04:40:15.069206   17465 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 372e2774400e"
	I0318 04:40:15.090553   17465 logs.go:123] Gathering logs for storage-provisioner [81ae28fc685b] ...
	I0318 04:40:15.090563   17465 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 81ae28fc685b"
	I0318 04:40:15.106878   17465 logs.go:123] Gathering logs for dmesg ...
	I0318 04:40:15.106890   17465 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 04:40:15.111282   17465 logs.go:123] Gathering logs for kube-apiserver [32dc048c4476] ...
	I0318 04:40:15.111290   17465 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 32dc048c4476"
	I0318 04:40:15.126367   17465 logs.go:123] Gathering logs for kube-controller-manager [fb25a67bf414] ...
	I0318 04:40:15.126377   17465 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fb25a67bf414"
	I0318 04:40:15.138556   17465 logs.go:123] Gathering logs for storage-provisioner [9e2192628e0f] ...
	I0318 04:40:15.138567   17465 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9e2192628e0f"
	I0318 04:40:15.150234   17465 logs.go:123] Gathering logs for Docker ...
	I0318 04:40:15.150246   17465 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0318 04:40:15.173262   17465 logs.go:123] Gathering logs for kubelet ...
	I0318 04:40:15.173270   17465 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 04:40:15.209522   17465 logs.go:123] Gathering logs for kube-apiserver [7dacaac7f891] ...
	I0318 04:40:15.209529   17465 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7dacaac7f891"
	I0318 04:40:17.749774   17465 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0318 04:40:17.220813   17322 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0318 04:40:17.221017   17322 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0318 04:40:17.235323   17322 logs.go:276] 1 containers: [d454e6154049]
	I0318 04:40:17.235404   17322 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0318 04:40:17.251235   17322 logs.go:276] 1 containers: [8046e42578d2]
	I0318 04:40:17.251307   17322 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0318 04:40:17.262340   17322 logs.go:276] 2 containers: [367d0316359f 3a24458b86a4]
	I0318 04:40:17.262409   17322 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0318 04:40:17.272659   17322 logs.go:276] 1 containers: [894b6a0a0702]
	I0318 04:40:17.272731   17322 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0318 04:40:17.286373   17322 logs.go:276] 1 containers: [04d6cdf60161]
	I0318 04:40:17.286444   17322 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0318 04:40:17.296848   17322 logs.go:276] 1 containers: [22a920f51952]
	I0318 04:40:17.296914   17322 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0318 04:40:17.306799   17322 logs.go:276] 0 containers: []
	W0318 04:40:17.306809   17322 logs.go:278] No container was found matching "kindnet"
	I0318 04:40:17.306869   17322 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0318 04:40:17.317130   17322 logs.go:276] 1 containers: [5bfb08f2c96a]
	I0318 04:40:17.317145   17322 logs.go:123] Gathering logs for Docker ...
	I0318 04:40:17.317150   17322 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0318 04:40:17.341295   17322 logs.go:123] Gathering logs for kubelet ...
	I0318 04:40:17.341305   17322 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0318 04:40:17.358657   17322 logs.go:138] Found kubelet problem: Mar 18 11:34:35 running-upgrade-738000 kubelet[3738]: W0318 11:34:35.864044    3738 reflector.go:324] object-"kube-system"/"kube-proxy": failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:running-upgrade-738000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-738000' and this object
	W0318 04:40:17.358752   17322 logs.go:138] Found kubelet problem: Mar 18 11:34:35 running-upgrade-738000 kubelet[3738]: E0318 11:34:35.864102    3738 reflector.go:138] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:running-upgrade-738000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-738000' and this object
	I0318 04:40:17.376375   17322 logs.go:123] Gathering logs for describe nodes ...
	I0318 04:40:17.376383   17322 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0318 04:40:17.410087   17322 logs.go:123] Gathering logs for coredns [3a24458b86a4] ...
	I0318 04:40:17.410097   17322 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3a24458b86a4"
	I0318 04:40:17.421570   17322 logs.go:123] Gathering logs for kube-controller-manager [22a920f51952] ...
	I0318 04:40:17.421584   17322 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 22a920f51952"
	I0318 04:40:17.442847   17322 logs.go:123] Gathering logs for storage-provisioner [5bfb08f2c96a] ...
	I0318 04:40:17.442859   17322 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5bfb08f2c96a"
	I0318 04:40:17.457037   17322 logs.go:123] Gathering logs for kube-proxy [04d6cdf60161] ...
	I0318 04:40:17.457049   17322 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 04d6cdf60161"
	I0318 04:40:17.468715   17322 logs.go:123] Gathering logs for container status ...
	I0318 04:40:17.468728   17322 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 04:40:17.485232   17322 logs.go:123] Gathering logs for dmesg ...
	I0318 04:40:17.485241   17322 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 04:40:17.489755   17322 logs.go:123] Gathering logs for kube-apiserver [d454e6154049] ...
	I0318 04:40:17.489763   17322 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d454e6154049"
	I0318 04:40:17.503520   17322 logs.go:123] Gathering logs for etcd [8046e42578d2] ...
	I0318 04:40:17.503529   17322 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8046e42578d2"
	I0318 04:40:17.518167   17322 logs.go:123] Gathering logs for coredns [367d0316359f] ...
	I0318 04:40:17.518177   17322 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 367d0316359f"
	I0318 04:40:17.529610   17322 logs.go:123] Gathering logs for kube-scheduler [894b6a0a0702] ...
	I0318 04:40:17.529622   17322 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 894b6a0a0702"
	I0318 04:40:17.548129   17322 out.go:304] Setting ErrFile to fd 2...
	I0318 04:40:17.548140   17322 out.go:338] TERM=,COLORTERM=, which probably does not support color
	W0318 04:40:17.548165   17322 out.go:239] X Problems detected in kubelet:
	W0318 04:40:17.548169   17322 out.go:239]   Mar 18 11:34:35 running-upgrade-738000 kubelet[3738]: W0318 11:34:35.864044    3738 reflector.go:324] object-"kube-system"/"kube-proxy": failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:running-upgrade-738000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-738000' and this object
	W0318 04:40:17.548174   17322 out.go:239]   Mar 18 11:34:35 running-upgrade-738000 kubelet[3738]: E0318 11:34:35.864102    3738 reflector.go:138] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:running-upgrade-738000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-738000' and this object
	I0318 04:40:17.548177   17322 out.go:304] Setting ErrFile to fd 2...
	I0318 04:40:17.548198   17322 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0318 04:40:22.751529   17465 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0318 04:40:22.751758   17465 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0318 04:40:22.774091   17465 logs.go:276] 2 containers: [32dc048c4476 7dacaac7f891]
	I0318 04:40:22.774193   17465 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0318 04:40:22.788718   17465 logs.go:276] 2 containers: [8c963566b500 8b75879fc7bf]
	I0318 04:40:22.788798   17465 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0318 04:40:22.801016   17465 logs.go:276] 1 containers: [b66f543335d1]
	I0318 04:40:22.801091   17465 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0318 04:40:22.812106   17465 logs.go:276] 2 containers: [e8b1c1b2cd19 d579e22e148e]
	I0318 04:40:22.812177   17465 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0318 04:40:22.823784   17465 logs.go:276] 1 containers: [372e2774400e]
	I0318 04:40:22.823858   17465 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0318 04:40:22.836262   17465 logs.go:276] 2 containers: [0be8d71b93fd fb25a67bf414]
	I0318 04:40:22.836334   17465 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0318 04:40:22.848197   17465 logs.go:276] 0 containers: []
	W0318 04:40:22.848208   17465 logs.go:278] No container was found matching "kindnet"
	I0318 04:40:22.848265   17465 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0318 04:40:22.859911   17465 logs.go:276] 2 containers: [81ae28fc685b 9e2192628e0f]
	I0318 04:40:22.859929   17465 logs.go:123] Gathering logs for etcd [8b75879fc7bf] ...
	I0318 04:40:22.859934   17465 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8b75879fc7bf"
	I0318 04:40:22.874021   17465 logs.go:123] Gathering logs for kube-scheduler [d579e22e148e] ...
	I0318 04:40:22.874032   17465 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d579e22e148e"
	I0318 04:40:22.891334   17465 logs.go:123] Gathering logs for Docker ...
	I0318 04:40:22.891344   17465 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0318 04:40:22.915017   17465 logs.go:123] Gathering logs for kube-apiserver [32dc048c4476] ...
	I0318 04:40:22.915026   17465 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 32dc048c4476"
	I0318 04:40:22.928917   17465 logs.go:123] Gathering logs for etcd [8c963566b500] ...
	I0318 04:40:22.928927   17465 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8c963566b500"
	I0318 04:40:22.943336   17465 logs.go:123] Gathering logs for kube-controller-manager [0be8d71b93fd] ...
	I0318 04:40:22.943346   17465 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0be8d71b93fd"
	I0318 04:40:22.961829   17465 logs.go:123] Gathering logs for coredns [b66f543335d1] ...
	I0318 04:40:22.961839   17465 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b66f543335d1"
	I0318 04:40:22.978740   17465 logs.go:123] Gathering logs for kube-proxy [372e2774400e] ...
	I0318 04:40:22.978750   17465 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 372e2774400e"
	I0318 04:40:22.990429   17465 logs.go:123] Gathering logs for describe nodes ...
	I0318 04:40:22.990441   17465 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0318 04:40:23.025207   17465 logs.go:123] Gathering logs for kube-controller-manager [fb25a67bf414] ...
	I0318 04:40:23.025217   17465 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fb25a67bf414"
	I0318 04:40:23.036736   17465 logs.go:123] Gathering logs for kube-apiserver [7dacaac7f891] ...
	I0318 04:40:23.036748   17465 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7dacaac7f891"
	I0318 04:40:23.075334   17465 logs.go:123] Gathering logs for kube-scheduler [e8b1c1b2cd19] ...
	I0318 04:40:23.075345   17465 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e8b1c1b2cd19"
	I0318 04:40:23.087399   17465 logs.go:123] Gathering logs for storage-provisioner [81ae28fc685b] ...
	I0318 04:40:23.087411   17465 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 81ae28fc685b"
	I0318 04:40:23.103068   17465 logs.go:123] Gathering logs for storage-provisioner [9e2192628e0f] ...
	I0318 04:40:23.103078   17465 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9e2192628e0f"
	I0318 04:40:23.115087   17465 logs.go:123] Gathering logs for container status ...
	I0318 04:40:23.115099   17465 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 04:40:23.126910   17465 logs.go:123] Gathering logs for kubelet ...
	I0318 04:40:23.126923   17465 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 04:40:23.167041   17465 logs.go:123] Gathering logs for dmesg ...
	I0318 04:40:23.167055   17465 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 04:40:25.673367   17465 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0318 04:40:27.550715   17322 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0318 04:40:30.675548   17465 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0318 04:40:30.675807   17465 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0318 04:40:30.701153   17465 logs.go:276] 2 containers: [32dc048c4476 7dacaac7f891]
	I0318 04:40:30.701285   17465 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0318 04:40:30.718522   17465 logs.go:276] 2 containers: [8c963566b500 8b75879fc7bf]
	I0318 04:40:30.718612   17465 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0318 04:40:30.732450   17465 logs.go:276] 1 containers: [b66f543335d1]
	I0318 04:40:30.732531   17465 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0318 04:40:30.745457   17465 logs.go:276] 2 containers: [e8b1c1b2cd19 d579e22e148e]
	I0318 04:40:30.745529   17465 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0318 04:40:30.764110   17465 logs.go:276] 1 containers: [372e2774400e]
	I0318 04:40:30.764183   17465 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0318 04:40:30.775963   17465 logs.go:276] 2 containers: [0be8d71b93fd fb25a67bf414]
	I0318 04:40:30.776034   17465 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0318 04:40:30.790856   17465 logs.go:276] 0 containers: []
	W0318 04:40:30.790868   17465 logs.go:278] No container was found matching "kindnet"
	I0318 04:40:30.790926   17465 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0318 04:40:30.810237   17465 logs.go:276] 2 containers: [81ae28fc685b 9e2192628e0f]
	I0318 04:40:30.810255   17465 logs.go:123] Gathering logs for kube-apiserver [32dc048c4476] ...
	I0318 04:40:30.810261   17465 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 32dc048c4476"
	I0318 04:40:30.825297   17465 logs.go:123] Gathering logs for coredns [b66f543335d1] ...
	I0318 04:40:30.825307   17465 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b66f543335d1"
	I0318 04:40:30.836966   17465 logs.go:123] Gathering logs for kube-scheduler [e8b1c1b2cd19] ...
	I0318 04:40:30.836978   17465 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e8b1c1b2cd19"
	I0318 04:40:30.849043   17465 logs.go:123] Gathering logs for storage-provisioner [81ae28fc685b] ...
	I0318 04:40:30.849054   17465 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 81ae28fc685b"
	I0318 04:40:30.860738   17465 logs.go:123] Gathering logs for Docker ...
	I0318 04:40:30.860748   17465 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0318 04:40:30.884001   17465 logs.go:123] Gathering logs for container status ...
	I0318 04:40:30.884015   17465 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 04:40:30.896430   17465 logs.go:123] Gathering logs for dmesg ...
	I0318 04:40:30.896446   17465 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 04:40:30.900824   17465 logs.go:123] Gathering logs for kube-scheduler [d579e22e148e] ...
	I0318 04:40:30.900833   17465 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d579e22e148e"
	I0318 04:40:30.916112   17465 logs.go:123] Gathering logs for kube-proxy [372e2774400e] ...
	I0318 04:40:30.916126   17465 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 372e2774400e"
	I0318 04:40:30.928832   17465 logs.go:123] Gathering logs for kube-controller-manager [0be8d71b93fd] ...
	I0318 04:40:30.928843   17465 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0be8d71b93fd"
	I0318 04:40:30.946917   17465 logs.go:123] Gathering logs for kube-controller-manager [fb25a67bf414] ...
	I0318 04:40:30.946927   17465 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fb25a67bf414"
	I0318 04:40:30.959067   17465 logs.go:123] Gathering logs for kubelet ...
	I0318 04:40:30.959077   17465 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 04:40:30.997334   17465 logs.go:123] Gathering logs for describe nodes ...
	I0318 04:40:30.997347   17465 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0318 04:40:31.036446   17465 logs.go:123] Gathering logs for kube-apiserver [7dacaac7f891] ...
	I0318 04:40:31.036460   17465 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7dacaac7f891"
	I0318 04:40:31.076596   17465 logs.go:123] Gathering logs for etcd [8c963566b500] ...
	I0318 04:40:31.076610   17465 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8c963566b500"
	I0318 04:40:31.094227   17465 logs.go:123] Gathering logs for etcd [8b75879fc7bf] ...
	I0318 04:40:31.094241   17465 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8b75879fc7bf"
	I0318 04:40:31.108865   17465 logs.go:123] Gathering logs for storage-provisioner [9e2192628e0f] ...
	I0318 04:40:31.108876   17465 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9e2192628e0f"
	I0318 04:40:33.622735   17465 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0318 04:40:32.553217   17322 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0318 04:40:32.553524   17322 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0318 04:40:32.589462   17322 logs.go:276] 1 containers: [d454e6154049]
	I0318 04:40:32.589594   17322 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0318 04:40:32.606174   17322 logs.go:276] 1 containers: [8046e42578d2]
	I0318 04:40:32.606255   17322 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0318 04:40:32.619397   17322 logs.go:276] 4 containers: [0086537aa016 0a040eebb706 367d0316359f 3a24458b86a4]
	I0318 04:40:32.619479   17322 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0318 04:40:32.630191   17322 logs.go:276] 1 containers: [894b6a0a0702]
	I0318 04:40:32.630262   17322 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0318 04:40:32.640689   17322 logs.go:276] 1 containers: [04d6cdf60161]
	I0318 04:40:32.640756   17322 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0318 04:40:32.650713   17322 logs.go:276] 1 containers: [22a920f51952]
	I0318 04:40:32.650782   17322 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0318 04:40:32.661815   17322 logs.go:276] 0 containers: []
	W0318 04:40:32.661828   17322 logs.go:278] No container was found matching "kindnet"
	I0318 04:40:32.661886   17322 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0318 04:40:32.673249   17322 logs.go:276] 1 containers: [5bfb08f2c96a]
	I0318 04:40:32.673267   17322 logs.go:123] Gathering logs for etcd [8046e42578d2] ...
	I0318 04:40:32.673272   17322 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8046e42578d2"
	I0318 04:40:32.687168   17322 logs.go:123] Gathering logs for coredns [0086537aa016] ...
	I0318 04:40:32.687181   17322 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0086537aa016"
	I0318 04:40:32.701311   17322 logs.go:123] Gathering logs for coredns [0a040eebb706] ...
	I0318 04:40:32.701323   17322 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0a040eebb706"
	I0318 04:40:32.712254   17322 logs.go:123] Gathering logs for coredns [3a24458b86a4] ...
	I0318 04:40:32.712264   17322 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3a24458b86a4"
	I0318 04:40:32.726593   17322 logs.go:123] Gathering logs for Docker ...
	I0318 04:40:32.726604   17322 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0318 04:40:32.752244   17322 logs.go:123] Gathering logs for kubelet ...
	I0318 04:40:32.752253   17322 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0318 04:40:32.771893   17322 logs.go:138] Found kubelet problem: Mar 18 11:34:35 running-upgrade-738000 kubelet[3738]: W0318 11:34:35.864044    3738 reflector.go:324] object-"kube-system"/"kube-proxy": failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:running-upgrade-738000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-738000' and this object
	W0318 04:40:32.771992   17322 logs.go:138] Found kubelet problem: Mar 18 11:34:35 running-upgrade-738000 kubelet[3738]: E0318 11:34:35.864102    3738 reflector.go:138] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:running-upgrade-738000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-738000' and this object
	I0318 04:40:32.789712   17322 logs.go:123] Gathering logs for kube-apiserver [d454e6154049] ...
	I0318 04:40:32.789731   17322 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d454e6154049"
	I0318 04:40:32.804336   17322 logs.go:123] Gathering logs for kube-controller-manager [22a920f51952] ...
	I0318 04:40:32.804347   17322 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 22a920f51952"
	I0318 04:40:32.822545   17322 logs.go:123] Gathering logs for storage-provisioner [5bfb08f2c96a] ...
	I0318 04:40:32.822556   17322 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5bfb08f2c96a"
	I0318 04:40:32.834058   17322 logs.go:123] Gathering logs for dmesg ...
	I0318 04:40:32.834069   17322 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 04:40:32.839222   17322 logs.go:123] Gathering logs for kube-scheduler [894b6a0a0702] ...
	I0318 04:40:32.839229   17322 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 894b6a0a0702"
	I0318 04:40:32.853547   17322 logs.go:123] Gathering logs for describe nodes ...
	I0318 04:40:32.853556   17322 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0318 04:40:32.891455   17322 logs.go:123] Gathering logs for coredns [367d0316359f] ...
	I0318 04:40:32.891469   17322 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 367d0316359f"
	I0318 04:40:32.903348   17322 logs.go:123] Gathering logs for kube-proxy [04d6cdf60161] ...
	I0318 04:40:32.903359   17322 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 04d6cdf60161"
	I0318 04:40:32.915413   17322 logs.go:123] Gathering logs for container status ...
	I0318 04:40:32.915424   17322 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 04:40:32.926837   17322 out.go:304] Setting ErrFile to fd 2...
	I0318 04:40:32.926847   17322 out.go:338] TERM=,COLORTERM=, which probably does not support color
	W0318 04:40:32.926873   17322 out.go:239] X Problems detected in kubelet:
	W0318 04:40:32.926877   17322 out.go:239]   Mar 18 11:34:35 running-upgrade-738000 kubelet[3738]: W0318 11:34:35.864044    3738 reflector.go:324] object-"kube-system"/"kube-proxy": failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:running-upgrade-738000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-738000' and this object
	W0318 04:40:32.926893   17322 out.go:239]   Mar 18 11:34:35 running-upgrade-738000 kubelet[3738]: E0318 11:34:35.864102    3738 reflector.go:138] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:running-upgrade-738000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-738000' and this object
	I0318 04:40:32.926900   17322 out.go:304] Setting ErrFile to fd 2...
	I0318 04:40:32.926906   17322 out.go:338] TERM=,COLORTERM=, which probably does not support color
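	[editor's note] Between healthz probes, each cycle lists containers by name filter ("docker ps -a --filter=name=k8s_... --format={{.ID}}") and tails the last 400 lines of each match ("docker logs --tail 400 <id>"). A hypothetical local sketch of that pattern in Go follows (in the report these commands actually run over SSH inside the guest; this assumes a docker CLI on the current machine):

	```go
	// Hypothetical sketch of the container-enumeration and log-tailing pattern above.
	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	// containerIDs lists all container IDs whose name matches the filter, e.g. "k8s_kube-apiserver".
	func containerIDs(nameFilter string) ([]string, error) {
		out, err := exec.Command("docker", "ps", "-a",
			"--filter", "name="+nameFilter,
			"--format", "{{.ID}}").Output()
		if err != nil {
			return nil, err
		}
		return strings.Fields(string(out)), nil
	}

	// tailLogs prints the last n lines of a container's logs (stdout and stderr combined).
	func tailLogs(id string, n int) error {
		out, err := exec.Command("docker", "logs", "--tail", fmt.Sprint(n), id).CombinedOutput()
		if err != nil {
			return err
		}
		fmt.Printf("=== %s ===\n%s", id, out)
		return nil
	}

	func main() {
		for _, name := range []string{"k8s_kube-apiserver", "k8s_etcd", "k8s_coredns"} {
			ids, err := containerIDs(name)
			if err != nil {
				fmt.Println("listing", name, "failed:", err)
				continue
			}
			fmt.Printf("%d containers for %q: %v\n", len(ids), name, ids)
			for _, id := range ids {
				_ = tailLogs(id, 400) // mirror the --tail 400 used in the report
			}
		}
	}
	```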
	I0318 04:40:38.624894   17465 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0318 04:40:38.625144   17465 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0318 04:40:38.649522   17465 logs.go:276] 2 containers: [32dc048c4476 7dacaac7f891]
	I0318 04:40:38.649647   17465 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0318 04:40:38.664797   17465 logs.go:276] 2 containers: [8c963566b500 8b75879fc7bf]
	I0318 04:40:38.664872   17465 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0318 04:40:38.681419   17465 logs.go:276] 1 containers: [b66f543335d1]
	I0318 04:40:38.681493   17465 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0318 04:40:38.692238   17465 logs.go:276] 2 containers: [e8b1c1b2cd19 d579e22e148e]
	I0318 04:40:38.692313   17465 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0318 04:40:38.703267   17465 logs.go:276] 1 containers: [372e2774400e]
	I0318 04:40:38.703341   17465 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0318 04:40:38.718466   17465 logs.go:276] 2 containers: [0be8d71b93fd fb25a67bf414]
	I0318 04:40:38.718536   17465 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0318 04:40:38.728822   17465 logs.go:276] 0 containers: []
	W0318 04:40:38.728834   17465 logs.go:278] No container was found matching "kindnet"
	I0318 04:40:38.728897   17465 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0318 04:40:38.738979   17465 logs.go:276] 2 containers: [81ae28fc685b 9e2192628e0f]
	I0318 04:40:38.738998   17465 logs.go:123] Gathering logs for storage-provisioner [9e2192628e0f] ...
	I0318 04:40:38.739007   17465 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9e2192628e0f"
	I0318 04:40:38.753834   17465 logs.go:123] Gathering logs for container status ...
	I0318 04:40:38.753846   17465 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 04:40:38.766250   17465 logs.go:123] Gathering logs for kubelet ...
	I0318 04:40:38.766263   17465 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 04:40:38.805460   17465 logs.go:123] Gathering logs for describe nodes ...
	I0318 04:40:38.805473   17465 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0318 04:40:38.844298   17465 logs.go:123] Gathering logs for etcd [8b75879fc7bf] ...
	I0318 04:40:38.844310   17465 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8b75879fc7bf"
	I0318 04:40:38.863524   17465 logs.go:123] Gathering logs for kube-scheduler [e8b1c1b2cd19] ...
	I0318 04:40:38.863535   17465 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e8b1c1b2cd19"
	I0318 04:40:38.875483   17465 logs.go:123] Gathering logs for storage-provisioner [81ae28fc685b] ...
	I0318 04:40:38.875494   17465 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 81ae28fc685b"
	I0318 04:40:38.890477   17465 logs.go:123] Gathering logs for etcd [8c963566b500] ...
	I0318 04:40:38.890489   17465 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8c963566b500"
	I0318 04:40:38.905362   17465 logs.go:123] Gathering logs for kube-apiserver [7dacaac7f891] ...
	I0318 04:40:38.905376   17465 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7dacaac7f891"
	I0318 04:40:38.944056   17465 logs.go:123] Gathering logs for coredns [b66f543335d1] ...
	I0318 04:40:38.944072   17465 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b66f543335d1"
	I0318 04:40:38.960434   17465 logs.go:123] Gathering logs for kube-scheduler [d579e22e148e] ...
	I0318 04:40:38.960446   17465 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d579e22e148e"
	I0318 04:40:38.975899   17465 logs.go:123] Gathering logs for Docker ...
	I0318 04:40:38.975912   17465 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0318 04:40:39.000035   17465 logs.go:123] Gathering logs for dmesg ...
	I0318 04:40:39.000044   17465 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 04:40:39.004293   17465 logs.go:123] Gathering logs for kube-apiserver [32dc048c4476] ...
	I0318 04:40:39.004302   17465 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 32dc048c4476"
	I0318 04:40:39.018149   17465 logs.go:123] Gathering logs for kube-proxy [372e2774400e] ...
	I0318 04:40:39.018160   17465 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 372e2774400e"
	I0318 04:40:39.032157   17465 logs.go:123] Gathering logs for kube-controller-manager [0be8d71b93fd] ...
	I0318 04:40:39.032171   17465 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0be8d71b93fd"
	I0318 04:40:39.054366   17465 logs.go:123] Gathering logs for kube-controller-manager [fb25a67bf414] ...
	I0318 04:40:39.054377   17465 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fb25a67bf414"
	I0318 04:40:41.568209   17465 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0318 04:40:42.930709   17322 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0318 04:40:46.570541   17465 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0318 04:40:46.570919   17465 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0318 04:40:46.599767   17465 logs.go:276] 2 containers: [32dc048c4476 7dacaac7f891]
	I0318 04:40:46.599896   17465 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0318 04:40:46.618629   17465 logs.go:276] 2 containers: [8c963566b500 8b75879fc7bf]
	I0318 04:40:46.618717   17465 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0318 04:40:46.631609   17465 logs.go:276] 1 containers: [b66f543335d1]
	I0318 04:40:46.631680   17465 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0318 04:40:46.643294   17465 logs.go:276] 2 containers: [e8b1c1b2cd19 d579e22e148e]
	I0318 04:40:46.643359   17465 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0318 04:40:46.653452   17465 logs.go:276] 1 containers: [372e2774400e]
	I0318 04:40:46.653515   17465 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0318 04:40:46.664514   17465 logs.go:276] 2 containers: [0be8d71b93fd fb25a67bf414]
	I0318 04:40:46.664573   17465 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0318 04:40:46.675220   17465 logs.go:276] 0 containers: []
	W0318 04:40:46.675232   17465 logs.go:278] No container was found matching "kindnet"
	I0318 04:40:46.675287   17465 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0318 04:40:46.685779   17465 logs.go:276] 2 containers: [81ae28fc685b 9e2192628e0f]
	I0318 04:40:46.685796   17465 logs.go:123] Gathering logs for kube-apiserver [32dc048c4476] ...
	I0318 04:40:46.685801   17465 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 32dc048c4476"
	I0318 04:40:46.699763   17465 logs.go:123] Gathering logs for etcd [8c963566b500] ...
	I0318 04:40:46.699774   17465 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8c963566b500"
	I0318 04:40:46.722872   17465 logs.go:123] Gathering logs for etcd [8b75879fc7bf] ...
	I0318 04:40:46.722886   17465 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8b75879fc7bf"
	I0318 04:40:46.737354   17465 logs.go:123] Gathering logs for kube-scheduler [e8b1c1b2cd19] ...
	I0318 04:40:46.737366   17465 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e8b1c1b2cd19"
	I0318 04:40:46.749175   17465 logs.go:123] Gathering logs for Docker ...
	I0318 04:40:46.749184   17465 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0318 04:40:46.772119   17465 logs.go:123] Gathering logs for kube-apiserver [7dacaac7f891] ...
	I0318 04:40:46.772127   17465 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7dacaac7f891"
	I0318 04:40:46.809008   17465 logs.go:123] Gathering logs for coredns [b66f543335d1] ...
	I0318 04:40:46.809024   17465 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b66f543335d1"
	I0318 04:40:46.821692   17465 logs.go:123] Gathering logs for storage-provisioner [81ae28fc685b] ...
	I0318 04:40:46.821703   17465 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 81ae28fc685b"
	I0318 04:40:46.833365   17465 logs.go:123] Gathering logs for dmesg ...
	I0318 04:40:46.833376   17465 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 04:40:46.838174   17465 logs.go:123] Gathering logs for describe nodes ...
	I0318 04:40:46.838181   17465 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0318 04:40:46.875300   17465 logs.go:123] Gathering logs for kube-controller-manager [fb25a67bf414] ...
	I0318 04:40:46.875311   17465 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fb25a67bf414"
	I0318 04:40:46.887463   17465 logs.go:123] Gathering logs for kubelet ...
	I0318 04:40:46.887473   17465 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 04:40:46.926736   17465 logs.go:123] Gathering logs for kube-scheduler [d579e22e148e] ...
	I0318 04:40:46.926746   17465 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d579e22e148e"
	I0318 04:40:46.942068   17465 logs.go:123] Gathering logs for kube-proxy [372e2774400e] ...
	I0318 04:40:46.942080   17465 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 372e2774400e"
	I0318 04:40:46.954310   17465 logs.go:123] Gathering logs for kube-controller-manager [0be8d71b93fd] ...
	I0318 04:40:46.954320   17465 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0be8d71b93fd"
	I0318 04:40:46.971722   17465 logs.go:123] Gathering logs for storage-provisioner [9e2192628e0f] ...
	I0318 04:40:46.971735   17465 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9e2192628e0f"
	I0318 04:40:46.982902   17465 logs.go:123] Gathering logs for container status ...
	I0318 04:40:46.982917   17465 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 04:40:49.496071   17465 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0318 04:40:47.932970   17322 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0318 04:40:47.933324   17322 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0318 04:40:47.966677   17322 logs.go:276] 1 containers: [d454e6154049]
	I0318 04:40:47.966806   17322 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0318 04:40:47.987532   17322 logs.go:276] 1 containers: [8046e42578d2]
	I0318 04:40:47.987633   17322 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0318 04:40:48.002381   17322 logs.go:276] 4 containers: [0086537aa016 0a040eebb706 367d0316359f 3a24458b86a4]
	I0318 04:40:48.002463   17322 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0318 04:40:48.014793   17322 logs.go:276] 1 containers: [894b6a0a0702]
	I0318 04:40:48.014866   17322 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0318 04:40:48.031467   17322 logs.go:276] 1 containers: [04d6cdf60161]
	I0318 04:40:48.031536   17322 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0318 04:40:48.042748   17322 logs.go:276] 1 containers: [22a920f51952]
	I0318 04:40:48.042825   17322 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0318 04:40:48.052822   17322 logs.go:276] 0 containers: []
	W0318 04:40:48.052833   17322 logs.go:278] No container was found matching "kindnet"
	I0318 04:40:48.052894   17322 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0318 04:40:48.063286   17322 logs.go:276] 1 containers: [5bfb08f2c96a]
	I0318 04:40:48.063309   17322 logs.go:123] Gathering logs for coredns [0086537aa016] ...
	I0318 04:40:48.063315   17322 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0086537aa016"
	I0318 04:40:48.075406   17322 logs.go:123] Gathering logs for coredns [0a040eebb706] ...
	I0318 04:40:48.075416   17322 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0a040eebb706"
	I0318 04:40:48.086782   17322 logs.go:123] Gathering logs for coredns [367d0316359f] ...
	I0318 04:40:48.086791   17322 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 367d0316359f"
	I0318 04:40:48.099958   17322 logs.go:123] Gathering logs for kube-controller-manager [22a920f51952] ...
	I0318 04:40:48.099971   17322 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 22a920f51952"
	I0318 04:40:48.118191   17322 logs.go:123] Gathering logs for storage-provisioner [5bfb08f2c96a] ...
	I0318 04:40:48.118201   17322 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5bfb08f2c96a"
	I0318 04:40:48.130217   17322 logs.go:123] Gathering logs for etcd [8046e42578d2] ...
	I0318 04:40:48.130226   17322 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8046e42578d2"
	I0318 04:40:48.145260   17322 logs.go:123] Gathering logs for Docker ...
	I0318 04:40:48.145269   17322 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0318 04:40:48.169652   17322 logs.go:123] Gathering logs for container status ...
	I0318 04:40:48.169659   17322 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 04:40:48.181721   17322 logs.go:123] Gathering logs for describe nodes ...
	I0318 04:40:48.181731   17322 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0318 04:40:48.217836   17322 logs.go:123] Gathering logs for kube-apiserver [d454e6154049] ...
	I0318 04:40:48.217847   17322 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d454e6154049"
	I0318 04:40:48.232343   17322 logs.go:123] Gathering logs for kube-scheduler [894b6a0a0702] ...
	I0318 04:40:48.232352   17322 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 894b6a0a0702"
	I0318 04:40:48.248530   17322 logs.go:123] Gathering logs for kube-proxy [04d6cdf60161] ...
	I0318 04:40:48.248538   17322 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 04d6cdf60161"
	I0318 04:40:48.261552   17322 logs.go:123] Gathering logs for kubelet ...
	I0318 04:40:48.261563   17322 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0318 04:40:48.279133   17322 logs.go:138] Found kubelet problem: Mar 18 11:34:35 running-upgrade-738000 kubelet[3738]: W0318 11:34:35.864044    3738 reflector.go:324] object-"kube-system"/"kube-proxy": failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:running-upgrade-738000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-738000' and this object
	W0318 04:40:48.279226   17322 logs.go:138] Found kubelet problem: Mar 18 11:34:35 running-upgrade-738000 kubelet[3738]: E0318 11:34:35.864102    3738 reflector.go:138] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:running-upgrade-738000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-738000' and this object
	I0318 04:40:48.296366   17322 logs.go:123] Gathering logs for dmesg ...
	I0318 04:40:48.296373   17322 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 04:40:48.301115   17322 logs.go:123] Gathering logs for coredns [3a24458b86a4] ...
	I0318 04:40:48.301122   17322 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3a24458b86a4"
	I0318 04:40:48.313389   17322 out.go:304] Setting ErrFile to fd 2...
	I0318 04:40:48.313400   17322 out.go:338] TERM=,COLORTERM=, which probably does not support color
	W0318 04:40:48.313425   17322 out.go:239] X Problems detected in kubelet:
	W0318 04:40:48.313430   17322 out.go:239]   Mar 18 11:34:35 running-upgrade-738000 kubelet[3738]: W0318 11:34:35.864044    3738 reflector.go:324] object-"kube-system"/"kube-proxy": failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:running-upgrade-738000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-738000' and this object
	W0318 04:40:48.313434   17322 out.go:239]   Mar 18 11:34:35 running-upgrade-738000 kubelet[3738]: E0318 11:34:35.864102    3738 reflector.go:138] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:running-upgrade-738000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-738000' and this object
	I0318 04:40:48.313507   17322 out.go:304] Setting ErrFile to fd 2...
	I0318 04:40:48.313524   17322 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0318 04:40:54.498197   17465 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0318 04:40:54.498316   17465 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0318 04:40:54.509971   17465 logs.go:276] 2 containers: [32dc048c4476 7dacaac7f891]
	I0318 04:40:54.510042   17465 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0318 04:40:54.520770   17465 logs.go:276] 2 containers: [8c963566b500 8b75879fc7bf]
	I0318 04:40:54.520848   17465 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0318 04:40:54.532602   17465 logs.go:276] 1 containers: [b66f543335d1]
	I0318 04:40:54.532679   17465 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0318 04:40:54.543156   17465 logs.go:276] 2 containers: [e8b1c1b2cd19 d579e22e148e]
	I0318 04:40:54.543232   17465 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0318 04:40:54.556063   17465 logs.go:276] 1 containers: [372e2774400e]
	I0318 04:40:54.556138   17465 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0318 04:40:54.566546   17465 logs.go:276] 2 containers: [0be8d71b93fd fb25a67bf414]
	I0318 04:40:54.566618   17465 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0318 04:40:54.576577   17465 logs.go:276] 0 containers: []
	W0318 04:40:54.576588   17465 logs.go:278] No container was found matching "kindnet"
	I0318 04:40:54.576644   17465 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0318 04:40:54.587011   17465 logs.go:276] 2 containers: [81ae28fc685b 9e2192628e0f]
	I0318 04:40:54.587028   17465 logs.go:123] Gathering logs for kube-apiserver [32dc048c4476] ...
	I0318 04:40:54.587034   17465 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 32dc048c4476"
	I0318 04:40:54.605749   17465 logs.go:123] Gathering logs for storage-provisioner [9e2192628e0f] ...
	I0318 04:40:54.605760   17465 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9e2192628e0f"
	I0318 04:40:54.617359   17465 logs.go:123] Gathering logs for container status ...
	I0318 04:40:54.617370   17465 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 04:40:54.629100   17465 logs.go:123] Gathering logs for etcd [8b75879fc7bf] ...
	I0318 04:40:54.629114   17465 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8b75879fc7bf"
	I0318 04:40:54.644236   17465 logs.go:123] Gathering logs for coredns [b66f543335d1] ...
	I0318 04:40:54.644247   17465 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b66f543335d1"
	I0318 04:40:54.655585   17465 logs.go:123] Gathering logs for kube-controller-manager [0be8d71b93fd] ...
	I0318 04:40:54.655595   17465 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0be8d71b93fd"
	I0318 04:40:54.672368   17465 logs.go:123] Gathering logs for kube-controller-manager [fb25a67bf414] ...
	I0318 04:40:54.672379   17465 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fb25a67bf414"
	I0318 04:40:54.684983   17465 logs.go:123] Gathering logs for storage-provisioner [81ae28fc685b] ...
	I0318 04:40:54.684994   17465 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 81ae28fc685b"
	I0318 04:40:54.696282   17465 logs.go:123] Gathering logs for etcd [8c963566b500] ...
	I0318 04:40:54.696293   17465 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8c963566b500"
	I0318 04:40:54.709816   17465 logs.go:123] Gathering logs for kube-scheduler [d579e22e148e] ...
	I0318 04:40:54.710806   17465 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d579e22e148e"
	I0318 04:40:54.725414   17465 logs.go:123] Gathering logs for kube-proxy [372e2774400e] ...
	I0318 04:40:54.725425   17465 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 372e2774400e"
	I0318 04:40:54.740509   17465 logs.go:123] Gathering logs for Docker ...
	I0318 04:40:54.740519   17465 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0318 04:40:54.763591   17465 logs.go:123] Gathering logs for kubelet ...
	I0318 04:40:54.763602   17465 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 04:40:54.800844   17465 logs.go:123] Gathering logs for dmesg ...
	I0318 04:40:54.800854   17465 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 04:40:54.804872   17465 logs.go:123] Gathering logs for describe nodes ...
	I0318 04:40:54.804879   17465 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0318 04:40:54.841604   17465 logs.go:123] Gathering logs for kube-apiserver [7dacaac7f891] ...
	I0318 04:40:54.841618   17465 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7dacaac7f891"
	I0318 04:40:54.880134   17465 logs.go:123] Gathering logs for kube-scheduler [e8b1c1b2cd19] ...
	I0318 04:40:54.880147   17465 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e8b1c1b2cd19"
	I0318 04:40:57.393947   17465 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0318 04:40:58.317365   17322 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0318 04:41:02.396132   17465 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0318 04:41:02.396342   17465 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0318 04:41:02.414357   17465 logs.go:276] 2 containers: [32dc048c4476 7dacaac7f891]
	I0318 04:41:02.414452   17465 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0318 04:41:02.427450   17465 logs.go:276] 2 containers: [8c963566b500 8b75879fc7bf]
	I0318 04:41:02.427527   17465 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0318 04:41:02.439144   17465 logs.go:276] 1 containers: [b66f543335d1]
	I0318 04:41:02.439207   17465 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0318 04:41:02.450273   17465 logs.go:276] 2 containers: [e8b1c1b2cd19 d579e22e148e]
	I0318 04:41:02.450339   17465 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0318 04:41:02.461212   17465 logs.go:276] 1 containers: [372e2774400e]
	I0318 04:41:02.461284   17465 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0318 04:41:02.471978   17465 logs.go:276] 2 containers: [0be8d71b93fd fb25a67bf414]
	I0318 04:41:02.472049   17465 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0318 04:41:02.482410   17465 logs.go:276] 0 containers: []
	W0318 04:41:02.482422   17465 logs.go:278] No container was found matching "kindnet"
	I0318 04:41:02.482483   17465 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0318 04:41:02.492711   17465 logs.go:276] 2 containers: [81ae28fc685b 9e2192628e0f]
	I0318 04:41:02.492733   17465 logs.go:123] Gathering logs for coredns [b66f543335d1] ...
	I0318 04:41:02.492740   17465 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b66f543335d1"
	I0318 04:41:02.503802   17465 logs.go:123] Gathering logs for storage-provisioner [81ae28fc685b] ...
	I0318 04:41:02.503816   17465 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 81ae28fc685b"
	I0318 04:41:02.515582   17465 logs.go:123] Gathering logs for Docker ...
	I0318 04:41:02.515593   17465 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0318 04:41:02.538366   17465 logs.go:123] Gathering logs for dmesg ...
	I0318 04:41:02.538377   17465 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 04:41:02.542409   17465 logs.go:123] Gathering logs for kube-apiserver [7dacaac7f891] ...
	I0318 04:41:02.542417   17465 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7dacaac7f891"
	I0318 04:41:02.587553   17465 logs.go:123] Gathering logs for etcd [8b75879fc7bf] ...
	I0318 04:41:02.587567   17465 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8b75879fc7bf"
	I0318 04:41:02.602040   17465 logs.go:123] Gathering logs for container status ...
	I0318 04:41:02.602051   17465 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 04:41:02.614077   17465 logs.go:123] Gathering logs for kube-scheduler [d579e22e148e] ...
	I0318 04:41:02.614089   17465 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d579e22e148e"
	I0318 04:41:02.628943   17465 logs.go:123] Gathering logs for kube-proxy [372e2774400e] ...
	I0318 04:41:02.628954   17465 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 372e2774400e"
	I0318 04:41:02.640815   17465 logs.go:123] Gathering logs for kubelet ...
	I0318 04:41:02.640826   17465 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 04:41:02.679088   17465 logs.go:123] Gathering logs for describe nodes ...
	I0318 04:41:02.679096   17465 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0318 04:41:02.712528   17465 logs.go:123] Gathering logs for kube-apiserver [32dc048c4476] ...
	I0318 04:41:02.712540   17465 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 32dc048c4476"
	I0318 04:41:02.726406   17465 logs.go:123] Gathering logs for etcd [8c963566b500] ...
	I0318 04:41:02.726420   17465 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8c963566b500"
	I0318 04:41:02.746463   17465 logs.go:123] Gathering logs for kube-scheduler [e8b1c1b2cd19] ...
	I0318 04:41:02.746474   17465 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e8b1c1b2cd19"
	I0318 04:41:02.758187   17465 logs.go:123] Gathering logs for kube-controller-manager [0be8d71b93fd] ...
	I0318 04:41:02.758199   17465 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0be8d71b93fd"
	I0318 04:41:02.776181   17465 logs.go:123] Gathering logs for kube-controller-manager [fb25a67bf414] ...
	I0318 04:41:02.776192   17465 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fb25a67bf414"
	I0318 04:41:02.788221   17465 logs.go:123] Gathering logs for storage-provisioner [9e2192628e0f] ...
	I0318 04:41:02.788232   17465 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9e2192628e0f"
	I0318 04:41:03.319522   17322 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0318 04:41:03.319711   17322 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0318 04:41:03.333138   17322 logs.go:276] 1 containers: [d454e6154049]
	I0318 04:41:03.333212   17322 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0318 04:41:03.344395   17322 logs.go:276] 1 containers: [8046e42578d2]
	I0318 04:41:03.344475   17322 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0318 04:41:03.355500   17322 logs.go:276] 4 containers: [0086537aa016 0a040eebb706 367d0316359f 3a24458b86a4]
	I0318 04:41:03.355575   17322 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0318 04:41:03.365951   17322 logs.go:276] 1 containers: [894b6a0a0702]
	I0318 04:41:03.366022   17322 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0318 04:41:03.380045   17322 logs.go:276] 1 containers: [04d6cdf60161]
	I0318 04:41:03.380114   17322 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0318 04:41:03.391824   17322 logs.go:276] 1 containers: [22a920f51952]
	I0318 04:41:03.391906   17322 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0318 04:41:03.401941   17322 logs.go:276] 0 containers: []
	W0318 04:41:03.401952   17322 logs.go:278] No container was found matching "kindnet"
	I0318 04:41:03.402010   17322 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0318 04:41:03.412937   17322 logs.go:276] 1 containers: [5bfb08f2c96a]
	I0318 04:41:03.412952   17322 logs.go:123] Gathering logs for Docker ...
	I0318 04:41:03.412957   17322 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0318 04:41:03.436911   17322 logs.go:123] Gathering logs for dmesg ...
	I0318 04:41:03.436919   17322 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 04:41:03.441094   17322 logs.go:123] Gathering logs for describe nodes ...
	I0318 04:41:03.441104   17322 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0318 04:41:03.477829   17322 logs.go:123] Gathering logs for kube-apiserver [d454e6154049] ...
	I0318 04:41:03.477841   17322 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d454e6154049"
	I0318 04:41:03.494518   17322 logs.go:123] Gathering logs for etcd [8046e42578d2] ...
	I0318 04:41:03.494531   17322 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8046e42578d2"
	I0318 04:41:03.510374   17322 logs.go:123] Gathering logs for coredns [0086537aa016] ...
	I0318 04:41:03.510385   17322 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0086537aa016"
	I0318 04:41:03.521842   17322 logs.go:123] Gathering logs for coredns [0a040eebb706] ...
	I0318 04:41:03.521853   17322 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0a040eebb706"
	I0318 04:41:03.533929   17322 logs.go:123] Gathering logs for coredns [367d0316359f] ...
	I0318 04:41:03.533944   17322 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 367d0316359f"
	I0318 04:41:03.546045   17322 logs.go:123] Gathering logs for kube-proxy [04d6cdf60161] ...
	I0318 04:41:03.546056   17322 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 04d6cdf60161"
	I0318 04:41:03.558666   17322 logs.go:123] Gathering logs for kube-controller-manager [22a920f51952] ...
	I0318 04:41:03.558677   17322 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 22a920f51952"
	I0318 04:41:03.576808   17322 logs.go:123] Gathering logs for container status ...
	I0318 04:41:03.576818   17322 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 04:41:03.588548   17322 logs.go:123] Gathering logs for kubelet ...
	I0318 04:41:03.588559   17322 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0318 04:41:03.606669   17322 logs.go:138] Found kubelet problem: Mar 18 11:34:35 running-upgrade-738000 kubelet[3738]: W0318 11:34:35.864044    3738 reflector.go:324] object-"kube-system"/"kube-proxy": failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:running-upgrade-738000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-738000' and this object
	W0318 04:41:03.606763   17322 logs.go:138] Found kubelet problem: Mar 18 11:34:35 running-upgrade-738000 kubelet[3738]: E0318 11:34:35.864102    3738 reflector.go:138] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:running-upgrade-738000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-738000' and this object
	I0318 04:41:03.624223   17322 logs.go:123] Gathering logs for coredns [3a24458b86a4] ...
	I0318 04:41:03.624230   17322 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3a24458b86a4"
	I0318 04:41:03.635980   17322 logs.go:123] Gathering logs for kube-scheduler [894b6a0a0702] ...
	I0318 04:41:03.635991   17322 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 894b6a0a0702"
	I0318 04:41:03.650622   17322 logs.go:123] Gathering logs for storage-provisioner [5bfb08f2c96a] ...
	I0318 04:41:03.650631   17322 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5bfb08f2c96a"
	I0318 04:41:03.662656   17322 out.go:304] Setting ErrFile to fd 2...
	I0318 04:41:03.662665   17322 out.go:338] TERM=,COLORTERM=, which probably does not support color
	W0318 04:41:03.662693   17322 out.go:239] X Problems detected in kubelet:
	W0318 04:41:03.662697   17322 out.go:239]   Mar 18 11:34:35 running-upgrade-738000 kubelet[3738]: W0318 11:34:35.864044    3738 reflector.go:324] object-"kube-system"/"kube-proxy": failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:running-upgrade-738000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-738000' and this object
	W0318 04:41:03.662702   17322 out.go:239]   Mar 18 11:34:35 running-upgrade-738000 kubelet[3738]: E0318 11:34:35.864102    3738 reflector.go:138] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:running-upgrade-738000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-738000' and this object
	I0318 04:41:03.662706   17322 out.go:304] Setting ErrFile to fd 2...
	I0318 04:41:03.662709   17322 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0318 04:41:05.299600   17465 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0318 04:41:10.301742   17465 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0318 04:41:10.301929   17465 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0318 04:41:10.316717   17465 logs.go:276] 2 containers: [32dc048c4476 7dacaac7f891]
	I0318 04:41:10.316806   17465 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0318 04:41:10.328415   17465 logs.go:276] 2 containers: [8c963566b500 8b75879fc7bf]
	I0318 04:41:10.328486   17465 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0318 04:41:10.338768   17465 logs.go:276] 1 containers: [b66f543335d1]
	I0318 04:41:10.338845   17465 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0318 04:41:10.349345   17465 logs.go:276] 2 containers: [e8b1c1b2cd19 d579e22e148e]
	I0318 04:41:10.349417   17465 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0318 04:41:10.360115   17465 logs.go:276] 1 containers: [372e2774400e]
	I0318 04:41:10.360186   17465 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0318 04:41:10.371023   17465 logs.go:276] 2 containers: [0be8d71b93fd fb25a67bf414]
	I0318 04:41:10.371093   17465 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0318 04:41:10.381970   17465 logs.go:276] 0 containers: []
	W0318 04:41:10.381982   17465 logs.go:278] No container was found matching "kindnet"
	I0318 04:41:10.382049   17465 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0318 04:41:10.392835   17465 logs.go:276] 2 containers: [81ae28fc685b 9e2192628e0f]
	I0318 04:41:10.392852   17465 logs.go:123] Gathering logs for describe nodes ...
	I0318 04:41:10.392857   17465 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0318 04:41:10.428670   17465 logs.go:123] Gathering logs for etcd [8c963566b500] ...
	I0318 04:41:10.428688   17465 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8c963566b500"
	I0318 04:41:10.442940   17465 logs.go:123] Gathering logs for kube-scheduler [e8b1c1b2cd19] ...
	I0318 04:41:10.442950   17465 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e8b1c1b2cd19"
	I0318 04:41:10.454657   17465 logs.go:123] Gathering logs for kube-scheduler [d579e22e148e] ...
	I0318 04:41:10.454670   17465 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d579e22e148e"
	I0318 04:41:10.469266   17465 logs.go:123] Gathering logs for kube-apiserver [7dacaac7f891] ...
	I0318 04:41:10.469275   17465 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7dacaac7f891"
	I0318 04:41:10.507827   17465 logs.go:123] Gathering logs for coredns [b66f543335d1] ...
	I0318 04:41:10.507842   17465 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b66f543335d1"
	I0318 04:41:10.519249   17465 logs.go:123] Gathering logs for etcd [8b75879fc7bf] ...
	I0318 04:41:10.519261   17465 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8b75879fc7bf"
	I0318 04:41:10.533349   17465 logs.go:123] Gathering logs for kube-controller-manager [0be8d71b93fd] ...
	I0318 04:41:10.533359   17465 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0be8d71b93fd"
	I0318 04:41:10.551478   17465 logs.go:123] Gathering logs for kube-controller-manager [fb25a67bf414] ...
	I0318 04:41:10.551489   17465 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fb25a67bf414"
	I0318 04:41:10.565430   17465 logs.go:123] Gathering logs for Docker ...
	I0318 04:41:10.565440   17465 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0318 04:41:10.588274   17465 logs.go:123] Gathering logs for container status ...
	I0318 04:41:10.588282   17465 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 04:41:10.601123   17465 logs.go:123] Gathering logs for kubelet ...
	I0318 04:41:10.601133   17465 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 04:41:10.638491   17465 logs.go:123] Gathering logs for dmesg ...
	I0318 04:41:10.638500   17465 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 04:41:10.642384   17465 logs.go:123] Gathering logs for kube-apiserver [32dc048c4476] ...
	I0318 04:41:10.642392   17465 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 32dc048c4476"
	I0318 04:41:10.655836   17465 logs.go:123] Gathering logs for kube-proxy [372e2774400e] ...
	I0318 04:41:10.655846   17465 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 372e2774400e"
	I0318 04:41:10.672120   17465 logs.go:123] Gathering logs for storage-provisioner [81ae28fc685b] ...
	I0318 04:41:10.672130   17465 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 81ae28fc685b"
	I0318 04:41:10.683803   17465 logs.go:123] Gathering logs for storage-provisioner [9e2192628e0f] ...
	I0318 04:41:10.683817   17465 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9e2192628e0f"
	I0318 04:41:13.197026   17465 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0318 04:41:13.666628   17322 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0318 04:41:18.199346   17465 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0318 04:41:18.199715   17465 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0318 04:41:18.234275   17465 logs.go:276] 2 containers: [32dc048c4476 7dacaac7f891]
	I0318 04:41:18.234406   17465 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0318 04:41:18.252118   17465 logs.go:276] 2 containers: [8c963566b500 8b75879fc7bf]
	I0318 04:41:18.252205   17465 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0318 04:41:18.266043   17465 logs.go:276] 1 containers: [b66f543335d1]
	I0318 04:41:18.266122   17465 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0318 04:41:18.277493   17465 logs.go:276] 2 containers: [e8b1c1b2cd19 d579e22e148e]
	I0318 04:41:18.277577   17465 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0318 04:41:18.287715   17465 logs.go:276] 1 containers: [372e2774400e]
	I0318 04:41:18.287785   17465 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0318 04:41:18.298401   17465 logs.go:276] 2 containers: [0be8d71b93fd fb25a67bf414]
	I0318 04:41:18.298466   17465 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0318 04:41:18.308373   17465 logs.go:276] 0 containers: []
	W0318 04:41:18.308384   17465 logs.go:278] No container was found matching "kindnet"
	I0318 04:41:18.308439   17465 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0318 04:41:18.319799   17465 logs.go:276] 2 containers: [81ae28fc685b 9e2192628e0f]
	I0318 04:41:18.319837   17465 logs.go:123] Gathering logs for dmesg ...
	I0318 04:41:18.319843   17465 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 04:41:18.324605   17465 logs.go:123] Gathering logs for describe nodes ...
	I0318 04:41:18.324613   17465 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0318 04:41:18.360782   17465 logs.go:123] Gathering logs for kube-scheduler [e8b1c1b2cd19] ...
	I0318 04:41:18.360794   17465 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e8b1c1b2cd19"
	I0318 04:41:18.374845   17465 logs.go:123] Gathering logs for kube-proxy [372e2774400e] ...
	I0318 04:41:18.374859   17465 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 372e2774400e"
	I0318 04:41:18.402473   17465 logs.go:123] Gathering logs for storage-provisioner [9e2192628e0f] ...
	I0318 04:41:18.402488   17465 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9e2192628e0f"
	I0318 04:41:18.424473   17465 logs.go:123] Gathering logs for kube-apiserver [32dc048c4476] ...
	I0318 04:41:18.424487   17465 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 32dc048c4476"
	I0318 04:41:18.441991   17465 logs.go:123] Gathering logs for kube-controller-manager [0be8d71b93fd] ...
	I0318 04:41:18.442002   17465 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0be8d71b93fd"
	I0318 04:41:18.459812   17465 logs.go:123] Gathering logs for kube-controller-manager [fb25a67bf414] ...
	I0318 04:41:18.459824   17465 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fb25a67bf414"
	I0318 04:41:18.472275   17465 logs.go:123] Gathering logs for storage-provisioner [81ae28fc685b] ...
	I0318 04:41:18.472288   17465 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 81ae28fc685b"
	I0318 04:41:18.483351   17465 logs.go:123] Gathering logs for container status ...
	I0318 04:41:18.483360   17465 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 04:41:18.495353   17465 logs.go:123] Gathering logs for kubelet ...
	I0318 04:41:18.495365   17465 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 04:41:18.531963   17465 logs.go:123] Gathering logs for kube-apiserver [7dacaac7f891] ...
	I0318 04:41:18.531971   17465 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7dacaac7f891"
	I0318 04:41:18.568755   17465 logs.go:123] Gathering logs for etcd [8b75879fc7bf] ...
	I0318 04:41:18.568771   17465 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8b75879fc7bf"
	I0318 04:41:18.586300   17465 logs.go:123] Gathering logs for coredns [b66f543335d1] ...
	I0318 04:41:18.586311   17465 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b66f543335d1"
	I0318 04:41:18.597519   17465 logs.go:123] Gathering logs for etcd [8c963566b500] ...
	I0318 04:41:18.597530   17465 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8c963566b500"
	I0318 04:41:18.611898   17465 logs.go:123] Gathering logs for kube-scheduler [d579e22e148e] ...
	I0318 04:41:18.611909   17465 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d579e22e148e"
	I0318 04:41:18.627536   17465 logs.go:123] Gathering logs for Docker ...
	I0318 04:41:18.627552   17465 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0318 04:41:18.668744   17322 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0318 04:41:18.668823   17322 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0318 04:41:18.679008   17322 logs.go:276] 1 containers: [d454e6154049]
	I0318 04:41:18.679086   17322 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0318 04:41:18.690075   17322 logs.go:276] 1 containers: [8046e42578d2]
	I0318 04:41:18.690143   17322 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0318 04:41:18.700372   17322 logs.go:276] 4 containers: [0086537aa016 0a040eebb706 367d0316359f 3a24458b86a4]
	I0318 04:41:18.700453   17322 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0318 04:41:18.712143   17322 logs.go:276] 1 containers: [894b6a0a0702]
	I0318 04:41:18.712219   17322 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0318 04:41:18.722789   17322 logs.go:276] 1 containers: [04d6cdf60161]
	I0318 04:41:18.722867   17322 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0318 04:41:18.732850   17322 logs.go:276] 1 containers: [22a920f51952]
	I0318 04:41:18.732920   17322 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0318 04:41:18.742759   17322 logs.go:276] 0 containers: []
	W0318 04:41:18.742770   17322 logs.go:278] No container was found matching "kindnet"
	I0318 04:41:18.742829   17322 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0318 04:41:18.755277   17322 logs.go:276] 1 containers: [5bfb08f2c96a]
	I0318 04:41:18.755295   17322 logs.go:123] Gathering logs for kube-scheduler [894b6a0a0702] ...
	I0318 04:41:18.755300   17322 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 894b6a0a0702"
	I0318 04:41:18.770233   17322 logs.go:123] Gathering logs for kube-apiserver [d454e6154049] ...
	I0318 04:41:18.770246   17322 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d454e6154049"
	I0318 04:41:18.784476   17322 logs.go:123] Gathering logs for coredns [0a040eebb706] ...
	I0318 04:41:18.784488   17322 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0a040eebb706"
	I0318 04:41:18.796201   17322 logs.go:123] Gathering logs for Docker ...
	I0318 04:41:18.796211   17322 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0318 04:41:18.821894   17322 logs.go:123] Gathering logs for describe nodes ...
	I0318 04:41:18.821909   17322 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0318 04:41:18.856326   17322 logs.go:123] Gathering logs for coredns [367d0316359f] ...
	I0318 04:41:18.856338   17322 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 367d0316359f"
	I0318 04:41:18.868188   17322 logs.go:123] Gathering logs for etcd [8046e42578d2] ...
	I0318 04:41:18.868201   17322 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8046e42578d2"
	I0318 04:41:18.886779   17322 logs.go:123] Gathering logs for coredns [0086537aa016] ...
	I0318 04:41:18.886789   17322 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0086537aa016"
	I0318 04:41:18.902502   17322 logs.go:123] Gathering logs for coredns [3a24458b86a4] ...
	I0318 04:41:18.902518   17322 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3a24458b86a4"
	I0318 04:41:18.913849   17322 logs.go:123] Gathering logs for kube-proxy [04d6cdf60161] ...
	I0318 04:41:18.913864   17322 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 04d6cdf60161"
	I0318 04:41:18.925487   17322 logs.go:123] Gathering logs for kube-controller-manager [22a920f51952] ...
	I0318 04:41:18.925501   17322 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 22a920f51952"
	I0318 04:41:18.943169   17322 logs.go:123] Gathering logs for storage-provisioner [5bfb08f2c96a] ...
	I0318 04:41:18.943182   17322 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5bfb08f2c96a"
	I0318 04:41:18.974399   17322 logs.go:123] Gathering logs for kubelet ...
	I0318 04:41:18.974409   17322 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0318 04:41:18.990687   17322 logs.go:138] Found kubelet problem: Mar 18 11:34:35 running-upgrade-738000 kubelet[3738]: W0318 11:34:35.864044    3738 reflector.go:324] object-"kube-system"/"kube-proxy": failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:running-upgrade-738000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-738000' and this object
	W0318 04:41:18.990780   17322 logs.go:138] Found kubelet problem: Mar 18 11:34:35 running-upgrade-738000 kubelet[3738]: E0318 11:34:35.864102    3738 reflector.go:138] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:running-upgrade-738000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-738000' and this object
	I0318 04:41:19.008322   17322 logs.go:123] Gathering logs for dmesg ...
	I0318 04:41:19.008328   17322 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 04:41:19.013088   17322 logs.go:123] Gathering logs for container status ...
	I0318 04:41:19.013100   17322 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 04:41:19.024142   17322 out.go:304] Setting ErrFile to fd 2...
	I0318 04:41:19.024154   17322 out.go:338] TERM=,COLORTERM=, which probably does not support color
	W0318 04:41:19.024182   17322 out.go:239] X Problems detected in kubelet:
	W0318 04:41:19.024186   17322 out.go:239]   Mar 18 11:34:35 running-upgrade-738000 kubelet[3738]: W0318 11:34:35.864044    3738 reflector.go:324] object-"kube-system"/"kube-proxy": failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:running-upgrade-738000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-738000' and this object
	W0318 04:41:19.024190   17322 out.go:239]   Mar 18 11:34:35 running-upgrade-738000 kubelet[3738]: E0318 11:34:35.864102    3738 reflector.go:138] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:running-upgrade-738000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-738000' and this object
	I0318 04:41:19.024195   17322 out.go:304] Setting ErrFile to fd 2...
	I0318 04:41:19.024197   17322 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0318 04:41:21.152763   17465 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0318 04:41:26.154796   17465 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0318 04:41:26.154913   17465 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0318 04:41:26.166051   17465 logs.go:276] 2 containers: [32dc048c4476 7dacaac7f891]
	I0318 04:41:26.166130   17465 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0318 04:41:26.178249   17465 logs.go:276] 2 containers: [8c963566b500 8b75879fc7bf]
	I0318 04:41:26.178325   17465 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0318 04:41:26.188754   17465 logs.go:276] 1 containers: [b66f543335d1]
	I0318 04:41:26.188827   17465 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0318 04:41:26.198933   17465 logs.go:276] 2 containers: [e8b1c1b2cd19 d579e22e148e]
	I0318 04:41:26.199001   17465 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0318 04:41:26.209164   17465 logs.go:276] 1 containers: [372e2774400e]
	I0318 04:41:26.209229   17465 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0318 04:41:26.219584   17465 logs.go:276] 2 containers: [0be8d71b93fd fb25a67bf414]
	I0318 04:41:26.219656   17465 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0318 04:41:26.229848   17465 logs.go:276] 0 containers: []
	W0318 04:41:26.229858   17465 logs.go:278] No container was found matching "kindnet"
	I0318 04:41:26.229917   17465 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0318 04:41:26.246030   17465 logs.go:276] 2 containers: [81ae28fc685b 9e2192628e0f]
	I0318 04:41:26.246060   17465 logs.go:123] Gathering logs for kube-apiserver [32dc048c4476] ...
	I0318 04:41:26.246066   17465 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 32dc048c4476"
	I0318 04:41:26.260067   17465 logs.go:123] Gathering logs for kube-scheduler [d579e22e148e] ...
	I0318 04:41:26.260078   17465 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d579e22e148e"
	I0318 04:41:26.275122   17465 logs.go:123] Gathering logs for storage-provisioner [9e2192628e0f] ...
	I0318 04:41:26.275135   17465 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9e2192628e0f"
	I0318 04:41:26.286580   17465 logs.go:123] Gathering logs for coredns [b66f543335d1] ...
	I0318 04:41:26.286590   17465 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b66f543335d1"
	I0318 04:41:26.297637   17465 logs.go:123] Gathering logs for kube-scheduler [e8b1c1b2cd19] ...
	I0318 04:41:26.297650   17465 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e8b1c1b2cd19"
	I0318 04:41:26.309161   17465 logs.go:123] Gathering logs for kube-controller-manager [0be8d71b93fd] ...
	I0318 04:41:26.309174   17465 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0be8d71b93fd"
	I0318 04:41:26.327246   17465 logs.go:123] Gathering logs for kube-controller-manager [fb25a67bf414] ...
	I0318 04:41:26.327259   17465 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fb25a67bf414"
	I0318 04:41:26.339306   17465 logs.go:123] Gathering logs for kubelet ...
	I0318 04:41:26.339324   17465 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 04:41:26.377265   17465 logs.go:123] Gathering logs for etcd [8c963566b500] ...
	I0318 04:41:26.377274   17465 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8c963566b500"
	I0318 04:41:26.391181   17465 logs.go:123] Gathering logs for etcd [8b75879fc7bf] ...
	I0318 04:41:26.391193   17465 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8b75879fc7bf"
	I0318 04:41:26.410548   17465 logs.go:123] Gathering logs for kube-apiserver [7dacaac7f891] ...
	I0318 04:41:26.410558   17465 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7dacaac7f891"
	I0318 04:41:26.447864   17465 logs.go:123] Gathering logs for container status ...
	I0318 04:41:26.447875   17465 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 04:41:26.460011   17465 logs.go:123] Gathering logs for storage-provisioner [81ae28fc685b] ...
	I0318 04:41:26.460022   17465 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 81ae28fc685b"
	I0318 04:41:26.471716   17465 logs.go:123] Gathering logs for Docker ...
	I0318 04:41:26.471728   17465 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0318 04:41:26.493593   17465 logs.go:123] Gathering logs for dmesg ...
	I0318 04:41:26.493603   17465 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 04:41:26.497487   17465 logs.go:123] Gathering logs for describe nodes ...
	I0318 04:41:26.497493   17465 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0318 04:41:26.533083   17465 logs.go:123] Gathering logs for kube-proxy [372e2774400e] ...
	I0318 04:41:26.533096   17465 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 372e2774400e"
	I0318 04:41:29.045576   17465 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0318 04:41:29.028091   17322 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0318 04:41:34.047622   17465 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0318 04:41:34.047791   17465 kubeadm.go:591] duration metric: took 4m3.88363675s to restartPrimaryControlPlane
	W0318 04:41:34.047905   17465 out.go:239] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0318 04:41:34.047949   17465 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm reset --cri-socket /var/run/cri-dockerd.sock --force"
	I0318 04:41:35.115431   17465 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm reset --cri-socket /var/run/cri-dockerd.sock --force": (1.06750525s)
	I0318 04:41:35.115501   17465 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0318 04:41:35.121130   17465 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0318 04:41:35.124126   17465 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0318 04:41:35.127044   17465 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0318 04:41:35.127050   17465 kubeadm.go:156] found existing configuration files:
	
	I0318 04:41:35.127077   17465 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:53534 /etc/kubernetes/admin.conf
	I0318 04:41:35.129657   17465 kubeadm.go:162] "https://control-plane.minikube.internal:53534" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:53534 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0318 04:41:35.129685   17465 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0318 04:41:35.132169   17465 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:53534 /etc/kubernetes/kubelet.conf
	I0318 04:41:35.134821   17465 kubeadm.go:162] "https://control-plane.minikube.internal:53534" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:53534 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0318 04:41:35.134847   17465 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0318 04:41:35.137300   17465 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:53534 /etc/kubernetes/controller-manager.conf
	I0318 04:41:35.139995   17465 kubeadm.go:162] "https://control-plane.minikube.internal:53534" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:53534 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0318 04:41:35.140021   17465 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0318 04:41:35.142973   17465 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:53534 /etc/kubernetes/scheduler.conf
	I0318 04:41:35.145470   17465 kubeadm.go:162] "https://control-plane.minikube.internal:53534" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:53534 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0318 04:41:35.145495   17465 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0318 04:41:35.148093   17465 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0318 04:41:35.165406   17465 kubeadm.go:309] [init] Using Kubernetes version: v1.24.1
	I0318 04:41:35.165435   17465 kubeadm.go:309] [preflight] Running pre-flight checks
	I0318 04:41:35.216267   17465 kubeadm.go:309] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0318 04:41:35.216419   17465 kubeadm.go:309] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0318 04:41:35.216482   17465 kubeadm.go:309] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0318 04:41:35.268010   17465 kubeadm.go:309] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0318 04:41:35.271247   17465 out.go:204]   - Generating certificates and keys ...
	I0318 04:41:35.271282   17465 kubeadm.go:309] [certs] Using existing ca certificate authority
	I0318 04:41:35.271310   17465 kubeadm.go:309] [certs] Using existing apiserver certificate and key on disk
	I0318 04:41:35.271345   17465 kubeadm.go:309] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0318 04:41:35.271371   17465 kubeadm.go:309] [certs] Using existing front-proxy-ca certificate authority
	I0318 04:41:35.271402   17465 kubeadm.go:309] [certs] Using existing front-proxy-client certificate and key on disk
	I0318 04:41:35.271426   17465 kubeadm.go:309] [certs] Using existing etcd/ca certificate authority
	I0318 04:41:35.271454   17465 kubeadm.go:309] [certs] Using existing etcd/server certificate and key on disk
	I0318 04:41:35.271481   17465 kubeadm.go:309] [certs] Using existing etcd/peer certificate and key on disk
	I0318 04:41:35.271514   17465 kubeadm.go:309] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0318 04:41:35.271547   17465 kubeadm.go:309] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0318 04:41:35.271564   17465 kubeadm.go:309] [certs] Using the existing "sa" key
	I0318 04:41:35.271589   17465 kubeadm.go:309] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0318 04:41:35.329023   17465 kubeadm.go:309] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0318 04:41:35.491546   17465 kubeadm.go:309] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0318 04:41:35.797551   17465 kubeadm.go:309] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0318 04:41:35.953699   17465 kubeadm.go:309] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0318 04:41:35.986737   17465 kubeadm.go:309] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0318 04:41:35.987097   17465 kubeadm.go:309] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0318 04:41:35.987152   17465 kubeadm.go:309] [kubelet-start] Starting the kubelet
	I0318 04:41:36.080368   17465 kubeadm.go:309] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0318 04:41:34.030250   17322 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0318 04:41:34.030743   17322 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0318 04:41:34.076759   17322 logs.go:276] 1 containers: [d454e6154049]
	I0318 04:41:34.076893   17322 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0318 04:41:34.099859   17322 logs.go:276] 1 containers: [8046e42578d2]
	I0318 04:41:34.099952   17322 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0318 04:41:34.115763   17322 logs.go:276] 4 containers: [0086537aa016 0a040eebb706 367d0316359f 3a24458b86a4]
	I0318 04:41:34.115852   17322 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0318 04:41:34.133753   17322 logs.go:276] 1 containers: [894b6a0a0702]
	I0318 04:41:34.133832   17322 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0318 04:41:34.146109   17322 logs.go:276] 1 containers: [04d6cdf60161]
	I0318 04:41:34.146180   17322 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0318 04:41:34.157510   17322 logs.go:276] 1 containers: [22a920f51952]
	I0318 04:41:34.157587   17322 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0318 04:41:34.168961   17322 logs.go:276] 0 containers: []
	W0318 04:41:34.168972   17322 logs.go:278] No container was found matching "kindnet"
	I0318 04:41:34.169031   17322 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0318 04:41:34.180810   17322 logs.go:276] 1 containers: [5bfb08f2c96a]
	I0318 04:41:34.180829   17322 logs.go:123] Gathering logs for dmesg ...
	I0318 04:41:34.180835   17322 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 04:41:34.185350   17322 logs.go:123] Gathering logs for coredns [0a040eebb706] ...
	I0318 04:41:34.185358   17322 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0a040eebb706"
	I0318 04:41:34.197437   17322 logs.go:123] Gathering logs for kube-proxy [04d6cdf60161] ...
	I0318 04:41:34.197446   17322 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 04d6cdf60161"
	I0318 04:41:34.221843   17322 logs.go:123] Gathering logs for coredns [0086537aa016] ...
	I0318 04:41:34.221857   17322 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0086537aa016"
	I0318 04:41:34.235659   17322 logs.go:123] Gathering logs for coredns [3a24458b86a4] ...
	I0318 04:41:34.235670   17322 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3a24458b86a4"
	I0318 04:41:34.251102   17322 logs.go:123] Gathering logs for kube-scheduler [894b6a0a0702] ...
	I0318 04:41:34.251114   17322 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 894b6a0a0702"
	I0318 04:41:34.267266   17322 logs.go:123] Gathering logs for storage-provisioner [5bfb08f2c96a] ...
	I0318 04:41:34.267277   17322 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5bfb08f2c96a"
	I0318 04:41:34.281431   17322 logs.go:123] Gathering logs for kube-controller-manager [22a920f51952] ...
	I0318 04:41:34.281443   17322 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 22a920f51952"
	I0318 04:41:34.299877   17322 logs.go:123] Gathering logs for Docker ...
	I0318 04:41:34.299895   17322 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0318 04:41:34.326637   17322 logs.go:123] Gathering logs for container status ...
	I0318 04:41:34.326652   17322 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 04:41:34.339937   17322 logs.go:123] Gathering logs for kubelet ...
	I0318 04:41:34.339950   17322 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0318 04:41:34.359367   17322 logs.go:138] Found kubelet problem: Mar 18 11:34:35 running-upgrade-738000 kubelet[3738]: W0318 11:34:35.864044    3738 reflector.go:324] object-"kube-system"/"kube-proxy": failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:running-upgrade-738000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-738000' and this object
	W0318 04:41:34.359468   17322 logs.go:138] Found kubelet problem: Mar 18 11:34:35 running-upgrade-738000 kubelet[3738]: E0318 11:34:35.864102    3738 reflector.go:138] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:running-upgrade-738000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-738000' and this object
	I0318 04:41:34.377511   17322 logs.go:123] Gathering logs for describe nodes ...
	I0318 04:41:34.377534   17322 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0318 04:41:34.414308   17322 logs.go:123] Gathering logs for kube-apiserver [d454e6154049] ...
	I0318 04:41:34.414318   17322 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d454e6154049"
	I0318 04:41:34.432250   17322 logs.go:123] Gathering logs for etcd [8046e42578d2] ...
	I0318 04:41:34.432262   17322 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8046e42578d2"
	I0318 04:41:34.448196   17322 logs.go:123] Gathering logs for coredns [367d0316359f] ...
	I0318 04:41:34.448209   17322 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 367d0316359f"
	I0318 04:41:34.461418   17322 out.go:304] Setting ErrFile to fd 2...
	I0318 04:41:34.461429   17322 out.go:338] TERM=,COLORTERM=, which probably does not support color
	W0318 04:41:34.461456   17322 out.go:239] X Problems detected in kubelet:
	W0318 04:41:34.461460   17322 out.go:239]   Mar 18 11:34:35 running-upgrade-738000 kubelet[3738]: W0318 11:34:35.864044    3738 reflector.go:324] object-"kube-system"/"kube-proxy": failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:running-upgrade-738000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-738000' and this object
	W0318 04:41:34.461464   17322 out.go:239]   Mar 18 11:34:35 running-upgrade-738000 kubelet[3738]: E0318 11:34:35.864102    3738 reflector.go:138] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:running-upgrade-738000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-738000' and this object
	I0318 04:41:34.461470   17322 out.go:304] Setting ErrFile to fd 2...
	I0318 04:41:34.461473   17322 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0318 04:41:36.084497   17465 out.go:204]   - Booting up control plane ...
	I0318 04:41:36.084542   17465 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0318 04:41:36.084588   17465 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0318 04:41:36.084621   17465 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0318 04:41:36.084668   17465 kubeadm.go:309] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0318 04:41:36.084823   17465 kubeadm.go:309] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0318 04:41:40.586637   17465 kubeadm.go:309] [apiclient] All control plane components are healthy after 4.501843 seconds
	I0318 04:41:40.586703   17465 kubeadm.go:309] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0318 04:41:40.590734   17465 kubeadm.go:309] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0318 04:41:41.100940   17465 kubeadm.go:309] [upload-certs] Skipping phase. Please see --upload-certs
	I0318 04:41:41.101236   17465 kubeadm.go:309] [mark-control-plane] Marking the node stopped-upgrade-126000 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0318 04:41:41.609427   17465 kubeadm.go:309] [bootstrap-token] Using token: arcfy8.vmv9i1qd2i42rxej
	I0318 04:41:41.614239   17465 out.go:204]   - Configuring RBAC rules ...
	I0318 04:41:41.614312   17465 kubeadm.go:309] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0318 04:41:41.616653   17465 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0318 04:41:41.622031   17465 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0318 04:41:41.623046   17465 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0318 04:41:41.624185   17465 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0318 04:41:41.625084   17465 kubeadm.go:309] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0318 04:41:41.628983   17465 kubeadm.go:309] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0318 04:41:41.810048   17465 kubeadm.go:309] [addons] Applied essential addon: CoreDNS
	I0318 04:41:42.018465   17465 kubeadm.go:309] [addons] Applied essential addon: kube-proxy
	I0318 04:41:42.018923   17465 kubeadm.go:309] 
	I0318 04:41:42.018955   17465 kubeadm.go:309] Your Kubernetes control-plane has initialized successfully!
	I0318 04:41:42.018963   17465 kubeadm.go:309] 
	I0318 04:41:42.019012   17465 kubeadm.go:309] To start using your cluster, you need to run the following as a regular user:
	I0318 04:41:42.019018   17465 kubeadm.go:309] 
	I0318 04:41:42.019037   17465 kubeadm.go:309]   mkdir -p $HOME/.kube
	I0318 04:41:42.019071   17465 kubeadm.go:309]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0318 04:41:42.019101   17465 kubeadm.go:309]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0318 04:41:42.019107   17465 kubeadm.go:309] 
	I0318 04:41:42.019137   17465 kubeadm.go:309] Alternatively, if you are the root user, you can run:
	I0318 04:41:42.019140   17465 kubeadm.go:309] 
	I0318 04:41:42.019165   17465 kubeadm.go:309]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0318 04:41:42.019169   17465 kubeadm.go:309] 
	I0318 04:41:42.019198   17465 kubeadm.go:309] You should now deploy a pod network to the cluster.
	I0318 04:41:42.019243   17465 kubeadm.go:309] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0318 04:41:42.019281   17465 kubeadm.go:309]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0318 04:41:42.019286   17465 kubeadm.go:309] 
	I0318 04:41:42.019341   17465 kubeadm.go:309] You can now join any number of control-plane nodes by copying certificate authorities
	I0318 04:41:42.019388   17465 kubeadm.go:309] and service account keys on each node and then running the following as root:
	I0318 04:41:42.019393   17465 kubeadm.go:309] 
	I0318 04:41:42.019453   17465 kubeadm.go:309]   kubeadm join control-plane.minikube.internal:8443 --token arcfy8.vmv9i1qd2i42rxej \
	I0318 04:41:42.019507   17465 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:2762dffea2ede86231df0e7bc748eefca9b65ca5bd96e5f605bd5b60ef0281dd \
	I0318 04:41:42.019520   17465 kubeadm.go:309] 	--control-plane 
	I0318 04:41:42.019529   17465 kubeadm.go:309] 
	I0318 04:41:42.019578   17465 kubeadm.go:309] Then you can join any number of worker nodes by running the following on each as root:
	I0318 04:41:42.019582   17465 kubeadm.go:309] 
	I0318 04:41:42.019622   17465 kubeadm.go:309] kubeadm join control-plane.minikube.internal:8443 --token arcfy8.vmv9i1qd2i42rxej \
	I0318 04:41:42.019677   17465 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:2762dffea2ede86231df0e7bc748eefca9b65ca5bd96e5f605bd5b60ef0281dd 
	I0318 04:41:42.019790   17465 kubeadm.go:309] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0318 04:41:42.019855   17465 cni.go:84] Creating CNI manager for ""
	I0318 04:41:42.019864   17465 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0318 04:41:42.023536   17465 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0318 04:41:42.030799   17465 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0318 04:41:42.033923   17465 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
	I0318 04:41:42.039310   17465 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0318 04:41:42.039352   17465 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 04:41:42.039374   17465 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes stopped-upgrade-126000 minikube.k8s.io/updated_at=2024_03_18T04_41_42_0700 minikube.k8s.io/version=v1.32.0 minikube.k8s.io/commit=16bdcbec856cf730004e5bed78d1b7625f13388a minikube.k8s.io/name=stopped-upgrade-126000 minikube.k8s.io/primary=true
	I0318 04:41:42.080517   17465 kubeadm.go:1107] duration metric: took 41.209959ms to wait for elevateKubeSystemPrivileges
	I0318 04:41:42.080565   17465 ops.go:34] apiserver oom_adj: -16
	W0318 04:41:42.080580   17465 kubeadm.go:286] apiserver tunnel failed: apiserver port not set
	I0318 04:41:42.080584   17465 kubeadm.go:393] duration metric: took 4m11.930901333s to StartCluster
	I0318 04:41:42.080594   17465 settings.go:142] acquiring lock: {Name:mk8634ba9e118796c1213288fbf27edefcbb67ad Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0318 04:41:42.080688   17465 settings.go:150] Updating kubeconfig:  /Users/jenkins/minikube-integration/18429-15072/kubeconfig
	I0318 04:41:42.081124   17465 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/18429-15072/kubeconfig: {Name:mkeb86e27ccdf30a065b43661cfe2af2dc198b46 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0318 04:41:42.081345   17465 start.go:234] Will wait 6m0s for node &{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0318 04:41:42.085741   17465 out.go:177] * Verifying Kubernetes components...
	I0318 04:41:42.081390   17465 addons.go:502] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false]
	I0318 04:41:42.081443   17465 config.go:182] Loaded profile config "stopped-upgrade-126000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0318 04:41:42.093720   17465 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0318 04:41:42.093740   17465 addons.go:69] Setting default-storageclass=true in profile "stopped-upgrade-126000"
	I0318 04:41:42.093752   17465 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "stopped-upgrade-126000"
	I0318 04:41:42.093755   17465 addons.go:69] Setting storage-provisioner=true in profile "stopped-upgrade-126000"
	I0318 04:41:42.093769   17465 addons.go:234] Setting addon storage-provisioner=true in "stopped-upgrade-126000"
	W0318 04:41:42.093775   17465 addons.go:243] addon storage-provisioner should already be in state true
	I0318 04:41:42.093786   17465 host.go:66] Checking if "stopped-upgrade-126000" exists ...
	I0318 04:41:42.095439   17465 kapi.go:59] client config for stopped-upgrade-126000: &rest.Config{Host:"https://10.0.2.15:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/18429-15072/.minikube/profiles/stopped-upgrade-126000/client.crt", KeyFile:"/Users/jenkins/minikube-integration/18429-15072/.minikube/profiles/stopped-upgrade-126000/client.key", CAFile:"/Users/jenkins/minikube-integration/18429-15072/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x103d62a80), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0318 04:41:42.095553   17465 addons.go:234] Setting addon default-storageclass=true in "stopped-upgrade-126000"
	W0318 04:41:42.095560   17465 addons.go:243] addon default-storageclass should already be in state true
	I0318 04:41:42.095568   17465 host.go:66] Checking if "stopped-upgrade-126000" exists ...
	I0318 04:41:42.100732   17465 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0318 04:41:42.104709   17465 addons.go:426] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0318 04:41:42.104716   17465 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0318 04:41:42.104724   17465 sshutil.go:53] new ssh client: &{IP:localhost Port:53501 SSHKeyPath:/Users/jenkins/minikube-integration/18429-15072/.minikube/machines/stopped-upgrade-126000/id_rsa Username:docker}
	I0318 04:41:42.105366   17465 addons.go:426] installing /etc/kubernetes/addons/storageclass.yaml
	I0318 04:41:42.105370   17465 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0318 04:41:42.105374   17465 sshutil.go:53] new ssh client: &{IP:localhost Port:53501 SSHKeyPath:/Users/jenkins/minikube-integration/18429-15072/.minikube/machines/stopped-upgrade-126000/id_rsa Username:docker}
	I0318 04:41:42.186441   17465 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0318 04:41:42.191558   17465 api_server.go:52] waiting for apiserver process to appear ...
	I0318 04:41:42.191607   17465 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 04:41:42.195230   17465 api_server.go:72] duration metric: took 113.877666ms to wait for apiserver process to appear ...
	I0318 04:41:42.195237   17465 api_server.go:88] waiting for apiserver healthz status ...
	I0318 04:41:42.195244   17465 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0318 04:41:42.221682   17465 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0318 04:41:42.223713   17465 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0318 04:41:44.464826   17322 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0318 04:41:47.197157   17465 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0318 04:41:47.197200   17465 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0318 04:41:49.466167   17322 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0318 04:41:49.466302   17322 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0318 04:41:49.479359   17322 logs.go:276] 1 containers: [d454e6154049]
	I0318 04:41:49.479439   17322 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0318 04:41:49.490107   17322 logs.go:276] 1 containers: [8046e42578d2]
	I0318 04:41:49.490172   17322 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0318 04:41:49.500590   17322 logs.go:276] 4 containers: [0086537aa016 0a040eebb706 367d0316359f 3a24458b86a4]
	I0318 04:41:49.500665   17322 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0318 04:41:49.512833   17322 logs.go:276] 1 containers: [894b6a0a0702]
	I0318 04:41:49.512902   17322 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0318 04:41:49.523016   17322 logs.go:276] 1 containers: [04d6cdf60161]
	I0318 04:41:49.523083   17322 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0318 04:41:49.533746   17322 logs.go:276] 1 containers: [22a920f51952]
	I0318 04:41:49.533806   17322 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0318 04:41:49.543622   17322 logs.go:276] 0 containers: []
	W0318 04:41:49.543635   17322 logs.go:278] No container was found matching "kindnet"
	I0318 04:41:49.543695   17322 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0318 04:41:49.554550   17322 logs.go:276] 1 containers: [5bfb08f2c96a]
	I0318 04:41:49.554568   17322 logs.go:123] Gathering logs for kube-apiserver [d454e6154049] ...
	I0318 04:41:49.554574   17322 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d454e6154049"
	I0318 04:41:49.568491   17322 logs.go:123] Gathering logs for etcd [8046e42578d2] ...
	I0318 04:41:49.568502   17322 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8046e42578d2"
	I0318 04:41:49.582393   17322 logs.go:123] Gathering logs for coredns [0a040eebb706] ...
	I0318 04:41:49.582404   17322 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0a040eebb706"
	I0318 04:41:49.594344   17322 logs.go:123] Gathering logs for container status ...
	I0318 04:41:49.594354   17322 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 04:41:49.607291   17322 logs.go:123] Gathering logs for kubelet ...
	I0318 04:41:49.607301   17322 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0318 04:41:49.625131   17322 logs.go:138] Found kubelet problem: Mar 18 11:34:35 running-upgrade-738000 kubelet[3738]: W0318 11:34:35.864044    3738 reflector.go:324] object-"kube-system"/"kube-proxy": failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:running-upgrade-738000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-738000' and this object
	W0318 04:41:49.625224   17322 logs.go:138] Found kubelet problem: Mar 18 11:34:35 running-upgrade-738000 kubelet[3738]: E0318 11:34:35.864102    3738 reflector.go:138] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:running-upgrade-738000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-738000' and this object
	I0318 04:41:49.642395   17322 logs.go:123] Gathering logs for coredns [0086537aa016] ...
	I0318 04:41:49.642402   17322 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0086537aa016"
	I0318 04:41:49.653576   17322 logs.go:123] Gathering logs for kube-scheduler [894b6a0a0702] ...
	I0318 04:41:49.653586   17322 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 894b6a0a0702"
	I0318 04:41:49.667879   17322 logs.go:123] Gathering logs for describe nodes ...
	I0318 04:41:49.667889   17322 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0318 04:41:49.701806   17322 logs.go:123] Gathering logs for kube-proxy [04d6cdf60161] ...
	I0318 04:41:49.701817   17322 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 04d6cdf60161"
	I0318 04:41:49.713960   17322 logs.go:123] Gathering logs for kube-controller-manager [22a920f51952] ...
	I0318 04:41:49.713969   17322 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 22a920f51952"
	I0318 04:41:49.731558   17322 logs.go:123] Gathering logs for storage-provisioner [5bfb08f2c96a] ...
	I0318 04:41:49.731569   17322 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5bfb08f2c96a"
	I0318 04:41:49.743144   17322 logs.go:123] Gathering logs for dmesg ...
	I0318 04:41:49.743153   17322 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 04:41:49.747898   17322 logs.go:123] Gathering logs for coredns [367d0316359f] ...
	I0318 04:41:49.747907   17322 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 367d0316359f"
	I0318 04:41:49.759600   17322 logs.go:123] Gathering logs for coredns [3a24458b86a4] ...
	I0318 04:41:49.759609   17322 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3a24458b86a4"
	I0318 04:41:49.772049   17322 logs.go:123] Gathering logs for Docker ...
	I0318 04:41:49.772060   17322 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0318 04:41:49.795615   17322 out.go:304] Setting ErrFile to fd 2...
	I0318 04:41:49.795623   17322 out.go:338] TERM=,COLORTERM=, which probably does not support color
	W0318 04:41:49.795656   17322 out.go:239] X Problems detected in kubelet:
	W0318 04:41:49.795665   17322 out.go:239]   Mar 18 11:34:35 running-upgrade-738000 kubelet[3738]: W0318 11:34:35.864044    3738 reflector.go:324] object-"kube-system"/"kube-proxy": failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:running-upgrade-738000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-738000' and this object
	W0318 04:41:49.795670   17322 out.go:239]   Mar 18 11:34:35 running-upgrade-738000 kubelet[3738]: E0318 11:34:35.864102    3738 reflector.go:138] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:running-upgrade-738000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-738000' and this object
	I0318 04:41:49.795678   17322 out.go:304] Setting ErrFile to fd 2...
	I0318 04:41:49.795682   17322 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0318 04:41:52.197383   17465 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0318 04:41:52.197417   17465 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0318 04:41:57.197644   17465 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0318 04:41:57.197679   17465 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0318 04:41:59.798278   17322 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0318 04:42:02.198000   17465 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0318 04:42:02.198042   17465 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0318 04:42:04.800519   17322 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0318 04:42:04.801017   17322 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0318 04:42:04.844485   17322 logs.go:276] 1 containers: [d454e6154049]
	I0318 04:42:04.844634   17322 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0318 04:42:04.865373   17322 logs.go:276] 1 containers: [8046e42578d2]
	I0318 04:42:04.865469   17322 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0318 04:42:04.883869   17322 logs.go:276] 4 containers: [0086537aa016 0a040eebb706 367d0316359f 3a24458b86a4]
	I0318 04:42:04.883955   17322 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0318 04:42:04.895280   17322 logs.go:276] 1 containers: [894b6a0a0702]
	I0318 04:42:04.895344   17322 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0318 04:42:04.906265   17322 logs.go:276] 1 containers: [04d6cdf60161]
	I0318 04:42:04.906332   17322 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0318 04:42:04.917447   17322 logs.go:276] 1 containers: [22a920f51952]
	I0318 04:42:04.917507   17322 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0318 04:42:04.928538   17322 logs.go:276] 0 containers: []
	W0318 04:42:04.928549   17322 logs.go:278] No container was found matching "kindnet"
	I0318 04:42:04.928609   17322 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0318 04:42:04.939543   17322 logs.go:276] 1 containers: [5bfb08f2c96a]
	I0318 04:42:04.939561   17322 logs.go:123] Gathering logs for container status ...
	I0318 04:42:04.939567   17322 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 04:42:04.950985   17322 logs.go:123] Gathering logs for kube-apiserver [d454e6154049] ...
	I0318 04:42:04.950997   17322 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d454e6154049"
	I0318 04:42:04.966394   17322 logs.go:123] Gathering logs for storage-provisioner [5bfb08f2c96a] ...
	I0318 04:42:04.966406   17322 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5bfb08f2c96a"
	I0318 04:42:04.977962   17322 logs.go:123] Gathering logs for Docker ...
	I0318 04:42:04.977972   17322 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0318 04:42:05.001285   17322 logs.go:123] Gathering logs for dmesg ...
	I0318 04:42:05.001307   17322 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 04:42:05.006121   17322 logs.go:123] Gathering logs for kube-controller-manager [22a920f51952] ...
	I0318 04:42:05.006129   17322 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 22a920f51952"
	I0318 04:42:05.023719   17322 logs.go:123] Gathering logs for kube-proxy [04d6cdf60161] ...
	I0318 04:42:05.023729   17322 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 04d6cdf60161"
	I0318 04:42:05.035788   17322 logs.go:123] Gathering logs for describe nodes ...
	I0318 04:42:05.035798   17322 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0318 04:42:05.072806   17322 logs.go:123] Gathering logs for etcd [8046e42578d2] ...
	I0318 04:42:05.072816   17322 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8046e42578d2"
	I0318 04:42:05.087401   17322 logs.go:123] Gathering logs for coredns [0086537aa016] ...
	I0318 04:42:05.087411   17322 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0086537aa016"
	I0318 04:42:05.100172   17322 logs.go:123] Gathering logs for coredns [3a24458b86a4] ...
	I0318 04:42:05.100184   17322 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3a24458b86a4"
	I0318 04:42:05.116341   17322 logs.go:123] Gathering logs for kube-scheduler [894b6a0a0702] ...
	I0318 04:42:05.116352   17322 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 894b6a0a0702"
	I0318 04:42:05.130935   17322 logs.go:123] Gathering logs for kubelet ...
	I0318 04:42:05.130945   17322 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0318 04:42:05.148882   17322 logs.go:138] Found kubelet problem: Mar 18 11:34:35 running-upgrade-738000 kubelet[3738]: W0318 11:34:35.864044    3738 reflector.go:324] object-"kube-system"/"kube-proxy": failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:running-upgrade-738000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-738000' and this object
	W0318 04:42:05.148975   17322 logs.go:138] Found kubelet problem: Mar 18 11:34:35 running-upgrade-738000 kubelet[3738]: E0318 11:34:35.864102    3738 reflector.go:138] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:running-upgrade-738000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-738000' and this object
	I0318 04:42:05.166489   17322 logs.go:123] Gathering logs for coredns [0a040eebb706] ...
	I0318 04:42:05.166494   17322 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0a040eebb706"
	I0318 04:42:05.178660   17322 logs.go:123] Gathering logs for coredns [367d0316359f] ...
	I0318 04:42:05.178671   17322 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 367d0316359f"
	I0318 04:42:05.190930   17322 out.go:304] Setting ErrFile to fd 2...
	I0318 04:42:05.190941   17322 out.go:338] TERM=,COLORTERM=, which probably does not support color
	W0318 04:42:05.190970   17322 out.go:239] X Problems detected in kubelet:
	W0318 04:42:05.190974   17322 out.go:239]   Mar 18 11:34:35 running-upgrade-738000 kubelet[3738]: W0318 11:34:35.864044    3738 reflector.go:324] object-"kube-system"/"kube-proxy": failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:running-upgrade-738000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-738000' and this object
	W0318 04:42:05.190978   17322 out.go:239]   Mar 18 11:34:35 running-upgrade-738000 kubelet[3738]: E0318 11:34:35.864102    3738 reflector.go:138] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:running-upgrade-738000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-738000' and this object
	I0318 04:42:05.190982   17322 out.go:304] Setting ErrFile to fd 2...
	I0318 04:42:05.190985   17322 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0318 04:42:07.198574   17465 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0318 04:42:07.198628   17465 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0318 04:42:12.199278   17465 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0318 04:42:12.199325   17465 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	W0318 04:42:12.596624   17465 out.go:239] ! Enabling 'default-storageclass' returned an error: running callbacks: [Error making standard the default storage class: Error listing StorageClasses: Get "https://10.0.2.15:8443/apis/storage.k8s.io/v1/storageclasses": dial tcp 10.0.2.15:8443: i/o timeout]
	I0318 04:42:12.605006   17465 out.go:177] * Enabled addons: storage-provisioner
	I0318 04:42:12.612922   17465 addons.go:505] duration metric: took 30.532546083s for enable addons: enabled=[storage-provisioner]
	I0318 04:42:15.194796   17322 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0318 04:42:17.200254   17465 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0318 04:42:17.200288   17465 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0318 04:42:20.196962   17322 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0318 04:42:20.197122   17322 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0318 04:42:20.214918   17322 logs.go:276] 1 containers: [d454e6154049]
	I0318 04:42:20.215000   17322 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0318 04:42:20.227157   17322 logs.go:276] 1 containers: [8046e42578d2]
	I0318 04:42:20.227232   17322 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0318 04:42:20.237931   17322 logs.go:276] 4 containers: [0086537aa016 0a040eebb706 367d0316359f 3a24458b86a4]
	I0318 04:42:20.238006   17322 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0318 04:42:20.248398   17322 logs.go:276] 1 containers: [894b6a0a0702]
	I0318 04:42:20.248467   17322 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0318 04:42:20.259204   17322 logs.go:276] 1 containers: [04d6cdf60161]
	I0318 04:42:20.259272   17322 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0318 04:42:20.270402   17322 logs.go:276] 1 containers: [22a920f51952]
	I0318 04:42:20.270477   17322 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0318 04:42:20.281136   17322 logs.go:276] 0 containers: []
	W0318 04:42:20.281147   17322 logs.go:278] No container was found matching "kindnet"
	I0318 04:42:20.281205   17322 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0318 04:42:20.291395   17322 logs.go:276] 1 containers: [5bfb08f2c96a]
	I0318 04:42:20.291418   17322 logs.go:123] Gathering logs for kubelet ...
	I0318 04:42:20.291423   17322 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0318 04:42:20.307370   17322 logs.go:138] Found kubelet problem: Mar 18 11:34:35 running-upgrade-738000 kubelet[3738]: W0318 11:34:35.864044    3738 reflector.go:324] object-"kube-system"/"kube-proxy": failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:running-upgrade-738000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-738000' and this object
	W0318 04:42:20.307463   17322 logs.go:138] Found kubelet problem: Mar 18 11:34:35 running-upgrade-738000 kubelet[3738]: E0318 11:34:35.864102    3738 reflector.go:138] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:running-upgrade-738000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-738000' and this object
	I0318 04:42:20.324758   17322 logs.go:123] Gathering logs for kube-apiserver [d454e6154049] ...
	I0318 04:42:20.324764   17322 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d454e6154049"
	I0318 04:42:20.339437   17322 logs.go:123] Gathering logs for etcd [8046e42578d2] ...
	I0318 04:42:20.339448   17322 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8046e42578d2"
	I0318 04:42:20.352921   17322 logs.go:123] Gathering logs for coredns [367d0316359f] ...
	I0318 04:42:20.352931   17322 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 367d0316359f"
	I0318 04:42:20.366570   17322 logs.go:123] Gathering logs for storage-provisioner [5bfb08f2c96a] ...
	I0318 04:42:20.366584   17322 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5bfb08f2c96a"
	I0318 04:42:20.378923   17322 logs.go:123] Gathering logs for Docker ...
	I0318 04:42:20.378935   17322 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0318 04:42:20.402166   17322 logs.go:123] Gathering logs for describe nodes ...
	I0318 04:42:20.402183   17322 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0318 04:42:20.457772   17322 logs.go:123] Gathering logs for kube-scheduler [894b6a0a0702] ...
	I0318 04:42:20.457785   17322 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 894b6a0a0702"
	I0318 04:42:20.473081   17322 logs.go:123] Gathering logs for kube-proxy [04d6cdf60161] ...
	I0318 04:42:20.473093   17322 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 04d6cdf60161"
	I0318 04:42:20.485392   17322 logs.go:123] Gathering logs for coredns [3a24458b86a4] ...
	I0318 04:42:20.485404   17322 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3a24458b86a4"
	I0318 04:42:20.496966   17322 logs.go:123] Gathering logs for container status ...
	I0318 04:42:20.496976   17322 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 04:42:20.508242   17322 logs.go:123] Gathering logs for dmesg ...
	I0318 04:42:20.508251   17322 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 04:42:20.512820   17322 logs.go:123] Gathering logs for coredns [0086537aa016] ...
	I0318 04:42:20.512829   17322 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0086537aa016"
	I0318 04:42:20.525034   17322 logs.go:123] Gathering logs for coredns [0a040eebb706] ...
	I0318 04:42:20.525044   17322 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0a040eebb706"
	I0318 04:42:20.536579   17322 logs.go:123] Gathering logs for kube-controller-manager [22a920f51952] ...
	I0318 04:42:20.536588   17322 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 22a920f51952"
	I0318 04:42:20.553618   17322 out.go:304] Setting ErrFile to fd 2...
	I0318 04:42:20.553628   17322 out.go:338] TERM=,COLORTERM=, which probably does not support color
	W0318 04:42:20.553652   17322 out.go:239] X Problems detected in kubelet:
	W0318 04:42:20.553656   17322 out.go:239]   Mar 18 11:34:35 running-upgrade-738000 kubelet[3738]: W0318 11:34:35.864044    3738 reflector.go:324] object-"kube-system"/"kube-proxy": failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:running-upgrade-738000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-738000' and this object
	W0318 04:42:20.553660   17322 out.go:239]   Mar 18 11:34:35 running-upgrade-738000 kubelet[3738]: E0318 11:34:35.864102    3738 reflector.go:138] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:running-upgrade-738000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-738000' and this object
	I0318 04:42:20.553664   17322 out.go:304] Setting ErrFile to fd 2...
	I0318 04:42:20.553667   17322 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0318 04:42:22.201562   17465 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0318 04:42:22.201580   17465 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0318 04:42:27.203018   17465 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0318 04:42:27.203040   17465 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0318 04:42:30.556389   17322 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0318 04:42:32.204875   17465 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0318 04:42:32.204898   17465 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0318 04:42:35.558471   17322 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0318 04:42:35.562153   17322 out.go:177] 
	W0318 04:42:35.567783   17322 out.go:239] X Exiting due to GUEST_START: failed to start node: wait 6m0s for node: wait for healthy API server: apiserver healthz never reported healthy: context deadline exceeded
	W0318 04:42:35.567799   17322 out.go:239] * 
	W0318 04:42:35.568924   17322 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0318 04:42:35.579872   17322 out.go:177] 
	I0318 04:42:37.206928   17465 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0318 04:42:37.206985   17465 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0318 04:42:42.208353   17465 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0318 04:42:42.208618   17465 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0318 04:42:42.240903   17465 logs.go:276] 1 containers: [0a2982ffb84e]
	I0318 04:42:42.240995   17465 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0318 04:42:42.266361   17465 logs.go:276] 1 containers: [704a79c3c784]
	I0318 04:42:42.266451   17465 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0318 04:42:42.282396   17465 logs.go:276] 2 containers: [fe69be91e435 828d4a376c7e]
	I0318 04:42:42.282465   17465 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0318 04:42:42.296769   17465 logs.go:276] 1 containers: [e9fc948a1004]
	I0318 04:42:42.296839   17465 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0318 04:42:42.307412   17465 logs.go:276] 1 containers: [894142fdaac1]
	I0318 04:42:42.307482   17465 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0318 04:42:42.318375   17465 logs.go:276] 1 containers: [1c9856b2b94f]
	I0318 04:42:42.318445   17465 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0318 04:42:42.328466   17465 logs.go:276] 0 containers: []
	W0318 04:42:42.328481   17465 logs.go:278] No container was found matching "kindnet"
	I0318 04:42:42.328537   17465 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0318 04:42:42.339019   17465 logs.go:276] 1 containers: [a247b21e5185]
	I0318 04:42:42.339036   17465 logs.go:123] Gathering logs for kube-proxy [894142fdaac1] ...
	I0318 04:42:42.339042   17465 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 894142fdaac1"
	I0318 04:42:42.351214   17465 logs.go:123] Gathering logs for storage-provisioner [a247b21e5185] ...
	I0318 04:42:42.351225   17465 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a247b21e5185"
	I0318 04:42:42.363213   17465 logs.go:123] Gathering logs for Docker ...
	I0318 04:42:42.363224   17465 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0318 04:42:42.388741   17465 logs.go:123] Gathering logs for container status ...
	I0318 04:42:42.388749   17465 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 04:42:42.402319   17465 logs.go:123] Gathering logs for describe nodes ...
	I0318 04:42:42.402329   17465 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0318 04:42:42.439529   17465 logs.go:123] Gathering logs for coredns [828d4a376c7e] ...
	I0318 04:42:42.439540   17465 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 828d4a376c7e"
	I0318 04:42:42.451262   17465 logs.go:123] Gathering logs for kube-scheduler [e9fc948a1004] ...
	I0318 04:42:42.451273   17465 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e9fc948a1004"
	I0318 04:42:42.467646   17465 logs.go:123] Gathering logs for etcd [704a79c3c784] ...
	I0318 04:42:42.467656   17465 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 704a79c3c784"
	I0318 04:42:42.481749   17465 logs.go:123] Gathering logs for coredns [fe69be91e435] ...
	I0318 04:42:42.481758   17465 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fe69be91e435"
	I0318 04:42:42.493026   17465 logs.go:123] Gathering logs for kube-controller-manager [1c9856b2b94f] ...
	I0318 04:42:42.493037   17465 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1c9856b2b94f"
	I0318 04:42:42.511836   17465 logs.go:123] Gathering logs for kubelet ...
	I0318 04:42:42.511846   17465 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 04:42:42.546724   17465 logs.go:123] Gathering logs for dmesg ...
	I0318 04:42:42.546735   17465 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 04:42:42.551145   17465 logs.go:123] Gathering logs for kube-apiserver [0a2982ffb84e] ...
	I0318 04:42:42.551152   17465 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0a2982ffb84e"
	I0318 04:42:45.067783   17465 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	
	
	==> Docker <==
	-- Journal begins at Mon 2024-03-18 11:33:30 UTC, ends at Mon 2024-03-18 11:42:51 UTC. --
	Mar 18 11:42:28 running-upgrade-738000 cri-dockerd[3057]: time="2024-03-18T11:42:28Z" level=info msg="Using CNI configuration file /etc/cni/net.d/1-k8s.conflist"
	Mar 18 11:42:32 running-upgrade-738000 cri-dockerd[3057]: time="2024-03-18T11:42:32Z" level=error msg="ContainerStats resp: {0x400097f680 linux}"
	Mar 18 11:42:32 running-upgrade-738000 cri-dockerd[3057]: time="2024-03-18T11:42:32Z" level=error msg="ContainerStats resp: {0x400088d100 linux}"
	Mar 18 11:42:33 running-upgrade-738000 cri-dockerd[3057]: time="2024-03-18T11:42:33Z" level=info msg="Using CNI configuration file /etc/cni/net.d/1-k8s.conflist"
	Mar 18 11:42:33 running-upgrade-738000 cri-dockerd[3057]: time="2024-03-18T11:42:33Z" level=error msg="ContainerStats resp: {0x400092c740 linux}"
	Mar 18 11:42:34 running-upgrade-738000 cri-dockerd[3057]: time="2024-03-18T11:42:34Z" level=error msg="ContainerStats resp: {0x400092d380 linux}"
	Mar 18 11:42:34 running-upgrade-738000 cri-dockerd[3057]: time="2024-03-18T11:42:34Z" level=error msg="ContainerStats resp: {0x400092d980 linux}"
	Mar 18 11:42:34 running-upgrade-738000 cri-dockerd[3057]: time="2024-03-18T11:42:34Z" level=error msg="ContainerStats resp: {0x40005d7000 linux}"
	Mar 18 11:42:34 running-upgrade-738000 cri-dockerd[3057]: time="2024-03-18T11:42:34Z" level=error msg="ContainerStats resp: {0x40008ec680 linux}"
	Mar 18 11:42:34 running-upgrade-738000 cri-dockerd[3057]: time="2024-03-18T11:42:34Z" level=error msg="ContainerStats resp: {0x40005d7680 linux}"
	Mar 18 11:42:34 running-upgrade-738000 cri-dockerd[3057]: time="2024-03-18T11:42:34Z" level=error msg="ContainerStats resp: {0x40005d6040 linux}"
	Mar 18 11:42:34 running-upgrade-738000 cri-dockerd[3057]: time="2024-03-18T11:42:34Z" level=error msg="ContainerStats resp: {0x40008ec380 linux}"
	Mar 18 11:42:38 running-upgrade-738000 cri-dockerd[3057]: time="2024-03-18T11:42:38Z" level=info msg="Using CNI configuration file /etc/cni/net.d/1-k8s.conflist"
	Mar 18 11:42:43 running-upgrade-738000 cri-dockerd[3057]: time="2024-03-18T11:42:43Z" level=info msg="Using CNI configuration file /etc/cni/net.d/1-k8s.conflist"
	Mar 18 11:42:44 running-upgrade-738000 cri-dockerd[3057]: time="2024-03-18T11:42:44Z" level=error msg="ContainerStats resp: {0x400088cf80 linux}"
	Mar 18 11:42:44 running-upgrade-738000 cri-dockerd[3057]: time="2024-03-18T11:42:44Z" level=error msg="ContainerStats resp: {0x400097fdc0 linux}"
	Mar 18 11:42:45 running-upgrade-738000 cri-dockerd[3057]: time="2024-03-18T11:42:45Z" level=error msg="ContainerStats resp: {0x400092cd00 linux}"
	Mar 18 11:42:46 running-upgrade-738000 cri-dockerd[3057]: time="2024-03-18T11:42:46Z" level=error msg="ContainerStats resp: {0x400092d9c0 linux}"
	Mar 18 11:42:46 running-upgrade-738000 cri-dockerd[3057]: time="2024-03-18T11:42:46Z" level=error msg="ContainerStats resp: {0x400092de00 linux}"
	Mar 18 11:42:46 running-upgrade-738000 cri-dockerd[3057]: time="2024-03-18T11:42:46Z" level=error msg="ContainerStats resp: {0x40007ff640 linux}"
	Mar 18 11:42:46 running-upgrade-738000 cri-dockerd[3057]: time="2024-03-18T11:42:46Z" level=error msg="ContainerStats resp: {0x40007ffb00 linux}"
	Mar 18 11:42:46 running-upgrade-738000 cri-dockerd[3057]: time="2024-03-18T11:42:46Z" level=error msg="ContainerStats resp: {0x40007fff00 linux}"
	Mar 18 11:42:46 running-upgrade-738000 cri-dockerd[3057]: time="2024-03-18T11:42:46Z" level=error msg="ContainerStats resp: {0x40008ece80 linux}"
	Mar 18 11:42:46 running-upgrade-738000 cri-dockerd[3057]: time="2024-03-18T11:42:46Z" level=error msg="ContainerStats resp: {0x40008ed3c0 linux}"
	Mar 18 11:42:48 running-upgrade-738000 cri-dockerd[3057]: time="2024-03-18T11:42:48Z" level=info msg="Using CNI configuration file /etc/cni/net.d/1-k8s.conflist"
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                      ATTEMPT             POD ID
	9fb84f3244d39       edaa71f2aee88       29 seconds ago      Running             coredns                   2                   5dce3be5aacc4
	acf8308ffb877       edaa71f2aee88       29 seconds ago      Running             coredns                   2                   b2d58f8642607
	0086537aa016c       edaa71f2aee88       2 minutes ago       Exited              coredns                   1                   5dce3be5aacc4
	0a040eebb7063       edaa71f2aee88       2 minutes ago       Exited              coredns                   1                   b2d58f8642607
	04d6cdf601614       fcbd620bbac08       4 minutes ago       Running             kube-proxy                0                   deb1c85e1844a
	5bfb08f2c96a1       66749159455b3       4 minutes ago       Running             storage-provisioner       0                   cdcc78116bae1
	8046e42578d2c       a9a710bb96df0       4 minutes ago       Running             etcd                      0                   7356fd90b0f36
	894b6a0a07023       000c19baf6bba       4 minutes ago       Running             kube-scheduler            0                   e959874e1c795
	d454e61540490       7c5896a75862a       4 minutes ago       Running             kube-apiserver            0                   408f67ec46dde
	22a920f51952c       f61bbe9259d7c       4 minutes ago       Running             kube-controller-manager   0                   be82d26d8de26
	
	
	==> coredns [0086537aa016] <==
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration MD5 = db32ca3650231d74073ff4cf814959a7
	CoreDNS-1.8.6
	linux/arm64, go1.17.1, 13a9191
	[ERROR] plugin/errors: 2 5577246039766204941.7432552958850168004. HINFO: read udp 10.244.0.2:47633->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 5577246039766204941.7432552958850168004. HINFO: read udp 10.244.0.2:53565->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 5577246039766204941.7432552958850168004. HINFO: read udp 10.244.0.2:55584->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 5577246039766204941.7432552958850168004. HINFO: read udp 10.244.0.2:37356->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 5577246039766204941.7432552958850168004. HINFO: read udp 10.244.0.2:38482->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 5577246039766204941.7432552958850168004. HINFO: read udp 10.244.0.2:35281->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 5577246039766204941.7432552958850168004. HINFO: read udp 10.244.0.2:53007->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 5577246039766204941.7432552958850168004. HINFO: read udp 10.244.0.2:34689->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 5577246039766204941.7432552958850168004. HINFO: read udp 10.244.0.2:52029->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 5577246039766204941.7432552958850168004. HINFO: read udp 10.244.0.2:53561->10.0.2.3:53: i/o timeout
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> coredns [0a040eebb706] <==
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration MD5 = db32ca3650231d74073ff4cf814959a7
	CoreDNS-1.8.6
	linux/arm64, go1.17.1, 13a9191
	[ERROR] plugin/errors: 2 6870331716304721980.2275063744941704997. HINFO: read udp 10.244.0.3:42868->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 6870331716304721980.2275063744941704997. HINFO: read udp 10.244.0.3:44805->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 6870331716304721980.2275063744941704997. HINFO: read udp 10.244.0.3:35167->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 6870331716304721980.2275063744941704997. HINFO: read udp 10.244.0.3:38409->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 6870331716304721980.2275063744941704997. HINFO: read udp 10.244.0.3:40864->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 6870331716304721980.2275063744941704997. HINFO: read udp 10.244.0.3:53806->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 6870331716304721980.2275063744941704997. HINFO: read udp 10.244.0.3:51106->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 6870331716304721980.2275063744941704997. HINFO: read udp 10.244.0.3:54828->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 6870331716304721980.2275063744941704997. HINFO: read udp 10.244.0.3:44089->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 6870331716304721980.2275063744941704997. HINFO: read udp 10.244.0.3:44426->10.0.2.3:53: i/o timeout
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> coredns [9fb84f3244d3] <==
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration MD5 = db32ca3650231d74073ff4cf814959a7
	CoreDNS-1.8.6
	linux/arm64, go1.17.1, 13a9191
	[ERROR] plugin/errors: 2 5782632717577849363.456643963989378394. HINFO: read udp 10.244.0.2:56757->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 5782632717577849363.456643963989378394. HINFO: read udp 10.244.0.2:55593->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 5782632717577849363.456643963989378394. HINFO: read udp 10.244.0.2:34670->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 5782632717577849363.456643963989378394. HINFO: read udp 10.244.0.2:60608->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 5782632717577849363.456643963989378394. HINFO: read udp 10.244.0.2:57937->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 5782632717577849363.456643963989378394. HINFO: read udp 10.244.0.2:41148->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 5782632717577849363.456643963989378394. HINFO: read udp 10.244.0.2:43362->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 5782632717577849363.456643963989378394. HINFO: read udp 10.244.0.2:38637->10.0.2.3:53: i/o timeout
	
	
	==> coredns [acf8308ffb87] <==
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration MD5 = db32ca3650231d74073ff4cf814959a7
	CoreDNS-1.8.6
	linux/arm64, go1.17.1, 13a9191
	[ERROR] plugin/errors: 2 3483230521700229039.99927714307096482. HINFO: read udp 10.244.0.3:56453->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 3483230521700229039.99927714307096482. HINFO: read udp 10.244.0.3:34226->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 3483230521700229039.99927714307096482. HINFO: read udp 10.244.0.3:48466->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 3483230521700229039.99927714307096482. HINFO: read udp 10.244.0.3:54559->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 3483230521700229039.99927714307096482. HINFO: read udp 10.244.0.3:47289->10.0.2.3:53: i/o timeout
	
	
	==> describe nodes <==
	Name:               running-upgrade-738000
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=running-upgrade-738000
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=16bdcbec856cf730004e5bed78d1b7625f13388a
	                    minikube.k8s.io/name=running-upgrade-738000
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_03_18T04_38_31_0700
	                    minikube.k8s.io/version=v1.32.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 18 Mar 2024 11:38:28 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  running-upgrade-738000
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 18 Mar 2024 11:42:47 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 18 Mar 2024 11:38:31 +0000   Mon, 18 Mar 2024 11:38:27 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 18 Mar 2024 11:38:31 +0000   Mon, 18 Mar 2024 11:38:27 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 18 Mar 2024 11:38:31 +0000   Mon, 18 Mar 2024 11:38:27 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 18 Mar 2024 11:38:31 +0000   Mon, 18 Mar 2024 11:38:31 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  10.0.2.15
	  Hostname:    running-upgrade-738000
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17784760Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             2148820Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17784760Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             2148820Ki
	  pods:               110
	System Info:
	  Machine ID:                 e095ab296f27450197a961436ab138e9
	  System UUID:                e095ab296f27450197a961436ab138e9
	  Boot ID:                    0a70486c-2fd3-4638-8078-608ea838cd2d
	  Kernel Version:             5.10.57
	  OS Image:                   Buildroot 2021.02.12
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  docker://20.10.16
	  Kubelet Version:            v1.24.1
	  Kube-Proxy Version:         v1.24.1
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                              CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                              ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-6d4b75cb6d-lcgj5                          100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     4m7s
	  kube-system                 coredns-6d4b75cb6d-zzkpj                          100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     4m7s
	  kube-system                 etcd-running-upgrade-738000                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         4m20s
	  kube-system                 kube-apiserver-running-upgrade-738000             250m (12%)    0 (0%)      0 (0%)           0 (0%)         4m20s
	  kube-system                 kube-controller-manager-running-upgrade-738000    200m (10%)    0 (0%)      0 (0%)           0 (0%)         4m21s
	  kube-system                 kube-proxy-8lmww                                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m8s
	  kube-system                 kube-scheduler-running-upgrade-738000             100m (5%)     0 (0%)      0 (0%)           0 (0%)         4m20s
	  kube-system                 storage-provisioner                               0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m20s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                850m (42%)   0 (0%)
	  memory             240Mi (11%)  340Mi (16%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-1Gi      0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	  hugepages-32Mi     0 (0%)       0 (0%)
	  hugepages-64Ki     0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age    From             Message
	  ----    ------                   ----   ----             -------
	  Normal  Starting                 4m7s   kube-proxy       
	  Normal  NodeReady                4m20s  kubelet          Node running-upgrade-738000 status is now: NodeReady
	  Normal  NodeAllocatableEnforced  4m20s  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  4m20s  kubelet          Node running-upgrade-738000 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    4m20s  kubelet          Node running-upgrade-738000 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     4m20s  kubelet          Node running-upgrade-738000 status is now: NodeHasSufficientPID
	  Normal  Starting                 4m20s  kubelet          Starting kubelet.
	  Normal  RegisteredNode           4m8s   node-controller  Node running-upgrade-738000 event: Registered Node running-upgrade-738000 in Controller
	
	
	==> dmesg <==
	[  +1.815073] systemd-fstab-generator[875]: Ignoring "noauto" for root device
	[  +0.065677] systemd-fstab-generator[886]: Ignoring "noauto" for root device
	[  +0.063203] systemd-fstab-generator[897]: Ignoring "noauto" for root device
	[  +1.152127] kauditd_printk_skb: 53 callbacks suppressed
	[  +0.075347] systemd-fstab-generator[1047]: Ignoring "noauto" for root device
	[  +0.055791] systemd-fstab-generator[1058]: Ignoring "noauto" for root device
	[  +2.754826] systemd-fstab-generator[1289]: Ignoring "noauto" for root device
	[Mar18 11:34] systemd-fstab-generator[1943]: Ignoring "noauto" for root device
	[  +2.862296] systemd-fstab-generator[2222]: Ignoring "noauto" for root device
	[  +0.127052] systemd-fstab-generator[2258]: Ignoring "noauto" for root device
	[  +0.079000] systemd-fstab-generator[2269]: Ignoring "noauto" for root device
	[  +0.083276] systemd-fstab-generator[2282]: Ignoring "noauto" for root device
	[  +3.388592] kauditd_printk_skb: 47 callbacks suppressed
	[  +0.207184] systemd-fstab-generator[3012]: Ignoring "noauto" for root device
	[  +0.083967] systemd-fstab-generator[3025]: Ignoring "noauto" for root device
	[  +0.062587] systemd-fstab-generator[3036]: Ignoring "noauto" for root device
	[  +0.078269] systemd-fstab-generator[3050]: Ignoring "noauto" for root device
	[  +2.076522] systemd-fstab-generator[3205]: Ignoring "noauto" for root device
	[  +5.737771] systemd-fstab-generator[3603]: Ignoring "noauto" for root device
	[  +1.244096] systemd-fstab-generator[3732]: Ignoring "noauto" for root device
	[ +20.542889] kauditd_printk_skb: 68 callbacks suppressed
	[Mar18 11:38] kauditd_printk_skb: 23 callbacks suppressed
	[  +1.394316] systemd-fstab-generator[10394]: Ignoring "noauto" for root device
	[  +5.632015] systemd-fstab-generator[10981]: Ignoring "noauto" for root device
	[  +0.471962] systemd-fstab-generator[11115]: Ignoring "noauto" for root device
	
	
	==> etcd [8046e42578d2] <==
	{"level":"info","ts":"2024-03-18T11:38:26.396Z","caller":"embed/etcd.go:277","msg":"now serving peer/client/metrics","local-member-id":"f074a195de705325","initial-advertise-peer-urls":["https://10.0.2.15:2380"],"listen-peer-urls":["https://10.0.2.15:2380"],"advertise-client-urls":["https://10.0.2.15:2379"],"listen-client-urls":["https://10.0.2.15:2379","https://127.0.0.1:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2024-03-18T11:38:26.396Z","caller":"embed/etcd.go:763","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2024-03-18T11:38:26.395Z","caller":"etcdserver/server.go:736","msg":"started as single-node; fast-forwarding election ticks","local-member-id":"f074a195de705325","forward-ticks":9,"forward-duration":"900ms","election-ticks":10,"election-timeout":"1s"}
	{"level":"info","ts":"2024-03-18T11:38:26.395Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f074a195de705325 switched to configuration voters=(17326651331455243045)"}
	{"level":"info","ts":"2024-03-18T11:38:26.396Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"ef296cf39f5d9d66","local-member-id":"f074a195de705325","added-peer-id":"f074a195de705325","added-peer-peer-urls":["https://10.0.2.15:2380"]}
	{"level":"info","ts":"2024-03-18T11:38:26.396Z","caller":"embed/etcd.go:581","msg":"serving peer traffic","address":"10.0.2.15:2380"}
	{"level":"info","ts":"2024-03-18T11:38:26.396Z","caller":"embed/etcd.go:553","msg":"cmux::serve","address":"10.0.2.15:2380"}
	{"level":"info","ts":"2024-03-18T11:38:26.992Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f074a195de705325 is starting a new election at term 1"}
	{"level":"info","ts":"2024-03-18T11:38:26.992Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f074a195de705325 became pre-candidate at term 1"}
	{"level":"info","ts":"2024-03-18T11:38:26.992Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f074a195de705325 received MsgPreVoteResp from f074a195de705325 at term 1"}
	{"level":"info","ts":"2024-03-18T11:38:26.992Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f074a195de705325 became candidate at term 2"}
	{"level":"info","ts":"2024-03-18T11:38:26.992Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f074a195de705325 received MsgVoteResp from f074a195de705325 at term 2"}
	{"level":"info","ts":"2024-03-18T11:38:26.992Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f074a195de705325 became leader at term 2"}
	{"level":"info","ts":"2024-03-18T11:38:26.992Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: f074a195de705325 elected leader f074a195de705325 at term 2"}
	{"level":"info","ts":"2024-03-18T11:38:26.992Z","caller":"etcdserver/server.go:2507","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2024-03-18T11:38:26.993Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"ef296cf39f5d9d66","local-member-id":"f074a195de705325","cluster-version":"3.5"}
	{"level":"info","ts":"2024-03-18T11:38:26.993Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-03-18T11:38:26.993Z","caller":"etcdserver/server.go:2531","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2024-03-18T11:38:26.993Z","caller":"etcdserver/server.go:2042","msg":"published local member to cluster through raft","local-member-id":"f074a195de705325","local-member-attributes":"{Name:running-upgrade-738000 ClientURLs:[https://10.0.2.15:2379]}","request-path":"/0/members/f074a195de705325/attributes","cluster-id":"ef296cf39f5d9d66","publish-timeout":"7s"}
	{"level":"info","ts":"2024-03-18T11:38:26.993Z","caller":"embed/serve.go:98","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-03-18T11:38:26.993Z","caller":"embed/serve.go:98","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-03-18T11:38:26.994Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-03-18T11:38:26.994Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-03-18T11:38:26.994Z","caller":"embed/serve.go:188","msg":"serving client traffic securely","address":"10.0.2.15:2379"}
	{"level":"info","ts":"2024-03-18T11:38:26.998Z","caller":"embed/serve.go:188","msg":"serving client traffic securely","address":"127.0.0.1:2379"}
	
	
	==> kernel <==
	 11:42:51 up 9 min,  0 users,  load average: 0.53, 0.39, 0.20
	Linux running-upgrade-738000 5.10.57 #1 SMP PREEMPT Thu Jun 16 21:01:29 UTC 2022 aarch64 GNU/Linux
	PRETTY_NAME="Buildroot 2021.02.12"
	
	
	==> kube-apiserver [d454e6154049] <==
	I0318 11:38:28.188721       1 controller.go:611] quota admission added evaluator for: namespaces
	I0318 11:38:28.229984       1 cache.go:39] Caches are synced for autoregister controller
	I0318 11:38:28.229987       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I0318 11:38:28.229992       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0318 11:38:28.230366       1 shared_informer.go:262] Caches are synced for cluster_authentication_trust_controller
	I0318 11:38:28.232012       1 apf_controller.go:322] Running API Priority and Fairness config worker
	I0318 11:38:28.237522       1 controller.go:611] quota admission added evaluator for: leases.coordination.k8s.io
	I0318 11:38:28.250098       1 shared_informer.go:262] Caches are synced for crd-autoregister
	I0318 11:38:28.963841       1 controller.go:132] OpenAPI AggregationController: action for item k8s_internal_local_delegation_chain_0000000000: Nothing (removed from the queue).
	I0318 11:38:29.133814       1 storage_scheduling.go:95] created PriorityClass system-node-critical with value 2000001000
	I0318 11:38:29.135674       1 storage_scheduling.go:95] created PriorityClass system-cluster-critical with value 2000000000
	I0318 11:38:29.135715       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I0318 11:38:29.262823       1 controller.go:611] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0318 11:38:29.278605       1 controller.go:611] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0318 11:38:29.396078       1 alloc.go:327] "allocated clusterIPs" service="default/kubernetes" clusterIPs=map[IPv4:10.96.0.1]
	W0318 11:38:29.398309       1 lease.go:234] Resetting endpoints for master service "kubernetes" to [10.0.2.15]
	I0318 11:38:29.398782       1 controller.go:611] quota admission added evaluator for: endpoints
	I0318 11:38:29.400596       1 controller.go:611] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0318 11:38:30.264909       1 controller.go:611] quota admission added evaluator for: serviceaccounts
	I0318 11:38:30.981986       1 controller.go:611] quota admission added evaluator for: deployments.apps
	I0318 11:38:30.985709       1 alloc.go:327] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs=map[IPv4:10.96.0.10]
	I0318 11:38:30.998850       1 controller.go:611] quota admission added evaluator for: daemonsets.apps
	I0318 11:38:43.770903       1 controller.go:611] quota admission added evaluator for: controllerrevisions.apps
	I0318 11:38:43.823149       1 controller.go:611] quota admission added evaluator for: replicasets.apps
	I0318 11:38:44.318266       1 controller.go:611] quota admission added evaluator for: events.events.k8s.io
	
	
	==> kube-controller-manager [22a920f51952] <==
	I0318 11:38:43.103233       1 shared_informer.go:262] Caches are synced for job
	I0318 11:38:43.106379       1 shared_informer.go:262] Caches are synced for deployment
	I0318 11:38:43.107488       1 shared_informer.go:262] Caches are synced for certificate-csrapproving
	I0318 11:38:43.112721       1 shared_informer.go:262] Caches are synced for expand
	I0318 11:38:43.114894       1 shared_informer.go:262] Caches are synced for PV protection
	I0318 11:38:43.114910       1 shared_informer.go:262] Caches are synced for cronjob
	I0318 11:38:43.114985       1 shared_informer.go:262] Caches are synced for service account
	I0318 11:38:43.115088       1 shared_informer.go:262] Caches are synced for HPA
	I0318 11:38:43.116045       1 shared_informer.go:262] Caches are synced for ClusterRoleAggregator
	I0318 11:38:43.118390       1 shared_informer.go:262] Caches are synced for endpoint_slice
	I0318 11:38:43.119771       1 shared_informer.go:262] Caches are synced for ReplicationController
	I0318 11:38:43.123235       1 shared_informer.go:262] Caches are synced for endpoint_slice_mirroring
	I0318 11:38:43.126434       1 shared_informer.go:262] Caches are synced for namespace
	I0318 11:38:43.126465       1 shared_informer.go:262] Caches are synced for PVC protection
	I0318 11:38:43.164122       1 shared_informer.go:262] Caches are synced for disruption
	I0318 11:38:43.164149       1 disruption.go:371] Sending events to api server.
	I0318 11:38:43.321652       1 shared_informer.go:262] Caches are synced for resource quota
	I0318 11:38:43.371998       1 shared_informer.go:262] Caches are synced for resource quota
	I0318 11:38:43.737964       1 shared_informer.go:262] Caches are synced for garbage collector
	I0318 11:38:43.773647       1 event.go:294] "Event occurred" object="kube-system/kube-proxy" fieldPath="" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kube-proxy-8lmww"
	I0318 11:38:43.822169       1 shared_informer.go:262] Caches are synced for garbage collector
	I0318 11:38:43.822206       1 garbagecollector.go:158] Garbage collector: all resource monitors have synced. Proceeding to collect garbage
	I0318 11:38:43.824316       1 event.go:294] "Event occurred" object="kube-system/coredns" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set coredns-6d4b75cb6d to 2"
	I0318 11:38:44.122508       1 event.go:294] "Event occurred" object="kube-system/coredns-6d4b75cb6d" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: coredns-6d4b75cb6d-zzkpj"
	I0318 11:38:44.128599       1 event.go:294] "Event occurred" object="kube-system/coredns-6d4b75cb6d" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: coredns-6d4b75cb6d-lcgj5"
	
	
	==> kube-proxy [04d6cdf60161] <==
	I0318 11:38:44.305548       1 node.go:163] Successfully retrieved node IP: 10.0.2.15
	I0318 11:38:44.305574       1 server_others.go:138] "Detected node IP" address="10.0.2.15"
	I0318 11:38:44.305584       1 server_others.go:578] "Unknown proxy mode, assuming iptables proxy" proxyMode=""
	I0318 11:38:44.316167       1 server_others.go:199] "kube-proxy running in single-stack mode, this ipFamily is not supported" ipFamily=IPv6
	I0318 11:38:44.316183       1 server_others.go:206] "Using iptables Proxier"
	I0318 11:38:44.316200       1 proxier.go:259] "Setting route_localnet=1, use nodePortAddresses to filter loopback addresses for NodePorts to skip it https://issues.k8s.io/90259"
	I0318 11:38:44.316331       1 server.go:661] "Version info" version="v1.24.1"
	I0318 11:38:44.316372       1 server.go:663] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0318 11:38:44.316674       1 config.go:317] "Starting service config controller"
	I0318 11:38:44.316686       1 shared_informer.go:255] Waiting for caches to sync for service config
	I0318 11:38:44.316746       1 config.go:226] "Starting endpoint slice config controller"
	I0318 11:38:44.316752       1 shared_informer.go:255] Waiting for caches to sync for endpoint slice config
	I0318 11:38:44.317032       1 config.go:444] "Starting node config controller"
	I0318 11:38:44.317061       1 shared_informer.go:255] Waiting for caches to sync for node config
	I0318 11:38:44.417582       1 shared_informer.go:262] Caches are synced for service config
	I0318 11:38:44.417582       1 shared_informer.go:262] Caches are synced for node config
	I0318 11:38:44.417593       1 shared_informer.go:262] Caches are synced for endpoint slice config
	
	
	==> kube-scheduler [894b6a0a0702] <==
	W0318 11:38:28.196651       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0318 11:38:28.196660       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W0318 11:38:28.196717       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0318 11:38:28.196745       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W0318 11:38:28.196769       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0318 11:38:28.196777       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	W0318 11:38:28.196798       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0318 11:38:28.196806       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W0318 11:38:28.196832       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0318 11:38:28.196838       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W0318 11:38:28.197103       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0318 11:38:28.197134       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W0318 11:38:28.197139       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0318 11:38:28.197158       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W0318 11:38:28.197108       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0318 11:38:28.197164       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W0318 11:38:29.084315       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0318 11:38:29.084343       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	W0318 11:38:29.109286       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0318 11:38:29.109306       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W0318 11:38:29.118983       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0318 11:38:29.119059       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W0318 11:38:29.214259       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0318 11:38:29.214347       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	I0318 11:38:29.394151       1 shared_informer.go:262] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	-- Journal begins at Mon 2024-03-18 11:33:30 UTC, ends at Mon 2024-03-18 11:42:52 UTC. --
	Mar 18 11:38:33 running-upgrade-738000 kubelet[10987]: I0318 11:38:33.214840   10987 request.go:601] Waited for 1.13631515s due to client-side throttling, not priority and fairness, request: POST:https://control-plane.minikube.internal:8443/api/v1/namespaces/kube-system/pods
	Mar 18 11:38:33 running-upgrade-738000 kubelet[10987]: E0318 11:38:33.218060   10987 kubelet.go:1690] "Failed creating a mirror pod for" err="pods \"kube-scheduler-running-upgrade-738000\" already exists" pod="kube-system/kube-scheduler-running-upgrade-738000"
	Mar 18 11:38:43 running-upgrade-738000 kubelet[10987]: I0318 11:38:43.079803   10987 topology_manager.go:200] "Topology Admit Handler"
	Mar 18 11:38:43 running-upgrade-738000 kubelet[10987]: I0318 11:38:43.133918   10987 kuberuntime_manager.go:1095] "Updating runtime config through cri with podcidr" CIDR="10.244.0.0/24"
	Mar 18 11:38:43 running-upgrade-738000 kubelet[10987]: I0318 11:38:43.134056   10987 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/5b8ae365-41dc-4230-aa27-60d1a49cc697-tmp\") pod \"storage-provisioner\" (UID: \"5b8ae365-41dc-4230-aa27-60d1a49cc697\") " pod="kube-system/storage-provisioner"
	Mar 18 11:38:43 running-upgrade-738000 kubelet[10987]: I0318 11:38:43.134070   10987 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bztfh\" (UniqueName: \"kubernetes.io/projected/5b8ae365-41dc-4230-aa27-60d1a49cc697-kube-api-access-bztfh\") pod \"storage-provisioner\" (UID: \"5b8ae365-41dc-4230-aa27-60d1a49cc697\") " pod="kube-system/storage-provisioner"
	Mar 18 11:38:43 running-upgrade-738000 kubelet[10987]: I0318 11:38:43.134314   10987 kubelet_network.go:60] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.244.0.0/24"
	Mar 18 11:38:43 running-upgrade-738000 kubelet[10987]: E0318 11:38:43.238211   10987 projected.go:286] Couldn't get configMap kube-system/kube-root-ca.crt: configmap "kube-root-ca.crt" not found
	Mar 18 11:38:43 running-upgrade-738000 kubelet[10987]: E0318 11:38:43.238258   10987 projected.go:192] Error preparing data for projected volume kube-api-access-bztfh for pod kube-system/storage-provisioner: configmap "kube-root-ca.crt" not found
	Mar 18 11:38:43 running-upgrade-738000 kubelet[10987]: E0318 11:38:43.238297   10987 nestedpendingoperations.go:335] Operation for "{volumeName:kubernetes.io/projected/5b8ae365-41dc-4230-aa27-60d1a49cc697-kube-api-access-bztfh podName:5b8ae365-41dc-4230-aa27-60d1a49cc697 nodeName:}" failed. No retries permitted until 2024-03-18 11:38:43.738283574 +0000 UTC m=+12.769218925 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-bztfh" (UniqueName: "kubernetes.io/projected/5b8ae365-41dc-4230-aa27-60d1a49cc697-kube-api-access-bztfh") pod "storage-provisioner" (UID: "5b8ae365-41dc-4230-aa27-60d1a49cc697") : configmap "kube-root-ca.crt" not found
	Mar 18 11:38:43 running-upgrade-738000 kubelet[10987]: I0318 11:38:43.775594   10987 topology_manager.go:200] "Topology Admit Handler"
	Mar 18 11:38:43 running-upgrade-738000 kubelet[10987]: I0318 11:38:43.839748   10987 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/53d28c5c-0c2f-4005-8b2d-1e953b7241e1-kube-proxy\") pod \"kube-proxy-8lmww\" (UID: \"53d28c5c-0c2f-4005-8b2d-1e953b7241e1\") " pod="kube-system/kube-proxy-8lmww"
	Mar 18 11:38:43 running-upgrade-738000 kubelet[10987]: I0318 11:38:43.940503   10987 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/53d28c5c-0c2f-4005-8b2d-1e953b7241e1-lib-modules\") pod \"kube-proxy-8lmww\" (UID: \"53d28c5c-0c2f-4005-8b2d-1e953b7241e1\") " pod="kube-system/kube-proxy-8lmww"
	Mar 18 11:38:43 running-upgrade-738000 kubelet[10987]: I0318 11:38:43.940533   10987 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/53d28c5c-0c2f-4005-8b2d-1e953b7241e1-xtables-lock\") pod \"kube-proxy-8lmww\" (UID: \"53d28c5c-0c2f-4005-8b2d-1e953b7241e1\") " pod="kube-system/kube-proxy-8lmww"
	Mar 18 11:38:43 running-upgrade-738000 kubelet[10987]: I0318 11:38:43.940546   10987 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f92rz\" (UniqueName: \"kubernetes.io/projected/53d28c5c-0c2f-4005-8b2d-1e953b7241e1-kube-api-access-f92rz\") pod \"kube-proxy-8lmww\" (UID: \"53d28c5c-0c2f-4005-8b2d-1e953b7241e1\") " pod="kube-system/kube-proxy-8lmww"
	Mar 18 11:38:44 running-upgrade-738000 kubelet[10987]: I0318 11:38:44.127178   10987 topology_manager.go:200] "Topology Admit Handler"
	Mar 18 11:38:44 running-upgrade-738000 kubelet[10987]: I0318 11:38:44.134186   10987 topology_manager.go:200] "Topology Admit Handler"
	Mar 18 11:38:44 running-upgrade-738000 kubelet[10987]: I0318 11:38:44.142894   10987 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/75644e7c-0729-487b-84e7-111f802c9d82-config-volume\") pod \"coredns-6d4b75cb6d-zzkpj\" (UID: \"75644e7c-0729-487b-84e7-111f802c9d82\") " pod="kube-system/coredns-6d4b75cb6d-zzkpj"
	Mar 18 11:38:44 running-upgrade-738000 kubelet[10987]: I0318 11:38:44.142911   10987 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/9b77e772-b43d-49a7-9585-8cf094837866-config-volume\") pod \"coredns-6d4b75cb6d-lcgj5\" (UID: \"9b77e772-b43d-49a7-9585-8cf094837866\") " pod="kube-system/coredns-6d4b75cb6d-lcgj5"
	Mar 18 11:38:44 running-upgrade-738000 kubelet[10987]: I0318 11:38:44.142924   10987 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rblmg\" (UniqueName: \"kubernetes.io/projected/75644e7c-0729-487b-84e7-111f802c9d82-kube-api-access-rblmg\") pod \"coredns-6d4b75cb6d-zzkpj\" (UID: \"75644e7c-0729-487b-84e7-111f802c9d82\") " pod="kube-system/coredns-6d4b75cb6d-zzkpj"
	Mar 18 11:38:44 running-upgrade-738000 kubelet[10987]: I0318 11:38:44.142936   10987 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gxgcq\" (UniqueName: \"kubernetes.io/projected/9b77e772-b43d-49a7-9585-8cf094837866-kube-api-access-gxgcq\") pod \"coredns-6d4b75cb6d-lcgj5\" (UID: \"9b77e772-b43d-49a7-9585-8cf094837866\") " pod="kube-system/coredns-6d4b75cb6d-lcgj5"
	Mar 18 11:38:44 running-upgrade-738000 kubelet[10987]: I0318 11:38:44.217180   10987 pod_container_deletor.go:79] "Container not found in pod's containers" containerID="deb1c85e1844ab1bfdf113804f429cc06fa0ddcd73acc2f70ea3bde65d0b0c4b"
	Mar 18 11:38:44 running-upgrade-738000 kubelet[10987]: I0318 11:38:44.229183   10987 pod_container_deletor.go:79] "Container not found in pod's containers" containerID="cdcc78116bae1c55818ae333643e91be8d854e1d75e087ed43bf501efdff7316"
	Mar 18 11:42:22 running-upgrade-738000 kubelet[10987]: I0318 11:42:22.492005   10987 scope.go:110] "RemoveContainer" containerID="367d0316359f80538fb2aa3458e59a0ee9bd46105a23247b0bc086cecab7dfb7"
	Mar 18 11:42:22 running-upgrade-738000 kubelet[10987]: I0318 11:42:22.503904   10987 scope.go:110] "RemoveContainer" containerID="3a24458b86a420731b053f560337ed6a01bf590ae2a3ed2002a7aef4f0aaffda"
	
	
	==> storage-provisioner [5bfb08f2c96a] <==
	I0318 11:38:44.238707       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0318 11:38:44.260868       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0318 11:38:44.260894       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0318 11:38:44.264711       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0318 11:38:44.264875       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_running-upgrade-738000_e28cb2b7-1cb7-451c-a62f-4a911875ff9e!
	I0318 11:38:44.265326       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"b651d80e-2565-4f49-a987-cf3c7e831558", APIVersion:"v1", ResourceVersion:"361", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' running-upgrade-738000_e28cb2b7-1cb7-451c-a62f-4a911875ff9e became leader
	I0318 11:38:44.365694       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_running-upgrade-738000_e28cb2b7-1cb7-451c-a62f-4a911875ff9e!
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.APIServer}} -p running-upgrade-738000 -n running-upgrade-738000
helpers_test.go:254: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.APIServer}} -p running-upgrade-738000 -n running-upgrade-738000: exit status 2 (15.738320959s)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:254: status error: exit status 2 (may be ok)
helpers_test.go:256: "running-upgrade-738000" apiserver is not running, skipping kubectl commands (state="Stopped")
helpers_test.go:175: Cleaning up "running-upgrade-738000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p running-upgrade-738000
helpers_test.go:178: (dbg) Done: out/minikube-darwin-arm64 delete -p running-upgrade-738000: (2.314592375s)
--- FAIL: TestRunningBinaryUpgrade (633.31s)

                                                
                                    
TestKubernetesUpgrade (18.77s)

                                                
                                                
=== RUN   TestKubernetesUpgrade
=== PAUSE TestKubernetesUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:222: (dbg) Run:  out/minikube-darwin-arm64 start -p kubernetes-upgrade-311000 --memory=2200 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=qemu2 
version_upgrade_test.go:222: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p kubernetes-upgrade-311000 --memory=2200 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=qemu2 : exit status 80 (9.858265791s)

                                                
                                                
-- stdout --
	* [kubernetes-upgrade-311000] minikube v1.32.0 on Darwin 14.3.1 (arm64)
	  - MINIKUBE_LOCATION=18429
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18429-15072/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18429-15072/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "kubernetes-upgrade-311000" primary control-plane node in "kubernetes-upgrade-311000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "kubernetes-upgrade-311000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0318 04:35:36.940684   17386 out.go:291] Setting OutFile to fd 1 ...
	I0318 04:35:36.941079   17386 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0318 04:35:36.941084   17386 out.go:304] Setting ErrFile to fd 2...
	I0318 04:35:36.941086   17386 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0318 04:35:36.941281   17386 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18429-15072/.minikube/bin
	I0318 04:35:36.942990   17386 out.go:298] Setting JSON to false
	I0318 04:35:36.959596   17386 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":9309,"bootTime":1710752427,"procs":479,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.3.1","kernelVersion":"23.3.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0318 04:35:36.959654   17386 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0318 04:35:36.971181   17386 out.go:177] * [kubernetes-upgrade-311000] minikube v1.32.0 on Darwin 14.3.1 (arm64)
	I0318 04:35:36.981147   17386 out.go:177]   - MINIKUBE_LOCATION=18429
	I0318 04:35:36.981191   17386 notify.go:220] Checking for updates...
	I0318 04:35:36.985240   17386 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/18429-15072/kubeconfig
	I0318 04:35:36.989117   17386 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0318 04:35:36.992141   17386 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0318 04:35:36.996176   17386 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18429-15072/.minikube
	I0318 04:35:36.999220   17386 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0318 04:35:37.002633   17386 config.go:182] Loaded profile config "multinode-969000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I0318 04:35:37.002704   17386 config.go:182] Loaded profile config "running-upgrade-738000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0318 04:35:37.002747   17386 driver.go:392] Setting default libvirt URI to qemu:///system
	I0318 04:35:37.007128   17386 out.go:177] * Using the qemu2 driver based on user configuration
	I0318 04:35:37.014062   17386 start.go:297] selected driver: qemu2
	I0318 04:35:37.014067   17386 start.go:901] validating driver "qemu2" against <nil>
	I0318 04:35:37.014072   17386 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0318 04:35:37.016248   17386 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0318 04:35:37.019156   17386 out.go:177] * Automatically selected the socket_vmnet network
	I0318 04:35:37.023271   17386 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0318 04:35:37.023299   17386 cni.go:84] Creating CNI manager for ""
	I0318 04:35:37.023306   17386 cni.go:162] CNI unnecessary in this configuration, recommending no CNI
	I0318 04:35:37.023336   17386 start.go:340] cluster config:
	{Name:kubernetes-upgrade-311000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:kubernetes-upgrade-311000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster
.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP:
SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0318 04:35:37.027593   17386 iso.go:125] acquiring lock: {Name:mkb8143674083e0c7a46a3ed751b3800392bcd24 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0318 04:35:37.036212   17386 out.go:177] * Starting "kubernetes-upgrade-311000" primary control-plane node in "kubernetes-upgrade-311000" cluster
	I0318 04:35:37.040185   17386 preload.go:132] Checking if preload exists for k8s version v1.20.0 and runtime docker
	I0318 04:35:37.040207   17386 preload.go:147] Found local preload: /Users/jenkins/minikube-integration/18429-15072/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4
	I0318 04:35:37.040215   17386 cache.go:56] Caching tarball of preloaded images
	I0318 04:35:37.040280   17386 preload.go:173] Found /Users/jenkins/minikube-integration/18429-15072/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0318 04:35:37.040286   17386 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on docker
	I0318 04:35:37.040350   17386 profile.go:142] Saving config to /Users/jenkins/minikube-integration/18429-15072/.minikube/profiles/kubernetes-upgrade-311000/config.json ...
	I0318 04:35:37.040360   17386 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/18429-15072/.minikube/profiles/kubernetes-upgrade-311000/config.json: {Name:mk7b127793aedb4c070f808e9f6b8720b9893b1c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0318 04:35:37.040565   17386 start.go:360] acquireMachinesLock for kubernetes-upgrade-311000: {Name:mkdef65b5c2b3344d8453e477bf2f170fbff3359 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0318 04:35:37.040596   17386 start.go:364] duration metric: took 23.584µs to acquireMachinesLock for "kubernetes-upgrade-311000"
	I0318 04:35:37.040608   17386 start.go:93] Provisioning new machine with config: &{Name:kubernetes-upgrade-311000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17991/minikube-v1.32.1-1710520390-17991-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 Kubernetes
Config:{KubernetesVersion:v1.20.0 ClusterName:kubernetes-upgrade-311000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: D
isableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0318 04:35:37.040641   17386 start.go:125] createHost starting for "" (driver="qemu2")
	I0318 04:35:37.049138   17386 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0318 04:35:37.076571   17386 start.go:159] libmachine.API.Create for "kubernetes-upgrade-311000" (driver="qemu2")
	I0318 04:35:37.076601   17386 client.go:168] LocalClient.Create starting
	I0318 04:35:37.076675   17386 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18429-15072/.minikube/certs/ca.pem
	I0318 04:35:37.076707   17386 main.go:141] libmachine: Decoding PEM data...
	I0318 04:35:37.076718   17386 main.go:141] libmachine: Parsing certificate...
	I0318 04:35:37.076762   17386 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18429-15072/.minikube/certs/cert.pem
	I0318 04:35:37.076783   17386 main.go:141] libmachine: Decoding PEM data...
	I0318 04:35:37.076789   17386 main.go:141] libmachine: Parsing certificate...
	I0318 04:35:37.077120   17386 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/18429-15072/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/18429-15072/.minikube/cache/iso/arm64/minikube-v1.32.1-1710520390-17991-arm64.iso...
	I0318 04:35:37.258111   17386 main.go:141] libmachine: Creating SSH key...
	I0318 04:35:37.312329   17386 main.go:141] libmachine: Creating Disk image...
	I0318 04:35:37.312335   17386 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0318 04:35:37.312513   17386 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/18429-15072/.minikube/machines/kubernetes-upgrade-311000/disk.qcow2.raw /Users/jenkins/minikube-integration/18429-15072/.minikube/machines/kubernetes-upgrade-311000/disk.qcow2
	I0318 04:35:37.325757   17386 main.go:141] libmachine: STDOUT: 
	I0318 04:35:37.325777   17386 main.go:141] libmachine: STDERR: 
	I0318 04:35:37.325840   17386 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/18429-15072/.minikube/machines/kubernetes-upgrade-311000/disk.qcow2 +20000M
	I0318 04:35:37.336726   17386 main.go:141] libmachine: STDOUT: Image resized.
	
	I0318 04:35:37.336742   17386 main.go:141] libmachine: STDERR: 
	I0318 04:35:37.336762   17386 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/18429-15072/.minikube/machines/kubernetes-upgrade-311000/disk.qcow2.raw and /Users/jenkins/minikube-integration/18429-15072/.minikube/machines/kubernetes-upgrade-311000/disk.qcow2
	I0318 04:35:37.336767   17386 main.go:141] libmachine: Starting QEMU VM...
	I0318 04:35:37.336803   17386 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18429-15072/.minikube/machines/kubernetes-upgrade-311000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18429-15072/.minikube/machines/kubernetes-upgrade-311000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18429-15072/.minikube/machines/kubernetes-upgrade-311000/qemu.pid -device virtio-net-pci,netdev=net0,mac=5e:dd:e7:27:21:9c -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18429-15072/.minikube/machines/kubernetes-upgrade-311000/disk.qcow2
	I0318 04:35:37.338574   17386 main.go:141] libmachine: STDOUT: 
	I0318 04:35:37.338588   17386 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0318 04:35:37.338608   17386 client.go:171] duration metric: took 262.010125ms to LocalClient.Create
	I0318 04:35:39.340711   17386 start.go:128] duration metric: took 2.300136167s to createHost
	I0318 04:35:39.340769   17386 start.go:83] releasing machines lock for "kubernetes-upgrade-311000", held for 2.300226417s
	W0318 04:35:39.340801   17386 start.go:713] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0318 04:35:39.349561   17386 out.go:177] * Deleting "kubernetes-upgrade-311000" in qemu2 ...
	W0318 04:35:39.369078   17386 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0318 04:35:39.369090   17386 start.go:728] Will try again in 5 seconds ...
	I0318 04:35:44.371237   17386 start.go:360] acquireMachinesLock for kubernetes-upgrade-311000: {Name:mkdef65b5c2b3344d8453e477bf2f170fbff3359 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0318 04:35:44.371770   17386 start.go:364] duration metric: took 381.417µs to acquireMachinesLock for "kubernetes-upgrade-311000"
	I0318 04:35:44.371883   17386 start.go:93] Provisioning new machine with config: &{Name:kubernetes-upgrade-311000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17991/minikube-v1.32.1-1710520390-17991-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 Kubernetes
Config:{KubernetesVersion:v1.20.0 ClusterName:kubernetes-upgrade-311000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: D
isableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0318 04:35:44.372191   17386 start.go:125] createHost starting for "" (driver="qemu2")
	I0318 04:35:44.380763   17386 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0318 04:35:44.425494   17386 start.go:159] libmachine.API.Create for "kubernetes-upgrade-311000" (driver="qemu2")
	I0318 04:35:44.425547   17386 client.go:168] LocalClient.Create starting
	I0318 04:35:44.425655   17386 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18429-15072/.minikube/certs/ca.pem
	I0318 04:35:44.425739   17386 main.go:141] libmachine: Decoding PEM data...
	I0318 04:35:44.425756   17386 main.go:141] libmachine: Parsing certificate...
	I0318 04:35:44.425819   17386 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18429-15072/.minikube/certs/cert.pem
	I0318 04:35:44.425861   17386 main.go:141] libmachine: Decoding PEM data...
	I0318 04:35:44.425870   17386 main.go:141] libmachine: Parsing certificate...
	I0318 04:35:44.426470   17386 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/18429-15072/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/18429-15072/.minikube/cache/iso/arm64/minikube-v1.32.1-1710520390-17991-arm64.iso...
	I0318 04:35:44.573133   17386 main.go:141] libmachine: Creating SSH key...
	I0318 04:35:44.694494   17386 main.go:141] libmachine: Creating Disk image...
	I0318 04:35:44.694501   17386 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0318 04:35:44.694709   17386 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/18429-15072/.minikube/machines/kubernetes-upgrade-311000/disk.qcow2.raw /Users/jenkins/minikube-integration/18429-15072/.minikube/machines/kubernetes-upgrade-311000/disk.qcow2
	I0318 04:35:44.707241   17386 main.go:141] libmachine: STDOUT: 
	I0318 04:35:44.707259   17386 main.go:141] libmachine: STDERR: 
	I0318 04:35:44.707317   17386 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/18429-15072/.minikube/machines/kubernetes-upgrade-311000/disk.qcow2 +20000M
	I0318 04:35:44.718404   17386 main.go:141] libmachine: STDOUT: Image resized.
	
	I0318 04:35:44.718420   17386 main.go:141] libmachine: STDERR: 
	I0318 04:35:44.718444   17386 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/18429-15072/.minikube/machines/kubernetes-upgrade-311000/disk.qcow2.raw and /Users/jenkins/minikube-integration/18429-15072/.minikube/machines/kubernetes-upgrade-311000/disk.qcow2
	I0318 04:35:44.718448   17386 main.go:141] libmachine: Starting QEMU VM...
	I0318 04:35:44.718484   17386 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18429-15072/.minikube/machines/kubernetes-upgrade-311000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18429-15072/.minikube/machines/kubernetes-upgrade-311000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18429-15072/.minikube/machines/kubernetes-upgrade-311000/qemu.pid -device virtio-net-pci,netdev=net0,mac=82:c7:06:be:f1:40 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18429-15072/.minikube/machines/kubernetes-upgrade-311000/disk.qcow2
	I0318 04:35:44.720185   17386 main.go:141] libmachine: STDOUT: 
	I0318 04:35:44.720211   17386 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0318 04:35:44.720223   17386 client.go:171] duration metric: took 294.679708ms to LocalClient.Create
	I0318 04:35:46.722367   17386 start.go:128] duration metric: took 2.350219541s to createHost
	I0318 04:35:46.722501   17386 start.go:83] releasing machines lock for "kubernetes-upgrade-311000", held for 2.350747792s
	W0318 04:35:46.722861   17386 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p kubernetes-upgrade-311000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p kubernetes-upgrade-311000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0318 04:35:46.736800   17386 out.go:177] 
	W0318 04:35:46.741704   17386 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0318 04:35:46.741764   17386 out.go:239] * 
	* 
	W0318 04:35:46.744516   17386 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0318 04:35:46.756553   17386 out.go:177] 

                                                
                                                
** /stderr **
version_upgrade_test.go:224: failed to start minikube HEAD with oldest k8s version: out/minikube-darwin-arm64 start -p kubernetes-upgrade-311000 --memory=2200 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=qemu2 : exit status 80
version_upgrade_test.go:227: (dbg) Run:  out/minikube-darwin-arm64 stop -p kubernetes-upgrade-311000
version_upgrade_test.go:227: (dbg) Done: out/minikube-darwin-arm64 stop -p kubernetes-upgrade-311000: (3.465154125s)
version_upgrade_test.go:232: (dbg) Run:  out/minikube-darwin-arm64 -p kubernetes-upgrade-311000 status --format={{.Host}}
version_upgrade_test.go:232: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p kubernetes-upgrade-311000 status --format={{.Host}}: exit status 7 (54.497ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
version_upgrade_test.go:234: status error: exit status 7 (may be ok)
version_upgrade_test.go:243: (dbg) Run:  out/minikube-darwin-arm64 start -p kubernetes-upgrade-311000 --memory=2200 --kubernetes-version=v1.29.0-rc.2 --alsologtostderr -v=1 --driver=qemu2 
version_upgrade_test.go:243: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p kubernetes-upgrade-311000 --memory=2200 --kubernetes-version=v1.29.0-rc.2 --alsologtostderr -v=1 --driver=qemu2 : exit status 80 (5.187801375s)

                                                
                                                
-- stdout --
	* [kubernetes-upgrade-311000] minikube v1.32.0 on Darwin 14.3.1 (arm64)
	  - MINIKUBE_LOCATION=18429
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18429-15072/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18429-15072/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting "kubernetes-upgrade-311000" primary control-plane node in "kubernetes-upgrade-311000" cluster
	* Restarting existing qemu2 VM for "kubernetes-upgrade-311000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "kubernetes-upgrade-311000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0318 04:35:50.319038   17425 out.go:291] Setting OutFile to fd 1 ...
	I0318 04:35:50.319189   17425 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0318 04:35:50.319193   17425 out.go:304] Setting ErrFile to fd 2...
	I0318 04:35:50.319196   17425 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0318 04:35:50.319333   17425 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18429-15072/.minikube/bin
	I0318 04:35:50.320399   17425 out.go:298] Setting JSON to false
	I0318 04:35:50.336880   17425 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":9323,"bootTime":1710752427,"procs":481,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.3.1","kernelVersion":"23.3.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0318 04:35:50.336957   17425 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0318 04:35:50.341420   17425 out.go:177] * [kubernetes-upgrade-311000] minikube v1.32.0 on Darwin 14.3.1 (arm64)
	I0318 04:35:50.348175   17425 out.go:177]   - MINIKUBE_LOCATION=18429
	I0318 04:35:50.348223   17425 notify.go:220] Checking for updates...
	I0318 04:35:50.352217   17425 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/18429-15072/kubeconfig
	I0318 04:35:50.356127   17425 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0318 04:35:50.360217   17425 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0318 04:35:50.363250   17425 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18429-15072/.minikube
	I0318 04:35:50.366158   17425 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0318 04:35:50.369448   17425 config.go:182] Loaded profile config "kubernetes-upgrade-311000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.20.0
	I0318 04:35:50.369708   17425 driver.go:392] Setting default libvirt URI to qemu:///system
	I0318 04:35:50.374143   17425 out.go:177] * Using the qemu2 driver based on existing profile
	I0318 04:35:50.381115   17425 start.go:297] selected driver: qemu2
	I0318 04:35:50.381121   17425 start.go:901] validating driver "qemu2" against &{Name:kubernetes-upgrade-311000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17991/minikube-v1.32.1-1710520390-17991-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesCon
fig:{KubernetesVersion:v1.20.0 ClusterName:kubernetes-upgrade-311000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: Disa
bleOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0318 04:35:50.381173   17425 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0318 04:35:50.383398   17425 cni.go:84] Creating CNI manager for ""
	I0318 04:35:50.383416   17425 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0318 04:35:50.383441   17425 start.go:340] cluster config:
	{Name:kubernetes-upgrade-311000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17991/minikube-v1.32.1-1710520390-17991-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.0-rc.2 ClusterName:kubernetes-upgrade-311000 Names
pace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.29.0-rc.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: S
ocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0318 04:35:50.387540   17425 iso.go:125] acquiring lock: {Name:mkb8143674083e0c7a46a3ed751b3800392bcd24 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0318 04:35:50.396181   17425 out.go:177] * Starting "kubernetes-upgrade-311000" primary control-plane node in "kubernetes-upgrade-311000" cluster
	I0318 04:35:50.400131   17425 preload.go:132] Checking if preload exists for k8s version v1.29.0-rc.2 and runtime docker
	I0318 04:35:50.400144   17425 preload.go:147] Found local preload: /Users/jenkins/minikube-integration/18429-15072/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.29.0-rc.2-docker-overlay2-arm64.tar.lz4
	I0318 04:35:50.400153   17425 cache.go:56] Caching tarball of preloaded images
	I0318 04:35:50.400202   17425 preload.go:173] Found /Users/jenkins/minikube-integration/18429-15072/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.29.0-rc.2-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0318 04:35:50.400212   17425 cache.go:59] Finished verifying existence of preloaded tar for v1.29.0-rc.2 on docker
	I0318 04:35:50.400267   17425 profile.go:142] Saving config to /Users/jenkins/minikube-integration/18429-15072/.minikube/profiles/kubernetes-upgrade-311000/config.json ...
	I0318 04:35:50.400678   17425 start.go:360] acquireMachinesLock for kubernetes-upgrade-311000: {Name:mkdef65b5c2b3344d8453e477bf2f170fbff3359 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0318 04:35:50.400702   17425 start.go:364] duration metric: took 17.792µs to acquireMachinesLock for "kubernetes-upgrade-311000"
	I0318 04:35:50.400710   17425 start.go:96] Skipping create...Using existing machine configuration
	I0318 04:35:50.400715   17425 fix.go:54] fixHost starting: 
	I0318 04:35:50.400823   17425 fix.go:112] recreateIfNeeded on kubernetes-upgrade-311000: state=Stopped err=<nil>
	W0318 04:35:50.400831   17425 fix.go:138] unexpected machine state, will restart: <nil>
	I0318 04:35:50.409127   17425 out.go:177] * Restarting existing qemu2 VM for "kubernetes-upgrade-311000" ...
	I0318 04:35:50.413061   17425 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18429-15072/.minikube/machines/kubernetes-upgrade-311000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18429-15072/.minikube/machines/kubernetes-upgrade-311000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18429-15072/.minikube/machines/kubernetes-upgrade-311000/qemu.pid -device virtio-net-pci,netdev=net0,mac=82:c7:06:be:f1:40 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18429-15072/.minikube/machines/kubernetes-upgrade-311000/disk.qcow2
	I0318 04:35:50.414958   17425 main.go:141] libmachine: STDOUT: 
	I0318 04:35:50.414976   17425 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0318 04:35:50.415002   17425 fix.go:56] duration metric: took 14.286875ms for fixHost
	I0318 04:35:50.415010   17425 start.go:83] releasing machines lock for "kubernetes-upgrade-311000", held for 14.305208ms
	W0318 04:35:50.415016   17425 start.go:713] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0318 04:35:50.415052   17425 out.go:239] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0318 04:35:50.415056   17425 start.go:728] Will try again in 5 seconds ...
	I0318 04:35:55.415295   17425 start.go:360] acquireMachinesLock for kubernetes-upgrade-311000: {Name:mkdef65b5c2b3344d8453e477bf2f170fbff3359 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0318 04:35:55.415817   17425 start.go:364] duration metric: took 433.833µs to acquireMachinesLock for "kubernetes-upgrade-311000"
	I0318 04:35:55.416046   17425 start.go:96] Skipping create...Using existing machine configuration
	I0318 04:35:55.416068   17425 fix.go:54] fixHost starting: 
	I0318 04:35:55.416766   17425 fix.go:112] recreateIfNeeded on kubernetes-upgrade-311000: state=Stopped err=<nil>
	W0318 04:35:55.416794   17425 fix.go:138] unexpected machine state, will restart: <nil>
	I0318 04:35:55.422334   17425 out.go:177] * Restarting existing qemu2 VM for "kubernetes-upgrade-311000" ...
	I0318 04:35:55.428357   17425 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18429-15072/.minikube/machines/kubernetes-upgrade-311000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18429-15072/.minikube/machines/kubernetes-upgrade-311000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18429-15072/.minikube/machines/kubernetes-upgrade-311000/qemu.pid -device virtio-net-pci,netdev=net0,mac=82:c7:06:be:f1:40 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18429-15072/.minikube/machines/kubernetes-upgrade-311000/disk.qcow2
	I0318 04:35:55.438944   17425 main.go:141] libmachine: STDOUT: 
	I0318 04:35:55.439002   17425 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0318 04:35:55.439094   17425 fix.go:56] duration metric: took 23.028584ms for fixHost
	I0318 04:35:55.439115   17425 start.go:83] releasing machines lock for "kubernetes-upgrade-311000", held for 23.2235ms
	W0318 04:35:55.439342   17425 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p kubernetes-upgrade-311000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p kubernetes-upgrade-311000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0318 04:35:55.449310   17425 out.go:177] 
	W0318 04:35:55.453356   17425 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0318 04:35:55.453404   17425 out.go:239] * 
	* 
	W0318 04:35:55.456057   17425 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0318 04:35:55.462237   17425 out.go:177] 

                                                
                                                
** /stderr **
version_upgrade_test.go:245: failed to upgrade with newest k8s version. args: out/minikube-darwin-arm64 start -p kubernetes-upgrade-311000 --memory=2200 --kubernetes-version=v1.29.0-rc.2 --alsologtostderr -v=1 --driver=qemu2  : exit status 80
version_upgrade_test.go:248: (dbg) Run:  kubectl --context kubernetes-upgrade-311000 version --output=json
version_upgrade_test.go:248: (dbg) Non-zero exit: kubectl --context kubernetes-upgrade-311000 version --output=json: exit status 1 (64.593917ms)

                                                
                                                
** stderr ** 
	error: context "kubernetes-upgrade-311000" does not exist

                                                
                                                
** /stderr **
version_upgrade_test.go:250: error running kubectl: exit status 1
panic.go:626: *** TestKubernetesUpgrade FAILED at 2024-03-18 04:35:55.543214 -0700 PDT m=+1036.430930417
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p kubernetes-upgrade-311000 -n kubernetes-upgrade-311000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p kubernetes-upgrade-311000 -n kubernetes-upgrade-311000: exit status 7 (34.673417ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "kubernetes-upgrade-311000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:175: Cleaning up "kubernetes-upgrade-311000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p kubernetes-upgrade-311000
--- FAIL: TestKubernetesUpgrade (18.77s)

                                                
                                    
TestHyperkitDriverSkipUpgrade/upgrade-v1.11.0-to-current (1.29s)

                                                
                                                
=== RUN   TestHyperkitDriverSkipUpgrade/upgrade-v1.11.0-to-current
* minikube v1.32.0 on darwin (arm64)
- MINIKUBE_LOCATION=18429
- KUBECONFIG=/Users/jenkins/minikube-integration/18429-15072/kubeconfig
- MINIKUBE_BIN=out/minikube-darwin-arm64
- MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
- MINIKUBE_FORCE_SYSTEMD=
- MINIKUBE_HOME=/var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestHyperkitDriverSkipUpgradeupgrade-v1.11.0-to-current2047733221/001
* Using the hyperkit driver based on user configuration

                                                
                                                
X Exiting due to DRV_UNSUPPORTED_OS: The driver 'hyperkit' is not supported on darwin/arm64

                                                
                                                
driver_install_or_update_test.go:209: failed to run minikube. got: exit status 56
--- FAIL: TestHyperkitDriverSkipUpgrade/upgrade-v1.11.0-to-current (1.29s)

                                                
                                    
TestHyperkitDriverSkipUpgrade/upgrade-v1.2.0-to-current (1.28s)

                                                
                                                
=== RUN   TestHyperkitDriverSkipUpgrade/upgrade-v1.2.0-to-current
* minikube v1.32.0 on darwin (arm64)
- MINIKUBE_LOCATION=18429
- KUBECONFIG=/Users/jenkins/minikube-integration/18429-15072/kubeconfig
- MINIKUBE_BIN=out/minikube-darwin-arm64
- MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
- MINIKUBE_FORCE_SYSTEMD=
- MINIKUBE_HOME=/var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestHyperkitDriverSkipUpgradeupgrade-v1.2.0-to-current4023386294/001
* Using the hyperkit driver based on user configuration

                                                
                                                
X Exiting due to DRV_UNSUPPORTED_OS: The driver 'hyperkit' is not supported on darwin/arm64

                                                
                                                
driver_install_or_update_test.go:209: failed to run minikube. got: exit status 56
--- FAIL: TestHyperkitDriverSkipUpgrade/upgrade-v1.2.0-to-current (1.28s)

                                                
                                    
TestStoppedBinaryUpgrade/Upgrade (581.57s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/Upgrade
version_upgrade_test.go:183: (dbg) Run:  /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/minikube-v1.26.0.494360578 start -p stopped-upgrade-126000 --memory=2200 --vm-driver=qemu2 
version_upgrade_test.go:183: (dbg) Done: /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/minikube-v1.26.0.494360578 start -p stopped-upgrade-126000 --memory=2200 --vm-driver=qemu2 : (45.846041792s)
version_upgrade_test.go:192: (dbg) Run:  /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/minikube-v1.26.0.494360578 -p stopped-upgrade-126000 stop
version_upgrade_test.go:192: (dbg) Done: /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/minikube-v1.26.0.494360578 -p stopped-upgrade-126000 stop: (12.120390708s)
version_upgrade_test.go:198: (dbg) Run:  out/minikube-darwin-arm64 start -p stopped-upgrade-126000 --memory=2200 --alsologtostderr -v=1 --driver=qemu2 
version_upgrade_test.go:198: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p stopped-upgrade-126000 --memory=2200 --alsologtostderr -v=1 --driver=qemu2 : exit status 80 (8m43.505760417s)

                                                
                                                
-- stdout --
	* [stopped-upgrade-126000] minikube v1.32.0 on Darwin 14.3.1 (arm64)
	  - MINIKUBE_LOCATION=18429
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18429-15072/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18429-15072/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Kubernetes 1.28.4 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.28.4
	* Using the qemu2 driver based on existing profile
	* Starting "stopped-upgrade-126000" primary control-plane node in "stopped-upgrade-126000" cluster
	* Restarting existing qemu2 VM for "stopped-upgrade-126000" ...
	* Preparing Kubernetes v1.24.1 on Docker 20.10.16 ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	  - Configuring RBAC rules ...
	* Configuring bridge CNI (Container Networking Interface) ...
	* Verifying Kubernetes components...
	  - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	* Enabled addons: storage-provisioner
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0318 04:36:59.712926   17465 out.go:291] Setting OutFile to fd 1 ...
	I0318 04:36:59.713080   17465 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0318 04:36:59.713085   17465 out.go:304] Setting ErrFile to fd 2...
	I0318 04:36:59.713088   17465 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0318 04:36:59.713246   17465 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18429-15072/.minikube/bin
	I0318 04:36:59.714483   17465 out.go:298] Setting JSON to false
	I0318 04:36:59.734356   17465 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":9392,"bootTime":1710752427,"procs":480,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.3.1","kernelVersion":"23.3.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0318 04:36:59.734436   17465 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0318 04:36:59.739513   17465 out.go:177] * [stopped-upgrade-126000] minikube v1.32.0 on Darwin 14.3.1 (arm64)
	I0318 04:36:59.751626   17465 out.go:177]   - MINIKUBE_LOCATION=18429
	I0318 04:36:59.747589   17465 notify.go:220] Checking for updates...
	I0318 04:36:59.759471   17465 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/18429-15072/kubeconfig
	I0318 04:36:59.765014   17465 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0318 04:36:59.768522   17465 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0318 04:36:59.771554   17465 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18429-15072/.minikube
	I0318 04:36:59.774523   17465 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0318 04:36:59.777834   17465 config.go:182] Loaded profile config "stopped-upgrade-126000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0318 04:36:59.782540   17465 out.go:177] * Kubernetes 1.28.4 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.28.4
	I0318 04:36:59.785425   17465 driver.go:392] Setting default libvirt URI to qemu:///system
	I0318 04:36:59.789484   17465 out.go:177] * Using the qemu2 driver based on existing profile
	I0318 04:36:59.795471   17465 start.go:297] selected driver: qemu2
	I0318 04:36:59.795477   17465 start.go:901] validating driver "qemu2" against &{Name:stopped-upgrade-126000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.26.0-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:53534 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.1 ClusterName:stopped-upgra
de-126000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizat
ions:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:0s}
	I0318 04:36:59.795534   17465 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0318 04:36:59.798309   17465 cni.go:84] Creating CNI manager for ""
	I0318 04:36:59.798329   17465 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0318 04:36:59.798369   17465 start.go:340] cluster config:
	{Name:stopped-upgrade-126000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.26.0-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:53534 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.1 ClusterName:stopped-upgrade-126000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerN
ames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:
SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:0s}
	I0318 04:36:59.798441   17465 iso.go:125] acquiring lock: {Name:mkb8143674083e0c7a46a3ed751b3800392bcd24 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0318 04:36:59.807515   17465 out.go:177] * Starting "stopped-upgrade-126000" primary control-plane node in "stopped-upgrade-126000" cluster
	I0318 04:36:59.811506   17465 preload.go:132] Checking if preload exists for k8s version v1.24.1 and runtime docker
	I0318 04:36:59.811521   17465 preload.go:147] Found local preload: /Users/jenkins/minikube-integration/18429-15072/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.1-docker-overlay2-arm64.tar.lz4
	I0318 04:36:59.811531   17465 cache.go:56] Caching tarball of preloaded images
	I0318 04:36:59.811584   17465 preload.go:173] Found /Users/jenkins/minikube-integration/18429-15072/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0318 04:36:59.811590   17465 cache.go:59] Finished verifying existence of preloaded tar for v1.24.1 on docker
	I0318 04:36:59.811643   17465 profile.go:142] Saving config to /Users/jenkins/minikube-integration/18429-15072/.minikube/profiles/stopped-upgrade-126000/config.json ...
	I0318 04:36:59.812177   17465 start.go:360] acquireMachinesLock for stopped-upgrade-126000: {Name:mkdef65b5c2b3344d8453e477bf2f170fbff3359 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0318 04:36:59.812213   17465 start.go:364] duration metric: took 29.791µs to acquireMachinesLock for "stopped-upgrade-126000"
	I0318 04:36:59.812224   17465 start.go:96] Skipping create...Using existing machine configuration
	I0318 04:36:59.812229   17465 fix.go:54] fixHost starting: 
	I0318 04:36:59.812345   17465 fix.go:112] recreateIfNeeded on stopped-upgrade-126000: state=Stopped err=<nil>
	W0318 04:36:59.812354   17465 fix.go:138] unexpected machine state, will restart: <nil>
	I0318 04:36:59.820509   17465 out.go:177] * Restarting existing qemu2 VM for "stopped-upgrade-126000" ...
	I0318 04:36:59.824582   17465 main.go:141] libmachine: executing: qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/Cellar/qemu/8.2.1/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18429-15072/.minikube/machines/stopped-upgrade-126000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18429-15072/.minikube/machines/stopped-upgrade-126000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18429-15072/.minikube/machines/stopped-upgrade-126000/qemu.pid -nic user,model=virtio,hostfwd=tcp::53501-:22,hostfwd=tcp::53502-:2376,hostname=stopped-upgrade-126000 -daemonize /Users/jenkins/minikube-integration/18429-15072/.minikube/machines/stopped-upgrade-126000/disk.qcow2
	I0318 04:36:59.872693   17465 main.go:141] libmachine: STDOUT: 
	I0318 04:36:59.872723   17465 main.go:141] libmachine: STDERR: 
	I0318 04:36:59.872731   17465 main.go:141] libmachine: Waiting for VM to start (ssh -p 53501 docker@127.0.0.1)...
	I0318 04:37:20.214907   17465 profile.go:142] Saving config to /Users/jenkins/minikube-integration/18429-15072/.minikube/profiles/stopped-upgrade-126000/config.json ...
	I0318 04:37:20.215739   17465 machine.go:94] provisionDockerMachine start ...
	I0318 04:37:20.215976   17465 main.go:141] libmachine: Using SSH client type: native
	I0318 04:37:20.216470   17465 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x102a71bf0] 0x102a74450 <nil>  [] 0s} localhost 53501 <nil> <nil>}
	I0318 04:37:20.216486   17465 main.go:141] libmachine: About to run SSH command:
	hostname
	I0318 04:37:20.293213   17465 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0318 04:37:20.293246   17465 buildroot.go:166] provisioning hostname "stopped-upgrade-126000"
	I0318 04:37:20.293439   17465 main.go:141] libmachine: Using SSH client type: native
	I0318 04:37:20.293695   17465 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x102a71bf0] 0x102a74450 <nil>  [] 0s} localhost 53501 <nil> <nil>}
	I0318 04:37:20.293706   17465 main.go:141] libmachine: About to run SSH command:
	sudo hostname stopped-upgrade-126000 && echo "stopped-upgrade-126000" | sudo tee /etc/hostname
	I0318 04:37:20.366969   17465 main.go:141] libmachine: SSH cmd err, output: <nil>: stopped-upgrade-126000
	
	I0318 04:37:20.367100   17465 main.go:141] libmachine: Using SSH client type: native
	I0318 04:37:20.367274   17465 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x102a71bf0] 0x102a74450 <nil>  [] 0s} localhost 53501 <nil> <nil>}
	I0318 04:37:20.367289   17465 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sstopped-upgrade-126000' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 stopped-upgrade-126000/g' /etc/hosts;
				else 
					echo '127.0.1.1 stopped-upgrade-126000' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0318 04:37:20.430960   17465 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0318 04:37:20.430975   17465 buildroot.go:172] set auth options {CertDir:/Users/jenkins/minikube-integration/18429-15072/.minikube CaCertPath:/Users/jenkins/minikube-integration/18429-15072/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/18429-15072/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/18429-15072/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/18429-15072/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/18429-15072/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/18429-15072/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/18429-15072/.minikube}
	I0318 04:37:20.430983   17465 buildroot.go:174] setting up certificates
	I0318 04:37:20.430995   17465 provision.go:84] configureAuth start
	I0318 04:37:20.431000   17465 provision.go:143] copyHostCerts
	I0318 04:37:20.431077   17465 exec_runner.go:144] found /Users/jenkins/minikube-integration/18429-15072/.minikube/ca.pem, removing ...
	I0318 04:37:20.431086   17465 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/18429-15072/.minikube/ca.pem
	I0318 04:37:20.431195   17465 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/18429-15072/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/18429-15072/.minikube/ca.pem (1082 bytes)
	I0318 04:37:20.431396   17465 exec_runner.go:144] found /Users/jenkins/minikube-integration/18429-15072/.minikube/cert.pem, removing ...
	I0318 04:37:20.431400   17465 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/18429-15072/.minikube/cert.pem
	I0318 04:37:20.431457   17465 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/18429-15072/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/18429-15072/.minikube/cert.pem (1123 bytes)
	I0318 04:37:20.431584   17465 exec_runner.go:144] found /Users/jenkins/minikube-integration/18429-15072/.minikube/key.pem, removing ...
	I0318 04:37:20.431588   17465 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/18429-15072/.minikube/key.pem
	I0318 04:37:20.431643   17465 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/18429-15072/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/18429-15072/.minikube/key.pem (1679 bytes)
	I0318 04:37:20.431756   17465 provision.go:117] generating server cert: /Users/jenkins/minikube-integration/18429-15072/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/18429-15072/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/18429-15072/.minikube/certs/ca-key.pem org=jenkins.stopped-upgrade-126000 san=[127.0.0.1 localhost minikube stopped-upgrade-126000]
	I0318 04:37:20.614100   17465 provision.go:177] copyRemoteCerts
	I0318 04:37:20.614145   17465 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0318 04:37:20.614155   17465 sshutil.go:53] new ssh client: &{IP:localhost Port:53501 SSHKeyPath:/Users/jenkins/minikube-integration/18429-15072/.minikube/machines/stopped-upgrade-126000/id_rsa Username:docker}
	I0318 04:37:20.646386   17465 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18429-15072/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0318 04:37:20.653178   17465 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18429-15072/.minikube/machines/server.pem --> /etc/docker/server.pem (1224 bytes)
	I0318 04:37:20.660927   17465 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18429-15072/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0318 04:37:20.667912   17465 provision.go:87] duration metric: took 236.916041ms to configureAuth
	I0318 04:37:20.667921   17465 buildroot.go:189] setting minikube options for container-runtime
	I0318 04:37:20.668013   17465 config.go:182] Loaded profile config "stopped-upgrade-126000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0318 04:37:20.668049   17465 main.go:141] libmachine: Using SSH client type: native
	I0318 04:37:20.668143   17465 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x102a71bf0] 0x102a74450 <nil>  [] 0s} localhost 53501 <nil> <nil>}
	I0318 04:37:20.668148   17465 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0318 04:37:20.722073   17465 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0318 04:37:20.722082   17465 buildroot.go:70] root file system type: tmpfs
	I0318 04:37:20.722135   17465 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0318 04:37:20.722174   17465 main.go:141] libmachine: Using SSH client type: native
	I0318 04:37:20.722277   17465 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x102a71bf0] 0x102a74450 <nil>  [] 0s} localhost 53501 <nil> <nil>}
	I0318 04:37:20.722309   17465 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=qemu2 --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0318 04:37:20.781393   17465 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=qemu2 --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0318 04:37:20.781453   17465 main.go:141] libmachine: Using SSH client type: native
	I0318 04:37:20.781620   17465 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x102a71bf0] 0x102a74450 <nil>  [] 0s} localhost 53501 <nil> <nil>}
	I0318 04:37:20.781628   17465 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0318 04:37:21.144902   17465 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
	I0318 04:37:21.144917   17465 machine.go:97] duration metric: took 929.192708ms to provisionDockerMachine
	I0318 04:37:21.144923   17465 start.go:293] postStartSetup for "stopped-upgrade-126000" (driver="qemu2")
	I0318 04:37:21.144930   17465 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0318 04:37:21.144997   17465 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0318 04:37:21.145006   17465 sshutil.go:53] new ssh client: &{IP:localhost Port:53501 SSHKeyPath:/Users/jenkins/minikube-integration/18429-15072/.minikube/machines/stopped-upgrade-126000/id_rsa Username:docker}
	I0318 04:37:21.174361   17465 ssh_runner.go:195] Run: cat /etc/os-release
	I0318 04:37:21.175659   17465 info.go:137] Remote host: Buildroot 2021.02.12
	I0318 04:37:21.175667   17465 filesync.go:126] Scanning /Users/jenkins/minikube-integration/18429-15072/.minikube/addons for local assets ...
	I0318 04:37:21.175738   17465 filesync.go:126] Scanning /Users/jenkins/minikube-integration/18429-15072/.minikube/files for local assets ...
	I0318 04:37:21.175851   17465 filesync.go:149] local asset: /Users/jenkins/minikube-integration/18429-15072/.minikube/files/etc/ssl/certs/154812.pem -> 154812.pem in /etc/ssl/certs
	I0318 04:37:21.175979   17465 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0318 04:37:21.178558   17465 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18429-15072/.minikube/files/etc/ssl/certs/154812.pem --> /etc/ssl/certs/154812.pem (1708 bytes)
	I0318 04:37:21.185530   17465 start.go:296] duration metric: took 40.603042ms for postStartSetup
	I0318 04:37:21.185544   17465 fix.go:56] duration metric: took 21.37402925s for fixHost
	I0318 04:37:21.185580   17465 main.go:141] libmachine: Using SSH client type: native
	I0318 04:37:21.185677   17465 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x102a71bf0] 0x102a74450 <nil>  [] 0s} localhost 53501 <nil> <nil>}
	I0318 04:37:21.185682   17465 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0318 04:37:21.239639   17465 main.go:141] libmachine: SSH cmd err, output: <nil>: 1710761841.298580463
	
	I0318 04:37:21.239650   17465 fix.go:216] guest clock: 1710761841.298580463
	I0318 04:37:21.239654   17465 fix.go:229] Guest: 2024-03-18 04:37:21.298580463 -0700 PDT Remote: 2024-03-18 04:37:21.185546 -0700 PDT m=+21.507085043 (delta=113.034463ms)
	I0318 04:37:21.239665   17465 fix.go:200] guest clock delta is within tolerance: 113.034463ms
	I0318 04:37:21.239668   17465 start.go:83] releasing machines lock for "stopped-upgrade-126000", held for 21.428164666s
	I0318 04:37:21.239739   17465 ssh_runner.go:195] Run: cat /version.json
	I0318 04:37:21.239751   17465 sshutil.go:53] new ssh client: &{IP:localhost Port:53501 SSHKeyPath:/Users/jenkins/minikube-integration/18429-15072/.minikube/machines/stopped-upgrade-126000/id_rsa Username:docker}
	I0318 04:37:21.239739   17465 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0318 04:37:21.239782   17465 sshutil.go:53] new ssh client: &{IP:localhost Port:53501 SSHKeyPath:/Users/jenkins/minikube-integration/18429-15072/.minikube/machines/stopped-upgrade-126000/id_rsa Username:docker}
	W0318 04:37:21.240414   17465 sshutil.go:64] dial failure (will retry): dial tcp [::1]:53501: connect: connection refused
	I0318 04:37:21.240440   17465 retry.go:31] will retry after 333.505157ms: dial tcp [::1]:53501: connect: connection refused
	W0318 04:37:21.267535   17465 start.go:419] Unable to open version.json: cat /version.json: Process exited with status 1
	stdout:
	
	stderr:
	cat: /version.json: No such file or directory
	I0318 04:37:21.267586   17465 ssh_runner.go:195] Run: systemctl --version
	I0318 04:37:21.269257   17465 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0318 04:37:21.270887   17465 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0318 04:37:21.270915   17465 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *bridge* -not -name *podman* -not -name *.mk_disabled -printf "%p, " -exec sh -c "sudo sed -i -r -e '/"dst": ".*:.*"/d' -e 's|^(.*)"dst": (.*)[,*]$|\1"dst": \2|g' -e '/"subnet": ".*:.*"/d' -e 's|^(.*)"subnet": ".*"(.*)[,*]$|\1"subnet": "10.244.0.0/16"\2|g' {}" ;
	I0318 04:37:21.273655   17465 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *podman* -not -name *.mk_disabled -printf "%p, " -exec sh -c "sudo sed -i -r -e 's|^(.*)"subnet": ".*"(.*)$|\1"subnet": "10.244.0.0/16"\2|g' -e 's|^(.*)"gateway": ".*"(.*)$|\1"gateway": "10.244.0.1"\2|g' {}" ;
	I0318 04:37:21.278454   17465 cni.go:308] configured [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0318 04:37:21.278462   17465 start.go:494] detecting cgroup driver to use...
	I0318 04:37:21.278541   17465 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0318 04:37:21.284584   17465 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.7"|' /etc/containerd/config.toml"
	I0318 04:37:21.287954   17465 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0318 04:37:21.290693   17465 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0318 04:37:21.290714   17465 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0318 04:37:21.293671   17465 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0318 04:37:21.297050   17465 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0318 04:37:21.300306   17465 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0318 04:37:21.303135   17465 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0318 04:37:21.306001   17465 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0318 04:37:21.309253   17465 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0318 04:37:21.312232   17465 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0318 04:37:21.314948   17465 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0318 04:37:21.389860   17465 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0318 04:37:21.399182   17465 start.go:494] detecting cgroup driver to use...
	I0318 04:37:21.399248   17465 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0318 04:37:21.404783   17465 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0318 04:37:21.409566   17465 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0318 04:37:21.416402   17465 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0318 04:37:21.420772   17465 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0318 04:37:21.425054   17465 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0318 04:37:21.487086   17465 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0318 04:37:21.492270   17465 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0318 04:37:21.498056   17465 ssh_runner.go:195] Run: which cri-dockerd
	I0318 04:37:21.499420   17465 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0318 04:37:21.502218   17465 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I0318 04:37:21.507186   17465 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0318 04:37:21.593020   17465 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0318 04:37:21.655283   17465 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0318 04:37:21.655434   17465 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0318 04:37:21.661804   17465 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0318 04:37:21.812270   17465 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0318 04:37:22.926258   17465 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1.114009458s)
	I0318 04:37:22.926333   17465 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I0318 04:37:22.930815   17465 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0318 04:37:22.935584   17465 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0318 04:37:23.011102   17465 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0318 04:37:23.081664   17465 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0318 04:37:23.159223   17465 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0318 04:37:23.165062   17465 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0318 04:37:23.169852   17465 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0318 04:37:23.247275   17465 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I0318 04:37:23.287056   17465 start.go:541] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0318 04:37:23.287145   17465 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0318 04:37:23.290567   17465 start.go:562] Will wait 60s for crictl version
	I0318 04:37:23.290621   17465 ssh_runner.go:195] Run: which crictl
	I0318 04:37:23.291898   17465 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0318 04:37:23.306926   17465 start.go:578] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  20.10.16
	RuntimeApiVersion:  1.41.0
	I0318 04:37:23.307015   17465 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0318 04:37:23.323847   17465 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0318 04:37:23.344262   17465 out.go:204] * Preparing Kubernetes v1.24.1 on Docker 20.10.16 ...
	I0318 04:37:23.344340   17465 ssh_runner.go:195] Run: grep 10.0.2.2	host.minikube.internal$ /etc/hosts
	I0318 04:37:23.345603   17465 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "10.0.2.2	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0318 04:37:23.349481   17465 kubeadm.go:877] updating cluster {Name:stopped-upgrade-126000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17991/minikube-v1.32.1-1710520390-17991-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:53534 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.1 ClusterName
:stopped-upgrade-126000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: Di
sableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:0s} ...
	I0318 04:37:23.349524   17465 preload.go:132] Checking if preload exists for k8s version v1.24.1 and runtime docker
	I0318 04:37:23.349568   17465 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0318 04:37:23.359942   17465 docker.go:685] Got preloaded images: -- stdout --
	k8s.gcr.io/kube-apiserver:v1.24.1
	k8s.gcr.io/kube-proxy:v1.24.1
	k8s.gcr.io/kube-controller-manager:v1.24.1
	k8s.gcr.io/kube-scheduler:v1.24.1
	k8s.gcr.io/etcd:3.5.3-0
	k8s.gcr.io/pause:3.7
	k8s.gcr.io/coredns/coredns:v1.8.6
	k8s.gcr.io/pause:3.6
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0318 04:37:23.359957   17465 docker.go:691] registry.k8s.io/kube-apiserver:v1.24.1 wasn't preloaded
	I0318 04:37:23.360005   17465 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0318 04:37:23.362828   17465 ssh_runner.go:195] Run: which lz4
	I0318 04:37:23.363990   17465 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0318 04:37:23.365199   17465 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0318 04:37:23.365207   17465 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18429-15072/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.1-docker-overlay2-arm64.tar.lz4 --> /preloaded.tar.lz4 (359514331 bytes)
	I0318 04:37:24.098665   17465 docker.go:649] duration metric: took 734.727084ms to copy over tarball
	I0318 04:37:24.098738   17465 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0318 04:37:25.272747   17465 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (1.174024958s)
	I0318 04:37:25.272769   17465 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0318 04:37:25.288847   17465 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0318 04:37:25.292395   17465 ssh_runner.go:362] scp memory --> /var/lib/docker/image/overlay2/repositories.json (2523 bytes)
	I0318 04:37:25.297477   17465 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0318 04:37:25.374093   17465 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0318 04:37:26.903609   17465 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1.529549458s)
	I0318 04:37:26.903719   17465 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0318 04:37:26.914375   17465 docker.go:685] Got preloaded images: -- stdout --
	k8s.gcr.io/kube-apiserver:v1.24.1
	k8s.gcr.io/kube-proxy:v1.24.1
	k8s.gcr.io/kube-controller-manager:v1.24.1
	k8s.gcr.io/kube-scheduler:v1.24.1
	k8s.gcr.io/etcd:3.5.3-0
	k8s.gcr.io/pause:3.7
	k8s.gcr.io/coredns/coredns:v1.8.6
	<none>:<none>
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0318 04:37:26.914387   17465 docker.go:691] registry.k8s.io/kube-apiserver:v1.24.1 wasn't preloaded
	I0318 04:37:26.914392   17465 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.24.1 registry.k8s.io/kube-controller-manager:v1.24.1 registry.k8s.io/kube-scheduler:v1.24.1 registry.k8s.io/kube-proxy:v1.24.1 registry.k8s.io/pause:3.7 registry.k8s.io/etcd:3.5.3-0 registry.k8s.io/coredns/coredns:v1.8.6 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0318 04:37:26.922788   17465 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.24.1
	I0318 04:37:26.922856   17465 image.go:134] retrieving image: registry.k8s.io/pause:3.7
	I0318 04:37:26.922956   17465 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.24.1
	I0318 04:37:26.923009   17465 image.go:134] retrieving image: registry.k8s.io/coredns/coredns:v1.8.6
	I0318 04:37:26.923067   17465 image.go:134] retrieving image: registry.k8s.io/etcd:3.5.3-0
	I0318 04:37:26.923107   17465 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0318 04:37:26.923173   17465 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.24.1
	I0318 04:37:26.923310   17465 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.24.1
	I0318 04:37:26.932001   17465 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0318 04:37:26.932096   17465 image.go:177] daemon lookup for registry.k8s.io/etcd:3.5.3-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.3-0
	I0318 04:37:26.932117   17465 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.24.1: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.24.1
	I0318 04:37:26.932159   17465 image.go:177] daemon lookup for registry.k8s.io/coredns/coredns:v1.8.6: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.8.6
	I0318 04:37:26.932375   17465 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.24.1: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.24.1
	I0318 04:37:26.932622   17465 image.go:177] daemon lookup for registry.k8s.io/pause:3.7: Error response from daemon: No such image: registry.k8s.io/pause:3.7
	I0318 04:37:26.932753   17465 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.24.1: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.24.1
	I0318 04:37:26.932623   17465 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.24.1: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.24.1
	W0318 04:37:28.903872   17465 image.go:265] image registry.k8s.io/coredns/coredns:v1.8.6 arch mismatch: want arm64 got amd64. fixing
	I0318 04:37:28.904370   17465 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/coredns/coredns:v1.8.6
	I0318 04:37:28.934757   17465 cache_images.go:116] "registry.k8s.io/coredns/coredns:v1.8.6" needs transfer: "registry.k8s.io/coredns/coredns:v1.8.6" does not exist at hash "6af7f860a8197bfa3fdb7dec2061aa33870253e87a1e91c492d55b8a4fd38d14" in container runtime
	I0318 04:37:28.934800   17465 docker.go:337] Removing image: registry.k8s.io/coredns/coredns:v1.8.6
	I0318 04:37:28.934901   17465 ssh_runner.go:195] Run: docker rmi registry.k8s.io/coredns/coredns:v1.8.6
	I0318 04:37:28.953519   17465 cache_images.go:286] Loading image from: /Users/jenkins/minikube-integration/18429-15072/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6
	I0318 04:37:28.953658   17465 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.8.6
	I0318 04:37:28.956135   17465 ssh_runner.go:352] existence check for /var/lib/minikube/images/coredns_v1.8.6: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.8.6: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/coredns_v1.8.6': No such file or directory
	I0318 04:37:28.956153   17465 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18429-15072/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6 --> /var/lib/minikube/images/coredns_v1.8.6 (12318720 bytes)
	I0318 04:37:28.991740   17465 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.24.1
	I0318 04:37:28.994623   17465 docker.go:304] Loading image: /var/lib/minikube/images/coredns_v1.8.6
	I0318 04:37:28.994633   17465 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/coredns_v1.8.6 | docker load"
	I0318 04:37:29.002936   17465 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.24.1" needs transfer: "registry.k8s.io/kube-apiserver:v1.24.1" does not exist at hash "7c5896a75862a8bc252122185a929cec1393db2c525f6440137d4fbf46bbf6f9" in container runtime
	I0318 04:37:29.002955   17465 docker.go:337] Removing image: registry.k8s.io/kube-apiserver:v1.24.1
	I0318 04:37:29.003006   17465 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-apiserver:v1.24.1
	I0318 04:37:29.034339   17465 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.24.1
	I0318 04:37:29.041410   17465 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/pause:3.7
	I0318 04:37:29.046398   17465 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/etcd:3.5.3-0
	I0318 04:37:29.058471   17465 cache_images.go:315] Transferred and loaded /Users/jenkins/minikube-integration/18429-15072/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6 from cache
	I0318 04:37:29.058562   17465 cache_images.go:286] Loading image from: /Users/jenkins/minikube-integration/18429-15072/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.24.1
	I0318 04:37:29.058636   17465 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.24.1" needs transfer: "registry.k8s.io/kube-controller-manager:v1.24.1" does not exist at hash "f61bbe9259d7caa580deb6c8e4bfd1780c7b5887efe3aa3adc7cc74f68a27c1b" in container runtime
	I0318 04:37:29.058657   17465 docker.go:337] Removing image: registry.k8s.io/kube-controller-manager:v1.24.1
	I0318 04:37:29.058702   17465 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-controller-manager:v1.24.1
	I0318 04:37:29.061505   17465 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.24.1
	I0318 04:37:29.066598   17465 cache_images.go:116] "registry.k8s.io/pause:3.7" needs transfer: "registry.k8s.io/pause:3.7" does not exist at hash "e5a475a0380575fb5df454b2e32bdec93e1ec0094d8a61e895b41567cb884550" in container runtime
	I0318 04:37:29.066618   17465 docker.go:337] Removing image: registry.k8s.io/pause:3.7
	I0318 04:37:29.066674   17465 ssh_runner.go:195] Run: docker rmi registry.k8s.io/pause:3.7
	I0318 04:37:29.067290   17465 cache_images.go:116] "registry.k8s.io/etcd:3.5.3-0" needs transfer: "registry.k8s.io/etcd:3.5.3-0" does not exist at hash "a9a710bb96df080e6b9c720eb85dc5b832ff84abf77263548d74fedec6466a5a" in container runtime
	I0318 04:37:29.067299   17465 docker.go:337] Removing image: registry.k8s.io/etcd:3.5.3-0
	I0318 04:37:29.067320   17465 ssh_runner.go:195] Run: docker rmi registry.k8s.io/etcd:3.5.3-0
	I0318 04:37:29.074654   17465 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.24.1
	I0318 04:37:29.088426   17465 cache_images.go:286] Loading image from: /Users/jenkins/minikube-integration/18429-15072/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.24.1
	I0318 04:37:29.098014   17465 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.24.1" needs transfer: "registry.k8s.io/kube-scheduler:v1.24.1" does not exist at hash "000c19baf6bba51ff7ae5449f4c7a16d9190cef0263f58070dbf62cea9c4982f" in container runtime
	I0318 04:37:29.098022   17465 cache_images.go:286] Loading image from: /Users/jenkins/minikube-integration/18429-15072/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7
	I0318 04:37:29.098034   17465 docker.go:337] Removing image: registry.k8s.io/kube-scheduler:v1.24.1
	I0318 04:37:29.098066   17465 cache_images.go:286] Loading image from: /Users/jenkins/minikube-integration/18429-15072/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.3-0
	I0318 04:37:29.098078   17465 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-scheduler:v1.24.1
	I0318 04:37:29.098118   17465 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/pause_3.7
	I0318 04:37:29.102634   17465 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.24.1" needs transfer: "registry.k8s.io/kube-proxy:v1.24.1" does not exist at hash "fcbd620bbac080b658602c597709b377cb2b3fec134a097a27f94cba9b2ed2fa" in container runtime
	I0318 04:37:29.102651   17465 docker.go:337] Removing image: registry.k8s.io/kube-proxy:v1.24.1
	I0318 04:37:29.102703   17465 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-proxy:v1.24.1
	I0318 04:37:29.103688   17465 ssh_runner.go:352] existence check for /var/lib/minikube/images/pause_3.7: stat -c "%s %y" /var/lib/minikube/images/pause_3.7: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/pause_3.7': No such file or directory
	I0318 04:37:29.103705   17465 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18429-15072/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7 --> /var/lib/minikube/images/pause_3.7 (268288 bytes)
	I0318 04:37:29.110065   17465 cache_images.go:286] Loading image from: /Users/jenkins/minikube-integration/18429-15072/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.24.1
	I0318 04:37:29.115573   17465 cache_images.go:286] Loading image from: /Users/jenkins/minikube-integration/18429-15072/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.24.1
	I0318 04:37:29.117219   17465 docker.go:304] Loading image: /var/lib/minikube/images/pause_3.7
	I0318 04:37:29.117228   17465 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/pause_3.7 | docker load"
	I0318 04:37:29.143237   17465 cache_images.go:315] Transferred and loaded /Users/jenkins/minikube-integration/18429-15072/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7 from cache
	W0318 04:37:29.475553   17465 image.go:265] image gcr.io/k8s-minikube/storage-provisioner:v5 arch mismatch: want arm64 got amd64. fixing
	I0318 04:37:29.476171   17465 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0318 04:37:29.514530   17465 cache_images.go:116] "gcr.io/k8s-minikube/storage-provisioner:v5" needs transfer: "gcr.io/k8s-minikube/storage-provisioner:v5" does not exist at hash "66749159455b3f08c8318fe0233122f54d0f5889f9c5fdfb73c3fd9d99895b51" in container runtime
	I0318 04:37:29.514586   17465 docker.go:337] Removing image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0318 04:37:29.514693   17465 ssh_runner.go:195] Run: docker rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I0318 04:37:29.541310   17465 cache_images.go:286] Loading image from: /Users/jenkins/minikube-integration/18429-15072/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5
	I0318 04:37:29.541466   17465 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5
	I0318 04:37:29.543795   17465 ssh_runner.go:352] existence check for /var/lib/minikube/images/storage-provisioner_v5: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/storage-provisioner_v5': No such file or directory
	I0318 04:37:29.543830   17465 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18429-15072/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 --> /var/lib/minikube/images/storage-provisioner_v5 (8035840 bytes)
	I0318 04:37:29.573515   17465 docker.go:304] Loading image: /var/lib/minikube/images/storage-provisioner_v5
	I0318 04:37:29.573534   17465 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/storage-provisioner_v5 | docker load"
	I0318 04:37:29.816377   17465 cache_images.go:315] Transferred and loaded /Users/jenkins/minikube-integration/18429-15072/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 from cache
	I0318 04:37:29.816417   17465 cache_images.go:92] duration metric: took 2.902115s to LoadCachedImages
	W0318 04:37:29.816454   17465 out.go:239] X Unable to load cached images: loading cached images: stat /Users/jenkins/minikube-integration/18429-15072/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.24.1: no such file or directory
	X Unable to load cached images: loading cached images: stat /Users/jenkins/minikube-integration/18429-15072/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.24.1: no such file or directory
	I0318 04:37:29.816461   17465 kubeadm.go:928] updating node { 10.0.2.15 8443 v1.24.1 docker true true} ...
	I0318 04:37:29.816515   17465 kubeadm.go:940] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.24.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/cri-dockerd.sock --hostname-override=stopped-upgrade-126000 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=10.0.2.15
	
	[Install]
	 config:
	{KubernetesVersion:v1.24.1 ClusterName:stopped-upgrade-126000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0318 04:37:29.816589   17465 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0318 04:37:29.830224   17465 cni.go:84] Creating CNI manager for ""
	I0318 04:37:29.830236   17465 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0318 04:37:29.830240   17465 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0318 04:37:29.830248   17465 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:10.0.2.15 APIServerPort:8443 KubernetesVersion:v1.24.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:stopped-upgrade-126000 NodeName:stopped-upgrade-126000 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "10.0.2.15"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:10.0.2.15 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/
etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0318 04:37:29.830314   17465 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 10.0.2.15
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "stopped-upgrade-126000"
	  kubeletExtraArgs:
	    node-ip: 10.0.2.15
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "10.0.2.15"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.24.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0318 04:37:29.830362   17465 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.24.1
	I0318 04:37:29.833490   17465 binaries.go:44] Found k8s binaries, skipping transfer
	I0318 04:37:29.833515   17465 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0318 04:37:29.836579   17465 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (380 bytes)
	I0318 04:37:29.841605   17465 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0318 04:37:29.846586   17465 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2096 bytes)
	I0318 04:37:29.851844   17465 ssh_runner.go:195] Run: grep 10.0.2.15	control-plane.minikube.internal$ /etc/hosts
	I0318 04:37:29.853119   17465 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "10.0.2.15	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0318 04:37:29.856848   17465 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0318 04:37:29.934762   17465 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0318 04:37:29.946873   17465 certs.go:68] Setting up /Users/jenkins/minikube-integration/18429-15072/.minikube/profiles/stopped-upgrade-126000 for IP: 10.0.2.15
	I0318 04:37:29.946885   17465 certs.go:194] generating shared ca certs ...
	I0318 04:37:29.946894   17465 certs.go:226] acquiring lock for ca certs: {Name:mk30e64e6a2f5ccd376efb026974022e10fa3463 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0318 04:37:29.947064   17465 certs.go:235] skipping valid "minikubeCA" ca cert: /Users/jenkins/minikube-integration/18429-15072/.minikube/ca.key
	I0318 04:37:29.947112   17465 certs.go:235] skipping valid "proxyClientCA" ca cert: /Users/jenkins/minikube-integration/18429-15072/.minikube/proxy-client-ca.key
	I0318 04:37:29.947118   17465 certs.go:256] generating profile certs ...
	I0318 04:37:29.947192   17465 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /Users/jenkins/minikube-integration/18429-15072/.minikube/profiles/stopped-upgrade-126000/client.key
	I0318 04:37:29.947210   17465 certs.go:363] generating signed profile cert for "minikube": /Users/jenkins/minikube-integration/18429-15072/.minikube/profiles/stopped-upgrade-126000/apiserver.key.d0815522
	I0318 04:37:29.947220   17465 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/18429-15072/.minikube/profiles/stopped-upgrade-126000/apiserver.crt.d0815522 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 10.0.2.15]
	I0318 04:37:30.029798   17465 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/18429-15072/.minikube/profiles/stopped-upgrade-126000/apiserver.crt.d0815522 ...
	I0318 04:37:30.029813   17465 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/18429-15072/.minikube/profiles/stopped-upgrade-126000/apiserver.crt.d0815522: {Name:mk847418b6cee3fea3538d3f49f23aaf8cc83511 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0318 04:37:30.030102   17465 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/18429-15072/.minikube/profiles/stopped-upgrade-126000/apiserver.key.d0815522 ...
	I0318 04:37:30.030109   17465 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/18429-15072/.minikube/profiles/stopped-upgrade-126000/apiserver.key.d0815522: {Name:mk9618f09b3b800abe737fa4c492492ed007f7b8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0318 04:37:30.030240   17465 certs.go:381] copying /Users/jenkins/minikube-integration/18429-15072/.minikube/profiles/stopped-upgrade-126000/apiserver.crt.d0815522 -> /Users/jenkins/minikube-integration/18429-15072/.minikube/profiles/stopped-upgrade-126000/apiserver.crt
	I0318 04:37:30.030375   17465 certs.go:385] copying /Users/jenkins/minikube-integration/18429-15072/.minikube/profiles/stopped-upgrade-126000/apiserver.key.d0815522 -> /Users/jenkins/minikube-integration/18429-15072/.minikube/profiles/stopped-upgrade-126000/apiserver.key
	I0318 04:37:30.030515   17465 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /Users/jenkins/minikube-integration/18429-15072/.minikube/profiles/stopped-upgrade-126000/proxy-client.key
	I0318 04:37:30.030641   17465 certs.go:484] found cert: /Users/jenkins/minikube-integration/18429-15072/.minikube/certs/15481.pem (1338 bytes)
	W0318 04:37:30.030670   17465 certs.go:480] ignoring /Users/jenkins/minikube-integration/18429-15072/.minikube/certs/15481_empty.pem, impossibly tiny 0 bytes
	I0318 04:37:30.030675   17465 certs.go:484] found cert: /Users/jenkins/minikube-integration/18429-15072/.minikube/certs/ca-key.pem (1679 bytes)
	I0318 04:37:30.030691   17465 certs.go:484] found cert: /Users/jenkins/minikube-integration/18429-15072/.minikube/certs/ca.pem (1082 bytes)
	I0318 04:37:30.030706   17465 certs.go:484] found cert: /Users/jenkins/minikube-integration/18429-15072/.minikube/certs/cert.pem (1123 bytes)
	I0318 04:37:30.030722   17465 certs.go:484] found cert: /Users/jenkins/minikube-integration/18429-15072/.minikube/certs/key.pem (1679 bytes)
	I0318 04:37:30.030759   17465 certs.go:484] found cert: /Users/jenkins/minikube-integration/18429-15072/.minikube/files/etc/ssl/certs/154812.pem (1708 bytes)
	I0318 04:37:30.031063   17465 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18429-15072/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0318 04:37:30.037879   17465 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18429-15072/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0318 04:37:30.044875   17465 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18429-15072/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0318 04:37:30.052235   17465 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18429-15072/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0318 04:37:30.059129   17465 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18429-15072/.minikube/profiles/stopped-upgrade-126000/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I0318 04:37:30.065622   17465 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18429-15072/.minikube/profiles/stopped-upgrade-126000/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0318 04:37:30.072942   17465 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18429-15072/.minikube/profiles/stopped-upgrade-126000/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0318 04:37:30.080402   17465 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18429-15072/.minikube/profiles/stopped-upgrade-126000/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0318 04:37:30.087706   17465 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18429-15072/.minikube/files/etc/ssl/certs/154812.pem --> /usr/share/ca-certificates/154812.pem (1708 bytes)
	I0318 04:37:30.094457   17465 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18429-15072/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0318 04:37:30.101256   17465 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18429-15072/.minikube/certs/15481.pem --> /usr/share/ca-certificates/15481.pem (1338 bytes)
	I0318 04:37:30.108567   17465 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0318 04:37:30.113850   17465 ssh_runner.go:195] Run: openssl version
	I0318 04:37:30.115812   17465 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/154812.pem && ln -fs /usr/share/ca-certificates/154812.pem /etc/ssl/certs/154812.pem"
	I0318 04:37:30.118729   17465 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/154812.pem
	I0318 04:37:30.120156   17465 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Mar 18 11:20 /usr/share/ca-certificates/154812.pem
	I0318 04:37:30.120176   17465 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/154812.pem
	I0318 04:37:30.122022   17465 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/154812.pem /etc/ssl/certs/3ec20f2e.0"
	I0318 04:37:30.125348   17465 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0318 04:37:30.128673   17465 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0318 04:37:30.130273   17465 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Mar 18 11:33 /usr/share/ca-certificates/minikubeCA.pem
	I0318 04:37:30.130291   17465 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0318 04:37:30.131985   17465 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0318 04:37:30.134819   17465 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/15481.pem && ln -fs /usr/share/ca-certificates/15481.pem /etc/ssl/certs/15481.pem"
	I0318 04:37:30.137671   17465 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/15481.pem
	I0318 04:37:30.139291   17465 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Mar 18 11:20 /usr/share/ca-certificates/15481.pem
	I0318 04:37:30.139315   17465 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/15481.pem
	I0318 04:37:30.141018   17465 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/15481.pem /etc/ssl/certs/51391683.0"
	I0318 04:37:30.144476   17465 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0318 04:37:30.146062   17465 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0318 04:37:30.148491   17465 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0318 04:37:30.150639   17465 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0318 04:37:30.152557   17465 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0318 04:37:30.154357   17465 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0318 04:37:30.156177   17465 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
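
The six `openssl x509 -noout -in <cert> -checkend 86400` runs above verify that each existing control-plane certificate remains valid for at least another 24 hours (86,400 seconds) before minikube reuses it during the restart. As a rough sketch only (not minikube's own code), the same check can be expressed in Go with crypto/x509; the certificate path below is just one of the files the log checks and is used here for illustration.

package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

// checkend reports whether the PEM-encoded certificate at path is still valid
// for at least the given duration, mirroring
// `openssl x509 -noout -in <path> -checkend <seconds>`.
func checkend(path string, d time.Duration) (bool, error) {
	data, err := os.ReadFile(path)
	if err != nil {
		return false, err
	}
	block, _ := pem.Decode(data)
	if block == nil {
		return false, fmt.Errorf("no PEM data in %s", path)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return false, err
	}
	// Valid if the expiry lies beyond now + d.
	return time.Now().Add(d).Before(cert.NotAfter), nil
}

func main() {
	ok, err := checkend("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 24*time.Hour)
	if err != nil {
		fmt.Println("error:", err)
		return
	}
	fmt.Println("valid for another 24h:", ok)
}
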
	I0318 04:37:30.158092   17465 kubeadm.go:391] StartCluster: {Name:stopped-upgrade-126000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17991/minikube-v1.32.1-1710520390-17991-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:53534 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.1 ClusterName:stopped-upgrade-126000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:0s}
	I0318 04:37:30.158167   17465 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0318 04:37:30.169084   17465 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	W0318 04:37:30.172272   17465 kubeadm.go:404] apiserver tunnel failed: apiserver port not set
	I0318 04:37:30.172280   17465 kubeadm.go:407] found existing configuration files, will attempt cluster restart
	I0318 04:37:30.172282   17465 kubeadm.go:587] restartPrimaryControlPlane start ...
	I0318 04:37:30.172309   17465 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0318 04:37:30.175188   17465 kubeadm.go:129] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0318 04:37:30.175464   17465 kubeconfig.go:47] verify endpoint returned: get endpoint: "stopped-upgrade-126000" does not appear in /Users/jenkins/minikube-integration/18429-15072/kubeconfig
	I0318 04:37:30.175565   17465 kubeconfig.go:62] /Users/jenkins/minikube-integration/18429-15072/kubeconfig needs updating (will repair): [kubeconfig missing "stopped-upgrade-126000" cluster setting kubeconfig missing "stopped-upgrade-126000" context setting]
	I0318 04:37:30.175748   17465 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/18429-15072/kubeconfig: {Name:mkeb86e27ccdf30a065b43661cfe2af2dc198b46 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0318 04:37:30.176161   17465 kapi.go:59] client config for stopped-upgrade-126000: &rest.Config{Host:"https://10.0.2.15:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/18429-15072/.minikube/profiles/stopped-upgrade-126000/client.crt", KeyFile:"/Users/jenkins/minikube-integration/18429-15072/.minikube/profiles/stopped-upgrade-126000/client.key", CAFile:"/Users/jenkins/minikube-integration/18429-15072/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x103d62a80), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0318 04:37:30.176467   17465 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0318 04:37:30.179094   17465 kubeadm.go:634] detected kubeadm config drift (will reconfigure cluster from new /var/tmp/minikube/kubeadm.yaml):
	-- stdout --
	--- /var/tmp/minikube/kubeadm.yaml
	+++ /var/tmp/minikube/kubeadm.yaml.new
	@@ -11,7 +11,7 @@
	       - signing
	       - authentication
	 nodeRegistration:
	-  criSocket: /var/run/cri-dockerd.sock
	+  criSocket: unix:///var/run/cri-dockerd.sock
	   name: "stopped-upgrade-126000"
	   kubeletExtraArgs:
	     node-ip: 10.0.2.15
	@@ -49,7 +49,9 @@
	 authentication:
	   x509:
	     clientCAFile: /var/lib/minikube/certs/ca.crt
	-cgroupDriver: systemd
	+cgroupDriver: cgroupfs
	+hairpinMode: hairpin-veth
	+runtimeRequestTimeout: 15m
	 clusterDomain: "cluster.local"
	 # disable disk resource management by default
	 imageGCHighThresholdPercent: 100
	
	-- /stdout --
	I0318 04:37:30.179100   17465 kubeadm.go:1154] stopping kube-system containers ...
	I0318 04:37:30.179141   17465 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0318 04:37:30.189510   17465 docker.go:483] Stopping containers: [7dacaac7f891 9b8ffa5f8458 d579e22e148e 8b75879fc7bf fb25a67bf414 03620a2d9297 a64bfd63de1d eee08746d061]
	I0318 04:37:30.189593   17465 ssh_runner.go:195] Run: docker stop 7dacaac7f891 9b8ffa5f8458 d579e22e148e 8b75879fc7bf fb25a67bf414 03620a2d9297 a64bfd63de1d eee08746d061
	I0318 04:37:30.200301   17465 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0318 04:37:30.206149   17465 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0318 04:37:30.209308   17465 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0318 04:37:30.209314   17465 kubeadm.go:156] found existing configuration files:
	
	I0318 04:37:30.209347   17465 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:53534 /etc/kubernetes/admin.conf
	I0318 04:37:30.212181   17465 kubeadm.go:162] "https://control-plane.minikube.internal:53534" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:53534 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0318 04:37:30.212203   17465 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0318 04:37:30.214747   17465 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:53534 /etc/kubernetes/kubelet.conf
	I0318 04:37:30.217353   17465 kubeadm.go:162] "https://control-plane.minikube.internal:53534" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:53534 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0318 04:37:30.217375   17465 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0318 04:37:30.220342   17465 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:53534 /etc/kubernetes/controller-manager.conf
	I0318 04:37:30.222939   17465 kubeadm.go:162] "https://control-plane.minikube.internal:53534" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:53534 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0318 04:37:30.222958   17465 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0318 04:37:30.225583   17465 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:53534 /etc/kubernetes/scheduler.conf
	I0318 04:37:30.228387   17465 kubeadm.go:162] "https://control-plane.minikube.internal:53534" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:53534 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0318 04:37:30.228408   17465 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0318 04:37:30.231077   17465 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0318 04:37:30.233721   17465 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0318 04:37:30.255583   17465 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0318 04:37:30.673176   17465 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0318 04:37:30.806427   17465 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0318 04:37:30.829387   17465 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0318 04:37:30.851740   17465 api_server.go:52] waiting for apiserver process to appear ...
	I0318 04:37:30.851827   17465 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 04:37:31.353951   17465 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 04:37:31.853650   17465 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 04:37:31.858129   17465 api_server.go:72] duration metric: took 1.006424s to wait for apiserver process to appear ...
	I0318 04:37:31.858139   17465 api_server.go:88] waiting for apiserver healthz status ...
	I0318 04:37:31.858153   17465 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0318 04:37:36.860111   17465 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0318 04:37:36.860141   17465 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0318 04:37:41.860201   17465 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0318 04:37:41.860230   17465 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0318 04:37:46.860746   17465 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0318 04:37:46.860778   17465 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0318 04:37:51.861167   17465 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0318 04:37:51.861232   17465 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0318 04:37:56.861901   17465 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0318 04:37:56.861972   17465 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0318 04:38:01.862981   17465 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0318 04:38:01.863032   17465 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0318 04:38:06.863830   17465 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0318 04:38:06.863889   17465 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0318 04:38:11.864549   17465 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0318 04:38:11.864647   17465 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0318 04:38:16.866866   17465 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0318 04:38:16.866912   17465 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0318 04:38:21.868994   17465 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0318 04:38:21.869076   17465 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0318 04:38:26.871210   17465 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0318 04:38:26.871233   17465 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0318 04:38:31.871821   17465 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
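
At this point the restart logic has polled https://10.0.2.15:8443/healthz repeatedly and every attempt has timed out, so minikube falls back to collecting component logs (below) before retrying. The following is a minimal, self-contained sketch of that polling pattern in Go; it is not minikube's actual implementation (which lives in api_server.go and handles certificates and status codes more carefully), and the 5-second timeout, 2-second retry interval, attempt cap, and InsecureSkipVerify are assumptions made for brevity.

package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

func main() {
	// Short per-request timeout, matching the "Client.Timeout exceeded" errors in the log above.
	client := &http.Client{
		Timeout: 5 * time.Second,
		Transport: &http.Transport{
			// Assumption for brevity; a real check would trust the cluster CA instead.
			TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
		},
	}
	for attempt := 1; attempt <= 10; attempt++ {
		resp, err := client.Get("https://10.0.2.15:8443/healthz")
		if err != nil {
			fmt.Printf("attempt %d: apiserver not reachable: %v\n", attempt, err)
			time.Sleep(2 * time.Second)
			continue
		}
		body, _ := io.ReadAll(resp.Body)
		resp.Body.Close()
		fmt.Printf("attempt %d: %d %s\n", attempt, resp.StatusCode, body)
		if resp.StatusCode == http.StatusOK {
			return // apiserver reported healthy
		}
		time.Sleep(2 * time.Second)
	}
}
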
	I0318 04:38:31.871935   17465 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0318 04:38:31.884850   17465 logs.go:276] 2 containers: [32dc048c4476 7dacaac7f891]
	I0318 04:38:31.884926   17465 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0318 04:38:31.895440   17465 logs.go:276] 2 containers: [8c963566b500 8b75879fc7bf]
	I0318 04:38:31.895518   17465 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0318 04:38:31.905552   17465 logs.go:276] 1 containers: [b66f543335d1]
	I0318 04:38:31.905634   17465 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0318 04:38:31.916042   17465 logs.go:276] 2 containers: [e8b1c1b2cd19 d579e22e148e]
	I0318 04:38:31.916114   17465 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0318 04:38:31.926937   17465 logs.go:276] 1 containers: [372e2774400e]
	I0318 04:38:31.927010   17465 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0318 04:38:31.943917   17465 logs.go:276] 2 containers: [0be8d71b93fd fb25a67bf414]
	I0318 04:38:31.943999   17465 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0318 04:38:31.954367   17465 logs.go:276] 0 containers: []
	W0318 04:38:31.954378   17465 logs.go:278] No container was found matching "kindnet"
	I0318 04:38:31.954441   17465 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0318 04:38:31.965358   17465 logs.go:276] 2 containers: [81ae28fc685b 9e2192628e0f]
	I0318 04:38:31.965378   17465 logs.go:123] Gathering logs for dmesg ...
	I0318 04:38:31.965384   17465 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 04:38:31.969950   17465 logs.go:123] Gathering logs for describe nodes ...
	I0318 04:38:31.969960   17465 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0318 04:38:32.084431   17465 logs.go:123] Gathering logs for kube-apiserver [7dacaac7f891] ...
	I0318 04:38:32.084445   17465 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7dacaac7f891"
	I0318 04:38:32.125163   17465 logs.go:123] Gathering logs for kube-scheduler [e8b1c1b2cd19] ...
	I0318 04:38:32.125179   17465 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e8b1c1b2cd19"
	I0318 04:38:32.137388   17465 logs.go:123] Gathering logs for kube-controller-manager [fb25a67bf414] ...
	I0318 04:38:32.137398   17465 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fb25a67bf414"
	I0318 04:38:32.150034   17465 logs.go:123] Gathering logs for storage-provisioner [81ae28fc685b] ...
	I0318 04:38:32.150047   17465 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 81ae28fc685b"
	I0318 04:38:32.162200   17465 logs.go:123] Gathering logs for container status ...
	I0318 04:38:32.162212   17465 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 04:38:32.175259   17465 logs.go:123] Gathering logs for kubelet ...
	I0318 04:38:32.175276   17465 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 04:38:32.212513   17465 logs.go:123] Gathering logs for kube-apiserver [32dc048c4476] ...
	I0318 04:38:32.212526   17465 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 32dc048c4476"
	I0318 04:38:32.226855   17465 logs.go:123] Gathering logs for etcd [8b75879fc7bf] ...
	I0318 04:38:32.226867   17465 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8b75879fc7bf"
	I0318 04:38:32.241525   17465 logs.go:123] Gathering logs for coredns [b66f543335d1] ...
	I0318 04:38:32.241535   17465 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b66f543335d1"
	I0318 04:38:32.253227   17465 logs.go:123] Gathering logs for kube-proxy [372e2774400e] ...
	I0318 04:38:32.253239   17465 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 372e2774400e"
	I0318 04:38:32.265619   17465 logs.go:123] Gathering logs for kube-controller-manager [0be8d71b93fd] ...
	I0318 04:38:32.265628   17465 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0be8d71b93fd"
	I0318 04:38:32.283360   17465 logs.go:123] Gathering logs for storage-provisioner [9e2192628e0f] ...
	I0318 04:38:32.283370   17465 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9e2192628e0f"
	I0318 04:38:32.294919   17465 logs.go:123] Gathering logs for etcd [8c963566b500] ...
	I0318 04:38:32.294939   17465 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8c963566b500"
	I0318 04:38:32.309204   17465 logs.go:123] Gathering logs for kube-scheduler [d579e22e148e] ...
	I0318 04:38:32.309216   17465 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d579e22e148e"
	I0318 04:38:32.325002   17465 logs.go:123] Gathering logs for Docker ...
	I0318 04:38:32.325012   17465 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0318 04:38:34.852520   17465 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0318 04:38:39.854773   17465 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0318 04:38:39.855253   17465 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0318 04:38:39.895623   17465 logs.go:276] 2 containers: [32dc048c4476 7dacaac7f891]
	I0318 04:38:39.895773   17465 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0318 04:38:39.916781   17465 logs.go:276] 2 containers: [8c963566b500 8b75879fc7bf]
	I0318 04:38:39.916910   17465 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0318 04:38:39.931610   17465 logs.go:276] 1 containers: [b66f543335d1]
	I0318 04:38:39.931703   17465 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0318 04:38:39.944865   17465 logs.go:276] 2 containers: [e8b1c1b2cd19 d579e22e148e]
	I0318 04:38:39.944956   17465 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0318 04:38:39.960346   17465 logs.go:276] 1 containers: [372e2774400e]
	I0318 04:38:39.960425   17465 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0318 04:38:39.971783   17465 logs.go:276] 2 containers: [0be8d71b93fd fb25a67bf414]
	I0318 04:38:39.971867   17465 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0318 04:38:39.986806   17465 logs.go:276] 0 containers: []
	W0318 04:38:39.986818   17465 logs.go:278] No container was found matching "kindnet"
	I0318 04:38:39.986882   17465 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0318 04:38:39.997161   17465 logs.go:276] 2 containers: [81ae28fc685b 9e2192628e0f]
	I0318 04:38:39.997181   17465 logs.go:123] Gathering logs for describe nodes ...
	I0318 04:38:39.997187   17465 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0318 04:38:40.032353   17465 logs.go:123] Gathering logs for kube-apiserver [7dacaac7f891] ...
	I0318 04:38:40.032366   17465 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7dacaac7f891"
	I0318 04:38:40.072187   17465 logs.go:123] Gathering logs for kube-scheduler [e8b1c1b2cd19] ...
	I0318 04:38:40.072200   17465 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e8b1c1b2cd19"
	I0318 04:38:40.084310   17465 logs.go:123] Gathering logs for kube-proxy [372e2774400e] ...
	I0318 04:38:40.084323   17465 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 372e2774400e"
	I0318 04:38:40.095944   17465 logs.go:123] Gathering logs for dmesg ...
	I0318 04:38:40.095955   17465 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 04:38:40.100240   17465 logs.go:123] Gathering logs for kube-scheduler [d579e22e148e] ...
	I0318 04:38:40.100246   17465 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d579e22e148e"
	I0318 04:38:40.115392   17465 logs.go:123] Gathering logs for kube-controller-manager [0be8d71b93fd] ...
	I0318 04:38:40.115403   17465 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0be8d71b93fd"
	I0318 04:38:40.133954   17465 logs.go:123] Gathering logs for kube-controller-manager [fb25a67bf414] ...
	I0318 04:38:40.133964   17465 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fb25a67bf414"
	I0318 04:38:40.146194   17465 logs.go:123] Gathering logs for etcd [8c963566b500] ...
	I0318 04:38:40.146205   17465 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8c963566b500"
	I0318 04:38:40.160556   17465 logs.go:123] Gathering logs for etcd [8b75879fc7bf] ...
	I0318 04:38:40.160566   17465 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8b75879fc7bf"
	I0318 04:38:40.174919   17465 logs.go:123] Gathering logs for storage-provisioner [81ae28fc685b] ...
	I0318 04:38:40.174929   17465 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 81ae28fc685b"
	I0318 04:38:40.186178   17465 logs.go:123] Gathering logs for storage-provisioner [9e2192628e0f] ...
	I0318 04:38:40.186190   17465 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9e2192628e0f"
	I0318 04:38:40.197676   17465 logs.go:123] Gathering logs for container status ...
	I0318 04:38:40.197691   17465 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 04:38:40.209748   17465 logs.go:123] Gathering logs for kubelet ...
	I0318 04:38:40.209764   17465 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 04:38:40.248506   17465 logs.go:123] Gathering logs for kube-apiserver [32dc048c4476] ...
	I0318 04:38:40.248514   17465 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 32dc048c4476"
	I0318 04:38:40.262526   17465 logs.go:123] Gathering logs for coredns [b66f543335d1] ...
	I0318 04:38:40.262537   17465 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b66f543335d1"
	I0318 04:38:40.273792   17465 logs.go:123] Gathering logs for Docker ...
	I0318 04:38:40.273801   17465 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0318 04:38:42.800542   17465 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0318 04:38:47.802803   17465 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0318 04:38:47.803155   17465 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0318 04:38:47.843477   17465 logs.go:276] 2 containers: [32dc048c4476 7dacaac7f891]
	I0318 04:38:47.843618   17465 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0318 04:38:47.864138   17465 logs.go:276] 2 containers: [8c963566b500 8b75879fc7bf]
	I0318 04:38:47.864225   17465 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0318 04:38:47.879045   17465 logs.go:276] 1 containers: [b66f543335d1]
	I0318 04:38:47.879122   17465 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0318 04:38:47.891595   17465 logs.go:276] 2 containers: [e8b1c1b2cd19 d579e22e148e]
	I0318 04:38:47.891684   17465 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0318 04:38:47.902132   17465 logs.go:276] 1 containers: [372e2774400e]
	I0318 04:38:47.902199   17465 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0318 04:38:47.912951   17465 logs.go:276] 2 containers: [0be8d71b93fd fb25a67bf414]
	I0318 04:38:47.913013   17465 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0318 04:38:47.923114   17465 logs.go:276] 0 containers: []
	W0318 04:38:47.923123   17465 logs.go:278] No container was found matching "kindnet"
	I0318 04:38:47.923174   17465 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0318 04:38:47.937845   17465 logs.go:276] 2 containers: [81ae28fc685b 9e2192628e0f]
	I0318 04:38:47.937865   17465 logs.go:123] Gathering logs for etcd [8b75879fc7bf] ...
	I0318 04:38:47.937872   17465 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8b75879fc7bf"
	I0318 04:38:47.952257   17465 logs.go:123] Gathering logs for storage-provisioner [81ae28fc685b] ...
	I0318 04:38:47.952268   17465 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 81ae28fc685b"
	I0318 04:38:47.963898   17465 logs.go:123] Gathering logs for Docker ...
	I0318 04:38:47.963909   17465 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0318 04:38:47.988868   17465 logs.go:123] Gathering logs for kube-apiserver [7dacaac7f891] ...
	I0318 04:38:47.988877   17465 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7dacaac7f891"
	I0318 04:38:48.026252   17465 logs.go:123] Gathering logs for etcd [8c963566b500] ...
	I0318 04:38:48.026263   17465 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8c963566b500"
	I0318 04:38:48.039866   17465 logs.go:123] Gathering logs for kube-proxy [372e2774400e] ...
	I0318 04:38:48.039876   17465 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 372e2774400e"
	I0318 04:38:48.052017   17465 logs.go:123] Gathering logs for storage-provisioner [9e2192628e0f] ...
	I0318 04:38:48.052028   17465 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9e2192628e0f"
	I0318 04:38:48.063548   17465 logs.go:123] Gathering logs for kube-controller-manager [0be8d71b93fd] ...
	I0318 04:38:48.063566   17465 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0be8d71b93fd"
	I0318 04:38:48.081243   17465 logs.go:123] Gathering logs for kubelet ...
	I0318 04:38:48.081254   17465 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 04:38:48.120127   17465 logs.go:123] Gathering logs for dmesg ...
	I0318 04:38:48.120146   17465 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 04:38:48.124592   17465 logs.go:123] Gathering logs for describe nodes ...
	I0318 04:38:48.124602   17465 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0318 04:38:48.160946   17465 logs.go:123] Gathering logs for kube-scheduler [e8b1c1b2cd19] ...
	I0318 04:38:48.160958   17465 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e8b1c1b2cd19"
	I0318 04:38:48.173661   17465 logs.go:123] Gathering logs for kube-scheduler [d579e22e148e] ...
	I0318 04:38:48.173672   17465 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d579e22e148e"
	I0318 04:38:48.188570   17465 logs.go:123] Gathering logs for kube-apiserver [32dc048c4476] ...
	I0318 04:38:48.188580   17465 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 32dc048c4476"
	I0318 04:38:48.202059   17465 logs.go:123] Gathering logs for coredns [b66f543335d1] ...
	I0318 04:38:48.202069   17465 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b66f543335d1"
	I0318 04:38:48.212939   17465 logs.go:123] Gathering logs for kube-controller-manager [fb25a67bf414] ...
	I0318 04:38:48.212953   17465 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fb25a67bf414"
	I0318 04:38:48.224903   17465 logs.go:123] Gathering logs for container status ...
	I0318 04:38:48.224917   17465 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 04:38:50.738937   17465 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0318 04:38:55.741477   17465 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0318 04:38:55.741674   17465 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0318 04:38:55.760073   17465 logs.go:276] 2 containers: [32dc048c4476 7dacaac7f891]
	I0318 04:38:55.760179   17465 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0318 04:38:55.773988   17465 logs.go:276] 2 containers: [8c963566b500 8b75879fc7bf]
	I0318 04:38:55.774067   17465 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0318 04:38:55.785524   17465 logs.go:276] 1 containers: [b66f543335d1]
	I0318 04:38:55.785597   17465 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0318 04:38:55.795854   17465 logs.go:276] 2 containers: [e8b1c1b2cd19 d579e22e148e]
	I0318 04:38:55.795944   17465 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0318 04:38:55.805782   17465 logs.go:276] 1 containers: [372e2774400e]
	I0318 04:38:55.805849   17465 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0318 04:38:55.816435   17465 logs.go:276] 2 containers: [0be8d71b93fd fb25a67bf414]
	I0318 04:38:55.816523   17465 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0318 04:38:55.827052   17465 logs.go:276] 0 containers: []
	W0318 04:38:55.827067   17465 logs.go:278] No container was found matching "kindnet"
	I0318 04:38:55.827131   17465 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0318 04:38:55.837195   17465 logs.go:276] 2 containers: [81ae28fc685b 9e2192628e0f]
	I0318 04:38:55.837213   17465 logs.go:123] Gathering logs for describe nodes ...
	I0318 04:38:55.837219   17465 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0318 04:38:55.874726   17465 logs.go:123] Gathering logs for container status ...
	I0318 04:38:55.874740   17465 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 04:38:55.887110   17465 logs.go:123] Gathering logs for kube-scheduler [e8b1c1b2cd19] ...
	I0318 04:38:55.887125   17465 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e8b1c1b2cd19"
	I0318 04:38:55.902767   17465 logs.go:123] Gathering logs for kube-scheduler [d579e22e148e] ...
	I0318 04:38:55.902780   17465 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d579e22e148e"
	I0318 04:38:55.917608   17465 logs.go:123] Gathering logs for storage-provisioner [9e2192628e0f] ...
	I0318 04:38:55.917617   17465 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9e2192628e0f"
	I0318 04:38:55.929019   17465 logs.go:123] Gathering logs for Docker ...
	I0318 04:38:55.929030   17465 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0318 04:38:55.954297   17465 logs.go:123] Gathering logs for kube-apiserver [7dacaac7f891] ...
	I0318 04:38:55.954320   17465 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7dacaac7f891"
	I0318 04:38:55.995047   17465 logs.go:123] Gathering logs for etcd [8b75879fc7bf] ...
	I0318 04:38:55.995058   17465 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8b75879fc7bf"
	I0318 04:38:56.009954   17465 logs.go:123] Gathering logs for kube-apiserver [32dc048c4476] ...
	I0318 04:38:56.009969   17465 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 32dc048c4476"
	I0318 04:38:56.023840   17465 logs.go:123] Gathering logs for etcd [8c963566b500] ...
	I0318 04:38:56.023850   17465 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8c963566b500"
	I0318 04:38:56.038244   17465 logs.go:123] Gathering logs for coredns [b66f543335d1] ...
	I0318 04:38:56.038259   17465 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b66f543335d1"
	I0318 04:38:56.050309   17465 logs.go:123] Gathering logs for kube-controller-manager [0be8d71b93fd] ...
	I0318 04:38:56.050321   17465 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0be8d71b93fd"
	I0318 04:38:56.068053   17465 logs.go:123] Gathering logs for storage-provisioner [81ae28fc685b] ...
	I0318 04:38:56.068065   17465 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 81ae28fc685b"
	I0318 04:38:56.080053   17465 logs.go:123] Gathering logs for kubelet ...
	I0318 04:38:56.080063   17465 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 04:38:56.117791   17465 logs.go:123] Gathering logs for dmesg ...
	I0318 04:38:56.117802   17465 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 04:38:56.121959   17465 logs.go:123] Gathering logs for kube-proxy [372e2774400e] ...
	I0318 04:38:56.121966   17465 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 372e2774400e"
	I0318 04:38:56.133713   17465 logs.go:123] Gathering logs for kube-controller-manager [fb25a67bf414] ...
	I0318 04:38:56.133727   17465 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fb25a67bf414"
	I0318 04:38:58.647882   17465 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0318 04:39:03.649089   17465 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0318 04:39:03.649302   17465 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0318 04:39:03.671318   17465 logs.go:276] 2 containers: [32dc048c4476 7dacaac7f891]
	I0318 04:39:03.671409   17465 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0318 04:39:03.683689   17465 logs.go:276] 2 containers: [8c963566b500 8b75879fc7bf]
	I0318 04:39:03.683776   17465 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0318 04:39:03.697846   17465 logs.go:276] 1 containers: [b66f543335d1]
	I0318 04:39:03.697913   17465 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0318 04:39:03.708142   17465 logs.go:276] 2 containers: [e8b1c1b2cd19 d579e22e148e]
	I0318 04:39:03.708218   17465 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0318 04:39:03.718631   17465 logs.go:276] 1 containers: [372e2774400e]
	I0318 04:39:03.718698   17465 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0318 04:39:03.729300   17465 logs.go:276] 2 containers: [0be8d71b93fd fb25a67bf414]
	I0318 04:39:03.729368   17465 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0318 04:39:03.739114   17465 logs.go:276] 0 containers: []
	W0318 04:39:03.739126   17465 logs.go:278] No container was found matching "kindnet"
	I0318 04:39:03.739183   17465 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0318 04:39:03.749422   17465 logs.go:276] 2 containers: [81ae28fc685b 9e2192628e0f]
	I0318 04:39:03.749440   17465 logs.go:123] Gathering logs for kube-proxy [372e2774400e] ...
	I0318 04:39:03.749448   17465 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 372e2774400e"
	I0318 04:39:03.761519   17465 logs.go:123] Gathering logs for kube-controller-manager [0be8d71b93fd] ...
	I0318 04:39:03.761530   17465 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0be8d71b93fd"
	I0318 04:39:03.778709   17465 logs.go:123] Gathering logs for container status ...
	I0318 04:39:03.778719   17465 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 04:39:03.790749   17465 logs.go:123] Gathering logs for etcd [8b75879fc7bf] ...
	I0318 04:39:03.790762   17465 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8b75879fc7bf"
	I0318 04:39:03.805123   17465 logs.go:123] Gathering logs for kube-scheduler [e8b1c1b2cd19] ...
	I0318 04:39:03.805134   17465 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e8b1c1b2cd19"
	I0318 04:39:03.816851   17465 logs.go:123] Gathering logs for kube-apiserver [32dc048c4476] ...
	I0318 04:39:03.816861   17465 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 32dc048c4476"
	I0318 04:39:03.830767   17465 logs.go:123] Gathering logs for kube-scheduler [d579e22e148e] ...
	I0318 04:39:03.830776   17465 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d579e22e148e"
	I0318 04:39:03.846050   17465 logs.go:123] Gathering logs for storage-provisioner [9e2192628e0f] ...
	I0318 04:39:03.846061   17465 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9e2192628e0f"
	I0318 04:39:03.857647   17465 logs.go:123] Gathering logs for Docker ...
	I0318 04:39:03.857657   17465 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0318 04:39:03.882769   17465 logs.go:123] Gathering logs for dmesg ...
	I0318 04:39:03.882777   17465 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 04:39:03.887002   17465 logs.go:123] Gathering logs for storage-provisioner [81ae28fc685b] ...
	I0318 04:39:03.887008   17465 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 81ae28fc685b"
	I0318 04:39:03.898699   17465 logs.go:123] Gathering logs for kube-apiserver [7dacaac7f891] ...
	I0318 04:39:03.898709   17465 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7dacaac7f891"
	I0318 04:39:03.935578   17465 logs.go:123] Gathering logs for etcd [8c963566b500] ...
	I0318 04:39:03.935589   17465 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8c963566b500"
	I0318 04:39:03.949298   17465 logs.go:123] Gathering logs for coredns [b66f543335d1] ...
	I0318 04:39:03.949308   17465 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b66f543335d1"
	I0318 04:39:03.960688   17465 logs.go:123] Gathering logs for kube-controller-manager [fb25a67bf414] ...
	I0318 04:39:03.960699   17465 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fb25a67bf414"
	I0318 04:39:03.973483   17465 logs.go:123] Gathering logs for kubelet ...
	I0318 04:39:03.973496   17465 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 04:39:04.012104   17465 logs.go:123] Gathering logs for describe nodes ...
	I0318 04:39:04.012113   17465 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0318 04:39:06.550169   17465 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0318 04:39:11.551313   17465 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0318 04:39:11.551481   17465 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0318 04:39:11.570865   17465 logs.go:276] 2 containers: [32dc048c4476 7dacaac7f891]
	I0318 04:39:11.570967   17465 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0318 04:39:11.584298   17465 logs.go:276] 2 containers: [8c963566b500 8b75879fc7bf]
	I0318 04:39:11.584373   17465 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0318 04:39:11.595453   17465 logs.go:276] 1 containers: [b66f543335d1]
	I0318 04:39:11.595527   17465 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0318 04:39:11.606342   17465 logs.go:276] 2 containers: [e8b1c1b2cd19 d579e22e148e]
	I0318 04:39:11.606414   17465 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0318 04:39:11.616834   17465 logs.go:276] 1 containers: [372e2774400e]
	I0318 04:39:11.616911   17465 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0318 04:39:11.628434   17465 logs.go:276] 2 containers: [0be8d71b93fd fb25a67bf414]
	I0318 04:39:11.628505   17465 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0318 04:39:11.638616   17465 logs.go:276] 0 containers: []
	W0318 04:39:11.638625   17465 logs.go:278] No container was found matching "kindnet"
	I0318 04:39:11.638682   17465 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0318 04:39:11.648976   17465 logs.go:276] 2 containers: [81ae28fc685b 9e2192628e0f]
	I0318 04:39:11.648993   17465 logs.go:123] Gathering logs for kubelet ...
	I0318 04:39:11.648997   17465 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 04:39:11.685876   17465 logs.go:123] Gathering logs for dmesg ...
	I0318 04:39:11.685887   17465 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 04:39:11.689845   17465 logs.go:123] Gathering logs for coredns [b66f543335d1] ...
	I0318 04:39:11.689853   17465 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b66f543335d1"
	I0318 04:39:11.700562   17465 logs.go:123] Gathering logs for kube-controller-manager [0be8d71b93fd] ...
	I0318 04:39:11.700573   17465 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0be8d71b93fd"
	I0318 04:39:11.717738   17465 logs.go:123] Gathering logs for storage-provisioner [9e2192628e0f] ...
	I0318 04:39:11.717754   17465 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9e2192628e0f"
	I0318 04:39:11.728831   17465 logs.go:123] Gathering logs for etcd [8b75879fc7bf] ...
	I0318 04:39:11.728843   17465 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8b75879fc7bf"
	I0318 04:39:11.743824   17465 logs.go:123] Gathering logs for kube-scheduler [e8b1c1b2cd19] ...
	I0318 04:39:11.743834   17465 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e8b1c1b2cd19"
	I0318 04:39:11.755658   17465 logs.go:123] Gathering logs for kube-scheduler [d579e22e148e] ...
	I0318 04:39:11.755668   17465 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d579e22e148e"
	I0318 04:39:11.771197   17465 logs.go:123] Gathering logs for kube-controller-manager [fb25a67bf414] ...
	I0318 04:39:11.771212   17465 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fb25a67bf414"
	I0318 04:39:11.783369   17465 logs.go:123] Gathering logs for storage-provisioner [81ae28fc685b] ...
	I0318 04:39:11.783383   17465 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 81ae28fc685b"
	I0318 04:39:11.794652   17465 logs.go:123] Gathering logs for container status ...
	I0318 04:39:11.794665   17465 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 04:39:11.807200   17465 logs.go:123] Gathering logs for describe nodes ...
	I0318 04:39:11.807210   17465 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0318 04:39:11.842761   17465 logs.go:123] Gathering logs for kube-apiserver [32dc048c4476] ...
	I0318 04:39:11.842773   17465 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 32dc048c4476"
	I0318 04:39:11.864962   17465 logs.go:123] Gathering logs for kube-apiserver [7dacaac7f891] ...
	I0318 04:39:11.864974   17465 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7dacaac7f891"
	I0318 04:39:11.902372   17465 logs.go:123] Gathering logs for etcd [8c963566b500] ...
	I0318 04:39:11.902384   17465 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8c963566b500"
	I0318 04:39:11.916696   17465 logs.go:123] Gathering logs for kube-proxy [372e2774400e] ...
	I0318 04:39:11.916707   17465 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 372e2774400e"
	I0318 04:39:11.932703   17465 logs.go:123] Gathering logs for Docker ...
	I0318 04:39:11.932719   17465 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0318 04:39:14.458173   17465 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0318 04:39:19.460282   17465 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0318 04:39:19.460453   17465 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0318 04:39:19.475172   17465 logs.go:276] 2 containers: [32dc048c4476 7dacaac7f891]
	I0318 04:39:19.475268   17465 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0318 04:39:19.487061   17465 logs.go:276] 2 containers: [8c963566b500 8b75879fc7bf]
	I0318 04:39:19.487135   17465 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0318 04:39:19.497666   17465 logs.go:276] 1 containers: [b66f543335d1]
	I0318 04:39:19.497739   17465 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0318 04:39:19.508296   17465 logs.go:276] 2 containers: [e8b1c1b2cd19 d579e22e148e]
	I0318 04:39:19.508375   17465 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0318 04:39:19.518706   17465 logs.go:276] 1 containers: [372e2774400e]
	I0318 04:39:19.518778   17465 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0318 04:39:19.529436   17465 logs.go:276] 2 containers: [0be8d71b93fd fb25a67bf414]
	I0318 04:39:19.529504   17465 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0318 04:39:19.549331   17465 logs.go:276] 0 containers: []
	W0318 04:39:19.549343   17465 logs.go:278] No container was found matching "kindnet"
	I0318 04:39:19.549407   17465 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0318 04:39:19.561322   17465 logs.go:276] 2 containers: [81ae28fc685b 9e2192628e0f]
	I0318 04:39:19.561338   17465 logs.go:123] Gathering logs for describe nodes ...
	I0318 04:39:19.561344   17465 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0318 04:39:19.596609   17465 logs.go:123] Gathering logs for kube-scheduler [e8b1c1b2cd19] ...
	I0318 04:39:19.596624   17465 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e8b1c1b2cd19"
	I0318 04:39:19.610550   17465 logs.go:123] Gathering logs for etcd [8c963566b500] ...
	I0318 04:39:19.610562   17465 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8c963566b500"
	I0318 04:39:19.624489   17465 logs.go:123] Gathering logs for etcd [8b75879fc7bf] ...
	I0318 04:39:19.624501   17465 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8b75879fc7bf"
	I0318 04:39:19.639603   17465 logs.go:123] Gathering logs for kube-controller-manager [0be8d71b93fd] ...
	I0318 04:39:19.639615   17465 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0be8d71b93fd"
	I0318 04:39:19.657353   17465 logs.go:123] Gathering logs for storage-provisioner [81ae28fc685b] ...
	I0318 04:39:19.657364   17465 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 81ae28fc685b"
	I0318 04:39:19.668928   17465 logs.go:123] Gathering logs for storage-provisioner [9e2192628e0f] ...
	I0318 04:39:19.668938   17465 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9e2192628e0f"
	I0318 04:39:19.680302   17465 logs.go:123] Gathering logs for dmesg ...
	I0318 04:39:19.680312   17465 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 04:39:19.684401   17465 logs.go:123] Gathering logs for kube-apiserver [32dc048c4476] ...
	I0318 04:39:19.684408   17465 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 32dc048c4476"
	I0318 04:39:19.701785   17465 logs.go:123] Gathering logs for kube-apiserver [7dacaac7f891] ...
	I0318 04:39:19.701795   17465 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7dacaac7f891"
	I0318 04:39:19.739906   17465 logs.go:123] Gathering logs for kube-proxy [372e2774400e] ...
	I0318 04:39:19.739917   17465 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 372e2774400e"
	I0318 04:39:19.752306   17465 logs.go:123] Gathering logs for kubelet ...
	I0318 04:39:19.752319   17465 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 04:39:19.788915   17465 logs.go:123] Gathering logs for coredns [b66f543335d1] ...
	I0318 04:39:19.788923   17465 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b66f543335d1"
	I0318 04:39:19.802085   17465 logs.go:123] Gathering logs for kube-scheduler [d579e22e148e] ...
	I0318 04:39:19.802096   17465 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d579e22e148e"
	I0318 04:39:19.816633   17465 logs.go:123] Gathering logs for kube-controller-manager [fb25a67bf414] ...
	I0318 04:39:19.816644   17465 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fb25a67bf414"
	I0318 04:39:19.828248   17465 logs.go:123] Gathering logs for Docker ...
	I0318 04:39:19.828258   17465 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0318 04:39:19.852941   17465 logs.go:123] Gathering logs for container status ...
	I0318 04:39:19.852949   17465 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 04:39:22.367597   17465 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0318 04:39:27.369734   17465 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0318 04:39:27.369857   17465 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0318 04:39:27.382884   17465 logs.go:276] 2 containers: [32dc048c4476 7dacaac7f891]
	I0318 04:39:27.382966   17465 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0318 04:39:27.393456   17465 logs.go:276] 2 containers: [8c963566b500 8b75879fc7bf]
	I0318 04:39:27.393530   17465 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0318 04:39:27.403454   17465 logs.go:276] 1 containers: [b66f543335d1]
	I0318 04:39:27.403526   17465 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0318 04:39:27.414001   17465 logs.go:276] 2 containers: [e8b1c1b2cd19 d579e22e148e]
	I0318 04:39:27.414071   17465 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0318 04:39:27.424261   17465 logs.go:276] 1 containers: [372e2774400e]
	I0318 04:39:27.424327   17465 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0318 04:39:27.440663   17465 logs.go:276] 2 containers: [0be8d71b93fd fb25a67bf414]
	I0318 04:39:27.440734   17465 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0318 04:39:27.450937   17465 logs.go:276] 0 containers: []
	W0318 04:39:27.450956   17465 logs.go:278] No container was found matching "kindnet"
	I0318 04:39:27.451020   17465 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0318 04:39:27.466540   17465 logs.go:276] 2 containers: [81ae28fc685b 9e2192628e0f]
	I0318 04:39:27.466560   17465 logs.go:123] Gathering logs for kube-apiserver [7dacaac7f891] ...
	I0318 04:39:27.466565   17465 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7dacaac7f891"
	I0318 04:39:27.503474   17465 logs.go:123] Gathering logs for kube-controller-manager [0be8d71b93fd] ...
	I0318 04:39:27.503485   17465 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0be8d71b93fd"
	I0318 04:39:27.521206   17465 logs.go:123] Gathering logs for kube-scheduler [d579e22e148e] ...
	I0318 04:39:27.521214   17465 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d579e22e148e"
	I0318 04:39:27.536923   17465 logs.go:123] Gathering logs for container status ...
	I0318 04:39:27.536936   17465 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 04:39:27.549369   17465 logs.go:123] Gathering logs for dmesg ...
	I0318 04:39:27.549379   17465 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 04:39:27.553488   17465 logs.go:123] Gathering logs for kube-apiserver [32dc048c4476] ...
	I0318 04:39:27.553501   17465 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 32dc048c4476"
	I0318 04:39:27.568491   17465 logs.go:123] Gathering logs for etcd [8c963566b500] ...
	I0318 04:39:27.568503   17465 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8c963566b500"
	I0318 04:39:27.582158   17465 logs.go:123] Gathering logs for kubelet ...
	I0318 04:39:27.582170   17465 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 04:39:27.619666   17465 logs.go:123] Gathering logs for describe nodes ...
	I0318 04:39:27.619677   17465 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0318 04:39:27.656374   17465 logs.go:123] Gathering logs for kube-controller-manager [fb25a67bf414] ...
	I0318 04:39:27.656385   17465 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fb25a67bf414"
	I0318 04:39:27.672436   17465 logs.go:123] Gathering logs for kube-proxy [372e2774400e] ...
	I0318 04:39:27.672448   17465 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 372e2774400e"
	I0318 04:39:27.683613   17465 logs.go:123] Gathering logs for storage-provisioner [81ae28fc685b] ...
	I0318 04:39:27.683624   17465 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 81ae28fc685b"
	I0318 04:39:27.695476   17465 logs.go:123] Gathering logs for storage-provisioner [9e2192628e0f] ...
	I0318 04:39:27.695488   17465 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9e2192628e0f"
	I0318 04:39:27.707169   17465 logs.go:123] Gathering logs for Docker ...
	I0318 04:39:27.707182   17465 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0318 04:39:27.731404   17465 logs.go:123] Gathering logs for etcd [8b75879fc7bf] ...
	I0318 04:39:27.731413   17465 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8b75879fc7bf"
	I0318 04:39:27.745856   17465 logs.go:123] Gathering logs for coredns [b66f543335d1] ...
	I0318 04:39:27.745867   17465 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b66f543335d1"
	I0318 04:39:27.757258   17465 logs.go:123] Gathering logs for kube-scheduler [e8b1c1b2cd19] ...
	I0318 04:39:27.757269   17465 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e8b1c1b2cd19"
	I0318 04:39:30.269837   17465 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0318 04:39:35.272113   17465 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0318 04:39:35.272306   17465 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0318 04:39:35.285228   17465 logs.go:276] 2 containers: [32dc048c4476 7dacaac7f891]
	I0318 04:39:35.285303   17465 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0318 04:39:35.295713   17465 logs.go:276] 2 containers: [8c963566b500 8b75879fc7bf]
	I0318 04:39:35.295788   17465 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0318 04:39:35.306133   17465 logs.go:276] 1 containers: [b66f543335d1]
	I0318 04:39:35.306210   17465 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0318 04:39:35.316702   17465 logs.go:276] 2 containers: [e8b1c1b2cd19 d579e22e148e]
	I0318 04:39:35.316768   17465 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0318 04:39:35.327094   17465 logs.go:276] 1 containers: [372e2774400e]
	I0318 04:39:35.327159   17465 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0318 04:39:35.337923   17465 logs.go:276] 2 containers: [0be8d71b93fd fb25a67bf414]
	I0318 04:39:35.337997   17465 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0318 04:39:35.348206   17465 logs.go:276] 0 containers: []
	W0318 04:39:35.348219   17465 logs.go:278] No container was found matching "kindnet"
	I0318 04:39:35.348278   17465 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0318 04:39:35.359073   17465 logs.go:276] 2 containers: [81ae28fc685b 9e2192628e0f]
	I0318 04:39:35.359089   17465 logs.go:123] Gathering logs for kubelet ...
	I0318 04:39:35.359095   17465 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 04:39:35.396681   17465 logs.go:123] Gathering logs for kube-scheduler [d579e22e148e] ...
	I0318 04:39:35.396694   17465 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d579e22e148e"
	I0318 04:39:35.411560   17465 logs.go:123] Gathering logs for kube-proxy [372e2774400e] ...
	I0318 04:39:35.411571   17465 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 372e2774400e"
	I0318 04:39:35.422963   17465 logs.go:123] Gathering logs for kube-controller-manager [fb25a67bf414] ...
	I0318 04:39:35.422975   17465 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fb25a67bf414"
	I0318 04:39:35.434798   17465 logs.go:123] Gathering logs for storage-provisioner [81ae28fc685b] ...
	I0318 04:39:35.434809   17465 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 81ae28fc685b"
	I0318 04:39:35.446887   17465 logs.go:123] Gathering logs for kube-apiserver [7dacaac7f891] ...
	I0318 04:39:35.446898   17465 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7dacaac7f891"
	I0318 04:39:35.484181   17465 logs.go:123] Gathering logs for etcd [8b75879fc7bf] ...
	I0318 04:39:35.484195   17465 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8b75879fc7bf"
	I0318 04:39:35.498358   17465 logs.go:123] Gathering logs for coredns [b66f543335d1] ...
	I0318 04:39:35.498372   17465 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b66f543335d1"
	I0318 04:39:35.509528   17465 logs.go:123] Gathering logs for storage-provisioner [9e2192628e0f] ...
	I0318 04:39:35.509540   17465 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9e2192628e0f"
	I0318 04:39:35.525531   17465 logs.go:123] Gathering logs for container status ...
	I0318 04:39:35.525542   17465 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 04:39:35.537555   17465 logs.go:123] Gathering logs for describe nodes ...
	I0318 04:39:35.537569   17465 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0318 04:39:35.579150   17465 logs.go:123] Gathering logs for Docker ...
	I0318 04:39:35.579164   17465 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0318 04:39:35.604306   17465 logs.go:123] Gathering logs for dmesg ...
	I0318 04:39:35.604317   17465 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 04:39:35.608432   17465 logs.go:123] Gathering logs for kube-apiserver [32dc048c4476] ...
	I0318 04:39:35.608440   17465 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 32dc048c4476"
	I0318 04:39:35.626678   17465 logs.go:123] Gathering logs for etcd [8c963566b500] ...
	I0318 04:39:35.626693   17465 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8c963566b500"
	I0318 04:39:35.640899   17465 logs.go:123] Gathering logs for kube-scheduler [e8b1c1b2cd19] ...
	I0318 04:39:35.640910   17465 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e8b1c1b2cd19"
	I0318 04:39:35.659958   17465 logs.go:123] Gathering logs for kube-controller-manager [0be8d71b93fd] ...
	I0318 04:39:35.659968   17465 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0be8d71b93fd"
	I0318 04:39:38.182936   17465 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0318 04:39:43.185249   17465 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0318 04:39:43.185439   17465 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0318 04:39:43.203136   17465 logs.go:276] 2 containers: [32dc048c4476 7dacaac7f891]
	I0318 04:39:43.203218   17465 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0318 04:39:43.215614   17465 logs.go:276] 2 containers: [8c963566b500 8b75879fc7bf]
	I0318 04:39:43.215690   17465 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0318 04:39:43.226937   17465 logs.go:276] 1 containers: [b66f543335d1]
	I0318 04:39:43.227007   17465 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0318 04:39:43.240990   17465 logs.go:276] 2 containers: [e8b1c1b2cd19 d579e22e148e]
	I0318 04:39:43.241060   17465 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0318 04:39:43.251810   17465 logs.go:276] 1 containers: [372e2774400e]
	I0318 04:39:43.251876   17465 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0318 04:39:43.262757   17465 logs.go:276] 2 containers: [0be8d71b93fd fb25a67bf414]
	I0318 04:39:43.262829   17465 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0318 04:39:43.273918   17465 logs.go:276] 0 containers: []
	W0318 04:39:43.273932   17465 logs.go:278] No container was found matching "kindnet"
	I0318 04:39:43.273999   17465 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0318 04:39:43.286288   17465 logs.go:276] 2 containers: [81ae28fc685b 9e2192628e0f]
	I0318 04:39:43.286309   17465 logs.go:123] Gathering logs for kubelet ...
	I0318 04:39:43.286314   17465 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 04:39:43.323893   17465 logs.go:123] Gathering logs for kube-scheduler [e8b1c1b2cd19] ...
	I0318 04:39:43.323904   17465 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e8b1c1b2cd19"
	I0318 04:39:43.336354   17465 logs.go:123] Gathering logs for kube-proxy [372e2774400e] ...
	I0318 04:39:43.336365   17465 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 372e2774400e"
	I0318 04:39:43.348502   17465 logs.go:123] Gathering logs for container status ...
	I0318 04:39:43.348516   17465 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 04:39:43.360522   17465 logs.go:123] Gathering logs for describe nodes ...
	I0318 04:39:43.360533   17465 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0318 04:39:43.397769   17465 logs.go:123] Gathering logs for etcd [8b75879fc7bf] ...
	I0318 04:39:43.397781   17465 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8b75879fc7bf"
	I0318 04:39:43.412598   17465 logs.go:123] Gathering logs for storage-provisioner [9e2192628e0f] ...
	I0318 04:39:43.412608   17465 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9e2192628e0f"
	I0318 04:39:43.424563   17465 logs.go:123] Gathering logs for kube-controller-manager [fb25a67bf414] ...
	I0318 04:39:43.424575   17465 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fb25a67bf414"
	I0318 04:39:43.436292   17465 logs.go:123] Gathering logs for storage-provisioner [81ae28fc685b] ...
	I0318 04:39:43.436304   17465 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 81ae28fc685b"
	I0318 04:39:43.448963   17465 logs.go:123] Gathering logs for Docker ...
	I0318 04:39:43.448977   17465 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0318 04:39:43.473348   17465 logs.go:123] Gathering logs for kube-apiserver [32dc048c4476] ...
	I0318 04:39:43.473357   17465 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 32dc048c4476"
	I0318 04:39:43.487562   17465 logs.go:123] Gathering logs for kube-apiserver [7dacaac7f891] ...
	I0318 04:39:43.487572   17465 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7dacaac7f891"
	I0318 04:39:43.529542   17465 logs.go:123] Gathering logs for etcd [8c963566b500] ...
	I0318 04:39:43.529556   17465 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8c963566b500"
	I0318 04:39:43.544022   17465 logs.go:123] Gathering logs for coredns [b66f543335d1] ...
	I0318 04:39:43.544047   17465 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b66f543335d1"
	I0318 04:39:43.559265   17465 logs.go:123] Gathering logs for kube-controller-manager [0be8d71b93fd] ...
	I0318 04:39:43.559276   17465 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0be8d71b93fd"
	I0318 04:39:43.577321   17465 logs.go:123] Gathering logs for dmesg ...
	I0318 04:39:43.577330   17465 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 04:39:43.581538   17465 logs.go:123] Gathering logs for kube-scheduler [d579e22e148e] ...
	I0318 04:39:43.581548   17465 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d579e22e148e"
	I0318 04:39:46.096172   17465 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0318 04:39:51.098337   17465 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0318 04:39:51.098493   17465 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0318 04:39:51.115252   17465 logs.go:276] 2 containers: [32dc048c4476 7dacaac7f891]
	I0318 04:39:51.115324   17465 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0318 04:39:51.126410   17465 logs.go:276] 2 containers: [8c963566b500 8b75879fc7bf]
	I0318 04:39:51.126486   17465 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0318 04:39:51.138756   17465 logs.go:276] 1 containers: [b66f543335d1]
	I0318 04:39:51.138823   17465 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0318 04:39:51.149260   17465 logs.go:276] 2 containers: [e8b1c1b2cd19 d579e22e148e]
	I0318 04:39:51.149329   17465 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0318 04:39:51.159232   17465 logs.go:276] 1 containers: [372e2774400e]
	I0318 04:39:51.159324   17465 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0318 04:39:51.170721   17465 logs.go:276] 2 containers: [0be8d71b93fd fb25a67bf414]
	I0318 04:39:51.170800   17465 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0318 04:39:51.180721   17465 logs.go:276] 0 containers: []
	W0318 04:39:51.180733   17465 logs.go:278] No container was found matching "kindnet"
	I0318 04:39:51.180795   17465 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0318 04:39:51.196628   17465 logs.go:276] 2 containers: [81ae28fc685b 9e2192628e0f]
	I0318 04:39:51.196647   17465 logs.go:123] Gathering logs for dmesg ...
	I0318 04:39:51.196652   17465 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 04:39:51.200698   17465 logs.go:123] Gathering logs for storage-provisioner [81ae28fc685b] ...
	I0318 04:39:51.200708   17465 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 81ae28fc685b"
	I0318 04:39:51.211992   17465 logs.go:123] Gathering logs for coredns [b66f543335d1] ...
	I0318 04:39:51.212007   17465 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b66f543335d1"
	I0318 04:39:51.223619   17465 logs.go:123] Gathering logs for kube-scheduler [d579e22e148e] ...
	I0318 04:39:51.223630   17465 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d579e22e148e"
	I0318 04:39:51.240464   17465 logs.go:123] Gathering logs for kube-apiserver [7dacaac7f891] ...
	I0318 04:39:51.240474   17465 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7dacaac7f891"
	I0318 04:39:51.277309   17465 logs.go:123] Gathering logs for etcd [8b75879fc7bf] ...
	I0318 04:39:51.277321   17465 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8b75879fc7bf"
	I0318 04:39:51.291757   17465 logs.go:123] Gathering logs for kube-scheduler [e8b1c1b2cd19] ...
	I0318 04:39:51.291769   17465 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e8b1c1b2cd19"
	I0318 04:39:51.305129   17465 logs.go:123] Gathering logs for kube-controller-manager [0be8d71b93fd] ...
	I0318 04:39:51.305139   17465 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0be8d71b93fd"
	I0318 04:39:51.322767   17465 logs.go:123] Gathering logs for describe nodes ...
	I0318 04:39:51.322776   17465 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0318 04:39:51.358778   17465 logs.go:123] Gathering logs for kube-apiserver [32dc048c4476] ...
	I0318 04:39:51.358793   17465 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 32dc048c4476"
	I0318 04:39:51.379374   17465 logs.go:123] Gathering logs for kube-proxy [372e2774400e] ...
	I0318 04:39:51.379389   17465 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 372e2774400e"
	I0318 04:39:51.395136   17465 logs.go:123] Gathering logs for kube-controller-manager [fb25a67bf414] ...
	I0318 04:39:51.395146   17465 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fb25a67bf414"
	I0318 04:39:51.407413   17465 logs.go:123] Gathering logs for storage-provisioner [9e2192628e0f] ...
	I0318 04:39:51.407427   17465 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9e2192628e0f"
	I0318 04:39:51.419998   17465 logs.go:123] Gathering logs for Docker ...
	I0318 04:39:51.420007   17465 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0318 04:39:51.442734   17465 logs.go:123] Gathering logs for container status ...
	I0318 04:39:51.442742   17465 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 04:39:51.454532   17465 logs.go:123] Gathering logs for kubelet ...
	I0318 04:39:51.454546   17465 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 04:39:51.491175   17465 logs.go:123] Gathering logs for etcd [8c963566b500] ...
	I0318 04:39:51.491185   17465 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8c963566b500"
	I0318 04:39:54.006841   17465 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0318 04:39:59.008982   17465 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0318 04:39:59.009131   17465 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0318 04:39:59.021325   17465 logs.go:276] 2 containers: [32dc048c4476 7dacaac7f891]
	I0318 04:39:59.021400   17465 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0318 04:39:59.033687   17465 logs.go:276] 2 containers: [8c963566b500 8b75879fc7bf]
	I0318 04:39:59.033756   17465 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0318 04:39:59.044231   17465 logs.go:276] 1 containers: [b66f543335d1]
	I0318 04:39:59.044303   17465 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0318 04:39:59.055000   17465 logs.go:276] 2 containers: [e8b1c1b2cd19 d579e22e148e]
	I0318 04:39:59.055075   17465 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0318 04:39:59.065498   17465 logs.go:276] 1 containers: [372e2774400e]
	I0318 04:39:59.065567   17465 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0318 04:39:59.076232   17465 logs.go:276] 2 containers: [0be8d71b93fd fb25a67bf414]
	I0318 04:39:59.076302   17465 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0318 04:39:59.086769   17465 logs.go:276] 0 containers: []
	W0318 04:39:59.086780   17465 logs.go:278] No container was found matching "kindnet"
	I0318 04:39:59.086839   17465 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0318 04:39:59.097564   17465 logs.go:276] 2 containers: [81ae28fc685b 9e2192628e0f]
	I0318 04:39:59.097583   17465 logs.go:123] Gathering logs for kube-scheduler [e8b1c1b2cd19] ...
	I0318 04:39:59.097588   17465 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e8b1c1b2cd19"
	I0318 04:39:59.109704   17465 logs.go:123] Gathering logs for kube-controller-manager [fb25a67bf414] ...
	I0318 04:39:59.109717   17465 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fb25a67bf414"
	I0318 04:39:59.122009   17465 logs.go:123] Gathering logs for coredns [b66f543335d1] ...
	I0318 04:39:59.122020   17465 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b66f543335d1"
	I0318 04:39:59.134980   17465 logs.go:123] Gathering logs for kube-proxy [372e2774400e] ...
	I0318 04:39:59.134991   17465 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 372e2774400e"
	I0318 04:39:59.146729   17465 logs.go:123] Gathering logs for kube-controller-manager [0be8d71b93fd] ...
	I0318 04:39:59.146740   17465 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0be8d71b93fd"
	I0318 04:39:59.174307   17465 logs.go:123] Gathering logs for storage-provisioner [81ae28fc685b] ...
	I0318 04:39:59.174317   17465 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 81ae28fc685b"
	I0318 04:39:59.185631   17465 logs.go:123] Gathering logs for storage-provisioner [9e2192628e0f] ...
	I0318 04:39:59.185641   17465 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9e2192628e0f"
	I0318 04:39:59.196880   17465 logs.go:123] Gathering logs for container status ...
	I0318 04:39:59.196891   17465 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 04:39:59.208386   17465 logs.go:123] Gathering logs for dmesg ...
	I0318 04:39:59.208399   17465 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 04:39:59.212431   17465 logs.go:123] Gathering logs for kube-apiserver [32dc048c4476] ...
	I0318 04:39:59.212442   17465 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 32dc048c4476"
	I0318 04:39:59.226127   17465 logs.go:123] Gathering logs for kube-apiserver [7dacaac7f891] ...
	I0318 04:39:59.226137   17465 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7dacaac7f891"
	I0318 04:39:59.263280   17465 logs.go:123] Gathering logs for etcd [8b75879fc7bf] ...
	I0318 04:39:59.263295   17465 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8b75879fc7bf"
	I0318 04:39:59.277757   17465 logs.go:123] Gathering logs for kube-scheduler [d579e22e148e] ...
	I0318 04:39:59.277767   17465 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d579e22e148e"
	I0318 04:39:59.292493   17465 logs.go:123] Gathering logs for kubelet ...
	I0318 04:39:59.292503   17465 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 04:39:59.328968   17465 logs.go:123] Gathering logs for describe nodes ...
	I0318 04:39:59.328977   17465 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0318 04:39:59.367128   17465 logs.go:123] Gathering logs for etcd [8c963566b500] ...
	I0318 04:39:59.367140   17465 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8c963566b500"
	I0318 04:39:59.385153   17465 logs.go:123] Gathering logs for Docker ...
	I0318 04:39:59.385168   17465 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0318 04:40:01.910622   17465 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0318 04:40:06.912781   17465 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0318 04:40:06.913064   17465 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0318 04:40:06.937019   17465 logs.go:276] 2 containers: [32dc048c4476 7dacaac7f891]
	I0318 04:40:06.937116   17465 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0318 04:40:06.951678   17465 logs.go:276] 2 containers: [8c963566b500 8b75879fc7bf]
	I0318 04:40:06.951765   17465 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0318 04:40:06.963661   17465 logs.go:276] 1 containers: [b66f543335d1]
	I0318 04:40:06.963733   17465 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0318 04:40:06.974819   17465 logs.go:276] 2 containers: [e8b1c1b2cd19 d579e22e148e]
	I0318 04:40:06.974883   17465 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0318 04:40:06.985310   17465 logs.go:276] 1 containers: [372e2774400e]
	I0318 04:40:06.985383   17465 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0318 04:40:06.995916   17465 logs.go:276] 2 containers: [0be8d71b93fd fb25a67bf414]
	I0318 04:40:06.995980   17465 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0318 04:40:07.006395   17465 logs.go:276] 0 containers: []
	W0318 04:40:07.006406   17465 logs.go:278] No container was found matching "kindnet"
	I0318 04:40:07.006458   17465 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0318 04:40:07.020942   17465 logs.go:276] 2 containers: [81ae28fc685b 9e2192628e0f]
	I0318 04:40:07.020963   17465 logs.go:123] Gathering logs for storage-provisioner [81ae28fc685b] ...
	I0318 04:40:07.020970   17465 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 81ae28fc685b"
	I0318 04:40:07.032296   17465 logs.go:123] Gathering logs for storage-provisioner [9e2192628e0f] ...
	I0318 04:40:07.032308   17465 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9e2192628e0f"
	I0318 04:40:07.046063   17465 logs.go:123] Gathering logs for container status ...
	I0318 04:40:07.046077   17465 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 04:40:07.060341   17465 logs.go:123] Gathering logs for kube-apiserver [32dc048c4476] ...
	I0318 04:40:07.060352   17465 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 32dc048c4476"
	I0318 04:40:07.074459   17465 logs.go:123] Gathering logs for kube-apiserver [7dacaac7f891] ...
	I0318 04:40:07.074469   17465 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7dacaac7f891"
	I0318 04:40:07.112179   17465 logs.go:123] Gathering logs for kube-controller-manager [fb25a67bf414] ...
	I0318 04:40:07.112194   17465 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fb25a67bf414"
	I0318 04:40:07.124398   17465 logs.go:123] Gathering logs for Docker ...
	I0318 04:40:07.124411   17465 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0318 04:40:07.147584   17465 logs.go:123] Gathering logs for kubelet ...
	I0318 04:40:07.147594   17465 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 04:40:07.183885   17465 logs.go:123] Gathering logs for dmesg ...
	I0318 04:40:07.183894   17465 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 04:40:07.187745   17465 logs.go:123] Gathering logs for kube-scheduler [e8b1c1b2cd19] ...
	I0318 04:40:07.187754   17465 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e8b1c1b2cd19"
	I0318 04:40:07.199173   17465 logs.go:123] Gathering logs for describe nodes ...
	I0318 04:40:07.199187   17465 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0318 04:40:07.236936   17465 logs.go:123] Gathering logs for coredns [b66f543335d1] ...
	I0318 04:40:07.236946   17465 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b66f543335d1"
	I0318 04:40:07.248537   17465 logs.go:123] Gathering logs for kube-proxy [372e2774400e] ...
	I0318 04:40:07.248549   17465 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 372e2774400e"
	I0318 04:40:07.260163   17465 logs.go:123] Gathering logs for kube-controller-manager [0be8d71b93fd] ...
	I0318 04:40:07.260173   17465 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0be8d71b93fd"
	I0318 04:40:07.283437   17465 logs.go:123] Gathering logs for etcd [8c963566b500] ...
	I0318 04:40:07.283446   17465 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8c963566b500"
	I0318 04:40:07.297712   17465 logs.go:123] Gathering logs for etcd [8b75879fc7bf] ...
	I0318 04:40:07.297722   17465 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8b75879fc7bf"
	I0318 04:40:07.312506   17465 logs.go:123] Gathering logs for kube-scheduler [d579e22e148e] ...
	I0318 04:40:07.312517   17465 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d579e22e148e"
	I0318 04:40:09.829401   17465 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0318 04:40:14.829651   17465 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0318 04:40:14.829828   17465 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0318 04:40:14.847239   17465 logs.go:276] 2 containers: [32dc048c4476 7dacaac7f891]
	I0318 04:40:14.847328   17465 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0318 04:40:14.860184   17465 logs.go:276] 2 containers: [8c963566b500 8b75879fc7bf]
	I0318 04:40:14.860255   17465 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0318 04:40:14.871428   17465 logs.go:276] 1 containers: [b66f543335d1]
	I0318 04:40:14.871494   17465 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0318 04:40:14.881741   17465 logs.go:276] 2 containers: [e8b1c1b2cd19 d579e22e148e]
	I0318 04:40:14.881820   17465 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0318 04:40:14.892462   17465 logs.go:276] 1 containers: [372e2774400e]
	I0318 04:40:14.892534   17465 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0318 04:40:14.904570   17465 logs.go:276] 2 containers: [0be8d71b93fd fb25a67bf414]
	I0318 04:40:14.904639   17465 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0318 04:40:14.914839   17465 logs.go:276] 0 containers: []
	W0318 04:40:14.914849   17465 logs.go:278] No container was found matching "kindnet"
	I0318 04:40:14.914903   17465 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0318 04:40:14.932533   17465 logs.go:276] 2 containers: [81ae28fc685b 9e2192628e0f]
	I0318 04:40:14.932552   17465 logs.go:123] Gathering logs for kube-scheduler [d579e22e148e] ...
	I0318 04:40:14.932557   17465 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d579e22e148e"
	I0318 04:40:14.951923   17465 logs.go:123] Gathering logs for kube-controller-manager [0be8d71b93fd] ...
	I0318 04:40:14.951933   17465 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0be8d71b93fd"
	I0318 04:40:14.969899   17465 logs.go:123] Gathering logs for container status ...
	I0318 04:40:14.969911   17465 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 04:40:14.982310   17465 logs.go:123] Gathering logs for describe nodes ...
	I0318 04:40:14.982325   17465 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0318 04:40:15.015567   17465 logs.go:123] Gathering logs for kube-scheduler [e8b1c1b2cd19] ...
	I0318 04:40:15.015582   17465 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e8b1c1b2cd19"
	I0318 04:40:15.027819   17465 logs.go:123] Gathering logs for etcd [8c963566b500] ...
	I0318 04:40:15.027829   17465 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8c963566b500"
	I0318 04:40:15.041656   17465 logs.go:123] Gathering logs for etcd [8b75879fc7bf] ...
	I0318 04:40:15.041666   17465 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8b75879fc7bf"
	I0318 04:40:15.056214   17465 logs.go:123] Gathering logs for coredns [b66f543335d1] ...
	I0318 04:40:15.056225   17465 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b66f543335d1"
	I0318 04:40:15.069192   17465 logs.go:123] Gathering logs for kube-proxy [372e2774400e] ...
	I0318 04:40:15.069206   17465 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 372e2774400e"
	I0318 04:40:15.090553   17465 logs.go:123] Gathering logs for storage-provisioner [81ae28fc685b] ...
	I0318 04:40:15.090563   17465 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 81ae28fc685b"
	I0318 04:40:15.106878   17465 logs.go:123] Gathering logs for dmesg ...
	I0318 04:40:15.106890   17465 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 04:40:15.111282   17465 logs.go:123] Gathering logs for kube-apiserver [32dc048c4476] ...
	I0318 04:40:15.111290   17465 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 32dc048c4476"
	I0318 04:40:15.126367   17465 logs.go:123] Gathering logs for kube-controller-manager [fb25a67bf414] ...
	I0318 04:40:15.126377   17465 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fb25a67bf414"
	I0318 04:40:15.138556   17465 logs.go:123] Gathering logs for storage-provisioner [9e2192628e0f] ...
	I0318 04:40:15.138567   17465 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9e2192628e0f"
	I0318 04:40:15.150234   17465 logs.go:123] Gathering logs for Docker ...
	I0318 04:40:15.150246   17465 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0318 04:40:15.173262   17465 logs.go:123] Gathering logs for kubelet ...
	I0318 04:40:15.173270   17465 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 04:40:15.209522   17465 logs.go:123] Gathering logs for kube-apiserver [7dacaac7f891] ...
	I0318 04:40:15.209529   17465 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7dacaac7f891"
	I0318 04:40:17.749774   17465 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0318 04:40:22.751529   17465 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0318 04:40:22.751758   17465 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0318 04:40:22.774091   17465 logs.go:276] 2 containers: [32dc048c4476 7dacaac7f891]
	I0318 04:40:22.774193   17465 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0318 04:40:22.788718   17465 logs.go:276] 2 containers: [8c963566b500 8b75879fc7bf]
	I0318 04:40:22.788798   17465 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0318 04:40:22.801016   17465 logs.go:276] 1 containers: [b66f543335d1]
	I0318 04:40:22.801091   17465 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0318 04:40:22.812106   17465 logs.go:276] 2 containers: [e8b1c1b2cd19 d579e22e148e]
	I0318 04:40:22.812177   17465 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0318 04:40:22.823784   17465 logs.go:276] 1 containers: [372e2774400e]
	I0318 04:40:22.823858   17465 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0318 04:40:22.836262   17465 logs.go:276] 2 containers: [0be8d71b93fd fb25a67bf414]
	I0318 04:40:22.836334   17465 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0318 04:40:22.848197   17465 logs.go:276] 0 containers: []
	W0318 04:40:22.848208   17465 logs.go:278] No container was found matching "kindnet"
	I0318 04:40:22.848265   17465 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0318 04:40:22.859911   17465 logs.go:276] 2 containers: [81ae28fc685b 9e2192628e0f]
	I0318 04:40:22.859929   17465 logs.go:123] Gathering logs for etcd [8b75879fc7bf] ...
	I0318 04:40:22.859934   17465 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8b75879fc7bf"
	I0318 04:40:22.874021   17465 logs.go:123] Gathering logs for kube-scheduler [d579e22e148e] ...
	I0318 04:40:22.874032   17465 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d579e22e148e"
	I0318 04:40:22.891334   17465 logs.go:123] Gathering logs for Docker ...
	I0318 04:40:22.891344   17465 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0318 04:40:22.915017   17465 logs.go:123] Gathering logs for kube-apiserver [32dc048c4476] ...
	I0318 04:40:22.915026   17465 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 32dc048c4476"
	I0318 04:40:22.928917   17465 logs.go:123] Gathering logs for etcd [8c963566b500] ...
	I0318 04:40:22.928927   17465 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8c963566b500"
	I0318 04:40:22.943336   17465 logs.go:123] Gathering logs for kube-controller-manager [0be8d71b93fd] ...
	I0318 04:40:22.943346   17465 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0be8d71b93fd"
	I0318 04:40:22.961829   17465 logs.go:123] Gathering logs for coredns [b66f543335d1] ...
	I0318 04:40:22.961839   17465 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b66f543335d1"
	I0318 04:40:22.978740   17465 logs.go:123] Gathering logs for kube-proxy [372e2774400e] ...
	I0318 04:40:22.978750   17465 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 372e2774400e"
	I0318 04:40:22.990429   17465 logs.go:123] Gathering logs for describe nodes ...
	I0318 04:40:22.990441   17465 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0318 04:40:23.025207   17465 logs.go:123] Gathering logs for kube-controller-manager [fb25a67bf414] ...
	I0318 04:40:23.025217   17465 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fb25a67bf414"
	I0318 04:40:23.036736   17465 logs.go:123] Gathering logs for kube-apiserver [7dacaac7f891] ...
	I0318 04:40:23.036748   17465 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7dacaac7f891"
	I0318 04:40:23.075334   17465 logs.go:123] Gathering logs for kube-scheduler [e8b1c1b2cd19] ...
	I0318 04:40:23.075345   17465 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e8b1c1b2cd19"
	I0318 04:40:23.087399   17465 logs.go:123] Gathering logs for storage-provisioner [81ae28fc685b] ...
	I0318 04:40:23.087411   17465 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 81ae28fc685b"
	I0318 04:40:23.103068   17465 logs.go:123] Gathering logs for storage-provisioner [9e2192628e0f] ...
	I0318 04:40:23.103078   17465 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9e2192628e0f"
	I0318 04:40:23.115087   17465 logs.go:123] Gathering logs for container status ...
	I0318 04:40:23.115099   17465 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 04:40:23.126910   17465 logs.go:123] Gathering logs for kubelet ...
	I0318 04:40:23.126923   17465 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 04:40:23.167041   17465 logs.go:123] Gathering logs for dmesg ...
	I0318 04:40:23.167055   17465 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 04:40:25.673367   17465 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0318 04:40:30.675548   17465 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0318 04:40:30.675807   17465 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0318 04:40:30.701153   17465 logs.go:276] 2 containers: [32dc048c4476 7dacaac7f891]
	I0318 04:40:30.701285   17465 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0318 04:40:30.718522   17465 logs.go:276] 2 containers: [8c963566b500 8b75879fc7bf]
	I0318 04:40:30.718612   17465 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0318 04:40:30.732450   17465 logs.go:276] 1 containers: [b66f543335d1]
	I0318 04:40:30.732531   17465 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0318 04:40:30.745457   17465 logs.go:276] 2 containers: [e8b1c1b2cd19 d579e22e148e]
	I0318 04:40:30.745529   17465 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0318 04:40:30.764110   17465 logs.go:276] 1 containers: [372e2774400e]
	I0318 04:40:30.764183   17465 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0318 04:40:30.775963   17465 logs.go:276] 2 containers: [0be8d71b93fd fb25a67bf414]
	I0318 04:40:30.776034   17465 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0318 04:40:30.790856   17465 logs.go:276] 0 containers: []
	W0318 04:40:30.790868   17465 logs.go:278] No container was found matching "kindnet"
	I0318 04:40:30.790926   17465 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0318 04:40:30.810237   17465 logs.go:276] 2 containers: [81ae28fc685b 9e2192628e0f]
	I0318 04:40:30.810255   17465 logs.go:123] Gathering logs for kube-apiserver [32dc048c4476] ...
	I0318 04:40:30.810261   17465 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 32dc048c4476"
	I0318 04:40:30.825297   17465 logs.go:123] Gathering logs for coredns [b66f543335d1] ...
	I0318 04:40:30.825307   17465 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b66f543335d1"
	I0318 04:40:30.836966   17465 logs.go:123] Gathering logs for kube-scheduler [e8b1c1b2cd19] ...
	I0318 04:40:30.836978   17465 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e8b1c1b2cd19"
	I0318 04:40:30.849043   17465 logs.go:123] Gathering logs for storage-provisioner [81ae28fc685b] ...
	I0318 04:40:30.849054   17465 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 81ae28fc685b"
	I0318 04:40:30.860738   17465 logs.go:123] Gathering logs for Docker ...
	I0318 04:40:30.860748   17465 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0318 04:40:30.884001   17465 logs.go:123] Gathering logs for container status ...
	I0318 04:40:30.884015   17465 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 04:40:30.896430   17465 logs.go:123] Gathering logs for dmesg ...
	I0318 04:40:30.896446   17465 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 04:40:30.900824   17465 logs.go:123] Gathering logs for kube-scheduler [d579e22e148e] ...
	I0318 04:40:30.900833   17465 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d579e22e148e"
	I0318 04:40:30.916112   17465 logs.go:123] Gathering logs for kube-proxy [372e2774400e] ...
	I0318 04:40:30.916126   17465 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 372e2774400e"
	I0318 04:40:30.928832   17465 logs.go:123] Gathering logs for kube-controller-manager [0be8d71b93fd] ...
	I0318 04:40:30.928843   17465 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0be8d71b93fd"
	I0318 04:40:30.946917   17465 logs.go:123] Gathering logs for kube-controller-manager [fb25a67bf414] ...
	I0318 04:40:30.946927   17465 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fb25a67bf414"
	I0318 04:40:30.959067   17465 logs.go:123] Gathering logs for kubelet ...
	I0318 04:40:30.959077   17465 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 04:40:30.997334   17465 logs.go:123] Gathering logs for describe nodes ...
	I0318 04:40:30.997347   17465 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0318 04:40:31.036446   17465 logs.go:123] Gathering logs for kube-apiserver [7dacaac7f891] ...
	I0318 04:40:31.036460   17465 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7dacaac7f891"
	I0318 04:40:31.076596   17465 logs.go:123] Gathering logs for etcd [8c963566b500] ...
	I0318 04:40:31.076610   17465 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8c963566b500"
	I0318 04:40:31.094227   17465 logs.go:123] Gathering logs for etcd [8b75879fc7bf] ...
	I0318 04:40:31.094241   17465 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8b75879fc7bf"
	I0318 04:40:31.108865   17465 logs.go:123] Gathering logs for storage-provisioner [9e2192628e0f] ...
	I0318 04:40:31.108876   17465 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9e2192628e0f"
	I0318 04:40:33.622735   17465 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0318 04:40:38.624894   17465 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0318 04:40:38.625144   17465 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0318 04:40:38.649522   17465 logs.go:276] 2 containers: [32dc048c4476 7dacaac7f891]
	I0318 04:40:38.649647   17465 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0318 04:40:38.664797   17465 logs.go:276] 2 containers: [8c963566b500 8b75879fc7bf]
	I0318 04:40:38.664872   17465 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0318 04:40:38.681419   17465 logs.go:276] 1 containers: [b66f543335d1]
	I0318 04:40:38.681493   17465 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0318 04:40:38.692238   17465 logs.go:276] 2 containers: [e8b1c1b2cd19 d579e22e148e]
	I0318 04:40:38.692313   17465 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0318 04:40:38.703267   17465 logs.go:276] 1 containers: [372e2774400e]
	I0318 04:40:38.703341   17465 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0318 04:40:38.718466   17465 logs.go:276] 2 containers: [0be8d71b93fd fb25a67bf414]
	I0318 04:40:38.718536   17465 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0318 04:40:38.728822   17465 logs.go:276] 0 containers: []
	W0318 04:40:38.728834   17465 logs.go:278] No container was found matching "kindnet"
	I0318 04:40:38.728897   17465 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0318 04:40:38.738979   17465 logs.go:276] 2 containers: [81ae28fc685b 9e2192628e0f]
	I0318 04:40:38.738998   17465 logs.go:123] Gathering logs for storage-provisioner [9e2192628e0f] ...
	I0318 04:40:38.739007   17465 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9e2192628e0f"
	I0318 04:40:38.753834   17465 logs.go:123] Gathering logs for container status ...
	I0318 04:40:38.753846   17465 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 04:40:38.766250   17465 logs.go:123] Gathering logs for kubelet ...
	I0318 04:40:38.766263   17465 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 04:40:38.805460   17465 logs.go:123] Gathering logs for describe nodes ...
	I0318 04:40:38.805473   17465 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0318 04:40:38.844298   17465 logs.go:123] Gathering logs for etcd [8b75879fc7bf] ...
	I0318 04:40:38.844310   17465 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8b75879fc7bf"
	I0318 04:40:38.863524   17465 logs.go:123] Gathering logs for kube-scheduler [e8b1c1b2cd19] ...
	I0318 04:40:38.863535   17465 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e8b1c1b2cd19"
	I0318 04:40:38.875483   17465 logs.go:123] Gathering logs for storage-provisioner [81ae28fc685b] ...
	I0318 04:40:38.875494   17465 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 81ae28fc685b"
	I0318 04:40:38.890477   17465 logs.go:123] Gathering logs for etcd [8c963566b500] ...
	I0318 04:40:38.890489   17465 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8c963566b500"
	I0318 04:40:38.905362   17465 logs.go:123] Gathering logs for kube-apiserver [7dacaac7f891] ...
	I0318 04:40:38.905376   17465 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7dacaac7f891"
	I0318 04:40:38.944056   17465 logs.go:123] Gathering logs for coredns [b66f543335d1] ...
	I0318 04:40:38.944072   17465 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b66f543335d1"
	I0318 04:40:38.960434   17465 logs.go:123] Gathering logs for kube-scheduler [d579e22e148e] ...
	I0318 04:40:38.960446   17465 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d579e22e148e"
	I0318 04:40:38.975899   17465 logs.go:123] Gathering logs for Docker ...
	I0318 04:40:38.975912   17465 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0318 04:40:39.000035   17465 logs.go:123] Gathering logs for dmesg ...
	I0318 04:40:39.000044   17465 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 04:40:39.004293   17465 logs.go:123] Gathering logs for kube-apiserver [32dc048c4476] ...
	I0318 04:40:39.004302   17465 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 32dc048c4476"
	I0318 04:40:39.018149   17465 logs.go:123] Gathering logs for kube-proxy [372e2774400e] ...
	I0318 04:40:39.018160   17465 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 372e2774400e"
	I0318 04:40:39.032157   17465 logs.go:123] Gathering logs for kube-controller-manager [0be8d71b93fd] ...
	I0318 04:40:39.032171   17465 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0be8d71b93fd"
	I0318 04:40:39.054366   17465 logs.go:123] Gathering logs for kube-controller-manager [fb25a67bf414] ...
	I0318 04:40:39.054377   17465 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fb25a67bf414"
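
	The cycle above repeats throughout this log: for each control-plane component the runner first lists matching container IDs with "docker ps -a --filter name=k8s_<component> --format={{.ID}}", then tails each container with "docker logs --tail 400 <id>". The Go sketch below reproduces that collection pattern for illustration only; it is not minikube's logs.go implementation, and the function names are hypothetical.

	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	// containerIDs lists all containers (running or exited) whose name matches
	// the kubernetes naming convention k8s_<component>, as the log commands do.
	func containerIDs(component string) ([]string, error) {
		out, err := exec.Command("docker", "ps", "-a",
			"--filter", "name=k8s_"+component,
			"--format", "{{.ID}}").Output()
		if err != nil {
			return nil, err
		}
		return strings.Fields(string(out)), nil
	}

	func main() {
		components := []string{"kube-apiserver", "etcd", "coredns", "kube-scheduler",
			"kube-proxy", "kube-controller-manager", "kindnet", "storage-provisioner"}
		for _, c := range components {
			ids, err := containerIDs(c)
			if err != nil {
				fmt.Println("listing containers failed:", err)
				continue
			}
			if len(ids) == 0 {
				fmt.Printf("No container was found matching %q\n", c)
				continue
			}
			for _, id := range ids {
				// Same command the log records: docker logs --tail 400 <id>
				logs, _ := exec.Command("docker", "logs", "--tail", "400", id).CombinedOutput()
				fmt.Printf("==> %s [%s] <==\n%s\n", c, id, logs)
			}
		}
	}
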
	I0318 04:40:41.568209   17465 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0318 04:40:46.570541   17465 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0318 04:40:46.570919   17465 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0318 04:40:46.599767   17465 logs.go:276] 2 containers: [32dc048c4476 7dacaac7f891]
	I0318 04:40:46.599896   17465 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0318 04:40:46.618629   17465 logs.go:276] 2 containers: [8c963566b500 8b75879fc7bf]
	I0318 04:40:46.618717   17465 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0318 04:40:46.631609   17465 logs.go:276] 1 containers: [b66f543335d1]
	I0318 04:40:46.631680   17465 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0318 04:40:46.643294   17465 logs.go:276] 2 containers: [e8b1c1b2cd19 d579e22e148e]
	I0318 04:40:46.643359   17465 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0318 04:40:46.653452   17465 logs.go:276] 1 containers: [372e2774400e]
	I0318 04:40:46.653515   17465 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0318 04:40:46.664514   17465 logs.go:276] 2 containers: [0be8d71b93fd fb25a67bf414]
	I0318 04:40:46.664573   17465 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0318 04:40:46.675220   17465 logs.go:276] 0 containers: []
	W0318 04:40:46.675232   17465 logs.go:278] No container was found matching "kindnet"
	I0318 04:40:46.675287   17465 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0318 04:40:46.685779   17465 logs.go:276] 2 containers: [81ae28fc685b 9e2192628e0f]
	I0318 04:40:46.685796   17465 logs.go:123] Gathering logs for kube-apiserver [32dc048c4476] ...
	I0318 04:40:46.685801   17465 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 32dc048c4476"
	I0318 04:40:46.699763   17465 logs.go:123] Gathering logs for etcd [8c963566b500] ...
	I0318 04:40:46.699774   17465 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8c963566b500"
	I0318 04:40:46.722872   17465 logs.go:123] Gathering logs for etcd [8b75879fc7bf] ...
	I0318 04:40:46.722886   17465 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8b75879fc7bf"
	I0318 04:40:46.737354   17465 logs.go:123] Gathering logs for kube-scheduler [e8b1c1b2cd19] ...
	I0318 04:40:46.737366   17465 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e8b1c1b2cd19"
	I0318 04:40:46.749175   17465 logs.go:123] Gathering logs for Docker ...
	I0318 04:40:46.749184   17465 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0318 04:40:46.772119   17465 logs.go:123] Gathering logs for kube-apiserver [7dacaac7f891] ...
	I0318 04:40:46.772127   17465 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7dacaac7f891"
	I0318 04:40:46.809008   17465 logs.go:123] Gathering logs for coredns [b66f543335d1] ...
	I0318 04:40:46.809024   17465 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b66f543335d1"
	I0318 04:40:46.821692   17465 logs.go:123] Gathering logs for storage-provisioner [81ae28fc685b] ...
	I0318 04:40:46.821703   17465 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 81ae28fc685b"
	I0318 04:40:46.833365   17465 logs.go:123] Gathering logs for dmesg ...
	I0318 04:40:46.833376   17465 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 04:40:46.838174   17465 logs.go:123] Gathering logs for describe nodes ...
	I0318 04:40:46.838181   17465 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0318 04:40:46.875300   17465 logs.go:123] Gathering logs for kube-controller-manager [fb25a67bf414] ...
	I0318 04:40:46.875311   17465 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fb25a67bf414"
	I0318 04:40:46.887463   17465 logs.go:123] Gathering logs for kubelet ...
	I0318 04:40:46.887473   17465 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 04:40:46.926736   17465 logs.go:123] Gathering logs for kube-scheduler [d579e22e148e] ...
	I0318 04:40:46.926746   17465 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d579e22e148e"
	I0318 04:40:46.942068   17465 logs.go:123] Gathering logs for kube-proxy [372e2774400e] ...
	I0318 04:40:46.942080   17465 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 372e2774400e"
	I0318 04:40:46.954310   17465 logs.go:123] Gathering logs for kube-controller-manager [0be8d71b93fd] ...
	I0318 04:40:46.954320   17465 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0be8d71b93fd"
	I0318 04:40:46.971722   17465 logs.go:123] Gathering logs for storage-provisioner [9e2192628e0f] ...
	I0318 04:40:46.971735   17465 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9e2192628e0f"
	I0318 04:40:46.982902   17465 logs.go:123] Gathering logs for container status ...
	I0318 04:40:46.982917   17465 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 04:40:49.496071   17465 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0318 04:40:54.498197   17465 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0318 04:40:54.498316   17465 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0318 04:40:54.509971   17465 logs.go:276] 2 containers: [32dc048c4476 7dacaac7f891]
	I0318 04:40:54.510042   17465 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0318 04:40:54.520770   17465 logs.go:276] 2 containers: [8c963566b500 8b75879fc7bf]
	I0318 04:40:54.520848   17465 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0318 04:40:54.532602   17465 logs.go:276] 1 containers: [b66f543335d1]
	I0318 04:40:54.532679   17465 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0318 04:40:54.543156   17465 logs.go:276] 2 containers: [e8b1c1b2cd19 d579e22e148e]
	I0318 04:40:54.543232   17465 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0318 04:40:54.556063   17465 logs.go:276] 1 containers: [372e2774400e]
	I0318 04:40:54.556138   17465 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0318 04:40:54.566546   17465 logs.go:276] 2 containers: [0be8d71b93fd fb25a67bf414]
	I0318 04:40:54.566618   17465 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0318 04:40:54.576577   17465 logs.go:276] 0 containers: []
	W0318 04:40:54.576588   17465 logs.go:278] No container was found matching "kindnet"
	I0318 04:40:54.576644   17465 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0318 04:40:54.587011   17465 logs.go:276] 2 containers: [81ae28fc685b 9e2192628e0f]
	I0318 04:40:54.587028   17465 logs.go:123] Gathering logs for kube-apiserver [32dc048c4476] ...
	I0318 04:40:54.587034   17465 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 32dc048c4476"
	I0318 04:40:54.605749   17465 logs.go:123] Gathering logs for storage-provisioner [9e2192628e0f] ...
	I0318 04:40:54.605760   17465 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9e2192628e0f"
	I0318 04:40:54.617359   17465 logs.go:123] Gathering logs for container status ...
	I0318 04:40:54.617370   17465 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 04:40:54.629100   17465 logs.go:123] Gathering logs for etcd [8b75879fc7bf] ...
	I0318 04:40:54.629114   17465 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8b75879fc7bf"
	I0318 04:40:54.644236   17465 logs.go:123] Gathering logs for coredns [b66f543335d1] ...
	I0318 04:40:54.644247   17465 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b66f543335d1"
	I0318 04:40:54.655585   17465 logs.go:123] Gathering logs for kube-controller-manager [0be8d71b93fd] ...
	I0318 04:40:54.655595   17465 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0be8d71b93fd"
	I0318 04:40:54.672368   17465 logs.go:123] Gathering logs for kube-controller-manager [fb25a67bf414] ...
	I0318 04:40:54.672379   17465 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fb25a67bf414"
	I0318 04:40:54.684983   17465 logs.go:123] Gathering logs for storage-provisioner [81ae28fc685b] ...
	I0318 04:40:54.684994   17465 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 81ae28fc685b"
	I0318 04:40:54.696282   17465 logs.go:123] Gathering logs for etcd [8c963566b500] ...
	I0318 04:40:54.696293   17465 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8c963566b500"
	I0318 04:40:54.709816   17465 logs.go:123] Gathering logs for kube-scheduler [d579e22e148e] ...
	I0318 04:40:54.710806   17465 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d579e22e148e"
	I0318 04:40:54.725414   17465 logs.go:123] Gathering logs for kube-proxy [372e2774400e] ...
	I0318 04:40:54.725425   17465 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 372e2774400e"
	I0318 04:40:54.740509   17465 logs.go:123] Gathering logs for Docker ...
	I0318 04:40:54.740519   17465 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0318 04:40:54.763591   17465 logs.go:123] Gathering logs for kubelet ...
	I0318 04:40:54.763602   17465 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 04:40:54.800844   17465 logs.go:123] Gathering logs for dmesg ...
	I0318 04:40:54.800854   17465 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 04:40:54.804872   17465 logs.go:123] Gathering logs for describe nodes ...
	I0318 04:40:54.804879   17465 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0318 04:40:54.841604   17465 logs.go:123] Gathering logs for kube-apiserver [7dacaac7f891] ...
	I0318 04:40:54.841618   17465 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7dacaac7f891"
	I0318 04:40:54.880134   17465 logs.go:123] Gathering logs for kube-scheduler [e8b1c1b2cd19] ...
	I0318 04:40:54.880147   17465 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e8b1c1b2cd19"
	I0318 04:40:57.393947   17465 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0318 04:41:02.396132   17465 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0318 04:41:02.396342   17465 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0318 04:41:02.414357   17465 logs.go:276] 2 containers: [32dc048c4476 7dacaac7f891]
	I0318 04:41:02.414452   17465 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0318 04:41:02.427450   17465 logs.go:276] 2 containers: [8c963566b500 8b75879fc7bf]
	I0318 04:41:02.427527   17465 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0318 04:41:02.439144   17465 logs.go:276] 1 containers: [b66f543335d1]
	I0318 04:41:02.439207   17465 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0318 04:41:02.450273   17465 logs.go:276] 2 containers: [e8b1c1b2cd19 d579e22e148e]
	I0318 04:41:02.450339   17465 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0318 04:41:02.461212   17465 logs.go:276] 1 containers: [372e2774400e]
	I0318 04:41:02.461284   17465 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0318 04:41:02.471978   17465 logs.go:276] 2 containers: [0be8d71b93fd fb25a67bf414]
	I0318 04:41:02.472049   17465 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0318 04:41:02.482410   17465 logs.go:276] 0 containers: []
	W0318 04:41:02.482422   17465 logs.go:278] No container was found matching "kindnet"
	I0318 04:41:02.482483   17465 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0318 04:41:02.492711   17465 logs.go:276] 2 containers: [81ae28fc685b 9e2192628e0f]
	I0318 04:41:02.492733   17465 logs.go:123] Gathering logs for coredns [b66f543335d1] ...
	I0318 04:41:02.492740   17465 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b66f543335d1"
	I0318 04:41:02.503802   17465 logs.go:123] Gathering logs for storage-provisioner [81ae28fc685b] ...
	I0318 04:41:02.503816   17465 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 81ae28fc685b"
	I0318 04:41:02.515582   17465 logs.go:123] Gathering logs for Docker ...
	I0318 04:41:02.515593   17465 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0318 04:41:02.538366   17465 logs.go:123] Gathering logs for dmesg ...
	I0318 04:41:02.538377   17465 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 04:41:02.542409   17465 logs.go:123] Gathering logs for kube-apiserver [7dacaac7f891] ...
	I0318 04:41:02.542417   17465 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7dacaac7f891"
	I0318 04:41:02.587553   17465 logs.go:123] Gathering logs for etcd [8b75879fc7bf] ...
	I0318 04:41:02.587567   17465 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8b75879fc7bf"
	I0318 04:41:02.602040   17465 logs.go:123] Gathering logs for container status ...
	I0318 04:41:02.602051   17465 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 04:41:02.614077   17465 logs.go:123] Gathering logs for kube-scheduler [d579e22e148e] ...
	I0318 04:41:02.614089   17465 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d579e22e148e"
	I0318 04:41:02.628943   17465 logs.go:123] Gathering logs for kube-proxy [372e2774400e] ...
	I0318 04:41:02.628954   17465 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 372e2774400e"
	I0318 04:41:02.640815   17465 logs.go:123] Gathering logs for kubelet ...
	I0318 04:41:02.640826   17465 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 04:41:02.679088   17465 logs.go:123] Gathering logs for describe nodes ...
	I0318 04:41:02.679096   17465 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0318 04:41:02.712528   17465 logs.go:123] Gathering logs for kube-apiserver [32dc048c4476] ...
	I0318 04:41:02.712540   17465 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 32dc048c4476"
	I0318 04:41:02.726406   17465 logs.go:123] Gathering logs for etcd [8c963566b500] ...
	I0318 04:41:02.726420   17465 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8c963566b500"
	I0318 04:41:02.746463   17465 logs.go:123] Gathering logs for kube-scheduler [e8b1c1b2cd19] ...
	I0318 04:41:02.746474   17465 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e8b1c1b2cd19"
	I0318 04:41:02.758187   17465 logs.go:123] Gathering logs for kube-controller-manager [0be8d71b93fd] ...
	I0318 04:41:02.758199   17465 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0be8d71b93fd"
	I0318 04:41:02.776181   17465 logs.go:123] Gathering logs for kube-controller-manager [fb25a67bf414] ...
	I0318 04:41:02.776192   17465 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fb25a67bf414"
	I0318 04:41:02.788221   17465 logs.go:123] Gathering logs for storage-provisioner [9e2192628e0f] ...
	I0318 04:41:02.788232   17465 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9e2192628e0f"
	I0318 04:41:05.299600   17465 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0318 04:41:10.301742   17465 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0318 04:41:10.301929   17465 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0318 04:41:10.316717   17465 logs.go:276] 2 containers: [32dc048c4476 7dacaac7f891]
	I0318 04:41:10.316806   17465 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0318 04:41:10.328415   17465 logs.go:276] 2 containers: [8c963566b500 8b75879fc7bf]
	I0318 04:41:10.328486   17465 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0318 04:41:10.338768   17465 logs.go:276] 1 containers: [b66f543335d1]
	I0318 04:41:10.338845   17465 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0318 04:41:10.349345   17465 logs.go:276] 2 containers: [e8b1c1b2cd19 d579e22e148e]
	I0318 04:41:10.349417   17465 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0318 04:41:10.360115   17465 logs.go:276] 1 containers: [372e2774400e]
	I0318 04:41:10.360186   17465 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0318 04:41:10.371023   17465 logs.go:276] 2 containers: [0be8d71b93fd fb25a67bf414]
	I0318 04:41:10.371093   17465 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0318 04:41:10.381970   17465 logs.go:276] 0 containers: []
	W0318 04:41:10.381982   17465 logs.go:278] No container was found matching "kindnet"
	I0318 04:41:10.382049   17465 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0318 04:41:10.392835   17465 logs.go:276] 2 containers: [81ae28fc685b 9e2192628e0f]
	I0318 04:41:10.392852   17465 logs.go:123] Gathering logs for describe nodes ...
	I0318 04:41:10.392857   17465 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0318 04:41:10.428670   17465 logs.go:123] Gathering logs for etcd [8c963566b500] ...
	I0318 04:41:10.428688   17465 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8c963566b500"
	I0318 04:41:10.442940   17465 logs.go:123] Gathering logs for kube-scheduler [e8b1c1b2cd19] ...
	I0318 04:41:10.442950   17465 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e8b1c1b2cd19"
	I0318 04:41:10.454657   17465 logs.go:123] Gathering logs for kube-scheduler [d579e22e148e] ...
	I0318 04:41:10.454670   17465 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d579e22e148e"
	I0318 04:41:10.469266   17465 logs.go:123] Gathering logs for kube-apiserver [7dacaac7f891] ...
	I0318 04:41:10.469275   17465 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7dacaac7f891"
	I0318 04:41:10.507827   17465 logs.go:123] Gathering logs for coredns [b66f543335d1] ...
	I0318 04:41:10.507842   17465 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b66f543335d1"
	I0318 04:41:10.519249   17465 logs.go:123] Gathering logs for etcd [8b75879fc7bf] ...
	I0318 04:41:10.519261   17465 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8b75879fc7bf"
	I0318 04:41:10.533349   17465 logs.go:123] Gathering logs for kube-controller-manager [0be8d71b93fd] ...
	I0318 04:41:10.533359   17465 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0be8d71b93fd"
	I0318 04:41:10.551478   17465 logs.go:123] Gathering logs for kube-controller-manager [fb25a67bf414] ...
	I0318 04:41:10.551489   17465 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fb25a67bf414"
	I0318 04:41:10.565430   17465 logs.go:123] Gathering logs for Docker ...
	I0318 04:41:10.565440   17465 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0318 04:41:10.588274   17465 logs.go:123] Gathering logs for container status ...
	I0318 04:41:10.588282   17465 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 04:41:10.601123   17465 logs.go:123] Gathering logs for kubelet ...
	I0318 04:41:10.601133   17465 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 04:41:10.638491   17465 logs.go:123] Gathering logs for dmesg ...
	I0318 04:41:10.638500   17465 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 04:41:10.642384   17465 logs.go:123] Gathering logs for kube-apiserver [32dc048c4476] ...
	I0318 04:41:10.642392   17465 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 32dc048c4476"
	I0318 04:41:10.655836   17465 logs.go:123] Gathering logs for kube-proxy [372e2774400e] ...
	I0318 04:41:10.655846   17465 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 372e2774400e"
	I0318 04:41:10.672120   17465 logs.go:123] Gathering logs for storage-provisioner [81ae28fc685b] ...
	I0318 04:41:10.672130   17465 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 81ae28fc685b"
	I0318 04:41:10.683803   17465 logs.go:123] Gathering logs for storage-provisioner [9e2192628e0f] ...
	I0318 04:41:10.683817   17465 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9e2192628e0f"
	I0318 04:41:13.197026   17465 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0318 04:41:18.199346   17465 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0318 04:41:18.199715   17465 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0318 04:41:18.234275   17465 logs.go:276] 2 containers: [32dc048c4476 7dacaac7f891]
	I0318 04:41:18.234406   17465 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0318 04:41:18.252118   17465 logs.go:276] 2 containers: [8c963566b500 8b75879fc7bf]
	I0318 04:41:18.252205   17465 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0318 04:41:18.266043   17465 logs.go:276] 1 containers: [b66f543335d1]
	I0318 04:41:18.266122   17465 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0318 04:41:18.277493   17465 logs.go:276] 2 containers: [e8b1c1b2cd19 d579e22e148e]
	I0318 04:41:18.277577   17465 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0318 04:41:18.287715   17465 logs.go:276] 1 containers: [372e2774400e]
	I0318 04:41:18.287785   17465 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0318 04:41:18.298401   17465 logs.go:276] 2 containers: [0be8d71b93fd fb25a67bf414]
	I0318 04:41:18.298466   17465 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0318 04:41:18.308373   17465 logs.go:276] 0 containers: []
	W0318 04:41:18.308384   17465 logs.go:278] No container was found matching "kindnet"
	I0318 04:41:18.308439   17465 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0318 04:41:18.319799   17465 logs.go:276] 2 containers: [81ae28fc685b 9e2192628e0f]
	I0318 04:41:18.319837   17465 logs.go:123] Gathering logs for dmesg ...
	I0318 04:41:18.319843   17465 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 04:41:18.324605   17465 logs.go:123] Gathering logs for describe nodes ...
	I0318 04:41:18.324613   17465 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0318 04:41:18.360782   17465 logs.go:123] Gathering logs for kube-scheduler [e8b1c1b2cd19] ...
	I0318 04:41:18.360794   17465 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e8b1c1b2cd19"
	I0318 04:41:18.374845   17465 logs.go:123] Gathering logs for kube-proxy [372e2774400e] ...
	I0318 04:41:18.374859   17465 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 372e2774400e"
	I0318 04:41:18.402473   17465 logs.go:123] Gathering logs for storage-provisioner [9e2192628e0f] ...
	I0318 04:41:18.402488   17465 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9e2192628e0f"
	I0318 04:41:18.424473   17465 logs.go:123] Gathering logs for kube-apiserver [32dc048c4476] ...
	I0318 04:41:18.424487   17465 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 32dc048c4476"
	I0318 04:41:18.441991   17465 logs.go:123] Gathering logs for kube-controller-manager [0be8d71b93fd] ...
	I0318 04:41:18.442002   17465 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0be8d71b93fd"
	I0318 04:41:18.459812   17465 logs.go:123] Gathering logs for kube-controller-manager [fb25a67bf414] ...
	I0318 04:41:18.459824   17465 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fb25a67bf414"
	I0318 04:41:18.472275   17465 logs.go:123] Gathering logs for storage-provisioner [81ae28fc685b] ...
	I0318 04:41:18.472288   17465 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 81ae28fc685b"
	I0318 04:41:18.483351   17465 logs.go:123] Gathering logs for container status ...
	I0318 04:41:18.483360   17465 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 04:41:18.495353   17465 logs.go:123] Gathering logs for kubelet ...
	I0318 04:41:18.495365   17465 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 04:41:18.531963   17465 logs.go:123] Gathering logs for kube-apiserver [7dacaac7f891] ...
	I0318 04:41:18.531971   17465 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7dacaac7f891"
	I0318 04:41:18.568755   17465 logs.go:123] Gathering logs for etcd [8b75879fc7bf] ...
	I0318 04:41:18.568771   17465 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8b75879fc7bf"
	I0318 04:41:18.586300   17465 logs.go:123] Gathering logs for coredns [b66f543335d1] ...
	I0318 04:41:18.586311   17465 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b66f543335d1"
	I0318 04:41:18.597519   17465 logs.go:123] Gathering logs for etcd [8c963566b500] ...
	I0318 04:41:18.597530   17465 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8c963566b500"
	I0318 04:41:18.611898   17465 logs.go:123] Gathering logs for kube-scheduler [d579e22e148e] ...
	I0318 04:41:18.611909   17465 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d579e22e148e"
	I0318 04:41:18.627536   17465 logs.go:123] Gathering logs for Docker ...
	I0318 04:41:18.627552   17465 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0318 04:41:21.152763   17465 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0318 04:41:26.154796   17465 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0318 04:41:26.154913   17465 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0318 04:41:26.166051   17465 logs.go:276] 2 containers: [32dc048c4476 7dacaac7f891]
	I0318 04:41:26.166130   17465 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0318 04:41:26.178249   17465 logs.go:276] 2 containers: [8c963566b500 8b75879fc7bf]
	I0318 04:41:26.178325   17465 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0318 04:41:26.188754   17465 logs.go:276] 1 containers: [b66f543335d1]
	I0318 04:41:26.188827   17465 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0318 04:41:26.198933   17465 logs.go:276] 2 containers: [e8b1c1b2cd19 d579e22e148e]
	I0318 04:41:26.199001   17465 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0318 04:41:26.209164   17465 logs.go:276] 1 containers: [372e2774400e]
	I0318 04:41:26.209229   17465 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0318 04:41:26.219584   17465 logs.go:276] 2 containers: [0be8d71b93fd fb25a67bf414]
	I0318 04:41:26.219656   17465 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0318 04:41:26.229848   17465 logs.go:276] 0 containers: []
	W0318 04:41:26.229858   17465 logs.go:278] No container was found matching "kindnet"
	I0318 04:41:26.229917   17465 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0318 04:41:26.246030   17465 logs.go:276] 2 containers: [81ae28fc685b 9e2192628e0f]
	I0318 04:41:26.246060   17465 logs.go:123] Gathering logs for kube-apiserver [32dc048c4476] ...
	I0318 04:41:26.246066   17465 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 32dc048c4476"
	I0318 04:41:26.260067   17465 logs.go:123] Gathering logs for kube-scheduler [d579e22e148e] ...
	I0318 04:41:26.260078   17465 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d579e22e148e"
	I0318 04:41:26.275122   17465 logs.go:123] Gathering logs for storage-provisioner [9e2192628e0f] ...
	I0318 04:41:26.275135   17465 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9e2192628e0f"
	I0318 04:41:26.286580   17465 logs.go:123] Gathering logs for coredns [b66f543335d1] ...
	I0318 04:41:26.286590   17465 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b66f543335d1"
	I0318 04:41:26.297637   17465 logs.go:123] Gathering logs for kube-scheduler [e8b1c1b2cd19] ...
	I0318 04:41:26.297650   17465 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e8b1c1b2cd19"
	I0318 04:41:26.309161   17465 logs.go:123] Gathering logs for kube-controller-manager [0be8d71b93fd] ...
	I0318 04:41:26.309174   17465 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0be8d71b93fd"
	I0318 04:41:26.327246   17465 logs.go:123] Gathering logs for kube-controller-manager [fb25a67bf414] ...
	I0318 04:41:26.327259   17465 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fb25a67bf414"
	I0318 04:41:26.339306   17465 logs.go:123] Gathering logs for kubelet ...
	I0318 04:41:26.339324   17465 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 04:41:26.377265   17465 logs.go:123] Gathering logs for etcd [8c963566b500] ...
	I0318 04:41:26.377274   17465 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8c963566b500"
	I0318 04:41:26.391181   17465 logs.go:123] Gathering logs for etcd [8b75879fc7bf] ...
	I0318 04:41:26.391193   17465 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8b75879fc7bf"
	I0318 04:41:26.410548   17465 logs.go:123] Gathering logs for kube-apiserver [7dacaac7f891] ...
	I0318 04:41:26.410558   17465 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7dacaac7f891"
	I0318 04:41:26.447864   17465 logs.go:123] Gathering logs for container status ...
	I0318 04:41:26.447875   17465 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 04:41:26.460011   17465 logs.go:123] Gathering logs for storage-provisioner [81ae28fc685b] ...
	I0318 04:41:26.460022   17465 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 81ae28fc685b"
	I0318 04:41:26.471716   17465 logs.go:123] Gathering logs for Docker ...
	I0318 04:41:26.471728   17465 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0318 04:41:26.493593   17465 logs.go:123] Gathering logs for dmesg ...
	I0318 04:41:26.493603   17465 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 04:41:26.497487   17465 logs.go:123] Gathering logs for describe nodes ...
	I0318 04:41:26.497493   17465 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0318 04:41:26.533083   17465 logs.go:123] Gathering logs for kube-proxy [372e2774400e] ...
	I0318 04:41:26.533096   17465 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 372e2774400e"
	I0318 04:41:29.045576   17465 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0318 04:41:34.047622   17465 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0318 04:41:34.047791   17465 kubeadm.go:591] duration metric: took 4m3.88363675s to restartPrimaryControlPlane
	W0318 04:41:34.047905   17465 out.go:239] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0318 04:41:34.047949   17465 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm reset --cri-socket /var/run/cri-dockerd.sock --force"
	I0318 04:41:35.115431   17465 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm reset --cri-socket /var/run/cri-dockerd.sock --force": (1.06750525s)
	I0318 04:41:35.115501   17465 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0318 04:41:35.121130   17465 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0318 04:41:35.124126   17465 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0318 04:41:35.127044   17465 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0318 04:41:35.127050   17465 kubeadm.go:156] found existing configuration files:
	
	I0318 04:41:35.127077   17465 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:53534 /etc/kubernetes/admin.conf
	I0318 04:41:35.129657   17465 kubeadm.go:162] "https://control-plane.minikube.internal:53534" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:53534 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0318 04:41:35.129685   17465 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0318 04:41:35.132169   17465 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:53534 /etc/kubernetes/kubelet.conf
	I0318 04:41:35.134821   17465 kubeadm.go:162] "https://control-plane.minikube.internal:53534" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:53534 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0318 04:41:35.134847   17465 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0318 04:41:35.137300   17465 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:53534 /etc/kubernetes/controller-manager.conf
	I0318 04:41:35.139995   17465 kubeadm.go:162] "https://control-plane.minikube.internal:53534" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:53534 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0318 04:41:35.140021   17465 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0318 04:41:35.142973   17465 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:53534 /etc/kubernetes/scheduler.conf
	I0318 04:41:35.145470   17465 kubeadm.go:162] "https://control-plane.minikube.internal:53534" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:53534 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0318 04:41:35.145495   17465 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
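
	Before re-running kubeadm init, the runner checks each kubeconfig under /etc/kubernetes for the expected control-plane endpoint and removes any file that does not contain it (here all four are already absent after the reset, so every grep exits with status 2 and the rm calls are no-ops). A minimal Go sketch of that check-then-remove pattern follows, assuming the endpoint and file list shown above; it is illustrative only, not minikube's kubeadm.go.

	package main

	import (
		"fmt"
		"os"
		"strings"
	)

	// cleanStaleConfigs keeps a kubeconfig only if it already points at the
	// expected control-plane endpoint; otherwise it is deleted so that
	// `kubeadm init` can rewrite it from scratch.
	func cleanStaleConfigs(endpoint string, paths []string) {
		for _, p := range paths {
			data, err := os.ReadFile(p)
			if err != nil || !strings.Contains(string(data), endpoint) {
				// Mirrors the log message: "<endpoint>" may not be in <file> - will remove
				fmt.Printf("%q may not be in %s - will remove\n", endpoint, p)
				_ = os.Remove(p)
			}
		}
	}

	func main() {
		cleanStaleConfigs("https://control-plane.minikube.internal:53534", []string{
			"/etc/kubernetes/admin.conf",
			"/etc/kubernetes/kubelet.conf",
			"/etc/kubernetes/controller-manager.conf",
			"/etc/kubernetes/scheduler.conf",
		})
	}
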
	I0318 04:41:35.148093   17465 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0318 04:41:35.165406   17465 kubeadm.go:309] [init] Using Kubernetes version: v1.24.1
	I0318 04:41:35.165435   17465 kubeadm.go:309] [preflight] Running pre-flight checks
	I0318 04:41:35.216267   17465 kubeadm.go:309] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0318 04:41:35.216419   17465 kubeadm.go:309] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0318 04:41:35.216482   17465 kubeadm.go:309] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0318 04:41:35.268010   17465 kubeadm.go:309] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0318 04:41:35.271247   17465 out.go:204]   - Generating certificates and keys ...
	I0318 04:41:35.271282   17465 kubeadm.go:309] [certs] Using existing ca certificate authority
	I0318 04:41:35.271310   17465 kubeadm.go:309] [certs] Using existing apiserver certificate and key on disk
	I0318 04:41:35.271345   17465 kubeadm.go:309] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0318 04:41:35.271371   17465 kubeadm.go:309] [certs] Using existing front-proxy-ca certificate authority
	I0318 04:41:35.271402   17465 kubeadm.go:309] [certs] Using existing front-proxy-client certificate and key on disk
	I0318 04:41:35.271426   17465 kubeadm.go:309] [certs] Using existing etcd/ca certificate authority
	I0318 04:41:35.271454   17465 kubeadm.go:309] [certs] Using existing etcd/server certificate and key on disk
	I0318 04:41:35.271481   17465 kubeadm.go:309] [certs] Using existing etcd/peer certificate and key on disk
	I0318 04:41:35.271514   17465 kubeadm.go:309] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0318 04:41:35.271547   17465 kubeadm.go:309] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0318 04:41:35.271564   17465 kubeadm.go:309] [certs] Using the existing "sa" key
	I0318 04:41:35.271589   17465 kubeadm.go:309] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0318 04:41:35.329023   17465 kubeadm.go:309] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0318 04:41:35.491546   17465 kubeadm.go:309] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0318 04:41:35.797551   17465 kubeadm.go:309] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0318 04:41:35.953699   17465 kubeadm.go:309] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0318 04:41:35.986737   17465 kubeadm.go:309] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0318 04:41:35.987097   17465 kubeadm.go:309] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0318 04:41:35.987152   17465 kubeadm.go:309] [kubelet-start] Starting the kubelet
	I0318 04:41:36.080368   17465 kubeadm.go:309] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0318 04:41:36.084497   17465 out.go:204]   - Booting up control plane ...
	I0318 04:41:36.084542   17465 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0318 04:41:36.084588   17465 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0318 04:41:36.084621   17465 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0318 04:41:36.084668   17465 kubeadm.go:309] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0318 04:41:36.084823   17465 kubeadm.go:309] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0318 04:41:40.586637   17465 kubeadm.go:309] [apiclient] All control plane components are healthy after 4.501843 seconds
	I0318 04:41:40.586703   17465 kubeadm.go:309] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0318 04:41:40.590734   17465 kubeadm.go:309] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0318 04:41:41.100940   17465 kubeadm.go:309] [upload-certs] Skipping phase. Please see --upload-certs
	I0318 04:41:41.101236   17465 kubeadm.go:309] [mark-control-plane] Marking the node stopped-upgrade-126000 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0318 04:41:41.609427   17465 kubeadm.go:309] [bootstrap-token] Using token: arcfy8.vmv9i1qd2i42rxej
	I0318 04:41:41.614239   17465 out.go:204]   - Configuring RBAC rules ...
	I0318 04:41:41.614312   17465 kubeadm.go:309] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0318 04:41:41.616653   17465 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0318 04:41:41.622031   17465 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0318 04:41:41.623046   17465 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0318 04:41:41.624185   17465 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0318 04:41:41.625084   17465 kubeadm.go:309] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0318 04:41:41.628983   17465 kubeadm.go:309] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0318 04:41:41.810048   17465 kubeadm.go:309] [addons] Applied essential addon: CoreDNS
	I0318 04:41:42.018465   17465 kubeadm.go:309] [addons] Applied essential addon: kube-proxy
	I0318 04:41:42.018923   17465 kubeadm.go:309] 
	I0318 04:41:42.018955   17465 kubeadm.go:309] Your Kubernetes control-plane has initialized successfully!
	I0318 04:41:42.018963   17465 kubeadm.go:309] 
	I0318 04:41:42.019012   17465 kubeadm.go:309] To start using your cluster, you need to run the following as a regular user:
	I0318 04:41:42.019018   17465 kubeadm.go:309] 
	I0318 04:41:42.019037   17465 kubeadm.go:309]   mkdir -p $HOME/.kube
	I0318 04:41:42.019071   17465 kubeadm.go:309]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0318 04:41:42.019101   17465 kubeadm.go:309]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0318 04:41:42.019107   17465 kubeadm.go:309] 
	I0318 04:41:42.019137   17465 kubeadm.go:309] Alternatively, if you are the root user, you can run:
	I0318 04:41:42.019140   17465 kubeadm.go:309] 
	I0318 04:41:42.019165   17465 kubeadm.go:309]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0318 04:41:42.019169   17465 kubeadm.go:309] 
	I0318 04:41:42.019198   17465 kubeadm.go:309] You should now deploy a pod network to the cluster.
	I0318 04:41:42.019243   17465 kubeadm.go:309] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0318 04:41:42.019281   17465 kubeadm.go:309]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0318 04:41:42.019286   17465 kubeadm.go:309] 
	I0318 04:41:42.019341   17465 kubeadm.go:309] You can now join any number of control-plane nodes by copying certificate authorities
	I0318 04:41:42.019388   17465 kubeadm.go:309] and service account keys on each node and then running the following as root:
	I0318 04:41:42.019393   17465 kubeadm.go:309] 
	I0318 04:41:42.019453   17465 kubeadm.go:309]   kubeadm join control-plane.minikube.internal:8443 --token arcfy8.vmv9i1qd2i42rxej \
	I0318 04:41:42.019507   17465 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:2762dffea2ede86231df0e7bc748eefca9b65ca5bd96e5f605bd5b60ef0281dd \
	I0318 04:41:42.019520   17465 kubeadm.go:309] 	--control-plane 
	I0318 04:41:42.019529   17465 kubeadm.go:309] 
	I0318 04:41:42.019578   17465 kubeadm.go:309] Then you can join any number of worker nodes by running the following on each as root:
	I0318 04:41:42.019582   17465 kubeadm.go:309] 
	I0318 04:41:42.019622   17465 kubeadm.go:309] kubeadm join control-plane.minikube.internal:8443 --token arcfy8.vmv9i1qd2i42rxej \
	I0318 04:41:42.019677   17465 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:2762dffea2ede86231df0e7bc748eefca9b65ca5bd96e5f605bd5b60ef0281dd 
	I0318 04:41:42.019790   17465 kubeadm.go:309] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0318 04:41:42.019855   17465 cni.go:84] Creating CNI manager for ""
	I0318 04:41:42.019864   17465 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0318 04:41:42.023536   17465 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0318 04:41:42.030799   17465 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0318 04:41:42.033923   17465 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
	I0318 04:41:42.039310   17465 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0318 04:41:42.039352   17465 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 04:41:42.039374   17465 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes stopped-upgrade-126000 minikube.k8s.io/updated_at=2024_03_18T04_41_42_0700 minikube.k8s.io/version=v1.32.0 minikube.k8s.io/commit=16bdcbec856cf730004e5bed78d1b7625f13388a minikube.k8s.io/name=stopped-upgrade-126000 minikube.k8s.io/primary=true
	I0318 04:41:42.080517   17465 kubeadm.go:1107] duration metric: took 41.209959ms to wait for elevateKubeSystemPrivileges
	I0318 04:41:42.080565   17465 ops.go:34] apiserver oom_adj: -16
	W0318 04:41:42.080580   17465 kubeadm.go:286] apiserver tunnel failed: apiserver port not set
	I0318 04:41:42.080584   17465 kubeadm.go:393] duration metric: took 4m11.930901333s to StartCluster
	I0318 04:41:42.080594   17465 settings.go:142] acquiring lock: {Name:mk8634ba9e118796c1213288fbf27edefcbb67ad Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0318 04:41:42.080688   17465 settings.go:150] Updating kubeconfig:  /Users/jenkins/minikube-integration/18429-15072/kubeconfig
	I0318 04:41:42.081124   17465 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/18429-15072/kubeconfig: {Name:mkeb86e27ccdf30a065b43661cfe2af2dc198b46 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0318 04:41:42.081345   17465 start.go:234] Will wait 6m0s for node &{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0318 04:41:42.085741   17465 out.go:177] * Verifying Kubernetes components...
	I0318 04:41:42.081390   17465 addons.go:502] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false]
	I0318 04:41:42.081443   17465 config.go:182] Loaded profile config "stopped-upgrade-126000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0318 04:41:42.093720   17465 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0318 04:41:42.093740   17465 addons.go:69] Setting default-storageclass=true in profile "stopped-upgrade-126000"
	I0318 04:41:42.093752   17465 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "stopped-upgrade-126000"
	I0318 04:41:42.093755   17465 addons.go:69] Setting storage-provisioner=true in profile "stopped-upgrade-126000"
	I0318 04:41:42.093769   17465 addons.go:234] Setting addon storage-provisioner=true in "stopped-upgrade-126000"
	W0318 04:41:42.093775   17465 addons.go:243] addon storage-provisioner should already be in state true
	I0318 04:41:42.093786   17465 host.go:66] Checking if "stopped-upgrade-126000" exists ...
	I0318 04:41:42.095439   17465 kapi.go:59] client config for stopped-upgrade-126000: &rest.Config{Host:"https://10.0.2.15:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/18429-15072/.minikube/profiles/stopped-upgrade-126000/client.crt", KeyFile:"/Users/jenkins/minikube-integration/18429-15072/.minikube/profiles/stopped-upgrade-126000/client.key", CAFile:"/Users/jenkins/minikube-integration/18429-15072/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x103d62a80), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0318 04:41:42.095553   17465 addons.go:234] Setting addon default-storageclass=true in "stopped-upgrade-126000"
	W0318 04:41:42.095560   17465 addons.go:243] addon default-storageclass should already be in state true
	I0318 04:41:42.095568   17465 host.go:66] Checking if "stopped-upgrade-126000" exists ...
	I0318 04:41:42.100732   17465 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0318 04:41:42.104709   17465 addons.go:426] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0318 04:41:42.104716   17465 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0318 04:41:42.104724   17465 sshutil.go:53] new ssh client: &{IP:localhost Port:53501 SSHKeyPath:/Users/jenkins/minikube-integration/18429-15072/.minikube/machines/stopped-upgrade-126000/id_rsa Username:docker}
	I0318 04:41:42.105366   17465 addons.go:426] installing /etc/kubernetes/addons/storageclass.yaml
	I0318 04:41:42.105370   17465 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0318 04:41:42.105374   17465 sshutil.go:53] new ssh client: &{IP:localhost Port:53501 SSHKeyPath:/Users/jenkins/minikube-integration/18429-15072/.minikube/machines/stopped-upgrade-126000/id_rsa Username:docker}
	I0318 04:41:42.186441   17465 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0318 04:41:42.191558   17465 api_server.go:52] waiting for apiserver process to appear ...
	I0318 04:41:42.191607   17465 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 04:41:42.195230   17465 api_server.go:72] duration metric: took 113.877666ms to wait for apiserver process to appear ...
	I0318 04:41:42.195237   17465 api_server.go:88] waiting for apiserver healthz status ...
	I0318 04:41:42.195244   17465 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0318 04:41:42.221682   17465 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0318 04:41:42.223713   17465 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0318 04:41:47.197157   17465 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0318 04:41:47.197200   17465 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0318 04:41:52.197383   17465 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0318 04:41:52.197417   17465 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0318 04:41:57.197644   17465 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0318 04:41:57.197679   17465 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0318 04:42:02.198000   17465 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0318 04:42:02.198042   17465 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0318 04:42:07.198574   17465 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0318 04:42:07.198628   17465 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0318 04:42:12.199278   17465 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0318 04:42:12.199325   17465 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	W0318 04:42:12.596624   17465 out.go:239] ! Enabling 'default-storageclass' returned an error: running callbacks: [Error making standard the default storage class: Error listing StorageClasses: Get "https://10.0.2.15:8443/apis/storage.k8s.io/v1/storageclasses": dial tcp 10.0.2.15:8443: i/o timeout]
	! Enabling 'default-storageclass' returned an error: running callbacks: [Error making standard the default storage class: Error listing StorageClasses: Get "https://10.0.2.15:8443/apis/storage.k8s.io/v1/storageclasses": dial tcp 10.0.2.15:8443: i/o timeout]
	I0318 04:42:12.605006   17465 out.go:177] * Enabled addons: storage-provisioner
	I0318 04:42:12.612922   17465 addons.go:505] duration metric: took 30.532546083s for enable addons: enabled=[storage-provisioner]
	I0318 04:42:17.200254   17465 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0318 04:42:17.200288   17465 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0318 04:42:22.201562   17465 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0318 04:42:22.201580   17465 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0318 04:42:27.203018   17465 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0318 04:42:27.203040   17465 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0318 04:42:32.204875   17465 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0318 04:42:32.204898   17465 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0318 04:42:37.206928   17465 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0318 04:42:37.206985   17465 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0318 04:42:42.208353   17465 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0318 04:42:42.208618   17465 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0318 04:42:42.240903   17465 logs.go:276] 1 containers: [0a2982ffb84e]
	I0318 04:42:42.240995   17465 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0318 04:42:42.266361   17465 logs.go:276] 1 containers: [704a79c3c784]
	I0318 04:42:42.266451   17465 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0318 04:42:42.282396   17465 logs.go:276] 2 containers: [fe69be91e435 828d4a376c7e]
	I0318 04:42:42.282465   17465 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0318 04:42:42.296769   17465 logs.go:276] 1 containers: [e9fc948a1004]
	I0318 04:42:42.296839   17465 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0318 04:42:42.307412   17465 logs.go:276] 1 containers: [894142fdaac1]
	I0318 04:42:42.307482   17465 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0318 04:42:42.318375   17465 logs.go:276] 1 containers: [1c9856b2b94f]
	I0318 04:42:42.318445   17465 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0318 04:42:42.328466   17465 logs.go:276] 0 containers: []
	W0318 04:42:42.328481   17465 logs.go:278] No container was found matching "kindnet"
	I0318 04:42:42.328537   17465 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0318 04:42:42.339019   17465 logs.go:276] 1 containers: [a247b21e5185]
	I0318 04:42:42.339036   17465 logs.go:123] Gathering logs for kube-proxy [894142fdaac1] ...
	I0318 04:42:42.339042   17465 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 894142fdaac1"
	I0318 04:42:42.351214   17465 logs.go:123] Gathering logs for storage-provisioner [a247b21e5185] ...
	I0318 04:42:42.351225   17465 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a247b21e5185"
	I0318 04:42:42.363213   17465 logs.go:123] Gathering logs for Docker ...
	I0318 04:42:42.363224   17465 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0318 04:42:42.388741   17465 logs.go:123] Gathering logs for container status ...
	I0318 04:42:42.388749   17465 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 04:42:42.402319   17465 logs.go:123] Gathering logs for describe nodes ...
	I0318 04:42:42.402329   17465 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0318 04:42:42.439529   17465 logs.go:123] Gathering logs for coredns [828d4a376c7e] ...
	I0318 04:42:42.439540   17465 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 828d4a376c7e"
	I0318 04:42:42.451262   17465 logs.go:123] Gathering logs for kube-scheduler [e9fc948a1004] ...
	I0318 04:42:42.451273   17465 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e9fc948a1004"
	I0318 04:42:42.467646   17465 logs.go:123] Gathering logs for etcd [704a79c3c784] ...
	I0318 04:42:42.467656   17465 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 704a79c3c784"
	I0318 04:42:42.481749   17465 logs.go:123] Gathering logs for coredns [fe69be91e435] ...
	I0318 04:42:42.481758   17465 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fe69be91e435"
	I0318 04:42:42.493026   17465 logs.go:123] Gathering logs for kube-controller-manager [1c9856b2b94f] ...
	I0318 04:42:42.493037   17465 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1c9856b2b94f"
	I0318 04:42:42.511836   17465 logs.go:123] Gathering logs for kubelet ...
	I0318 04:42:42.511846   17465 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 04:42:42.546724   17465 logs.go:123] Gathering logs for dmesg ...
	I0318 04:42:42.546735   17465 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 04:42:42.551145   17465 logs.go:123] Gathering logs for kube-apiserver [0a2982ffb84e] ...
	I0318 04:42:42.551152   17465 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0a2982ffb84e"
	I0318 04:42:45.067783   17465 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0318 04:42:50.068193   17465 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0318 04:42:50.068469   17465 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0318 04:42:50.094218   17465 logs.go:276] 1 containers: [0a2982ffb84e]
	I0318 04:42:50.094352   17465 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0318 04:42:50.110850   17465 logs.go:276] 1 containers: [704a79c3c784]
	I0318 04:42:50.110932   17465 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0318 04:42:50.123366   17465 logs.go:276] 2 containers: [fe69be91e435 828d4a376c7e]
	I0318 04:42:50.123437   17465 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0318 04:42:50.137071   17465 logs.go:276] 1 containers: [e9fc948a1004]
	I0318 04:42:50.137144   17465 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0318 04:42:50.147548   17465 logs.go:276] 1 containers: [894142fdaac1]
	I0318 04:42:50.147621   17465 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0318 04:42:50.158150   17465 logs.go:276] 1 containers: [1c9856b2b94f]
	I0318 04:42:50.158222   17465 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0318 04:42:50.168284   17465 logs.go:276] 0 containers: []
	W0318 04:42:50.168295   17465 logs.go:278] No container was found matching "kindnet"
	I0318 04:42:50.168354   17465 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0318 04:42:50.179958   17465 logs.go:276] 1 containers: [a247b21e5185]
	I0318 04:42:50.179973   17465 logs.go:123] Gathering logs for kube-proxy [894142fdaac1] ...
	I0318 04:42:50.179978   17465 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 894142fdaac1"
	I0318 04:42:50.198952   17465 logs.go:123] Gathering logs for kube-controller-manager [1c9856b2b94f] ...
	I0318 04:42:50.198963   17465 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1c9856b2b94f"
	I0318 04:42:50.217173   17465 logs.go:123] Gathering logs for storage-provisioner [a247b21e5185] ...
	I0318 04:42:50.217182   17465 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a247b21e5185"
	I0318 04:42:50.228348   17465 logs.go:123] Gathering logs for dmesg ...
	I0318 04:42:50.228358   17465 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 04:42:50.232577   17465 logs.go:123] Gathering logs for describe nodes ...
	I0318 04:42:50.232585   17465 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0318 04:42:50.268556   17465 logs.go:123] Gathering logs for etcd [704a79c3c784] ...
	I0318 04:42:50.268568   17465 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 704a79c3c784"
	I0318 04:42:50.282486   17465 logs.go:123] Gathering logs for coredns [fe69be91e435] ...
	I0318 04:42:50.282495   17465 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fe69be91e435"
	I0318 04:42:50.295500   17465 logs.go:123] Gathering logs for kube-scheduler [e9fc948a1004] ...
	I0318 04:42:50.295511   17465 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e9fc948a1004"
	I0318 04:42:50.310633   17465 logs.go:123] Gathering logs for Docker ...
	I0318 04:42:50.310643   17465 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0318 04:42:50.334246   17465 logs.go:123] Gathering logs for kubelet ...
	I0318 04:42:50.334254   17465 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 04:42:50.368667   17465 logs.go:123] Gathering logs for kube-apiserver [0a2982ffb84e] ...
	I0318 04:42:50.368675   17465 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0a2982ffb84e"
	I0318 04:42:50.382568   17465 logs.go:123] Gathering logs for coredns [828d4a376c7e] ...
	I0318 04:42:50.382576   17465 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 828d4a376c7e"
	I0318 04:42:50.394091   17465 logs.go:123] Gathering logs for container status ...
	I0318 04:42:50.394101   17465 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 04:42:52.910192   17465 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0318 04:42:57.912363   17465 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0318 04:42:57.912544   17465 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0318 04:42:57.928575   17465 logs.go:276] 1 containers: [0a2982ffb84e]
	I0318 04:42:57.928659   17465 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0318 04:42:57.941215   17465 logs.go:276] 1 containers: [704a79c3c784]
	I0318 04:42:57.941291   17465 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0318 04:42:57.951905   17465 logs.go:276] 2 containers: [fe69be91e435 828d4a376c7e]
	I0318 04:42:57.951975   17465 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0318 04:42:57.962217   17465 logs.go:276] 1 containers: [e9fc948a1004]
	I0318 04:42:57.962287   17465 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0318 04:42:57.974152   17465 logs.go:276] 1 containers: [894142fdaac1]
	I0318 04:42:57.974226   17465 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0318 04:42:57.984700   17465 logs.go:276] 1 containers: [1c9856b2b94f]
	I0318 04:42:57.984761   17465 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0318 04:42:57.994538   17465 logs.go:276] 0 containers: []
	W0318 04:42:57.994554   17465 logs.go:278] No container was found matching "kindnet"
	I0318 04:42:57.994612   17465 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0318 04:42:58.005947   17465 logs.go:276] 1 containers: [a247b21e5185]
	I0318 04:42:58.005963   17465 logs.go:123] Gathering logs for kubelet ...
	I0318 04:42:58.005968   17465 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 04:42:58.039524   17465 logs.go:123] Gathering logs for dmesg ...
	I0318 04:42:58.039534   17465 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 04:42:58.043932   17465 logs.go:123] Gathering logs for describe nodes ...
	I0318 04:42:58.043938   17465 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0318 04:42:58.078438   17465 logs.go:123] Gathering logs for coredns [fe69be91e435] ...
	I0318 04:42:58.078449   17465 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fe69be91e435"
	I0318 04:42:58.090099   17465 logs.go:123] Gathering logs for coredns [828d4a376c7e] ...
	I0318 04:42:58.090110   17465 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 828d4a376c7e"
	I0318 04:42:58.101836   17465 logs.go:123] Gathering logs for kube-scheduler [e9fc948a1004] ...
	I0318 04:42:58.101847   17465 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e9fc948a1004"
	I0318 04:42:58.117733   17465 logs.go:123] Gathering logs for kube-proxy [894142fdaac1] ...
	I0318 04:42:58.117743   17465 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 894142fdaac1"
	I0318 04:42:58.129704   17465 logs.go:123] Gathering logs for Docker ...
	I0318 04:42:58.129713   17465 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0318 04:42:58.153621   17465 logs.go:123] Gathering logs for container status ...
	I0318 04:42:58.153631   17465 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 04:42:58.164933   17465 logs.go:123] Gathering logs for kube-apiserver [0a2982ffb84e] ...
	I0318 04:42:58.164948   17465 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0a2982ffb84e"
	I0318 04:42:58.179658   17465 logs.go:123] Gathering logs for etcd [704a79c3c784] ...
	I0318 04:42:58.179667   17465 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 704a79c3c784"
	I0318 04:42:58.193707   17465 logs.go:123] Gathering logs for kube-controller-manager [1c9856b2b94f] ...
	I0318 04:42:58.193717   17465 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1c9856b2b94f"
	I0318 04:42:58.211098   17465 logs.go:123] Gathering logs for storage-provisioner [a247b21e5185] ...
	I0318 04:42:58.211109   17465 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a247b21e5185"
	I0318 04:43:00.727996   17465 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0318 04:43:05.730444   17465 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0318 04:43:05.730714   17465 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0318 04:43:05.758729   17465 logs.go:276] 1 containers: [0a2982ffb84e]
	I0318 04:43:05.758856   17465 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0318 04:43:05.776578   17465 logs.go:276] 1 containers: [704a79c3c784]
	I0318 04:43:05.776673   17465 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0318 04:43:05.790738   17465 logs.go:276] 2 containers: [fe69be91e435 828d4a376c7e]
	I0318 04:43:05.790816   17465 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0318 04:43:05.802617   17465 logs.go:276] 1 containers: [e9fc948a1004]
	I0318 04:43:05.802691   17465 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0318 04:43:05.812952   17465 logs.go:276] 1 containers: [894142fdaac1]
	I0318 04:43:05.813023   17465 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0318 04:43:05.823655   17465 logs.go:276] 1 containers: [1c9856b2b94f]
	I0318 04:43:05.823774   17465 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0318 04:43:05.834659   17465 logs.go:276] 0 containers: []
	W0318 04:43:05.834668   17465 logs.go:278] No container was found matching "kindnet"
	I0318 04:43:05.834720   17465 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0318 04:43:05.845285   17465 logs.go:276] 1 containers: [a247b21e5185]
	I0318 04:43:05.845296   17465 logs.go:123] Gathering logs for dmesg ...
	I0318 04:43:05.845301   17465 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 04:43:05.849452   17465 logs.go:123] Gathering logs for describe nodes ...
	I0318 04:43:05.849459   17465 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0318 04:43:05.886344   17465 logs.go:123] Gathering logs for coredns [fe69be91e435] ...
	I0318 04:43:05.886359   17465 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fe69be91e435"
	I0318 04:43:05.900156   17465 logs.go:123] Gathering logs for storage-provisioner [a247b21e5185] ...
	I0318 04:43:05.900173   17465 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a247b21e5185"
	I0318 04:43:05.912033   17465 logs.go:123] Gathering logs for container status ...
	I0318 04:43:05.912049   17465 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 04:43:05.923308   17465 logs.go:123] Gathering logs for kube-controller-manager [1c9856b2b94f] ...
	I0318 04:43:05.923319   17465 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1c9856b2b94f"
	I0318 04:43:05.941010   17465 logs.go:123] Gathering logs for Docker ...
	I0318 04:43:05.941024   17465 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0318 04:43:05.964360   17465 logs.go:123] Gathering logs for kubelet ...
	I0318 04:43:05.964367   17465 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 04:43:05.998320   17465 logs.go:123] Gathering logs for kube-apiserver [0a2982ffb84e] ...
	I0318 04:43:05.998329   17465 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0a2982ffb84e"
	I0318 04:43:06.012728   17465 logs.go:123] Gathering logs for etcd [704a79c3c784] ...
	I0318 04:43:06.012737   17465 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 704a79c3c784"
	I0318 04:43:06.029083   17465 logs.go:123] Gathering logs for coredns [828d4a376c7e] ...
	I0318 04:43:06.029098   17465 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 828d4a376c7e"
	I0318 04:43:06.041810   17465 logs.go:123] Gathering logs for kube-scheduler [e9fc948a1004] ...
	I0318 04:43:06.041820   17465 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e9fc948a1004"
	I0318 04:43:06.060539   17465 logs.go:123] Gathering logs for kube-proxy [894142fdaac1] ...
	I0318 04:43:06.060552   17465 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 894142fdaac1"
	I0318 04:43:08.572675   17465 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0318 04:43:13.573720   17465 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0318 04:43:13.574099   17465 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0318 04:43:13.602715   17465 logs.go:276] 1 containers: [0a2982ffb84e]
	I0318 04:43:13.602837   17465 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0318 04:43:13.620339   17465 logs.go:276] 1 containers: [704a79c3c784]
	I0318 04:43:13.620437   17465 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0318 04:43:13.634873   17465 logs.go:276] 2 containers: [fe69be91e435 828d4a376c7e]
	I0318 04:43:13.634936   17465 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0318 04:43:13.646463   17465 logs.go:276] 1 containers: [e9fc948a1004]
	I0318 04:43:13.646515   17465 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0318 04:43:13.656617   17465 logs.go:276] 1 containers: [894142fdaac1]
	I0318 04:43:13.656677   17465 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0318 04:43:13.670396   17465 logs.go:276] 1 containers: [1c9856b2b94f]
	I0318 04:43:13.670451   17465 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0318 04:43:13.679845   17465 logs.go:276] 0 containers: []
	W0318 04:43:13.679856   17465 logs.go:278] No container was found matching "kindnet"
	I0318 04:43:13.679904   17465 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0318 04:43:13.690101   17465 logs.go:276] 1 containers: [a247b21e5185]
	I0318 04:43:13.690114   17465 logs.go:123] Gathering logs for kube-proxy [894142fdaac1] ...
	I0318 04:43:13.690119   17465 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 894142fdaac1"
	I0318 04:43:13.701985   17465 logs.go:123] Gathering logs for kube-controller-manager [1c9856b2b94f] ...
	I0318 04:43:13.701997   17465 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1c9856b2b94f"
	I0318 04:43:13.719683   17465 logs.go:123] Gathering logs for storage-provisioner [a247b21e5185] ...
	I0318 04:43:13.719695   17465 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a247b21e5185"
	I0318 04:43:13.730966   17465 logs.go:123] Gathering logs for kubelet ...
	I0318 04:43:13.730978   17465 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 04:43:13.765104   17465 logs.go:123] Gathering logs for dmesg ...
	I0318 04:43:13.765112   17465 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 04:43:13.769033   17465 logs.go:123] Gathering logs for kube-apiserver [0a2982ffb84e] ...
	I0318 04:43:13.769039   17465 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0a2982ffb84e"
	I0318 04:43:13.782746   17465 logs.go:123] Gathering logs for coredns [828d4a376c7e] ...
	I0318 04:43:13.782757   17465 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 828d4a376c7e"
	I0318 04:43:13.793610   17465 logs.go:123] Gathering logs for kube-scheduler [e9fc948a1004] ...
	I0318 04:43:13.793622   17465 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e9fc948a1004"
	I0318 04:43:13.808859   17465 logs.go:123] Gathering logs for describe nodes ...
	I0318 04:43:13.808870   17465 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0318 04:43:13.841919   17465 logs.go:123] Gathering logs for etcd [704a79c3c784] ...
	I0318 04:43:13.841929   17465 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 704a79c3c784"
	I0318 04:43:13.855923   17465 logs.go:123] Gathering logs for coredns [fe69be91e435] ...
	I0318 04:43:13.855932   17465 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fe69be91e435"
	I0318 04:43:13.867735   17465 logs.go:123] Gathering logs for Docker ...
	I0318 04:43:13.867748   17465 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0318 04:43:13.891684   17465 logs.go:123] Gathering logs for container status ...
	I0318 04:43:13.891694   17465 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 04:43:16.405033   17465 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0318 04:43:21.406880   17465 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0318 04:43:21.407197   17465 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0318 04:43:21.436858   17465 logs.go:276] 1 containers: [0a2982ffb84e]
	I0318 04:43:21.436978   17465 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0318 04:43:21.455208   17465 logs.go:276] 1 containers: [704a79c3c784]
	I0318 04:43:21.455298   17465 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0318 04:43:21.469326   17465 logs.go:276] 2 containers: [fe69be91e435 828d4a376c7e]
	I0318 04:43:21.469404   17465 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0318 04:43:21.480907   17465 logs.go:276] 1 containers: [e9fc948a1004]
	I0318 04:43:21.480978   17465 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0318 04:43:21.491403   17465 logs.go:276] 1 containers: [894142fdaac1]
	I0318 04:43:21.491466   17465 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0318 04:43:21.502592   17465 logs.go:276] 1 containers: [1c9856b2b94f]
	I0318 04:43:21.502663   17465 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0318 04:43:21.512789   17465 logs.go:276] 0 containers: []
	W0318 04:43:21.512798   17465 logs.go:278] No container was found matching "kindnet"
	I0318 04:43:21.512847   17465 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0318 04:43:21.527702   17465 logs.go:276] 1 containers: [a247b21e5185]
	I0318 04:43:21.527718   17465 logs.go:123] Gathering logs for kube-apiserver [0a2982ffb84e] ...
	I0318 04:43:21.527724   17465 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0a2982ffb84e"
	I0318 04:43:21.541862   17465 logs.go:123] Gathering logs for kube-proxy [894142fdaac1] ...
	I0318 04:43:21.541872   17465 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 894142fdaac1"
	I0318 04:43:21.553769   17465 logs.go:123] Gathering logs for storage-provisioner [a247b21e5185] ...
	I0318 04:43:21.553780   17465 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a247b21e5185"
	I0318 04:43:21.565374   17465 logs.go:123] Gathering logs for Docker ...
	I0318 04:43:21.565386   17465 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0318 04:43:21.589176   17465 logs.go:123] Gathering logs for kubelet ...
	I0318 04:43:21.589181   17465 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 04:43:21.623925   17465 logs.go:123] Gathering logs for describe nodes ...
	I0318 04:43:21.623932   17465 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0318 04:43:21.659982   17465 logs.go:123] Gathering logs for etcd [704a79c3c784] ...
	I0318 04:43:21.659992   17465 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 704a79c3c784"
	I0318 04:43:21.674153   17465 logs.go:123] Gathering logs for coredns [fe69be91e435] ...
	I0318 04:43:21.674166   17465 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fe69be91e435"
	I0318 04:43:21.685389   17465 logs.go:123] Gathering logs for coredns [828d4a376c7e] ...
	I0318 04:43:21.685402   17465 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 828d4a376c7e"
	I0318 04:43:21.696983   17465 logs.go:123] Gathering logs for kube-scheduler [e9fc948a1004] ...
	I0318 04:43:21.696993   17465 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e9fc948a1004"
	I0318 04:43:21.716950   17465 logs.go:123] Gathering logs for kube-controller-manager [1c9856b2b94f] ...
	I0318 04:43:21.716959   17465 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1c9856b2b94f"
	I0318 04:43:21.739103   17465 logs.go:123] Gathering logs for container status ...
	I0318 04:43:21.739112   17465 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 04:43:21.750668   17465 logs.go:123] Gathering logs for dmesg ...
	I0318 04:43:21.750681   17465 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 04:43:24.257305   17465 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0318 04:43:29.259831   17465 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0318 04:43:29.260220   17465 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0318 04:43:29.296472   17465 logs.go:276] 1 containers: [0a2982ffb84e]
	I0318 04:43:29.296624   17465 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0318 04:43:29.316015   17465 logs.go:276] 1 containers: [704a79c3c784]
	I0318 04:43:29.316113   17465 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0318 04:43:29.329355   17465 logs.go:276] 2 containers: [fe69be91e435 828d4a376c7e]
	I0318 04:43:29.329434   17465 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0318 04:43:29.341564   17465 logs.go:276] 1 containers: [e9fc948a1004]
	I0318 04:43:29.341634   17465 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0318 04:43:29.351831   17465 logs.go:276] 1 containers: [894142fdaac1]
	I0318 04:43:29.351902   17465 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0318 04:43:29.363094   17465 logs.go:276] 1 containers: [1c9856b2b94f]
	I0318 04:43:29.363164   17465 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0318 04:43:29.378487   17465 logs.go:276] 0 containers: []
	W0318 04:43:29.378501   17465 logs.go:278] No container was found matching "kindnet"
	I0318 04:43:29.378555   17465 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0318 04:43:29.388860   17465 logs.go:276] 1 containers: [a247b21e5185]
	I0318 04:43:29.388879   17465 logs.go:123] Gathering logs for dmesg ...
	I0318 04:43:29.388884   17465 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 04:43:29.393110   17465 logs.go:123] Gathering logs for describe nodes ...
	I0318 04:43:29.393116   17465 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0318 04:43:29.428543   17465 logs.go:123] Gathering logs for etcd [704a79c3c784] ...
	I0318 04:43:29.428555   17465 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 704a79c3c784"
	I0318 04:43:29.444231   17465 logs.go:123] Gathering logs for coredns [828d4a376c7e] ...
	I0318 04:43:29.444244   17465 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 828d4a376c7e"
	I0318 04:43:29.455913   17465 logs.go:123] Gathering logs for storage-provisioner [a247b21e5185] ...
	I0318 04:43:29.455924   17465 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a247b21e5185"
	I0318 04:43:29.467332   17465 logs.go:123] Gathering logs for container status ...
	I0318 04:43:29.467342   17465 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 04:43:29.478861   17465 logs.go:123] Gathering logs for kubelet ...
	I0318 04:43:29.478873   17465 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 04:43:29.512707   17465 logs.go:123] Gathering logs for kube-apiserver [0a2982ffb84e] ...
	I0318 04:43:29.512715   17465 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0a2982ffb84e"
	I0318 04:43:29.535672   17465 logs.go:123] Gathering logs for coredns [fe69be91e435] ...
	I0318 04:43:29.535686   17465 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fe69be91e435"
	I0318 04:43:29.553270   17465 logs.go:123] Gathering logs for kube-scheduler [e9fc948a1004] ...
	I0318 04:43:29.553280   17465 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e9fc948a1004"
	I0318 04:43:29.567989   17465 logs.go:123] Gathering logs for kube-proxy [894142fdaac1] ...
	I0318 04:43:29.568003   17465 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 894142fdaac1"
	I0318 04:43:29.580170   17465 logs.go:123] Gathering logs for kube-controller-manager [1c9856b2b94f] ...
	I0318 04:43:29.580183   17465 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1c9856b2b94f"
	I0318 04:43:29.597584   17465 logs.go:123] Gathering logs for Docker ...
	I0318 04:43:29.597614   17465 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0318 04:43:32.124389   17465 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0318 04:43:37.126386   17465 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0318 04:43:37.126560   17465 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0318 04:43:37.150146   17465 logs.go:276] 1 containers: [0a2982ffb84e]
	I0318 04:43:37.150258   17465 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0318 04:43:37.164104   17465 logs.go:276] 1 containers: [704a79c3c784]
	I0318 04:43:37.164176   17465 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0318 04:43:37.175992   17465 logs.go:276] 2 containers: [fe69be91e435 828d4a376c7e]
	I0318 04:43:37.176060   17465 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0318 04:43:37.187219   17465 logs.go:276] 1 containers: [e9fc948a1004]
	I0318 04:43:37.187286   17465 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0318 04:43:37.197880   17465 logs.go:276] 1 containers: [894142fdaac1]
	I0318 04:43:37.197949   17465 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0318 04:43:37.212198   17465 logs.go:276] 1 containers: [1c9856b2b94f]
	I0318 04:43:37.212266   17465 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0318 04:43:37.223707   17465 logs.go:276] 0 containers: []
	W0318 04:43:37.223723   17465 logs.go:278] No container was found matching "kindnet"
	I0318 04:43:37.223784   17465 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0318 04:43:37.233536   17465 logs.go:276] 1 containers: [a247b21e5185]
	I0318 04:43:37.233554   17465 logs.go:123] Gathering logs for coredns [fe69be91e435] ...
	I0318 04:43:37.233559   17465 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fe69be91e435"
	I0318 04:43:37.247205   17465 logs.go:123] Gathering logs for storage-provisioner [a247b21e5185] ...
	I0318 04:43:37.247215   17465 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a247b21e5185"
	I0318 04:43:37.258603   17465 logs.go:123] Gathering logs for kubelet ...
	I0318 04:43:37.258617   17465 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 04:43:37.293939   17465 logs.go:123] Gathering logs for dmesg ...
	I0318 04:43:37.293948   17465 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 04:43:37.298110   17465 logs.go:123] Gathering logs for kube-apiserver [0a2982ffb84e] ...
	I0318 04:43:37.298118   17465 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0a2982ffb84e"
	I0318 04:43:37.312044   17465 logs.go:123] Gathering logs for kube-scheduler [e9fc948a1004] ...
	I0318 04:43:37.312056   17465 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e9fc948a1004"
	I0318 04:43:37.327106   17465 logs.go:123] Gathering logs for kube-proxy [894142fdaac1] ...
	I0318 04:43:37.327118   17465 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 894142fdaac1"
	I0318 04:43:37.338843   17465 logs.go:123] Gathering logs for kube-controller-manager [1c9856b2b94f] ...
	I0318 04:43:37.338856   17465 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1c9856b2b94f"
	I0318 04:43:37.356525   17465 logs.go:123] Gathering logs for Docker ...
	I0318 04:43:37.356538   17465 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0318 04:43:37.381045   17465 logs.go:123] Gathering logs for container status ...
	I0318 04:43:37.381055   17465 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 04:43:37.392298   17465 logs.go:123] Gathering logs for describe nodes ...
	I0318 04:43:37.392310   17465 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0318 04:43:37.428069   17465 logs.go:123] Gathering logs for etcd [704a79c3c784] ...
	I0318 04:43:37.428080   17465 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 704a79c3c784"
	I0318 04:43:37.442550   17465 logs.go:123] Gathering logs for coredns [828d4a376c7e] ...
	I0318 04:43:37.442561   17465 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 828d4a376c7e"
	I0318 04:43:39.956629   17465 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0318 04:43:44.958817   17465 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0318 04:43:44.959186   17465 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0318 04:43:44.995509   17465 logs.go:276] 1 containers: [0a2982ffb84e]
	I0318 04:43:44.995638   17465 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0318 04:43:45.012332   17465 logs.go:276] 1 containers: [704a79c3c784]
	I0318 04:43:45.012410   17465 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0318 04:43:45.026119   17465 logs.go:276] 2 containers: [fe69be91e435 828d4a376c7e]
	I0318 04:43:45.026200   17465 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0318 04:43:45.037521   17465 logs.go:276] 1 containers: [e9fc948a1004]
	I0318 04:43:45.037589   17465 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0318 04:43:45.047534   17465 logs.go:276] 1 containers: [894142fdaac1]
	I0318 04:43:45.047601   17465 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0318 04:43:45.058482   17465 logs.go:276] 1 containers: [1c9856b2b94f]
	I0318 04:43:45.058553   17465 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0318 04:43:45.068807   17465 logs.go:276] 0 containers: []
	W0318 04:43:45.068818   17465 logs.go:278] No container was found matching "kindnet"
	I0318 04:43:45.068876   17465 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0318 04:43:45.078890   17465 logs.go:276] 1 containers: [a247b21e5185]
	I0318 04:43:45.078908   17465 logs.go:123] Gathering logs for Docker ...
	I0318 04:43:45.078912   17465 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0318 04:43:45.103234   17465 logs.go:123] Gathering logs for container status ...
	I0318 04:43:45.103241   17465 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 04:43:45.114444   17465 logs.go:123] Gathering logs for kubelet ...
	I0318 04:43:45.114453   17465 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 04:43:45.149558   17465 logs.go:123] Gathering logs for dmesg ...
	I0318 04:43:45.149575   17465 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 04:43:45.153821   17465 logs.go:123] Gathering logs for describe nodes ...
	I0318 04:43:45.153827   17465 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0318 04:43:45.187568   17465 logs.go:123] Gathering logs for coredns [fe69be91e435] ...
	I0318 04:43:45.187579   17465 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fe69be91e435"
	I0318 04:43:45.199394   17465 logs.go:123] Gathering logs for kube-controller-manager [1c9856b2b94f] ...
	I0318 04:43:45.199404   17465 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1c9856b2b94f"
	I0318 04:43:45.216801   17465 logs.go:123] Gathering logs for storage-provisioner [a247b21e5185] ...
	I0318 04:43:45.216811   17465 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a247b21e5185"
	I0318 04:43:45.228118   17465 logs.go:123] Gathering logs for kube-apiserver [0a2982ffb84e] ...
	I0318 04:43:45.228131   17465 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0a2982ffb84e"
	I0318 04:43:45.242159   17465 logs.go:123] Gathering logs for etcd [704a79c3c784] ...
	I0318 04:43:45.242171   17465 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 704a79c3c784"
	I0318 04:43:45.256225   17465 logs.go:123] Gathering logs for coredns [828d4a376c7e] ...
	I0318 04:43:45.256236   17465 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 828d4a376c7e"
	I0318 04:43:45.268208   17465 logs.go:123] Gathering logs for kube-scheduler [e9fc948a1004] ...
	I0318 04:43:45.268221   17465 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e9fc948a1004"
	I0318 04:43:45.283256   17465 logs.go:123] Gathering logs for kube-proxy [894142fdaac1] ...
	I0318 04:43:45.283267   17465 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 894142fdaac1"
	I0318 04:43:47.797669   17465 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0318 04:43:52.800143   17465 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0318 04:43:52.800633   17465 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0318 04:43:52.841734   17465 logs.go:276] 1 containers: [0a2982ffb84e]
	I0318 04:43:52.841865   17465 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0318 04:43:52.864394   17465 logs.go:276] 1 containers: [704a79c3c784]
	I0318 04:43:52.864509   17465 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0318 04:43:52.879817   17465 logs.go:276] 2 containers: [fe69be91e435 828d4a376c7e]
	I0318 04:43:52.879897   17465 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0318 04:43:52.893887   17465 logs.go:276] 1 containers: [e9fc948a1004]
	I0318 04:43:52.893963   17465 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0318 04:43:52.904653   17465 logs.go:276] 1 containers: [894142fdaac1]
	I0318 04:43:52.904721   17465 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0318 04:43:52.919994   17465 logs.go:276] 1 containers: [1c9856b2b94f]
	I0318 04:43:52.920063   17465 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0318 04:43:52.930681   17465 logs.go:276] 0 containers: []
	W0318 04:43:52.930693   17465 logs.go:278] No container was found matching "kindnet"
	I0318 04:43:52.930754   17465 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0318 04:43:52.941133   17465 logs.go:276] 1 containers: [a247b21e5185]
	I0318 04:43:52.941148   17465 logs.go:123] Gathering logs for coredns [fe69be91e435] ...
	I0318 04:43:52.941154   17465 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fe69be91e435"
	I0318 04:43:52.952567   17465 logs.go:123] Gathering logs for kube-scheduler [e9fc948a1004] ...
	I0318 04:43:52.952579   17465 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e9fc948a1004"
	I0318 04:43:52.968194   17465 logs.go:123] Gathering logs for kube-controller-manager [1c9856b2b94f] ...
	I0318 04:43:52.968205   17465 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1c9856b2b94f"
	I0318 04:43:52.985828   17465 logs.go:123] Gathering logs for Docker ...
	I0318 04:43:52.985836   17465 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0318 04:43:53.010559   17465 logs.go:123] Gathering logs for dmesg ...
	I0318 04:43:53.010566   17465 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 04:43:53.014954   17465 logs.go:123] Gathering logs for kube-apiserver [0a2982ffb84e] ...
	I0318 04:43:53.014961   17465 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0a2982ffb84e"
	I0318 04:43:53.030007   17465 logs.go:123] Gathering logs for etcd [704a79c3c784] ...
	I0318 04:43:53.030017   17465 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 704a79c3c784"
	I0318 04:43:53.043909   17465 logs.go:123] Gathering logs for coredns [828d4a376c7e] ...
	I0318 04:43:53.043920   17465 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 828d4a376c7e"
	I0318 04:43:53.055487   17465 logs.go:123] Gathering logs for kube-proxy [894142fdaac1] ...
	I0318 04:43:53.055500   17465 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 894142fdaac1"
	I0318 04:43:53.066806   17465 logs.go:123] Gathering logs for storage-provisioner [a247b21e5185] ...
	I0318 04:43:53.066820   17465 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a247b21e5185"
	I0318 04:43:53.078650   17465 logs.go:123] Gathering logs for container status ...
	I0318 04:43:53.078663   17465 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 04:43:53.091812   17465 logs.go:123] Gathering logs for kubelet ...
	I0318 04:43:53.091824   17465 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 04:43:53.125974   17465 logs.go:123] Gathering logs for describe nodes ...
	I0318 04:43:53.125984   17465 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0318 04:43:55.661395   17465 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0318 04:44:00.662247   17465 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0318 04:44:00.662709   17465 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0318 04:44:00.700678   17465 logs.go:276] 1 containers: [0a2982ffb84e]
	I0318 04:44:00.700802   17465 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0318 04:44:00.721220   17465 logs.go:276] 1 containers: [704a79c3c784]
	I0318 04:44:00.721335   17465 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0318 04:44:00.737793   17465 logs.go:276] 4 containers: [43b9c6cc9a8a 0d30a592f036 fe69be91e435 828d4a376c7e]
	I0318 04:44:00.737892   17465 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0318 04:44:00.750361   17465 logs.go:276] 1 containers: [e9fc948a1004]
	I0318 04:44:00.750430   17465 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0318 04:44:00.766566   17465 logs.go:276] 1 containers: [894142fdaac1]
	I0318 04:44:00.766634   17465 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0318 04:44:00.777316   17465 logs.go:276] 1 containers: [1c9856b2b94f]
	I0318 04:44:00.777381   17465 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0318 04:44:00.787174   17465 logs.go:276] 0 containers: []
	W0318 04:44:00.787184   17465 logs.go:278] No container was found matching "kindnet"
	I0318 04:44:00.787233   17465 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0318 04:44:00.797533   17465 logs.go:276] 1 containers: [a247b21e5185]
	I0318 04:44:00.797554   17465 logs.go:123] Gathering logs for kubelet ...
	I0318 04:44:00.797558   17465 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 04:44:00.831897   17465 logs.go:123] Gathering logs for coredns [828d4a376c7e] ...
	I0318 04:44:00.831904   17465 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 828d4a376c7e"
	I0318 04:44:00.843626   17465 logs.go:123] Gathering logs for kube-controller-manager [1c9856b2b94f] ...
	I0318 04:44:00.843638   17465 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1c9856b2b94f"
	I0318 04:44:00.863521   17465 logs.go:123] Gathering logs for storage-provisioner [a247b21e5185] ...
	I0318 04:44:00.863531   17465 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a247b21e5185"
	I0318 04:44:00.875188   17465 logs.go:123] Gathering logs for container status ...
	I0318 04:44:00.875199   17465 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 04:44:00.886668   17465 logs.go:123] Gathering logs for kube-apiserver [0a2982ffb84e] ...
	I0318 04:44:00.886678   17465 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0a2982ffb84e"
	I0318 04:44:00.901184   17465 logs.go:123] Gathering logs for Docker ...
	I0318 04:44:00.901194   17465 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0318 04:44:00.925702   17465 logs.go:123] Gathering logs for describe nodes ...
	I0318 04:44:00.925712   17465 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0318 04:44:00.965160   17465 logs.go:123] Gathering logs for etcd [704a79c3c784] ...
	I0318 04:44:00.965172   17465 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 704a79c3c784"
	I0318 04:44:00.980621   17465 logs.go:123] Gathering logs for coredns [43b9c6cc9a8a] ...
	I0318 04:44:00.980632   17465 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 43b9c6cc9a8a"
	I0318 04:44:00.991843   17465 logs.go:123] Gathering logs for coredns [0d30a592f036] ...
	I0318 04:44:00.991854   17465 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0d30a592f036"
	I0318 04:44:01.002971   17465 logs.go:123] Gathering logs for coredns [fe69be91e435] ...
	I0318 04:44:01.002981   17465 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fe69be91e435"
	I0318 04:44:01.014838   17465 logs.go:123] Gathering logs for kube-scheduler [e9fc948a1004] ...
	I0318 04:44:01.014848   17465 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e9fc948a1004"
	I0318 04:44:01.032675   17465 logs.go:123] Gathering logs for kube-proxy [894142fdaac1] ...
	I0318 04:44:01.032687   17465 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 894142fdaac1"
	I0318 04:44:01.043966   17465 logs.go:123] Gathering logs for dmesg ...
	I0318 04:44:01.043975   17465 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 04:44:03.550181   17465 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0318 04:44:08.552646   17465 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0318 04:44:08.552978   17465 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0318 04:44:08.594012   17465 logs.go:276] 1 containers: [0a2982ffb84e]
	I0318 04:44:08.594164   17465 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0318 04:44:08.616883   17465 logs.go:276] 1 containers: [704a79c3c784]
	I0318 04:44:08.616991   17465 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0318 04:44:08.632477   17465 logs.go:276] 4 containers: [43b9c6cc9a8a 0d30a592f036 fe69be91e435 828d4a376c7e]
	I0318 04:44:08.632565   17465 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0318 04:44:08.647595   17465 logs.go:276] 1 containers: [e9fc948a1004]
	I0318 04:44:08.647662   17465 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0318 04:44:08.658290   17465 logs.go:276] 1 containers: [894142fdaac1]
	I0318 04:44:08.658356   17465 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0318 04:44:08.669039   17465 logs.go:276] 1 containers: [1c9856b2b94f]
	I0318 04:44:08.669105   17465 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0318 04:44:08.679583   17465 logs.go:276] 0 containers: []
	W0318 04:44:08.679596   17465 logs.go:278] No container was found matching "kindnet"
	I0318 04:44:08.679656   17465 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0318 04:44:08.700734   17465 logs.go:276] 1 containers: [a247b21e5185]
	I0318 04:44:08.700752   17465 logs.go:123] Gathering logs for coredns [828d4a376c7e] ...
	I0318 04:44:08.700757   17465 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 828d4a376c7e"
	I0318 04:44:08.712474   17465 logs.go:123] Gathering logs for kube-controller-manager [1c9856b2b94f] ...
	I0318 04:44:08.712485   17465 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1c9856b2b94f"
	I0318 04:44:08.737878   17465 logs.go:123] Gathering logs for storage-provisioner [a247b21e5185] ...
	I0318 04:44:08.737886   17465 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a247b21e5185"
	I0318 04:44:08.749596   17465 logs.go:123] Gathering logs for kubelet ...
	I0318 04:44:08.749606   17465 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 04:44:08.783012   17465 logs.go:123] Gathering logs for coredns [fe69be91e435] ...
	I0318 04:44:08.783020   17465 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fe69be91e435"
	I0318 04:44:08.794751   17465 logs.go:123] Gathering logs for Docker ...
	I0318 04:44:08.794760   17465 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0318 04:44:08.819904   17465 logs.go:123] Gathering logs for container status ...
	I0318 04:44:08.819911   17465 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 04:44:08.833311   17465 logs.go:123] Gathering logs for coredns [43b9c6cc9a8a] ...
	I0318 04:44:08.833322   17465 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 43b9c6cc9a8a"
	I0318 04:44:08.844592   17465 logs.go:123] Gathering logs for coredns [0d30a592f036] ...
	I0318 04:44:08.844606   17465 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0d30a592f036"
	I0318 04:44:08.855713   17465 logs.go:123] Gathering logs for etcd [704a79c3c784] ...
	I0318 04:44:08.855722   17465 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 704a79c3c784"
	I0318 04:44:08.869137   17465 logs.go:123] Gathering logs for kube-scheduler [e9fc948a1004] ...
	I0318 04:44:08.869148   17465 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e9fc948a1004"
	I0318 04:44:08.884473   17465 logs.go:123] Gathering logs for kube-proxy [894142fdaac1] ...
	I0318 04:44:08.884484   17465 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 894142fdaac1"
	I0318 04:44:08.897120   17465 logs.go:123] Gathering logs for dmesg ...
	I0318 04:44:08.897130   17465 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 04:44:08.901387   17465 logs.go:123] Gathering logs for describe nodes ...
	I0318 04:44:08.901397   17465 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0318 04:44:08.936416   17465 logs.go:123] Gathering logs for kube-apiserver [0a2982ffb84e] ...
	I0318 04:44:08.936429   17465 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0a2982ffb84e"
	I0318 04:44:11.452952   17465 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0318 04:44:16.455573   17465 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0318 04:44:16.456054   17465 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0318 04:44:16.496950   17465 logs.go:276] 1 containers: [0a2982ffb84e]
	I0318 04:44:16.497074   17465 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0318 04:44:16.519901   17465 logs.go:276] 1 containers: [704a79c3c784]
	I0318 04:44:16.520011   17465 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0318 04:44:16.535562   17465 logs.go:276] 4 containers: [43b9c6cc9a8a 0d30a592f036 fe69be91e435 828d4a376c7e]
	I0318 04:44:16.535644   17465 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0318 04:44:16.548544   17465 logs.go:276] 1 containers: [e9fc948a1004]
	I0318 04:44:16.548619   17465 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0318 04:44:16.559635   17465 logs.go:276] 1 containers: [894142fdaac1]
	I0318 04:44:16.559707   17465 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0318 04:44:16.570654   17465 logs.go:276] 1 containers: [1c9856b2b94f]
	I0318 04:44:16.570716   17465 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0318 04:44:16.580949   17465 logs.go:276] 0 containers: []
	W0318 04:44:16.580960   17465 logs.go:278] No container was found matching "kindnet"
	I0318 04:44:16.581014   17465 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0318 04:44:16.591454   17465 logs.go:276] 1 containers: [a247b21e5185]
	I0318 04:44:16.591470   17465 logs.go:123] Gathering logs for kubelet ...
	I0318 04:44:16.591476   17465 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 04:44:16.625244   17465 logs.go:123] Gathering logs for dmesg ...
	I0318 04:44:16.625255   17465 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 04:44:16.629824   17465 logs.go:123] Gathering logs for coredns [0d30a592f036] ...
	I0318 04:44:16.629832   17465 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0d30a592f036"
	I0318 04:44:16.640944   17465 logs.go:123] Gathering logs for coredns [828d4a376c7e] ...
	I0318 04:44:16.640955   17465 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 828d4a376c7e"
	I0318 04:44:16.652945   17465 logs.go:123] Gathering logs for kube-scheduler [e9fc948a1004] ...
	I0318 04:44:16.652956   17465 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e9fc948a1004"
	I0318 04:44:16.669652   17465 logs.go:123] Gathering logs for container status ...
	I0318 04:44:16.669664   17465 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 04:44:16.683626   17465 logs.go:123] Gathering logs for kube-apiserver [0a2982ffb84e] ...
	I0318 04:44:16.683636   17465 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0a2982ffb84e"
	I0318 04:44:16.698333   17465 logs.go:123] Gathering logs for etcd [704a79c3c784] ...
	I0318 04:44:16.698344   17465 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 704a79c3c784"
	I0318 04:44:16.712710   17465 logs.go:123] Gathering logs for kube-controller-manager [1c9856b2b94f] ...
	I0318 04:44:16.712722   17465 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1c9856b2b94f"
	I0318 04:44:16.733267   17465 logs.go:123] Gathering logs for describe nodes ...
	I0318 04:44:16.733275   17465 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0318 04:44:16.767952   17465 logs.go:123] Gathering logs for kube-proxy [894142fdaac1] ...
	I0318 04:44:16.767965   17465 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 894142fdaac1"
	I0318 04:44:16.779759   17465 logs.go:123] Gathering logs for coredns [43b9c6cc9a8a] ...
	I0318 04:44:16.779772   17465 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 43b9c6cc9a8a"
	I0318 04:44:16.791166   17465 logs.go:123] Gathering logs for coredns [fe69be91e435] ...
	I0318 04:44:16.791179   17465 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fe69be91e435"
	I0318 04:44:16.802602   17465 logs.go:123] Gathering logs for storage-provisioner [a247b21e5185] ...
	I0318 04:44:16.802616   17465 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a247b21e5185"
	I0318 04:44:16.814067   17465 logs.go:123] Gathering logs for Docker ...
	I0318 04:44:16.814080   17465 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0318 04:44:19.340501   17465 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0318 04:44:24.342777   17465 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0318 04:44:24.343006   17465 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0318 04:44:24.370254   17465 logs.go:276] 1 containers: [0a2982ffb84e]
	I0318 04:44:24.370373   17465 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0318 04:44:24.388040   17465 logs.go:276] 1 containers: [704a79c3c784]
	I0318 04:44:24.388122   17465 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0318 04:44:24.401722   17465 logs.go:276] 4 containers: [43b9c6cc9a8a 0d30a592f036 fe69be91e435 828d4a376c7e]
	I0318 04:44:24.401796   17465 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0318 04:44:24.413627   17465 logs.go:276] 1 containers: [e9fc948a1004]
	I0318 04:44:24.413692   17465 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0318 04:44:24.424014   17465 logs.go:276] 1 containers: [894142fdaac1]
	I0318 04:44:24.424077   17465 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0318 04:44:24.434304   17465 logs.go:276] 1 containers: [1c9856b2b94f]
	I0318 04:44:24.434364   17465 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0318 04:44:24.444315   17465 logs.go:276] 0 containers: []
	W0318 04:44:24.444326   17465 logs.go:278] No container was found matching "kindnet"
	I0318 04:44:24.444384   17465 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0318 04:44:24.454668   17465 logs.go:276] 1 containers: [a247b21e5185]
	I0318 04:44:24.454687   17465 logs.go:123] Gathering logs for kube-proxy [894142fdaac1] ...
	I0318 04:44:24.454693   17465 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 894142fdaac1"
	I0318 04:44:24.466585   17465 logs.go:123] Gathering logs for kube-controller-manager [1c9856b2b94f] ...
	I0318 04:44:24.466597   17465 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1c9856b2b94f"
	I0318 04:44:24.483741   17465 logs.go:123] Gathering logs for Docker ...
	I0318 04:44:24.483751   17465 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0318 04:44:24.507269   17465 logs.go:123] Gathering logs for kubelet ...
	I0318 04:44:24.507276   17465 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 04:44:24.540922   17465 logs.go:123] Gathering logs for etcd [704a79c3c784] ...
	I0318 04:44:24.540930   17465 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 704a79c3c784"
	I0318 04:44:24.554013   17465 logs.go:123] Gathering logs for coredns [fe69be91e435] ...
	I0318 04:44:24.554023   17465 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fe69be91e435"
	I0318 04:44:24.565558   17465 logs.go:123] Gathering logs for kube-scheduler [e9fc948a1004] ...
	I0318 04:44:24.565568   17465 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e9fc948a1004"
	I0318 04:44:24.580262   17465 logs.go:123] Gathering logs for storage-provisioner [a247b21e5185] ...
	I0318 04:44:24.580272   17465 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a247b21e5185"
	I0318 04:44:24.591410   17465 logs.go:123] Gathering logs for container status ...
	I0318 04:44:24.591419   17465 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 04:44:24.602747   17465 logs.go:123] Gathering logs for dmesg ...
	I0318 04:44:24.602758   17465 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 04:44:24.607505   17465 logs.go:123] Gathering logs for kube-apiserver [0a2982ffb84e] ...
	I0318 04:44:24.607515   17465 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0a2982ffb84e"
	I0318 04:44:24.621814   17465 logs.go:123] Gathering logs for coredns [43b9c6cc9a8a] ...
	I0318 04:44:24.621824   17465 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 43b9c6cc9a8a"
	I0318 04:44:24.633181   17465 logs.go:123] Gathering logs for coredns [828d4a376c7e] ...
	I0318 04:44:24.633191   17465 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 828d4a376c7e"
	I0318 04:44:24.646308   17465 logs.go:123] Gathering logs for describe nodes ...
	I0318 04:44:24.646317   17465 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0318 04:44:24.681855   17465 logs.go:123] Gathering logs for coredns [0d30a592f036] ...
	I0318 04:44:24.681867   17465 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0d30a592f036"
	I0318 04:44:27.195002   17465 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0318 04:44:32.196500   17465 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0318 04:44:32.196881   17465 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0318 04:44:32.235979   17465 logs.go:276] 1 containers: [0a2982ffb84e]
	I0318 04:44:32.236114   17465 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0318 04:44:32.273767   17465 logs.go:276] 1 containers: [704a79c3c784]
	I0318 04:44:32.273843   17465 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0318 04:44:32.286462   17465 logs.go:276] 4 containers: [43b9c6cc9a8a 0d30a592f036 fe69be91e435 828d4a376c7e]
	I0318 04:44:32.286561   17465 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0318 04:44:32.297574   17465 logs.go:276] 1 containers: [e9fc948a1004]
	I0318 04:44:32.297636   17465 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0318 04:44:32.309567   17465 logs.go:276] 1 containers: [894142fdaac1]
	I0318 04:44:32.309625   17465 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0318 04:44:32.322880   17465 logs.go:276] 1 containers: [1c9856b2b94f]
	I0318 04:44:32.322957   17465 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0318 04:44:32.332777   17465 logs.go:276] 0 containers: []
	W0318 04:44:32.332789   17465 logs.go:278] No container was found matching "kindnet"
	I0318 04:44:32.332845   17465 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0318 04:44:32.343462   17465 logs.go:276] 1 containers: [a247b21e5185]
	I0318 04:44:32.343480   17465 logs.go:123] Gathering logs for etcd [704a79c3c784] ...
	I0318 04:44:32.343500   17465 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 704a79c3c784"
	I0318 04:44:32.358912   17465 logs.go:123] Gathering logs for coredns [fe69be91e435] ...
	I0318 04:44:32.358926   17465 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fe69be91e435"
	I0318 04:44:32.370427   17465 logs.go:123] Gathering logs for describe nodes ...
	I0318 04:44:32.370448   17465 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0318 04:44:32.405282   17465 logs.go:123] Gathering logs for coredns [828d4a376c7e] ...
	I0318 04:44:32.405297   17465 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 828d4a376c7e"
	I0318 04:44:32.417045   17465 logs.go:123] Gathering logs for Docker ...
	I0318 04:44:32.417057   17465 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0318 04:44:32.440470   17465 logs.go:123] Gathering logs for dmesg ...
	I0318 04:44:32.440479   17465 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 04:44:32.444326   17465 logs.go:123] Gathering logs for coredns [43b9c6cc9a8a] ...
	I0318 04:44:32.444335   17465 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 43b9c6cc9a8a"
	I0318 04:44:32.455788   17465 logs.go:123] Gathering logs for kube-controller-manager [1c9856b2b94f] ...
	I0318 04:44:32.455799   17465 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1c9856b2b94f"
	I0318 04:44:32.473655   17465 logs.go:123] Gathering logs for storage-provisioner [a247b21e5185] ...
	I0318 04:44:32.473666   17465 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a247b21e5185"
	I0318 04:44:32.489858   17465 logs.go:123] Gathering logs for kube-proxy [894142fdaac1] ...
	I0318 04:44:32.489866   17465 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 894142fdaac1"
	I0318 04:44:32.501465   17465 logs.go:123] Gathering logs for container status ...
	I0318 04:44:32.501476   17465 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 04:44:32.513475   17465 logs.go:123] Gathering logs for kubelet ...
	I0318 04:44:32.513489   17465 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 04:44:32.548199   17465 logs.go:123] Gathering logs for kube-apiserver [0a2982ffb84e] ...
	I0318 04:44:32.548205   17465 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0a2982ffb84e"
	I0318 04:44:32.562703   17465 logs.go:123] Gathering logs for coredns [0d30a592f036] ...
	I0318 04:44:32.562713   17465 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0d30a592f036"
	I0318 04:44:32.574775   17465 logs.go:123] Gathering logs for kube-scheduler [e9fc948a1004] ...
	I0318 04:44:32.574784   17465 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e9fc948a1004"
	I0318 04:44:35.092298   17465 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0318 04:44:40.094319   17465 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0318 04:44:40.094529   17465 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0318 04:44:40.109349   17465 logs.go:276] 1 containers: [0a2982ffb84e]
	I0318 04:44:40.109429   17465 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0318 04:44:40.120997   17465 logs.go:276] 1 containers: [704a79c3c784]
	I0318 04:44:40.121067   17465 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0318 04:44:40.131665   17465 logs.go:276] 4 containers: [43b9c6cc9a8a 0d30a592f036 fe69be91e435 828d4a376c7e]
	I0318 04:44:40.131738   17465 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0318 04:44:40.142088   17465 logs.go:276] 1 containers: [e9fc948a1004]
	I0318 04:44:40.142152   17465 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0318 04:44:40.152380   17465 logs.go:276] 1 containers: [894142fdaac1]
	I0318 04:44:40.152443   17465 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0318 04:44:40.163026   17465 logs.go:276] 1 containers: [1c9856b2b94f]
	I0318 04:44:40.163092   17465 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0318 04:44:40.173790   17465 logs.go:276] 0 containers: []
	W0318 04:44:40.173801   17465 logs.go:278] No container was found matching "kindnet"
	I0318 04:44:40.173856   17465 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0318 04:44:40.211124   17465 logs.go:276] 1 containers: [a247b21e5185]
	I0318 04:44:40.211142   17465 logs.go:123] Gathering logs for describe nodes ...
	I0318 04:44:40.211148   17465 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0318 04:44:40.247426   17465 logs.go:123] Gathering logs for kube-apiserver [0a2982ffb84e] ...
	I0318 04:44:40.247437   17465 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0a2982ffb84e"
	I0318 04:44:40.261519   17465 logs.go:123] Gathering logs for dmesg ...
	I0318 04:44:40.261532   17465 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 04:44:40.266198   17465 logs.go:123] Gathering logs for coredns [43b9c6cc9a8a] ...
	I0318 04:44:40.266207   17465 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 43b9c6cc9a8a"
	I0318 04:44:40.277884   17465 logs.go:123] Gathering logs for kube-proxy [894142fdaac1] ...
	I0318 04:44:40.277898   17465 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 894142fdaac1"
	I0318 04:44:40.289734   17465 logs.go:123] Gathering logs for kube-controller-manager [1c9856b2b94f] ...
	I0318 04:44:40.289747   17465 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1c9856b2b94f"
	I0318 04:44:40.311217   17465 logs.go:123] Gathering logs for storage-provisioner [a247b21e5185] ...
	I0318 04:44:40.311229   17465 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a247b21e5185"
	I0318 04:44:40.325369   17465 logs.go:123] Gathering logs for kubelet ...
	I0318 04:44:40.325380   17465 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 04:44:40.360658   17465 logs.go:123] Gathering logs for coredns [828d4a376c7e] ...
	I0318 04:44:40.360665   17465 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 828d4a376c7e"
	I0318 04:44:40.372403   17465 logs.go:123] Gathering logs for kube-scheduler [e9fc948a1004] ...
	I0318 04:44:40.372412   17465 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e9fc948a1004"
	I0318 04:44:40.387876   17465 logs.go:123] Gathering logs for Docker ...
	I0318 04:44:40.387887   17465 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0318 04:44:40.411504   17465 logs.go:123] Gathering logs for container status ...
	I0318 04:44:40.411516   17465 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 04:44:40.423160   17465 logs.go:123] Gathering logs for etcd [704a79c3c784] ...
	I0318 04:44:40.423171   17465 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 704a79c3c784"
	I0318 04:44:40.437968   17465 logs.go:123] Gathering logs for coredns [0d30a592f036] ...
	I0318 04:44:40.437979   17465 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0d30a592f036"
	I0318 04:44:40.449142   17465 logs.go:123] Gathering logs for coredns [fe69be91e435] ...
	I0318 04:44:40.449153   17465 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fe69be91e435"
	I0318 04:44:42.963073   17465 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0318 04:44:47.965272   17465 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0318 04:44:47.965689   17465 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0318 04:44:48.002639   17465 logs.go:276] 1 containers: [0a2982ffb84e]
	I0318 04:44:48.002767   17465 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0318 04:44:48.027981   17465 logs.go:276] 1 containers: [704a79c3c784]
	I0318 04:44:48.028071   17465 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0318 04:44:48.041572   17465 logs.go:276] 4 containers: [43b9c6cc9a8a 0d30a592f036 fe69be91e435 828d4a376c7e]
	I0318 04:44:48.041651   17465 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0318 04:44:48.059589   17465 logs.go:276] 1 containers: [e9fc948a1004]
	I0318 04:44:48.059649   17465 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0318 04:44:48.070106   17465 logs.go:276] 1 containers: [894142fdaac1]
	I0318 04:44:48.070167   17465 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0318 04:44:48.080206   17465 logs.go:276] 1 containers: [1c9856b2b94f]
	I0318 04:44:48.080278   17465 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0318 04:44:48.090646   17465 logs.go:276] 0 containers: []
	W0318 04:44:48.090656   17465 logs.go:278] No container was found matching "kindnet"
	I0318 04:44:48.090715   17465 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0318 04:44:48.101324   17465 logs.go:276] 1 containers: [a247b21e5185]
	I0318 04:44:48.101343   17465 logs.go:123] Gathering logs for describe nodes ...
	I0318 04:44:48.101349   17465 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0318 04:44:48.135656   17465 logs.go:123] Gathering logs for coredns [0d30a592f036] ...
	I0318 04:44:48.135669   17465 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0d30a592f036"
	I0318 04:44:48.147619   17465 logs.go:123] Gathering logs for kube-scheduler [e9fc948a1004] ...
	I0318 04:44:48.147632   17465 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e9fc948a1004"
	I0318 04:44:48.166215   17465 logs.go:123] Gathering logs for dmesg ...
	I0318 04:44:48.166232   17465 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 04:44:48.170892   17465 logs.go:123] Gathering logs for kube-proxy [894142fdaac1] ...
	I0318 04:44:48.170897   17465 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 894142fdaac1"
	I0318 04:44:48.182088   17465 logs.go:123] Gathering logs for storage-provisioner [a247b21e5185] ...
	I0318 04:44:48.182101   17465 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a247b21e5185"
	I0318 04:44:48.193412   17465 logs.go:123] Gathering logs for coredns [828d4a376c7e] ...
	I0318 04:44:48.193421   17465 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 828d4a376c7e"
	I0318 04:44:48.205135   17465 logs.go:123] Gathering logs for coredns [43b9c6cc9a8a] ...
	I0318 04:44:48.205149   17465 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 43b9c6cc9a8a"
	I0318 04:44:48.216521   17465 logs.go:123] Gathering logs for kube-apiserver [0a2982ffb84e] ...
	I0318 04:44:48.216534   17465 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0a2982ffb84e"
	I0318 04:44:48.230405   17465 logs.go:123] Gathering logs for etcd [704a79c3c784] ...
	I0318 04:44:48.230418   17465 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 704a79c3c784"
	I0318 04:44:48.244181   17465 logs.go:123] Gathering logs for coredns [fe69be91e435] ...
	I0318 04:44:48.244190   17465 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fe69be91e435"
	I0318 04:44:48.255946   17465 logs.go:123] Gathering logs for kube-controller-manager [1c9856b2b94f] ...
	I0318 04:44:48.255957   17465 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1c9856b2b94f"
	I0318 04:44:48.274031   17465 logs.go:123] Gathering logs for Docker ...
	I0318 04:44:48.274043   17465 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0318 04:44:48.300260   17465 logs.go:123] Gathering logs for container status ...
	I0318 04:44:48.300269   17465 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 04:44:48.312360   17465 logs.go:123] Gathering logs for kubelet ...
	I0318 04:44:48.312370   17465 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 04:44:50.848813   17465 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0318 04:44:55.851374   17465 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0318 04:44:55.851655   17465 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0318 04:44:55.877721   17465 logs.go:276] 1 containers: [0a2982ffb84e]
	I0318 04:44:55.877818   17465 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0318 04:44:55.897514   17465 logs.go:276] 1 containers: [704a79c3c784]
	I0318 04:44:55.897589   17465 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0318 04:44:55.909438   17465 logs.go:276] 4 containers: [43b9c6cc9a8a 0d30a592f036 fe69be91e435 828d4a376c7e]
	I0318 04:44:55.909515   17465 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0318 04:44:55.919845   17465 logs.go:276] 1 containers: [e9fc948a1004]
	I0318 04:44:55.919925   17465 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0318 04:44:55.930575   17465 logs.go:276] 1 containers: [894142fdaac1]
	I0318 04:44:55.930644   17465 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0318 04:44:55.942377   17465 logs.go:276] 1 containers: [1c9856b2b94f]
	I0318 04:44:55.942443   17465 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0318 04:44:55.952650   17465 logs.go:276] 0 containers: []
	W0318 04:44:55.952660   17465 logs.go:278] No container was found matching "kindnet"
	I0318 04:44:55.952716   17465 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0318 04:44:55.963349   17465 logs.go:276] 1 containers: [a247b21e5185]
	I0318 04:44:55.963367   17465 logs.go:123] Gathering logs for describe nodes ...
	I0318 04:44:55.963372   17465 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0318 04:44:56.003531   17465 logs.go:123] Gathering logs for coredns [fe69be91e435] ...
	I0318 04:44:56.003542   17465 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fe69be91e435"
	I0318 04:44:56.014818   17465 logs.go:123] Gathering logs for storage-provisioner [a247b21e5185] ...
	I0318 04:44:56.014832   17465 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a247b21e5185"
	I0318 04:44:56.032543   17465 logs.go:123] Gathering logs for Docker ...
	I0318 04:44:56.032553   17465 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0318 04:44:56.057302   17465 logs.go:123] Gathering logs for container status ...
	I0318 04:44:56.057312   17465 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 04:44:56.069100   17465 logs.go:123] Gathering logs for dmesg ...
	I0318 04:44:56.069111   17465 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 04:44:56.074001   17465 logs.go:123] Gathering logs for kube-apiserver [0a2982ffb84e] ...
	I0318 04:44:56.074011   17465 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0a2982ffb84e"
	I0318 04:44:56.091468   17465 logs.go:123] Gathering logs for etcd [704a79c3c784] ...
	I0318 04:44:56.091480   17465 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 704a79c3c784"
	I0318 04:44:56.105107   17465 logs.go:123] Gathering logs for kube-controller-manager [1c9856b2b94f] ...
	I0318 04:44:56.105119   17465 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1c9856b2b94f"
	I0318 04:44:56.122514   17465 logs.go:123] Gathering logs for coredns [43b9c6cc9a8a] ...
	I0318 04:44:56.122525   17465 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 43b9c6cc9a8a"
	I0318 04:44:56.134753   17465 logs.go:123] Gathering logs for coredns [0d30a592f036] ...
	I0318 04:44:56.134766   17465 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0d30a592f036"
	I0318 04:44:56.152677   17465 logs.go:123] Gathering logs for kube-scheduler [e9fc948a1004] ...
	I0318 04:44:56.152687   17465 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e9fc948a1004"
	I0318 04:44:56.175460   17465 logs.go:123] Gathering logs for kubelet ...
	I0318 04:44:56.175473   17465 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 04:44:56.209082   17465 logs.go:123] Gathering logs for coredns [828d4a376c7e] ...
	I0318 04:44:56.209090   17465 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 828d4a376c7e"
	I0318 04:44:56.220765   17465 logs.go:123] Gathering logs for kube-proxy [894142fdaac1] ...
	I0318 04:44:56.220778   17465 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 894142fdaac1"
	I0318 04:44:58.733904   17465 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0318 04:45:03.735959   17465 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0318 04:45:03.736387   17465 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0318 04:45:03.771638   17465 logs.go:276] 1 containers: [0a2982ffb84e]
	I0318 04:45:03.771770   17465 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0318 04:45:03.794212   17465 logs.go:276] 1 containers: [704a79c3c784]
	I0318 04:45:03.794308   17465 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0318 04:45:03.808958   17465 logs.go:276] 4 containers: [43b9c6cc9a8a 0d30a592f036 fe69be91e435 828d4a376c7e]
	I0318 04:45:03.809042   17465 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0318 04:45:03.821769   17465 logs.go:276] 1 containers: [e9fc948a1004]
	I0318 04:45:03.821833   17465 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0318 04:45:03.832894   17465 logs.go:276] 1 containers: [894142fdaac1]
	I0318 04:45:03.832962   17465 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0318 04:45:03.843925   17465 logs.go:276] 1 containers: [1c9856b2b94f]
	I0318 04:45:03.843989   17465 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0318 04:45:03.854207   17465 logs.go:276] 0 containers: []
	W0318 04:45:03.854218   17465 logs.go:278] No container was found matching "kindnet"
	I0318 04:45:03.854273   17465 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0318 04:45:03.864777   17465 logs.go:276] 1 containers: [a247b21e5185]
	I0318 04:45:03.864796   17465 logs.go:123] Gathering logs for coredns [828d4a376c7e] ...
	I0318 04:45:03.864801   17465 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 828d4a376c7e"
	I0318 04:45:03.876468   17465 logs.go:123] Gathering logs for kube-controller-manager [1c9856b2b94f] ...
	I0318 04:45:03.876481   17465 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1c9856b2b94f"
	I0318 04:45:03.896376   17465 logs.go:123] Gathering logs for Docker ...
	I0318 04:45:03.896386   17465 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0318 04:45:03.920706   17465 logs.go:123] Gathering logs for container status ...
	I0318 04:45:03.920715   17465 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 04:45:03.933552   17465 logs.go:123] Gathering logs for kubelet ...
	I0318 04:45:03.933565   17465 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 04:45:03.969914   17465 logs.go:123] Gathering logs for kube-apiserver [0a2982ffb84e] ...
	I0318 04:45:03.969923   17465 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0a2982ffb84e"
	I0318 04:45:03.984628   17465 logs.go:123] Gathering logs for coredns [fe69be91e435] ...
	I0318 04:45:03.984639   17465 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fe69be91e435"
	I0318 04:45:03.997127   17465 logs.go:123] Gathering logs for describe nodes ...
	I0318 04:45:03.997138   17465 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0318 04:45:04.032893   17465 logs.go:123] Gathering logs for etcd [704a79c3c784] ...
	I0318 04:45:04.032908   17465 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 704a79c3c784"
	I0318 04:45:04.047382   17465 logs.go:123] Gathering logs for coredns [43b9c6cc9a8a] ...
	I0318 04:45:04.047391   17465 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 43b9c6cc9a8a"
	I0318 04:45:04.059058   17465 logs.go:123] Gathering logs for kube-scheduler [e9fc948a1004] ...
	I0318 04:45:04.059069   17465 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e9fc948a1004"
	I0318 04:45:04.074399   17465 logs.go:123] Gathering logs for storage-provisioner [a247b21e5185] ...
	I0318 04:45:04.074413   17465 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a247b21e5185"
	I0318 04:45:04.086471   17465 logs.go:123] Gathering logs for dmesg ...
	I0318 04:45:04.086483   17465 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 04:45:04.090868   17465 logs.go:123] Gathering logs for coredns [0d30a592f036] ...
	I0318 04:45:04.090877   17465 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0d30a592f036"
	I0318 04:45:04.102800   17465 logs.go:123] Gathering logs for kube-proxy [894142fdaac1] ...
	I0318 04:45:04.102808   17465 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 894142fdaac1"
	I0318 04:45:06.617313   17465 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0318 04:45:11.619674   17465 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0318 04:45:11.619756   17465 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0318 04:45:11.631738   17465 logs.go:276] 1 containers: [0a2982ffb84e]
	I0318 04:45:11.631814   17465 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0318 04:45:11.642809   17465 logs.go:276] 1 containers: [704a79c3c784]
	I0318 04:45:11.642866   17465 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0318 04:45:11.655539   17465 logs.go:276] 4 containers: [43b9c6cc9a8a 0d30a592f036 fe69be91e435 828d4a376c7e]
	I0318 04:45:11.655596   17465 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0318 04:45:11.667740   17465 logs.go:276] 1 containers: [e9fc948a1004]
	I0318 04:45:11.667803   17465 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0318 04:45:11.679710   17465 logs.go:276] 1 containers: [894142fdaac1]
	I0318 04:45:11.679778   17465 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0318 04:45:11.700956   17465 logs.go:276] 1 containers: [1c9856b2b94f]
	I0318 04:45:11.701025   17465 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0318 04:45:11.712797   17465 logs.go:276] 0 containers: []
	W0318 04:45:11.712809   17465 logs.go:278] No container was found matching "kindnet"
	I0318 04:45:11.712884   17465 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0318 04:45:11.724308   17465 logs.go:276] 1 containers: [a247b21e5185]
	I0318 04:45:11.724329   17465 logs.go:123] Gathering logs for dmesg ...
	I0318 04:45:11.724335   17465 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 04:45:11.729113   17465 logs.go:123] Gathering logs for kube-scheduler [e9fc948a1004] ...
	I0318 04:45:11.729125   17465 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e9fc948a1004"
	I0318 04:45:11.745760   17465 logs.go:123] Gathering logs for kube-controller-manager [1c9856b2b94f] ...
	I0318 04:45:11.745772   17465 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1c9856b2b94f"
	I0318 04:45:11.763682   17465 logs.go:123] Gathering logs for container status ...
	I0318 04:45:11.763691   17465 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 04:45:11.775610   17465 logs.go:123] Gathering logs for etcd [704a79c3c784] ...
	I0318 04:45:11.775621   17465 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 704a79c3c784"
	I0318 04:45:11.791628   17465 logs.go:123] Gathering logs for coredns [0d30a592f036] ...
	I0318 04:45:11.791654   17465 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0d30a592f036"
	I0318 04:45:11.804957   17465 logs.go:123] Gathering logs for coredns [828d4a376c7e] ...
	I0318 04:45:11.804966   17465 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 828d4a376c7e"
	I0318 04:45:11.816877   17465 logs.go:123] Gathering logs for storage-provisioner [a247b21e5185] ...
	I0318 04:45:11.816888   17465 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a247b21e5185"
	I0318 04:45:11.829164   17465 logs.go:123] Gathering logs for Docker ...
	I0318 04:45:11.829174   17465 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0318 04:45:11.852413   17465 logs.go:123] Gathering logs for describe nodes ...
	I0318 04:45:11.852421   17465 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0318 04:45:11.889297   17465 logs.go:123] Gathering logs for coredns [43b9c6cc9a8a] ...
	I0318 04:45:11.889307   17465 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 43b9c6cc9a8a"
	I0318 04:45:11.901047   17465 logs.go:123] Gathering logs for kube-proxy [894142fdaac1] ...
	I0318 04:45:11.901058   17465 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 894142fdaac1"
	I0318 04:45:11.913855   17465 logs.go:123] Gathering logs for kubelet ...
	I0318 04:45:11.913865   17465 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 04:45:11.948985   17465 logs.go:123] Gathering logs for kube-apiserver [0a2982ffb84e] ...
	I0318 04:45:11.949006   17465 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0a2982ffb84e"
	I0318 04:45:11.965076   17465 logs.go:123] Gathering logs for coredns [fe69be91e435] ...
	I0318 04:45:11.965092   17465 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fe69be91e435"
	I0318 04:45:14.480915   17465 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0318 04:45:19.482499   17465 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0318 04:45:19.482575   17465 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0318 04:45:19.493793   17465 logs.go:276] 1 containers: [0a2982ffb84e]
	I0318 04:45:19.493863   17465 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0318 04:45:19.505756   17465 logs.go:276] 1 containers: [704a79c3c784]
	I0318 04:45:19.505800   17465 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0318 04:45:19.517319   17465 logs.go:276] 4 containers: [43b9c6cc9a8a 0d30a592f036 fe69be91e435 828d4a376c7e]
	I0318 04:45:19.517381   17465 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0318 04:45:19.529018   17465 logs.go:276] 1 containers: [e9fc948a1004]
	I0318 04:45:19.529074   17465 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0318 04:45:19.543389   17465 logs.go:276] 1 containers: [894142fdaac1]
	I0318 04:45:19.543471   17465 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0318 04:45:19.554526   17465 logs.go:276] 1 containers: [1c9856b2b94f]
	I0318 04:45:19.554594   17465 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0318 04:45:19.568561   17465 logs.go:276] 0 containers: []
	W0318 04:45:19.568572   17465 logs.go:278] No container was found matching "kindnet"
	I0318 04:45:19.568628   17465 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0318 04:45:19.578705   17465 logs.go:276] 1 containers: [a247b21e5185]
	I0318 04:45:19.578724   17465 logs.go:123] Gathering logs for coredns [828d4a376c7e] ...
	I0318 04:45:19.578730   17465 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 828d4a376c7e"
	I0318 04:45:19.590293   17465 logs.go:123] Gathering logs for kube-scheduler [e9fc948a1004] ...
	I0318 04:45:19.590304   17465 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e9fc948a1004"
	I0318 04:45:19.605497   17465 logs.go:123] Gathering logs for kube-controller-manager [1c9856b2b94f] ...
	I0318 04:45:19.605505   17465 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1c9856b2b94f"
	I0318 04:45:19.623643   17465 logs.go:123] Gathering logs for storage-provisioner [a247b21e5185] ...
	I0318 04:45:19.623652   17465 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a247b21e5185"
	I0318 04:45:19.635177   17465 logs.go:123] Gathering logs for kubelet ...
	I0318 04:45:19.635187   17465 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 04:45:19.670989   17465 logs.go:123] Gathering logs for describe nodes ...
	I0318 04:45:19.671007   17465 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0318 04:45:19.708993   17465 logs.go:123] Gathering logs for etcd [704a79c3c784] ...
	I0318 04:45:19.709005   17465 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 704a79c3c784"
	I0318 04:45:19.723093   17465 logs.go:123] Gathering logs for coredns [fe69be91e435] ...
	I0318 04:45:19.723111   17465 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fe69be91e435"
	I0318 04:45:19.734542   17465 logs.go:123] Gathering logs for Docker ...
	I0318 04:45:19.734555   17465 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0318 04:45:19.758719   17465 logs.go:123] Gathering logs for dmesg ...
	I0318 04:45:19.758726   17465 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 04:45:19.762801   17465 logs.go:123] Gathering logs for coredns [43b9c6cc9a8a] ...
	I0318 04:45:19.762809   17465 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 43b9c6cc9a8a"
	I0318 04:45:19.775439   17465 logs.go:123] Gathering logs for container status ...
	I0318 04:45:19.775453   17465 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 04:45:19.787121   17465 logs.go:123] Gathering logs for kube-apiserver [0a2982ffb84e] ...
	I0318 04:45:19.787133   17465 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0a2982ffb84e"
	I0318 04:45:19.801933   17465 logs.go:123] Gathering logs for coredns [0d30a592f036] ...
	I0318 04:45:19.801943   17465 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0d30a592f036"
	I0318 04:45:19.813699   17465 logs.go:123] Gathering logs for kube-proxy [894142fdaac1] ...
	I0318 04:45:19.813710   17465 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 894142fdaac1"
	I0318 04:45:22.329822   17465 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0318 04:45:27.331864   17465 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0318 04:45:27.332274   17465 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0318 04:45:27.370474   17465 logs.go:276] 1 containers: [0a2982ffb84e]
	I0318 04:45:27.370619   17465 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0318 04:45:27.401680   17465 logs.go:276] 1 containers: [704a79c3c784]
	I0318 04:45:27.401759   17465 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0318 04:45:27.415650   17465 logs.go:276] 4 containers: [43b9c6cc9a8a 0d30a592f036 fe69be91e435 828d4a376c7e]
	I0318 04:45:27.415724   17465 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0318 04:45:27.426770   17465 logs.go:276] 1 containers: [e9fc948a1004]
	I0318 04:45:27.426829   17465 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0318 04:45:27.437306   17465 logs.go:276] 1 containers: [894142fdaac1]
	I0318 04:45:27.437378   17465 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0318 04:45:27.450418   17465 logs.go:276] 1 containers: [1c9856b2b94f]
	I0318 04:45:27.450489   17465 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0318 04:45:27.461105   17465 logs.go:276] 0 containers: []
	W0318 04:45:27.461115   17465 logs.go:278] No container was found matching "kindnet"
	I0318 04:45:27.461195   17465 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0318 04:45:27.471939   17465 logs.go:276] 1 containers: [a247b21e5185]
	I0318 04:45:27.471955   17465 logs.go:123] Gathering logs for kube-proxy [894142fdaac1] ...
	I0318 04:45:27.471960   17465 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 894142fdaac1"
	I0318 04:45:27.491899   17465 logs.go:123] Gathering logs for container status ...
	I0318 04:45:27.491910   17465 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 04:45:27.503481   17465 logs.go:123] Gathering logs for coredns [828d4a376c7e] ...
	I0318 04:45:27.503493   17465 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 828d4a376c7e"
	I0318 04:45:27.515292   17465 logs.go:123] Gathering logs for coredns [43b9c6cc9a8a] ...
	I0318 04:45:27.515303   17465 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 43b9c6cc9a8a"
	I0318 04:45:27.527240   17465 logs.go:123] Gathering logs for storage-provisioner [a247b21e5185] ...
	I0318 04:45:27.527252   17465 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a247b21e5185"
	I0318 04:45:27.538282   17465 logs.go:123] Gathering logs for Docker ...
	I0318 04:45:27.538294   17465 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0318 04:45:27.561510   17465 logs.go:123] Gathering logs for kube-apiserver [0a2982ffb84e] ...
	I0318 04:45:27.561519   17465 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0a2982ffb84e"
	I0318 04:45:27.575476   17465 logs.go:123] Gathering logs for etcd [704a79c3c784] ...
	I0318 04:45:27.575488   17465 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 704a79c3c784"
	I0318 04:45:27.589222   17465 logs.go:123] Gathering logs for kube-scheduler [e9fc948a1004] ...
	I0318 04:45:27.589233   17465 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e9fc948a1004"
	I0318 04:45:27.611014   17465 logs.go:123] Gathering logs for kube-controller-manager [1c9856b2b94f] ...
	I0318 04:45:27.611025   17465 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1c9856b2b94f"
	I0318 04:45:27.629205   17465 logs.go:123] Gathering logs for dmesg ...
	I0318 04:45:27.629215   17465 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 04:45:27.633876   17465 logs.go:123] Gathering logs for describe nodes ...
	I0318 04:45:27.633882   17465 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0318 04:45:27.669763   17465 logs.go:123] Gathering logs for coredns [0d30a592f036] ...
	I0318 04:45:27.669775   17465 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0d30a592f036"
	I0318 04:45:27.688422   17465 logs.go:123] Gathering logs for coredns [fe69be91e435] ...
	I0318 04:45:27.688435   17465 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fe69be91e435"
	I0318 04:45:27.700269   17465 logs.go:123] Gathering logs for kubelet ...
	I0318 04:45:27.700281   17465 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 04:45:30.237428   17465 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0318 04:45:35.238175   17465 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0318 04:45:35.238289   17465 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0318 04:45:35.249976   17465 logs.go:276] 1 containers: [0a2982ffb84e]
	I0318 04:45:35.250021   17465 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0318 04:45:35.267114   17465 logs.go:276] 1 containers: [704a79c3c784]
	I0318 04:45:35.267179   17465 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0318 04:45:35.281375   17465 logs.go:276] 4 containers: [43b9c6cc9a8a 0d30a592f036 fe69be91e435 828d4a376c7e]
	I0318 04:45:35.281444   17465 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0318 04:45:35.293315   17465 logs.go:276] 1 containers: [e9fc948a1004]
	I0318 04:45:35.293368   17465 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0318 04:45:35.304180   17465 logs.go:276] 1 containers: [894142fdaac1]
	I0318 04:45:35.304238   17465 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0318 04:45:35.319515   17465 logs.go:276] 1 containers: [1c9856b2b94f]
	I0318 04:45:35.319578   17465 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0318 04:45:35.332509   17465 logs.go:276] 0 containers: []
	W0318 04:45:35.332519   17465 logs.go:278] No container was found matching "kindnet"
	I0318 04:45:35.332564   17465 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0318 04:45:35.343588   17465 logs.go:276] 1 containers: [a247b21e5185]
	I0318 04:45:35.343604   17465 logs.go:123] Gathering logs for coredns [43b9c6cc9a8a] ...
	I0318 04:45:35.343608   17465 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 43b9c6cc9a8a"
	I0318 04:45:35.356178   17465 logs.go:123] Gathering logs for coredns [0d30a592f036] ...
	I0318 04:45:35.356190   17465 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0d30a592f036"
	I0318 04:45:35.368659   17465 logs.go:123] Gathering logs for coredns [828d4a376c7e] ...
	I0318 04:45:35.368673   17465 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 828d4a376c7e"
	I0318 04:45:35.383327   17465 logs.go:123] Gathering logs for kube-controller-manager [1c9856b2b94f] ...
	I0318 04:45:35.383340   17465 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1c9856b2b94f"
	I0318 04:45:35.401487   17465 logs.go:123] Gathering logs for storage-provisioner [a247b21e5185] ...
	I0318 04:45:35.401501   17465 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a247b21e5185"
	I0318 04:45:35.414885   17465 logs.go:123] Gathering logs for Docker ...
	I0318 04:45:35.414896   17465 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0318 04:45:35.439776   17465 logs.go:123] Gathering logs for kubelet ...
	I0318 04:45:35.439785   17465 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 04:45:35.474714   17465 logs.go:123] Gathering logs for etcd [704a79c3c784] ...
	I0318 04:45:35.474727   17465 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 704a79c3c784"
	I0318 04:45:35.488958   17465 logs.go:123] Gathering logs for container status ...
	I0318 04:45:35.488970   17465 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 04:45:35.501816   17465 logs.go:123] Gathering logs for describe nodes ...
	I0318 04:45:35.501826   17465 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0318 04:45:35.539601   17465 logs.go:123] Gathering logs for kube-apiserver [0a2982ffb84e] ...
	I0318 04:45:35.539618   17465 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0a2982ffb84e"
	I0318 04:45:35.558656   17465 logs.go:123] Gathering logs for kube-scheduler [e9fc948a1004] ...
	I0318 04:45:35.558668   17465 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e9fc948a1004"
	I0318 04:45:35.574741   17465 logs.go:123] Gathering logs for kube-proxy [894142fdaac1] ...
	I0318 04:45:35.574750   17465 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 894142fdaac1"
	I0318 04:45:35.586660   17465 logs.go:123] Gathering logs for dmesg ...
	I0318 04:45:35.586673   17465 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 04:45:35.591169   17465 logs.go:123] Gathering logs for coredns [fe69be91e435] ...
	I0318 04:45:35.591181   17465 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fe69be91e435"
	I0318 04:45:38.104306   17465 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0318 04:45:43.107031   17465 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0318 04:45:43.113180   17465 out.go:177] 
	W0318 04:45:43.118273   17465 out.go:239] X Exiting due to GUEST_START: failed to start node: wait 6m0s for node: wait for healthy API server: apiserver healthz never reported healthy: context deadline exceeded
	X Exiting due to GUEST_START: failed to start node: wait 6m0s for node: wait for healthy API server: apiserver healthz never reported healthy: context deadline exceeded
	W0318 04:45:43.118307   17465 out.go:239] * 
	* 
	W0318 04:45:43.120888   17465 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0318 04:45:43.134163   17465 out.go:177] 

                                                
                                                
** /stderr **
version_upgrade_test.go:200: upgrade from v1.26.0 to HEAD failed: out/minikube-darwin-arm64 start -p stopped-upgrade-126000 --memory=2200 --alsologtostderr -v=1 --driver=qemu2 : exit status 80
--- FAIL: TestStoppedBinaryUpgrade/Upgrade (581.57s)
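In the failure above, minikube repeatedly probed https://10.0.2.15:8443/healthz and gave up after the 6m0s node-wait timeout ("apiserver healthz never reported healthy"). A minimal Go sketch of that kind of probe, useful for re-running the check by hand against the same endpoint (the URL comes from the log, and the 5-second timeout mirrors the gap between the "Checking apiserver healthz" and "stopped" lines; the insecure TLS setting and the rest of the client setup are illustrative assumptions, not minikube's implementation):

	// healthz_probe.go - hypothetical standalone check, not part of minikube.
	package main

	import (
		"crypto/tls"
		"fmt"
		"net/http"
		"time"
	)

	func main() {
		client := &http.Client{
			Timeout: 5 * time.Second, // roughly the per-probe deadline seen in the log
			Transport: &http.Transport{
				// the apiserver serves a self-signed certificate, so skip
				// verification for a manual probe of /healthz
				TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
			},
		}
		resp, err := client.Get("https://10.0.2.15:8443/healthz")
		if err != nil {
			fmt.Println("healthz unreachable:", err)
			return
		}
		defer resp.Body.Close()
		fmt.Println("healthz returned:", resp.Status)
	}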

                                                
                                    
TestPause/serial/Start (9.94s)

                                                
                                                
=== RUN   TestPause/serial/Start
pause_test.go:80: (dbg) Run:  out/minikube-darwin-arm64 start -p pause-369000 --memory=2048 --install-addons=false --wait=all --driver=qemu2 
pause_test.go:80: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p pause-369000 --memory=2048 --install-addons=false --wait=all --driver=qemu2 : exit status 80 (9.88711s)

                                                
                                                
-- stdout --
	* [pause-369000] minikube v1.32.0 on Darwin 14.3.1 (arm64)
	  - MINIKUBE_LOCATION=18429
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18429-15072/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18429-15072/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "pause-369000" primary control-plane node in "pause-369000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "pause-369000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p pause-369000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
pause_test.go:82: failed to start minikube with args: "out/minikube-darwin-arm64 start -p pause-369000 --memory=2048 --install-addons=false --wait=all --driver=qemu2 " : exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p pause-369000 -n pause-369000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p pause-369000 -n pause-369000: exit status 7 (50.609958ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "pause-369000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestPause/serial/Start (9.94s)
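This failure and the ones that follow all stop at the same provisioning step: connecting to socket_vmnet at /var/run/socket_vmnet is refused, so the qemu2 VMs never come up and each profile ends in state "Stopped". A minimal Go sketch (a hypothetical diagnostic, not part of the test suite; only the socket path is taken from the report) that dials the socket to confirm whether the socket_vmnet daemon is actually listening:

	// socket_vmnet_check.go - hypothetical helper for reproducing the failure mode.
	package main

	import (
		"fmt"
		"net"
		"os"
		"time"
	)

	func main() {
		// dial the unix socket that minikube's qemu2 driver hands to socket_vmnet_client
		conn, err := net.DialTimeout("unix", "/var/run/socket_vmnet", 2*time.Second)
		if err != nil {
			// this is the "Connection refused" case seen throughout the failures below
			fmt.Fprintln(os.Stderr, "socket_vmnet not reachable:", err)
			os.Exit(1)
		}
		defer conn.Close()
		fmt.Println("socket_vmnet is accepting connections")
	}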

                                                
                                    
TestNoKubernetes/serial/StartWithK8s (10s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartWithK8s
no_kubernetes_test.go:95: (dbg) Run:  out/minikube-darwin-arm64 start -p NoKubernetes-654000 --driver=qemu2 
no_kubernetes_test.go:95: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p NoKubernetes-654000 --driver=qemu2 : exit status 80 (9.932084125s)

                                                
                                                
-- stdout --
	* [NoKubernetes-654000] minikube v1.32.0 on Darwin 14.3.1 (arm64)
	  - MINIKUBE_LOCATION=18429
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18429-15072/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18429-15072/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "NoKubernetes-654000" primary control-plane node in "NoKubernetes-654000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "NoKubernetes-654000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p NoKubernetes-654000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
no_kubernetes_test.go:97: failed to start minikube with args: "out/minikube-darwin-arm64 start -p NoKubernetes-654000 --driver=qemu2 " : exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p NoKubernetes-654000 -n NoKubernetes-654000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p NoKubernetes-654000 -n NoKubernetes-654000: exit status 7 (68.887625ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "NoKubernetes-654000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestNoKubernetes/serial/StartWithK8s (10.00s)

                                                
                                    
TestNoKubernetes/serial/StartWithStopK8s (5.92s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartWithStopK8s
no_kubernetes_test.go:112: (dbg) Run:  out/minikube-darwin-arm64 start -p NoKubernetes-654000 --no-kubernetes --driver=qemu2 
no_kubernetes_test.go:112: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p NoKubernetes-654000 --no-kubernetes --driver=qemu2 : exit status 80 (5.855223875s)

                                                
                                                
-- stdout --
	* [NoKubernetes-654000] minikube v1.32.0 on Darwin 14.3.1 (arm64)
	  - MINIKUBE_LOCATION=18429
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18429-15072/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18429-15072/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting minikube without Kubernetes in cluster NoKubernetes-654000
	* Restarting existing qemu2 VM for "NoKubernetes-654000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "NoKubernetes-654000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p NoKubernetes-654000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
no_kubernetes_test.go:114: failed to start minikube with args: "out/minikube-darwin-arm64 start -p NoKubernetes-654000 --no-kubernetes --driver=qemu2 " : exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p NoKubernetes-654000 -n NoKubernetes-654000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p NoKubernetes-654000 -n NoKubernetes-654000: exit status 7 (67.834792ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "NoKubernetes-654000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestNoKubernetes/serial/StartWithStopK8s (5.92s)

                                                
                                    
TestNoKubernetes/serial/Start (5.87s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/Start
no_kubernetes_test.go:136: (dbg) Run:  out/minikube-darwin-arm64 start -p NoKubernetes-654000 --no-kubernetes --driver=qemu2 
no_kubernetes_test.go:136: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p NoKubernetes-654000 --no-kubernetes --driver=qemu2 : exit status 80 (5.828193584s)

                                                
                                                
-- stdout --
	* [NoKubernetes-654000] minikube v1.32.0 on Darwin 14.3.1 (arm64)
	  - MINIKUBE_LOCATION=18429
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18429-15072/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18429-15072/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting minikube without Kubernetes in cluster NoKubernetes-654000
	* Restarting existing qemu2 VM for "NoKubernetes-654000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "NoKubernetes-654000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p NoKubernetes-654000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
no_kubernetes_test.go:138: failed to start minikube with args: "out/minikube-darwin-arm64 start -p NoKubernetes-654000 --no-kubernetes --driver=qemu2 " : exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p NoKubernetes-654000 -n NoKubernetes-654000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p NoKubernetes-654000 -n NoKubernetes-654000: exit status 7 (38.034417ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "NoKubernetes-654000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestNoKubernetes/serial/Start (5.87s)

                                                
                                    
TestNoKubernetes/serial/StartNoArgs (5.92s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartNoArgs
no_kubernetes_test.go:191: (dbg) Run:  out/minikube-darwin-arm64 start -p NoKubernetes-654000 --driver=qemu2 
no_kubernetes_test.go:191: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p NoKubernetes-654000 --driver=qemu2 : exit status 80 (5.849877208s)

                                                
                                                
-- stdout --
	* [NoKubernetes-654000] minikube v1.32.0 on Darwin 14.3.1 (arm64)
	  - MINIKUBE_LOCATION=18429
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18429-15072/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18429-15072/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting minikube without Kubernetes in cluster NoKubernetes-654000
	* Restarting existing qemu2 VM for "NoKubernetes-654000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "NoKubernetes-654000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p NoKubernetes-654000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
no_kubernetes_test.go:193: failed to start minikube with args: "out/minikube-darwin-arm64 start -p NoKubernetes-654000 --driver=qemu2 " : exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p NoKubernetes-654000 -n NoKubernetes-654000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p NoKubernetes-654000 -n NoKubernetes-654000: exit status 7 (68.266875ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "NoKubernetes-654000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestNoKubernetes/serial/StartNoArgs (5.92s)

                                                
                                    
TestNetworkPlugins/group/auto/Start (9.88s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/Start
net_test.go:112: (dbg) Run:  out/minikube-darwin-arm64 start -p auto-360000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=qemu2 
net_test.go:112: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p auto-360000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=qemu2 : exit status 80 (9.875837625s)

                                                
                                                
-- stdout --
	* [auto-360000] minikube v1.32.0 on Darwin 14.3.1 (arm64)
	  - MINIKUBE_LOCATION=18429
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18429-15072/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18429-15072/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "auto-360000" primary control-plane node in "auto-360000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "auto-360000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0318 04:44:22.745175   17714 out.go:291] Setting OutFile to fd 1 ...
	I0318 04:44:22.745319   17714 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0318 04:44:22.745323   17714 out.go:304] Setting ErrFile to fd 2...
	I0318 04:44:22.745325   17714 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0318 04:44:22.745459   17714 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18429-15072/.minikube/bin
	I0318 04:44:22.746559   17714 out.go:298] Setting JSON to false
	I0318 04:44:22.763045   17714 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":9835,"bootTime":1710752427,"procs":483,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.3.1","kernelVersion":"23.3.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0318 04:44:22.763103   17714 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0318 04:44:22.768482   17714 out.go:177] * [auto-360000] minikube v1.32.0 on Darwin 14.3.1 (arm64)
	I0318 04:44:22.776351   17714 out.go:177]   - MINIKUBE_LOCATION=18429
	I0318 04:44:22.781332   17714 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/18429-15072/kubeconfig
	I0318 04:44:22.776377   17714 notify.go:220] Checking for updates...
	I0318 04:44:22.788338   17714 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0318 04:44:22.792298   17714 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0318 04:44:22.795270   17714 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18429-15072/.minikube
	I0318 04:44:22.798366   17714 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0318 04:44:22.801687   17714 config.go:182] Loaded profile config "multinode-969000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I0318 04:44:22.801758   17714 config.go:182] Loaded profile config "stopped-upgrade-126000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0318 04:44:22.801803   17714 driver.go:392] Setting default libvirt URI to qemu:///system
	I0318 04:44:22.806339   17714 out.go:177] * Using the qemu2 driver based on user configuration
	I0318 04:44:22.813322   17714 start.go:297] selected driver: qemu2
	I0318 04:44:22.813327   17714 start.go:901] validating driver "qemu2" against <nil>
	I0318 04:44:22.813332   17714 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0318 04:44:22.815705   17714 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0318 04:44:22.819277   17714 out.go:177] * Automatically selected the socket_vmnet network
	I0318 04:44:22.823331   17714 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0318 04:44:22.823367   17714 cni.go:84] Creating CNI manager for ""
	I0318 04:44:22.823375   17714 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0318 04:44:22.823379   17714 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0318 04:44:22.823418   17714 start.go:340] cluster config:
	{Name:auto-360000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:auto-360000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:dock
er CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_clie
nt SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0318 04:44:22.827987   17714 iso.go:125] acquiring lock: {Name:mkb8143674083e0c7a46a3ed751b3800392bcd24 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0318 04:44:22.836285   17714 out.go:177] * Starting "auto-360000" primary control-plane node in "auto-360000" cluster
	I0318 04:44:22.840323   17714 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime docker
	I0318 04:44:22.840340   17714 preload.go:147] Found local preload: /Users/jenkins/minikube-integration/18429-15072/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-docker-overlay2-arm64.tar.lz4
	I0318 04:44:22.840353   17714 cache.go:56] Caching tarball of preloaded images
	I0318 04:44:22.840418   17714 preload.go:173] Found /Users/jenkins/minikube-integration/18429-15072/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0318 04:44:22.840424   17714 cache.go:59] Finished verifying existence of preloaded tar for v1.28.4 on docker
	I0318 04:44:22.840497   17714 profile.go:142] Saving config to /Users/jenkins/minikube-integration/18429-15072/.minikube/profiles/auto-360000/config.json ...
	I0318 04:44:22.840513   17714 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/18429-15072/.minikube/profiles/auto-360000/config.json: {Name:mke62c5b0fb437fdefabe8b39f6c6f31795d0f5e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0318 04:44:22.840740   17714 start.go:360] acquireMachinesLock for auto-360000: {Name:mkdef65b5c2b3344d8453e477bf2f170fbff3359 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0318 04:44:22.840776   17714 start.go:364] duration metric: took 28.958µs to acquireMachinesLock for "auto-360000"
	I0318 04:44:22.840791   17714 start.go:93] Provisioning new machine with config: &{Name:auto-360000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17991/minikube-v1.32.1-1710520390-17991-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kubern
etesVersion:v1.28.4 ClusterName:auto-360000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountP
ort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0318 04:44:22.840820   17714 start.go:125] createHost starting for "" (driver="qemu2")
	I0318 04:44:22.849334   17714 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0318 04:44:22.867246   17714 start.go:159] libmachine.API.Create for "auto-360000" (driver="qemu2")
	I0318 04:44:22.867275   17714 client.go:168] LocalClient.Create starting
	I0318 04:44:22.867339   17714 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18429-15072/.minikube/certs/ca.pem
	I0318 04:44:22.867372   17714 main.go:141] libmachine: Decoding PEM data...
	I0318 04:44:22.867385   17714 main.go:141] libmachine: Parsing certificate...
	I0318 04:44:22.867436   17714 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18429-15072/.minikube/certs/cert.pem
	I0318 04:44:22.867459   17714 main.go:141] libmachine: Decoding PEM data...
	I0318 04:44:22.867471   17714 main.go:141] libmachine: Parsing certificate...
	I0318 04:44:22.867865   17714 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/18429-15072/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/18429-15072/.minikube/cache/iso/arm64/minikube-v1.32.1-1710520390-17991-arm64.iso...
	I0318 04:44:23.008790   17714 main.go:141] libmachine: Creating SSH key...
	I0318 04:44:23.108576   17714 main.go:141] libmachine: Creating Disk image...
	I0318 04:44:23.108584   17714 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0318 04:44:23.108778   17714 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/18429-15072/.minikube/machines/auto-360000/disk.qcow2.raw /Users/jenkins/minikube-integration/18429-15072/.minikube/machines/auto-360000/disk.qcow2
	I0318 04:44:23.120946   17714 main.go:141] libmachine: STDOUT: 
	I0318 04:44:23.120973   17714 main.go:141] libmachine: STDERR: 
	I0318 04:44:23.121040   17714 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/18429-15072/.minikube/machines/auto-360000/disk.qcow2 +20000M
	I0318 04:44:23.132251   17714 main.go:141] libmachine: STDOUT: Image resized.
	
	I0318 04:44:23.132270   17714 main.go:141] libmachine: STDERR: 
	I0318 04:44:23.132284   17714 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/18429-15072/.minikube/machines/auto-360000/disk.qcow2.raw and /Users/jenkins/minikube-integration/18429-15072/.minikube/machines/auto-360000/disk.qcow2
	I0318 04:44:23.132288   17714 main.go:141] libmachine: Starting QEMU VM...
	I0318 04:44:23.132319   17714 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18429-15072/.minikube/machines/auto-360000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18429-15072/.minikube/machines/auto-360000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18429-15072/.minikube/machines/auto-360000/qemu.pid -device virtio-net-pci,netdev=net0,mac=8e:6f:bb:ba:18:b1 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18429-15072/.minikube/machines/auto-360000/disk.qcow2
	I0318 04:44:23.134039   17714 main.go:141] libmachine: STDOUT: 
	I0318 04:44:23.134058   17714 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0318 04:44:23.134079   17714 client.go:171] duration metric: took 266.808209ms to LocalClient.Create
	I0318 04:44:25.136362   17714 start.go:128] duration metric: took 2.295576208s to createHost
	I0318 04:44:25.136468   17714 start.go:83] releasing machines lock for "auto-360000", held for 2.295757625s
	W0318 04:44:25.136585   17714 start.go:713] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0318 04:44:25.147634   17714 out.go:177] * Deleting "auto-360000" in qemu2 ...
	W0318 04:44:25.176131   17714 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0318 04:44:25.176181   17714 start.go:728] Will try again in 5 seconds ...
	I0318 04:44:30.178142   17714 start.go:360] acquireMachinesLock for auto-360000: {Name:mkdef65b5c2b3344d8453e477bf2f170fbff3359 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0318 04:44:30.178314   17714 start.go:364] duration metric: took 138.75µs to acquireMachinesLock for "auto-360000"
	I0318 04:44:30.178335   17714 start.go:93] Provisioning new machine with config: &{Name:auto-360000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17991/minikube-v1.32.1-1710520390-17991-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kubern
etesVersion:v1.28.4 ClusterName:auto-360000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountP
ort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0318 04:44:30.178480   17714 start.go:125] createHost starting for "" (driver="qemu2")
	I0318 04:44:30.187734   17714 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0318 04:44:30.204627   17714 start.go:159] libmachine.API.Create for "auto-360000" (driver="qemu2")
	I0318 04:44:30.204653   17714 client.go:168] LocalClient.Create starting
	I0318 04:44:30.204729   17714 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18429-15072/.minikube/certs/ca.pem
	I0318 04:44:30.204762   17714 main.go:141] libmachine: Decoding PEM data...
	I0318 04:44:30.204771   17714 main.go:141] libmachine: Parsing certificate...
	I0318 04:44:30.204815   17714 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18429-15072/.minikube/certs/cert.pem
	I0318 04:44:30.204836   17714 main.go:141] libmachine: Decoding PEM data...
	I0318 04:44:30.204841   17714 main.go:141] libmachine: Parsing certificate...
	I0318 04:44:30.205110   17714 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/18429-15072/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/18429-15072/.minikube/cache/iso/arm64/minikube-v1.32.1-1710520390-17991-arm64.iso...
	I0318 04:44:30.345851   17714 main.go:141] libmachine: Creating SSH key...
	I0318 04:44:30.534391   17714 main.go:141] libmachine: Creating Disk image...
	I0318 04:44:30.534403   17714 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0318 04:44:30.534614   17714 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/18429-15072/.minikube/machines/auto-360000/disk.qcow2.raw /Users/jenkins/minikube-integration/18429-15072/.minikube/machines/auto-360000/disk.qcow2
	I0318 04:44:30.547401   17714 main.go:141] libmachine: STDOUT: 
	I0318 04:44:30.547423   17714 main.go:141] libmachine: STDERR: 
	I0318 04:44:30.547481   17714 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/18429-15072/.minikube/machines/auto-360000/disk.qcow2 +20000M
	I0318 04:44:30.558374   17714 main.go:141] libmachine: STDOUT: Image resized.
	
	I0318 04:44:30.558394   17714 main.go:141] libmachine: STDERR: 
	I0318 04:44:30.558418   17714 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/18429-15072/.minikube/machines/auto-360000/disk.qcow2.raw and /Users/jenkins/minikube-integration/18429-15072/.minikube/machines/auto-360000/disk.qcow2
	I0318 04:44:30.558424   17714 main.go:141] libmachine: Starting QEMU VM...
	I0318 04:44:30.558455   17714 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18429-15072/.minikube/machines/auto-360000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18429-15072/.minikube/machines/auto-360000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18429-15072/.minikube/machines/auto-360000/qemu.pid -device virtio-net-pci,netdev=net0,mac=fe:87:51:d7:3c:6a -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18429-15072/.minikube/machines/auto-360000/disk.qcow2
	I0318 04:44:30.560385   17714 main.go:141] libmachine: STDOUT: 
	I0318 04:44:30.560404   17714 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0318 04:44:30.560418   17714 client.go:171] duration metric: took 355.773959ms to LocalClient.Create
	I0318 04:44:32.562455   17714 start.go:128] duration metric: took 2.384024s to createHost
	I0318 04:44:32.562472   17714 start.go:83] releasing machines lock for "auto-360000", held for 2.38423175s
	W0318 04:44:32.562541   17714 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p auto-360000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p auto-360000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0318 04:44:32.568755   17714 out.go:177] 
	W0318 04:44:32.571934   17714 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0318 04:44:32.571943   17714 out.go:239] * 
	* 
	W0318 04:44:32.572517   17714 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0318 04:44:32.581825   17714 out.go:177] 

                                                
                                                
** /stderr **
net_test.go:114: failed start: exit status 80
--- FAIL: TestNetworkPlugins/group/auto/Start (9.88s)

                                                
                                    
TestNetworkPlugins/group/kindnet/Start (9.9s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/Start
net_test.go:112: (dbg) Run:  out/minikube-darwin-arm64 start -p kindnet-360000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=qemu2 
net_test.go:112: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p kindnet-360000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=qemu2 : exit status 80 (9.895273709s)

                                                
                                                
-- stdout --
	* [kindnet-360000] minikube v1.32.0 on Darwin 14.3.1 (arm64)
	  - MINIKUBE_LOCATION=18429
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18429-15072/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18429-15072/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "kindnet-360000" primary control-plane node in "kindnet-360000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "kindnet-360000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0318 04:44:34.813156   17824 out.go:291] Setting OutFile to fd 1 ...
	I0318 04:44:34.813529   17824 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0318 04:44:34.813534   17824 out.go:304] Setting ErrFile to fd 2...
	I0318 04:44:34.813537   17824 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0318 04:44:34.813722   17824 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18429-15072/.minikube/bin
	I0318 04:44:34.815202   17824 out.go:298] Setting JSON to false
	I0318 04:44:34.832049   17824 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":9847,"bootTime":1710752427,"procs":483,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.3.1","kernelVersion":"23.3.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0318 04:44:34.832120   17824 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0318 04:44:34.838982   17824 out.go:177] * [kindnet-360000] minikube v1.32.0 on Darwin 14.3.1 (arm64)
	I0318 04:44:34.846950   17824 out.go:177]   - MINIKUBE_LOCATION=18429
	I0318 04:44:34.846994   17824 notify.go:220] Checking for updates...
	I0318 04:44:34.850972   17824 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/18429-15072/kubeconfig
	I0318 04:44:34.853958   17824 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0318 04:44:34.856981   17824 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0318 04:44:34.859981   17824 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18429-15072/.minikube
	I0318 04:44:34.862933   17824 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0318 04:44:34.866402   17824 config.go:182] Loaded profile config "multinode-969000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I0318 04:44:34.866477   17824 config.go:182] Loaded profile config "stopped-upgrade-126000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0318 04:44:34.866519   17824 driver.go:392] Setting default libvirt URI to qemu:///system
	I0318 04:44:34.870968   17824 out.go:177] * Using the qemu2 driver based on user configuration
	I0318 04:44:34.877886   17824 start.go:297] selected driver: qemu2
	I0318 04:44:34.877891   17824 start.go:901] validating driver "qemu2" against <nil>
	I0318 04:44:34.877896   17824 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0318 04:44:34.880160   17824 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0318 04:44:34.883933   17824 out.go:177] * Automatically selected the socket_vmnet network
	I0318 04:44:34.886932   17824 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0318 04:44:34.886961   17824 cni.go:84] Creating CNI manager for "kindnet"
	I0318 04:44:34.886964   17824 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I0318 04:44:34.886991   17824 start.go:340] cluster config:
	{Name:kindnet-360000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:kindnet-360000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntim
e:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:kindnet} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/sock
et_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0318 04:44:34.891153   17824 iso.go:125] acquiring lock: {Name:mkb8143674083e0c7a46a3ed751b3800392bcd24 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0318 04:44:34.898778   17824 out.go:177] * Starting "kindnet-360000" primary control-plane node in "kindnet-360000" cluster
	I0318 04:44:34.902936   17824 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime docker
	I0318 04:44:34.902951   17824 preload.go:147] Found local preload: /Users/jenkins/minikube-integration/18429-15072/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-docker-overlay2-arm64.tar.lz4
	I0318 04:44:34.902959   17824 cache.go:56] Caching tarball of preloaded images
	I0318 04:44:34.903020   17824 preload.go:173] Found /Users/jenkins/minikube-integration/18429-15072/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0318 04:44:34.903028   17824 cache.go:59] Finished verifying existence of preloaded tar for v1.28.4 on docker
	I0318 04:44:34.903100   17824 profile.go:142] Saving config to /Users/jenkins/minikube-integration/18429-15072/.minikube/profiles/kindnet-360000/config.json ...
	I0318 04:44:34.903111   17824 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/18429-15072/.minikube/profiles/kindnet-360000/config.json: {Name:mk9d6f595d82a7dbbf5d4d8843fce0292c4dcf75 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0318 04:44:34.903317   17824 start.go:360] acquireMachinesLock for kindnet-360000: {Name:mkdef65b5c2b3344d8453e477bf2f170fbff3359 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0318 04:44:34.903347   17824 start.go:364] duration metric: took 24.25µs to acquireMachinesLock for "kindnet-360000"
	I0318 04:44:34.903359   17824 start.go:93] Provisioning new machine with config: &{Name:kindnet-360000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17991/minikube-v1.32.1-1710520390-17991-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kub
ernetesVersion:v1.28.4 ClusterName:kindnet-360000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:kindnet} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOpti
ons:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0318 04:44:34.903386   17824 start.go:125] createHost starting for "" (driver="qemu2")
	I0318 04:44:34.911922   17824 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0318 04:44:34.927336   17824 start.go:159] libmachine.API.Create for "kindnet-360000" (driver="qemu2")
	I0318 04:44:34.927374   17824 client.go:168] LocalClient.Create starting
	I0318 04:44:34.927439   17824 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18429-15072/.minikube/certs/ca.pem
	I0318 04:44:34.927469   17824 main.go:141] libmachine: Decoding PEM data...
	I0318 04:44:34.927483   17824 main.go:141] libmachine: Parsing certificate...
	I0318 04:44:34.927530   17824 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18429-15072/.minikube/certs/cert.pem
	I0318 04:44:34.927551   17824 main.go:141] libmachine: Decoding PEM data...
	I0318 04:44:34.927556   17824 main.go:141] libmachine: Parsing certificate...
	I0318 04:44:34.927917   17824 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/18429-15072/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/18429-15072/.minikube/cache/iso/arm64/minikube-v1.32.1-1710520390-17991-arm64.iso...
	I0318 04:44:35.067945   17824 main.go:141] libmachine: Creating SSH key...
	I0318 04:44:35.271149   17824 main.go:141] libmachine: Creating Disk image...
	I0318 04:44:35.271161   17824 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0318 04:44:35.271366   17824 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/18429-15072/.minikube/machines/kindnet-360000/disk.qcow2.raw /Users/jenkins/minikube-integration/18429-15072/.minikube/machines/kindnet-360000/disk.qcow2
	I0318 04:44:35.284421   17824 main.go:141] libmachine: STDOUT: 
	I0318 04:44:35.284441   17824 main.go:141] libmachine: STDERR: 
	I0318 04:44:35.284503   17824 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/18429-15072/.minikube/machines/kindnet-360000/disk.qcow2 +20000M
	I0318 04:44:35.295959   17824 main.go:141] libmachine: STDOUT: Image resized.
	
	I0318 04:44:35.295979   17824 main.go:141] libmachine: STDERR: 
	I0318 04:44:35.295993   17824 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/18429-15072/.minikube/machines/kindnet-360000/disk.qcow2.raw and /Users/jenkins/minikube-integration/18429-15072/.minikube/machines/kindnet-360000/disk.qcow2
	I0318 04:44:35.295997   17824 main.go:141] libmachine: Starting QEMU VM...
	I0318 04:44:35.296046   17824 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18429-15072/.minikube/machines/kindnet-360000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18429-15072/.minikube/machines/kindnet-360000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18429-15072/.minikube/machines/kindnet-360000/qemu.pid -device virtio-net-pci,netdev=net0,mac=8a:ca:2f:34:9f:27 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18429-15072/.minikube/machines/kindnet-360000/disk.qcow2
	I0318 04:44:35.297963   17824 main.go:141] libmachine: STDOUT: 
	I0318 04:44:35.297989   17824 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0318 04:44:35.298007   17824 client.go:171] duration metric: took 370.640042ms to LocalClient.Create
	I0318 04:44:37.300102   17824 start.go:128] duration metric: took 2.396776291s to createHost
	I0318 04:44:37.300165   17824 start.go:83] releasing machines lock for "kindnet-360000", held for 2.396891958s
	W0318 04:44:37.300218   17824 start.go:713] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0318 04:44:37.311395   17824 out.go:177] * Deleting "kindnet-360000" in qemu2 ...
	W0318 04:44:37.331953   17824 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0318 04:44:37.331965   17824 start.go:728] Will try again in 5 seconds ...
	I0318 04:44:42.333920   17824 start.go:360] acquireMachinesLock for kindnet-360000: {Name:mkdef65b5c2b3344d8453e477bf2f170fbff3359 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0318 04:44:42.334228   17824 start.go:364] duration metric: took 265.917µs to acquireMachinesLock for "kindnet-360000"
	I0318 04:44:42.334308   17824 start.go:93] Provisioning new machine with config: &{Name:kindnet-360000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17991/minikube-v1.32.1-1710520390-17991-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kub
ernetesVersion:v1.28.4 ClusterName:kindnet-360000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:kindnet} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOpti
ons:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0318 04:44:42.334405   17824 start.go:125] createHost starting for "" (driver="qemu2")
	I0318 04:44:42.339707   17824 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0318 04:44:42.366281   17824 start.go:159] libmachine.API.Create for "kindnet-360000" (driver="qemu2")
	I0318 04:44:42.366335   17824 client.go:168] LocalClient.Create starting
	I0318 04:44:42.366421   17824 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18429-15072/.minikube/certs/ca.pem
	I0318 04:44:42.366468   17824 main.go:141] libmachine: Decoding PEM data...
	I0318 04:44:42.366481   17824 main.go:141] libmachine: Parsing certificate...
	I0318 04:44:42.366532   17824 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18429-15072/.minikube/certs/cert.pem
	I0318 04:44:42.366564   17824 main.go:141] libmachine: Decoding PEM data...
	I0318 04:44:42.366573   17824 main.go:141] libmachine: Parsing certificate...
	I0318 04:44:42.366964   17824 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/18429-15072/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/18429-15072/.minikube/cache/iso/arm64/minikube-v1.32.1-1710520390-17991-arm64.iso...
	I0318 04:44:42.507519   17824 main.go:141] libmachine: Creating SSH key...
	I0318 04:44:42.597591   17824 main.go:141] libmachine: Creating Disk image...
	I0318 04:44:42.597602   17824 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0318 04:44:42.597791   17824 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/18429-15072/.minikube/machines/kindnet-360000/disk.qcow2.raw /Users/jenkins/minikube-integration/18429-15072/.minikube/machines/kindnet-360000/disk.qcow2
	I0318 04:44:42.610666   17824 main.go:141] libmachine: STDOUT: 
	I0318 04:44:42.610686   17824 main.go:141] libmachine: STDERR: 
	I0318 04:44:42.610749   17824 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/18429-15072/.minikube/machines/kindnet-360000/disk.qcow2 +20000M
	I0318 04:44:42.621805   17824 main.go:141] libmachine: STDOUT: Image resized.
	
	I0318 04:44:42.621825   17824 main.go:141] libmachine: STDERR: 
	I0318 04:44:42.621835   17824 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/18429-15072/.minikube/machines/kindnet-360000/disk.qcow2.raw and /Users/jenkins/minikube-integration/18429-15072/.minikube/machines/kindnet-360000/disk.qcow2
	I0318 04:44:42.621838   17824 main.go:141] libmachine: Starting QEMU VM...
	I0318 04:44:42.621880   17824 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18429-15072/.minikube/machines/kindnet-360000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18429-15072/.minikube/machines/kindnet-360000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18429-15072/.minikube/machines/kindnet-360000/qemu.pid -device virtio-net-pci,netdev=net0,mac=7e:3f:34:72:17:79 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18429-15072/.minikube/machines/kindnet-360000/disk.qcow2
	I0318 04:44:42.623551   17824 main.go:141] libmachine: STDOUT: 
	I0318 04:44:42.623568   17824 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0318 04:44:42.623579   17824 client.go:171] duration metric: took 257.248084ms to LocalClient.Create
	I0318 04:44:44.625837   17824 start.go:128] duration metric: took 2.291451625s to createHost
	I0318 04:44:44.625957   17824 start.go:83] releasing machines lock for "kindnet-360000", held for 2.291789916s
	W0318 04:44:44.626330   17824 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p kindnet-360000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p kindnet-360000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0318 04:44:44.645048   17824 out.go:177] 
	W0318 04:44:44.649044   17824 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0318 04:44:44.649067   17824 out.go:239] * 
	* 
	W0318 04:44:44.651934   17824 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0318 04:44:44.662596   17824 out.go:177] 

** /stderr **
net_test.go:114: failed start: exit status 80
--- FAIL: TestNetworkPlugins/group/kindnet/Start (9.90s)
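The common factor in this failure (and in the calico and custom-flannel failures below) is that socket_vmnet_client cannot reach the daemon socket at /var/run/socket_vmnet, so the QEMU VM is never started and minikube exits with GUEST_PROVISION. A minimal diagnostic sketch for the CI host follows; the daemon binary path and the --vmnet-gateway value are assumptions inferred from the /opt/socket_vmnet paths in the command lines above and from socket_vmnet's documented defaults, not confirmed by this report.

	# Is the socket_vmnet daemon's unix socket present?
	ls -l /var/run/socket_vmnet
	# Is a socket_vmnet process running?
	pgrep -fl socket_vmnet
	# If not, start the daemon (root is required for vmnet access);
	# binary path and gateway address are assumed defaults, adjust to the host's install.
	sudo /opt/socket_vmnet/bin/socket_vmnet --vmnet-gateway=192.168.105.1 /var/run/socket_vmnet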

TestNetworkPlugins/group/calico/Start (10s)

=== RUN   TestNetworkPlugins/group/calico/Start
net_test.go:112: (dbg) Run:  out/minikube-darwin-arm64 start -p calico-360000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=qemu2 
net_test.go:112: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p calico-360000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=qemu2 : exit status 80 (9.994839209s)

-- stdout --
	* [calico-360000] minikube v1.32.0 on Darwin 14.3.1 (arm64)
	  - MINIKUBE_LOCATION=18429
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18429-15072/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18429-15072/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "calico-360000" primary control-plane node in "calico-360000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "calico-360000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0318 04:44:47.036810   17941 out.go:291] Setting OutFile to fd 1 ...
	I0318 04:44:47.036948   17941 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0318 04:44:47.036952   17941 out.go:304] Setting ErrFile to fd 2...
	I0318 04:44:47.036954   17941 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0318 04:44:47.037094   17941 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18429-15072/.minikube/bin
	I0318 04:44:47.038241   17941 out.go:298] Setting JSON to false
	I0318 04:44:47.054990   17941 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":9860,"bootTime":1710752427,"procs":482,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.3.1","kernelVersion":"23.3.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0318 04:44:47.055067   17941 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0318 04:44:47.060399   17941 out.go:177] * [calico-360000] minikube v1.32.0 on Darwin 14.3.1 (arm64)
	I0318 04:44:47.073423   17941 out.go:177]   - MINIKUBE_LOCATION=18429
	I0318 04:44:47.068497   17941 notify.go:220] Checking for updates...
	I0318 04:44:47.087763   17941 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/18429-15072/kubeconfig
	I0318 04:44:47.096983   17941 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0318 04:44:47.104349   17941 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0318 04:44:47.107437   17941 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18429-15072/.minikube
	I0318 04:44:47.110376   17941 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0318 04:44:47.113834   17941 config.go:182] Loaded profile config "multinode-969000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I0318 04:44:47.113906   17941 config.go:182] Loaded profile config "stopped-upgrade-126000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0318 04:44:47.113966   17941 driver.go:392] Setting default libvirt URI to qemu:///system
	I0318 04:44:47.118348   17941 out.go:177] * Using the qemu2 driver based on user configuration
	I0318 04:44:47.125322   17941 start.go:297] selected driver: qemu2
	I0318 04:44:47.125330   17941 start.go:901] validating driver "qemu2" against <nil>
	I0318 04:44:47.125344   17941 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0318 04:44:47.127888   17941 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0318 04:44:47.132416   17941 out.go:177] * Automatically selected the socket_vmnet network
	I0318 04:44:47.135503   17941 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0318 04:44:47.135555   17941 cni.go:84] Creating CNI manager for "calico"
	I0318 04:44:47.135561   17941 start_flags.go:319] Found "Calico" CNI - setting NetworkPlugin=cni
	I0318 04:44:47.135609   17941 start.go:340] cluster config:
	{Name:calico-360000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:calico-360000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:
docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:calico} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_
vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0318 04:44:47.140781   17941 iso.go:125] acquiring lock: {Name:mkb8143674083e0c7a46a3ed751b3800392bcd24 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0318 04:44:47.149324   17941 out.go:177] * Starting "calico-360000" primary control-plane node in "calico-360000" cluster
	I0318 04:44:47.153418   17941 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime docker
	I0318 04:44:47.153438   17941 preload.go:147] Found local preload: /Users/jenkins/minikube-integration/18429-15072/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-docker-overlay2-arm64.tar.lz4
	I0318 04:44:47.153451   17941 cache.go:56] Caching tarball of preloaded images
	I0318 04:44:47.153522   17941 preload.go:173] Found /Users/jenkins/minikube-integration/18429-15072/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0318 04:44:47.153529   17941 cache.go:59] Finished verifying existence of preloaded tar for v1.28.4 on docker
	I0318 04:44:47.153605   17941 profile.go:142] Saving config to /Users/jenkins/minikube-integration/18429-15072/.minikube/profiles/calico-360000/config.json ...
	I0318 04:44:47.153617   17941 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/18429-15072/.minikube/profiles/calico-360000/config.json: {Name:mk653e00234f0c6382ef0d74512edbfcace474c2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0318 04:44:47.153860   17941 start.go:360] acquireMachinesLock for calico-360000: {Name:mkdef65b5c2b3344d8453e477bf2f170fbff3359 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0318 04:44:47.153895   17941 start.go:364] duration metric: took 28.75µs to acquireMachinesLock for "calico-360000"
	I0318 04:44:47.153910   17941 start.go:93] Provisioning new machine with config: &{Name:calico-360000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17991/minikube-v1.32.1-1710520390-17991-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kube
rnetesVersion:v1.28.4 ClusterName:calico-360000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:calico} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions
:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0318 04:44:47.153944   17941 start.go:125] createHost starting for "" (driver="qemu2")
	I0318 04:44:47.158393   17941 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0318 04:44:47.178070   17941 start.go:159] libmachine.API.Create for "calico-360000" (driver="qemu2")
	I0318 04:44:47.178102   17941 client.go:168] LocalClient.Create starting
	I0318 04:44:47.178168   17941 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18429-15072/.minikube/certs/ca.pem
	I0318 04:44:47.178202   17941 main.go:141] libmachine: Decoding PEM data...
	I0318 04:44:47.178212   17941 main.go:141] libmachine: Parsing certificate...
	I0318 04:44:47.178266   17941 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18429-15072/.minikube/certs/cert.pem
	I0318 04:44:47.178291   17941 main.go:141] libmachine: Decoding PEM data...
	I0318 04:44:47.178299   17941 main.go:141] libmachine: Parsing certificate...
	I0318 04:44:47.178802   17941 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/18429-15072/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/18429-15072/.minikube/cache/iso/arm64/minikube-v1.32.1-1710520390-17991-arm64.iso...
	I0318 04:44:47.326800   17941 main.go:141] libmachine: Creating SSH key...
	I0318 04:44:47.448822   17941 main.go:141] libmachine: Creating Disk image...
	I0318 04:44:47.448835   17941 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0318 04:44:47.449029   17941 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/18429-15072/.minikube/machines/calico-360000/disk.qcow2.raw /Users/jenkins/minikube-integration/18429-15072/.minikube/machines/calico-360000/disk.qcow2
	I0318 04:44:47.461368   17941 main.go:141] libmachine: STDOUT: 
	I0318 04:44:47.461391   17941 main.go:141] libmachine: STDERR: 
	I0318 04:44:47.461445   17941 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/18429-15072/.minikube/machines/calico-360000/disk.qcow2 +20000M
	I0318 04:44:47.472395   17941 main.go:141] libmachine: STDOUT: Image resized.
	
	I0318 04:44:47.472413   17941 main.go:141] libmachine: STDERR: 
	I0318 04:44:47.472437   17941 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/18429-15072/.minikube/machines/calico-360000/disk.qcow2.raw and /Users/jenkins/minikube-integration/18429-15072/.minikube/machines/calico-360000/disk.qcow2
	I0318 04:44:47.472443   17941 main.go:141] libmachine: Starting QEMU VM...
	I0318 04:44:47.472471   17941 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18429-15072/.minikube/machines/calico-360000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18429-15072/.minikube/machines/calico-360000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18429-15072/.minikube/machines/calico-360000/qemu.pid -device virtio-net-pci,netdev=net0,mac=ca:d5:ff:d5:d5:f3 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18429-15072/.minikube/machines/calico-360000/disk.qcow2
	I0318 04:44:47.474198   17941 main.go:141] libmachine: STDOUT: 
	I0318 04:44:47.474216   17941 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0318 04:44:47.474237   17941 client.go:171] duration metric: took 296.138834ms to LocalClient.Create
	I0318 04:44:49.476465   17941 start.go:128] duration metric: took 2.322570292s to createHost
	I0318 04:44:49.476576   17941 start.go:83] releasing machines lock for "calico-360000", held for 2.322748s
	W0318 04:44:49.476709   17941 start.go:713] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0318 04:44:49.492834   17941 out.go:177] * Deleting "calico-360000" in qemu2 ...
	W0318 04:44:49.517992   17941 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0318 04:44:49.518020   17941 start.go:728] Will try again in 5 seconds ...
	I0318 04:44:54.519626   17941 start.go:360] acquireMachinesLock for calico-360000: {Name:mkdef65b5c2b3344d8453e477bf2f170fbff3359 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0318 04:44:54.520030   17941 start.go:364] duration metric: took 291.042µs to acquireMachinesLock for "calico-360000"
	I0318 04:44:54.520136   17941 start.go:93] Provisioning new machine with config: &{Name:calico-360000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17991/minikube-v1.32.1-1710520390-17991-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kube
rnetesVersion:v1.28.4 ClusterName:calico-360000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:calico} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions
:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0318 04:44:54.520340   17941 start.go:125] createHost starting for "" (driver="qemu2")
	I0318 04:44:54.529141   17941 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0318 04:44:54.572134   17941 start.go:159] libmachine.API.Create for "calico-360000" (driver="qemu2")
	I0318 04:44:54.572190   17941 client.go:168] LocalClient.Create starting
	I0318 04:44:54.572288   17941 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18429-15072/.minikube/certs/ca.pem
	I0318 04:44:54.572352   17941 main.go:141] libmachine: Decoding PEM data...
	I0318 04:44:54.572370   17941 main.go:141] libmachine: Parsing certificate...
	I0318 04:44:54.572449   17941 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18429-15072/.minikube/certs/cert.pem
	I0318 04:44:54.572499   17941 main.go:141] libmachine: Decoding PEM data...
	I0318 04:44:54.572512   17941 main.go:141] libmachine: Parsing certificate...
	I0318 04:44:54.573077   17941 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/18429-15072/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/18429-15072/.minikube/cache/iso/arm64/minikube-v1.32.1-1710520390-17991-arm64.iso...
	I0318 04:44:54.734835   17941 main.go:141] libmachine: Creating SSH key...
	I0318 04:44:54.926077   17941 main.go:141] libmachine: Creating Disk image...
	I0318 04:44:54.926089   17941 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0318 04:44:54.926287   17941 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/18429-15072/.minikube/machines/calico-360000/disk.qcow2.raw /Users/jenkins/minikube-integration/18429-15072/.minikube/machines/calico-360000/disk.qcow2
	I0318 04:44:54.939105   17941 main.go:141] libmachine: STDOUT: 
	I0318 04:44:54.939127   17941 main.go:141] libmachine: STDERR: 
	I0318 04:44:54.939189   17941 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/18429-15072/.minikube/machines/calico-360000/disk.qcow2 +20000M
	I0318 04:44:54.950191   17941 main.go:141] libmachine: STDOUT: Image resized.
	
	I0318 04:44:54.950207   17941 main.go:141] libmachine: STDERR: 
	I0318 04:44:54.950222   17941 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/18429-15072/.minikube/machines/calico-360000/disk.qcow2.raw and /Users/jenkins/minikube-integration/18429-15072/.minikube/machines/calico-360000/disk.qcow2
	I0318 04:44:54.950228   17941 main.go:141] libmachine: Starting QEMU VM...
	I0318 04:44:54.950274   17941 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18429-15072/.minikube/machines/calico-360000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18429-15072/.minikube/machines/calico-360000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18429-15072/.minikube/machines/calico-360000/qemu.pid -device virtio-net-pci,netdev=net0,mac=92:97:d3:55:07:d0 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18429-15072/.minikube/machines/calico-360000/disk.qcow2
	I0318 04:44:54.952051   17941 main.go:141] libmachine: STDOUT: 
	I0318 04:44:54.952065   17941 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0318 04:44:54.952079   17941 client.go:171] duration metric: took 379.896625ms to LocalClient.Create
	I0318 04:44:56.954224   17941 start.go:128] duration metric: took 2.43392575s to createHost
	I0318 04:44:56.954301   17941 start.go:83] releasing machines lock for "calico-360000", held for 2.434334542s
	W0318 04:44:56.954823   17941 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p calico-360000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p calico-360000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0318 04:44:56.966484   17941 out.go:177] 
	W0318 04:44:56.972711   17941 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0318 04:44:56.972745   17941 out.go:239] * 
	* 
	W0318 04:44:56.975349   17941 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0318 04:44:56.991492   17941 out.go:177] 

** /stderr **
net_test.go:114: failed start: exit status 80
--- FAIL: TestNetworkPlugins/group/calico/Start (10.00s)

TestNetworkPlugins/group/custom-flannel/Start (9.85s)

=== RUN   TestNetworkPlugins/group/custom-flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-darwin-arm64 start -p custom-flannel-360000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=qemu2 
net_test.go:112: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p custom-flannel-360000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=qemu2 : exit status 80 (9.85105925s)

-- stdout --
	* [custom-flannel-360000] minikube v1.32.0 on Darwin 14.3.1 (arm64)
	  - MINIKUBE_LOCATION=18429
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18429-15072/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18429-15072/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "custom-flannel-360000" primary control-plane node in "custom-flannel-360000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "custom-flannel-360000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0318 04:44:59.496920   18059 out.go:291] Setting OutFile to fd 1 ...
	I0318 04:44:59.497072   18059 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0318 04:44:59.497076   18059 out.go:304] Setting ErrFile to fd 2...
	I0318 04:44:59.497078   18059 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0318 04:44:59.497209   18059 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18429-15072/.minikube/bin
	I0318 04:44:59.498268   18059 out.go:298] Setting JSON to false
	I0318 04:44:59.514764   18059 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":9872,"bootTime":1710752427,"procs":481,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.3.1","kernelVersion":"23.3.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0318 04:44:59.514820   18059 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0318 04:44:59.520691   18059 out.go:177] * [custom-flannel-360000] minikube v1.32.0 on Darwin 14.3.1 (arm64)
	I0318 04:44:59.528735   18059 out.go:177]   - MINIKUBE_LOCATION=18429
	I0318 04:44:59.528776   18059 notify.go:220] Checking for updates...
	I0318 04:44:59.532723   18059 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/18429-15072/kubeconfig
	I0318 04:44:59.535847   18059 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0318 04:44:59.538703   18059 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0318 04:44:59.541709   18059 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18429-15072/.minikube
	I0318 04:44:59.544730   18059 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0318 04:44:59.548156   18059 config.go:182] Loaded profile config "multinode-969000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I0318 04:44:59.548220   18059 config.go:182] Loaded profile config "stopped-upgrade-126000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0318 04:44:59.548265   18059 driver.go:392] Setting default libvirt URI to qemu:///system
	I0318 04:44:59.552680   18059 out.go:177] * Using the qemu2 driver based on user configuration
	I0318 04:44:59.559727   18059 start.go:297] selected driver: qemu2
	I0318 04:44:59.559733   18059 start.go:901] validating driver "qemu2" against <nil>
	I0318 04:44:59.559739   18059 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0318 04:44:59.562230   18059 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0318 04:44:59.564666   18059 out.go:177] * Automatically selected the socket_vmnet network
	I0318 04:44:59.567792   18059 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0318 04:44:59.567842   18059 cni.go:84] Creating CNI manager for "testdata/kube-flannel.yaml"
	I0318 04:44:59.567862   18059 start_flags.go:319] Found "testdata/kube-flannel.yaml" CNI - setting NetworkPlugin=cni
	I0318 04:44:59.567908   18059 start.go:340] cluster config:
	{Name:custom-flannel-360000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:custom-flannel-360000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local C
ontainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:testdata/kube-flannel.yaml} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClie
ntPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0318 04:44:59.572496   18059 iso.go:125] acquiring lock: {Name:mkb8143674083e0c7a46a3ed751b3800392bcd24 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0318 04:44:59.579677   18059 out.go:177] * Starting "custom-flannel-360000" primary control-plane node in "custom-flannel-360000" cluster
	I0318 04:44:59.583757   18059 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime docker
	I0318 04:44:59.583778   18059 preload.go:147] Found local preload: /Users/jenkins/minikube-integration/18429-15072/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-docker-overlay2-arm64.tar.lz4
	I0318 04:44:59.583794   18059 cache.go:56] Caching tarball of preloaded images
	I0318 04:44:59.583872   18059 preload.go:173] Found /Users/jenkins/minikube-integration/18429-15072/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0318 04:44:59.583878   18059 cache.go:59] Finished verifying existence of preloaded tar for v1.28.4 on docker
	I0318 04:44:59.583939   18059 profile.go:142] Saving config to /Users/jenkins/minikube-integration/18429-15072/.minikube/profiles/custom-flannel-360000/config.json ...
	I0318 04:44:59.583951   18059 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/18429-15072/.minikube/profiles/custom-flannel-360000/config.json: {Name:mk241e75a40e61b865ff3d15b761877fac0220fe Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0318 04:44:59.584186   18059 start.go:360] acquireMachinesLock for custom-flannel-360000: {Name:mkdef65b5c2b3344d8453e477bf2f170fbff3359 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0318 04:44:59.584222   18059 start.go:364] duration metric: took 27.584µs to acquireMachinesLock for "custom-flannel-360000"
	I0318 04:44:59.584235   18059 start.go:93] Provisioning new machine with config: &{Name:custom-flannel-360000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17991/minikube-v1.32.1-1710520390-17991-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConf
ig:{KubernetesVersion:v1.28.4 ClusterName:custom-flannel-360000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:testdata/kube-flannel.yaml} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker Mou
ntIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0318 04:44:59.584283   18059 start.go:125] createHost starting for "" (driver="qemu2")
	I0318 04:44:59.592723   18059 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0318 04:44:59.610386   18059 start.go:159] libmachine.API.Create for "custom-flannel-360000" (driver="qemu2")
	I0318 04:44:59.610416   18059 client.go:168] LocalClient.Create starting
	I0318 04:44:59.610472   18059 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18429-15072/.minikube/certs/ca.pem
	I0318 04:44:59.610504   18059 main.go:141] libmachine: Decoding PEM data...
	I0318 04:44:59.610512   18059 main.go:141] libmachine: Parsing certificate...
	I0318 04:44:59.610559   18059 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18429-15072/.minikube/certs/cert.pem
	I0318 04:44:59.610581   18059 main.go:141] libmachine: Decoding PEM data...
	I0318 04:44:59.610590   18059 main.go:141] libmachine: Parsing certificate...
	I0318 04:44:59.610989   18059 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/18429-15072/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/18429-15072/.minikube/cache/iso/arm64/minikube-v1.32.1-1710520390-17991-arm64.iso...
	I0318 04:44:59.763684   18059 main.go:141] libmachine: Creating SSH key...
	I0318 04:44:59.823796   18059 main.go:141] libmachine: Creating Disk image...
	I0318 04:44:59.823802   18059 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0318 04:44:59.823988   18059 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/18429-15072/.minikube/machines/custom-flannel-360000/disk.qcow2.raw /Users/jenkins/minikube-integration/18429-15072/.minikube/machines/custom-flannel-360000/disk.qcow2
	I0318 04:44:59.836202   18059 main.go:141] libmachine: STDOUT: 
	I0318 04:44:59.836226   18059 main.go:141] libmachine: STDERR: 
	I0318 04:44:59.836291   18059 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/18429-15072/.minikube/machines/custom-flannel-360000/disk.qcow2 +20000M
	I0318 04:44:59.847655   18059 main.go:141] libmachine: STDOUT: Image resized.
	
	I0318 04:44:59.847672   18059 main.go:141] libmachine: STDERR: 
	I0318 04:44:59.847692   18059 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/18429-15072/.minikube/machines/custom-flannel-360000/disk.qcow2.raw and /Users/jenkins/minikube-integration/18429-15072/.minikube/machines/custom-flannel-360000/disk.qcow2
	I0318 04:44:59.847696   18059 main.go:141] libmachine: Starting QEMU VM...
	I0318 04:44:59.847721   18059 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18429-15072/.minikube/machines/custom-flannel-360000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18429-15072/.minikube/machines/custom-flannel-360000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18429-15072/.minikube/machines/custom-flannel-360000/qemu.pid -device virtio-net-pci,netdev=net0,mac=8e:62:af:6c:1c:9a -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18429-15072/.minikube/machines/custom-flannel-360000/disk.qcow2
	I0318 04:44:59.849569   18059 main.go:141] libmachine: STDOUT: 
	I0318 04:44:59.849590   18059 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0318 04:44:59.849610   18059 client.go:171] duration metric: took 239.196333ms to LocalClient.Create
	I0318 04:45:01.851792   18059 start.go:128] duration metric: took 2.26754475s to createHost
	I0318 04:45:01.851861   18059 start.go:83] releasing machines lock for "custom-flannel-360000", held for 2.267707166s
	W0318 04:45:01.851922   18059 start.go:713] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0318 04:45:01.863537   18059 out.go:177] * Deleting "custom-flannel-360000" in qemu2 ...
	W0318 04:45:01.884391   18059 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0318 04:45:01.884415   18059 start.go:728] Will try again in 5 seconds ...
	I0318 04:45:06.886399   18059 start.go:360] acquireMachinesLock for custom-flannel-360000: {Name:mkdef65b5c2b3344d8453e477bf2f170fbff3359 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0318 04:45:06.886845   18059 start.go:364] duration metric: took 362.084µs to acquireMachinesLock for "custom-flannel-360000"
	I0318 04:45:06.886906   18059 start.go:93] Provisioning new machine with config: &{Name:custom-flannel-360000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17991/minikube-v1.32.1-1710520390-17991-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConf
ig:{KubernetesVersion:v1.28.4 ClusterName:custom-flannel-360000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:testdata/kube-flannel.yaml} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker Mou
ntIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0318 04:45:06.887193   18059 start.go:125] createHost starting for "" (driver="qemu2")
	I0318 04:45:06.896790   18059 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0318 04:45:06.938424   18059 start.go:159] libmachine.API.Create for "custom-flannel-360000" (driver="qemu2")
	I0318 04:45:06.938482   18059 client.go:168] LocalClient.Create starting
	I0318 04:45:06.938585   18059 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18429-15072/.minikube/certs/ca.pem
	I0318 04:45:06.938637   18059 main.go:141] libmachine: Decoding PEM data...
	I0318 04:45:06.938650   18059 main.go:141] libmachine: Parsing certificate...
	I0318 04:45:06.938728   18059 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18429-15072/.minikube/certs/cert.pem
	I0318 04:45:06.938766   18059 main.go:141] libmachine: Decoding PEM data...
	I0318 04:45:06.938779   18059 main.go:141] libmachine: Parsing certificate...
	I0318 04:45:06.939251   18059 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/18429-15072/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/18429-15072/.minikube/cache/iso/arm64/minikube-v1.32.1-1710520390-17991-arm64.iso...
	I0318 04:45:07.088725   18059 main.go:141] libmachine: Creating SSH key...
	I0318 04:45:07.247503   18059 main.go:141] libmachine: Creating Disk image...
	I0318 04:45:07.247517   18059 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0318 04:45:07.247737   18059 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/18429-15072/.minikube/machines/custom-flannel-360000/disk.qcow2.raw /Users/jenkins/minikube-integration/18429-15072/.minikube/machines/custom-flannel-360000/disk.qcow2
	I0318 04:45:07.260620   18059 main.go:141] libmachine: STDOUT: 
	I0318 04:45:07.260645   18059 main.go:141] libmachine: STDERR: 
	I0318 04:45:07.260698   18059 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/18429-15072/.minikube/machines/custom-flannel-360000/disk.qcow2 +20000M
	I0318 04:45:07.271401   18059 main.go:141] libmachine: STDOUT: Image resized.
	
	I0318 04:45:07.271423   18059 main.go:141] libmachine: STDERR: 
	I0318 04:45:07.271436   18059 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/18429-15072/.minikube/machines/custom-flannel-360000/disk.qcow2.raw and /Users/jenkins/minikube-integration/18429-15072/.minikube/machines/custom-flannel-360000/disk.qcow2
	I0318 04:45:07.271440   18059 main.go:141] libmachine: Starting QEMU VM...
	I0318 04:45:07.271477   18059 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18429-15072/.minikube/machines/custom-flannel-360000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18429-15072/.minikube/machines/custom-flannel-360000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18429-15072/.minikube/machines/custom-flannel-360000/qemu.pid -device virtio-net-pci,netdev=net0,mac=b2:25:95:fb:0a:97 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18429-15072/.minikube/machines/custom-flannel-360000/disk.qcow2
	I0318 04:45:07.273287   18059 main.go:141] libmachine: STDOUT: 
	I0318 04:45:07.273326   18059 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0318 04:45:07.273340   18059 client.go:171] duration metric: took 334.863916ms to LocalClient.Create
	I0318 04:45:09.275490   18059 start.go:128] duration metric: took 2.388339625s to createHost
	I0318 04:45:09.275582   18059 start.go:83] releasing machines lock for "custom-flannel-360000", held for 2.388796583s
	W0318 04:45:09.275958   18059 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p custom-flannel-360000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p custom-flannel-360000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0318 04:45:09.285451   18059 out.go:177] 
	W0318 04:45:09.291799   18059 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0318 04:45:09.291838   18059 out.go:239] * 
	* 
	W0318 04:45:09.294260   18059 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0318 04:45:09.307644   18059 out.go:177] 

** /stderr **
net_test.go:114: failed start: exit status 80
--- FAIL: TestNetworkPlugins/group/custom-flannel/Start (9.85s)
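Note: every start attempt in this group fails the same way. socket_vmnet_client cannot reach the socket_vmnet daemon at /var/run/socket_vmnet (the SocketVMnetPath in the logged cluster config), so QEMU is never launched and minikube exits with GUEST_PROVISION. A minimal Go sketch of that probe follows; it is not part of net_test.go, just an illustration of the condition the tests hit.

// probe_socket_vmnet.go — a minimal sketch, not part of the minikube test suite.
// It dials the unix socket that socket_vmnet_client needs; "connection refused"
// here is the same condition that aborts every VM start in this group.
package main

import (
	"fmt"
	"net"
	"time"
)

func main() {
	const path = "/var/run/socket_vmnet" // SocketVMnetPath from the logged cluster config
	conn, err := net.DialTimeout("unix", path, 2*time.Second)
	if err != nil {
		fmt.Println("socket_vmnet not reachable:", err) // e.g. "connect: connection refused"
		return
	}
	conn.Close()
	fmt.Println("socket_vmnet is accepting connections")
}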

TestNetworkPlugins/group/false/Start (9.81s)

=== RUN   TestNetworkPlugins/group/false/Start
net_test.go:112: (dbg) Run:  out/minikube-darwin-arm64 start -p false-360000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=false --driver=qemu2 
net_test.go:112: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p false-360000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=false --driver=qemu2 : exit status 80 (9.812046166s)

-- stdout --
	* [false-360000] minikube v1.32.0 on Darwin 14.3.1 (arm64)
	  - MINIKUBE_LOCATION=18429
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18429-15072/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18429-15072/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "false-360000" primary control-plane node in "false-360000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "false-360000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0318 04:45:11.799455   18179 out.go:291] Setting OutFile to fd 1 ...
	I0318 04:45:11.799580   18179 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0318 04:45:11.799583   18179 out.go:304] Setting ErrFile to fd 2...
	I0318 04:45:11.799585   18179 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0318 04:45:11.799725   18179 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18429-15072/.minikube/bin
	I0318 04:45:11.800996   18179 out.go:298] Setting JSON to false
	I0318 04:45:11.819750   18179 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":9884,"bootTime":1710752427,"procs":482,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.3.1","kernelVersion":"23.3.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0318 04:45:11.819831   18179 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0318 04:45:11.824788   18179 out.go:177] * [false-360000] minikube v1.32.0 on Darwin 14.3.1 (arm64)
	I0318 04:45:11.832823   18179 out.go:177]   - MINIKUBE_LOCATION=18429
	I0318 04:45:11.836722   18179 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/18429-15072/kubeconfig
	I0318 04:45:11.832841   18179 notify.go:220] Checking for updates...
	I0318 04:45:11.842764   18179 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0318 04:45:11.845775   18179 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0318 04:45:11.848825   18179 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18429-15072/.minikube
	I0318 04:45:11.851785   18179 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0318 04:45:11.855128   18179 config.go:182] Loaded profile config "multinode-969000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I0318 04:45:11.855193   18179 config.go:182] Loaded profile config "stopped-upgrade-126000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0318 04:45:11.855247   18179 driver.go:392] Setting default libvirt URI to qemu:///system
	I0318 04:45:11.859760   18179 out.go:177] * Using the qemu2 driver based on user configuration
	I0318 04:45:11.866780   18179 start.go:297] selected driver: qemu2
	I0318 04:45:11.866786   18179 start.go:901] validating driver "qemu2" against <nil>
	I0318 04:45:11.866799   18179 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0318 04:45:11.869160   18179 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0318 04:45:11.871758   18179 out.go:177] * Automatically selected the socket_vmnet network
	I0318 04:45:11.874932   18179 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0318 04:45:11.874970   18179 cni.go:84] Creating CNI manager for "false"
	I0318 04:45:11.875017   18179 start.go:340] cluster config:
	{Name:false-360000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:false-360000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:do
cker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:false} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_
client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0318 04:45:11.879941   18179 iso.go:125] acquiring lock: {Name:mkb8143674083e0c7a46a3ed751b3800392bcd24 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0318 04:45:11.888763   18179 out.go:177] * Starting "false-360000" primary control-plane node in "false-360000" cluster
	I0318 04:45:11.891803   18179 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime docker
	I0318 04:45:11.891833   18179 preload.go:147] Found local preload: /Users/jenkins/minikube-integration/18429-15072/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-docker-overlay2-arm64.tar.lz4
	I0318 04:45:11.891841   18179 cache.go:56] Caching tarball of preloaded images
	I0318 04:45:11.891916   18179 preload.go:173] Found /Users/jenkins/minikube-integration/18429-15072/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0318 04:45:11.891922   18179 cache.go:59] Finished verifying existence of preloaded tar for v1.28.4 on docker
	I0318 04:45:11.891991   18179 profile.go:142] Saving config to /Users/jenkins/minikube-integration/18429-15072/.minikube/profiles/false-360000/config.json ...
	I0318 04:45:11.892003   18179 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/18429-15072/.minikube/profiles/false-360000/config.json: {Name:mke7aee6a2446fe98d58b48dfe03fe595d8ee033 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0318 04:45:11.892297   18179 start.go:360] acquireMachinesLock for false-360000: {Name:mkdef65b5c2b3344d8453e477bf2f170fbff3359 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0318 04:45:11.892326   18179 start.go:364] duration metric: took 24.125µs to acquireMachinesLock for "false-360000"
	I0318 04:45:11.892339   18179 start.go:93] Provisioning new machine with config: &{Name:false-360000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17991/minikube-v1.32.1-1710520390-17991-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kuber
netesVersion:v1.28.4 ClusterName:false-360000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:false} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] Mo
untPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0318 04:45:11.892375   18179 start.go:125] createHost starting for "" (driver="qemu2")
	I0318 04:45:11.896830   18179 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0318 04:45:11.912599   18179 start.go:159] libmachine.API.Create for "false-360000" (driver="qemu2")
	I0318 04:45:11.912623   18179 client.go:168] LocalClient.Create starting
	I0318 04:45:11.912698   18179 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18429-15072/.minikube/certs/ca.pem
	I0318 04:45:11.912728   18179 main.go:141] libmachine: Decoding PEM data...
	I0318 04:45:11.912739   18179 main.go:141] libmachine: Parsing certificate...
	I0318 04:45:11.912809   18179 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18429-15072/.minikube/certs/cert.pem
	I0318 04:45:11.912831   18179 main.go:141] libmachine: Decoding PEM data...
	I0318 04:45:11.912841   18179 main.go:141] libmachine: Parsing certificate...
	I0318 04:45:11.913229   18179 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/18429-15072/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/18429-15072/.minikube/cache/iso/arm64/minikube-v1.32.1-1710520390-17991-arm64.iso...
	I0318 04:45:12.052383   18179 main.go:141] libmachine: Creating SSH key...
	I0318 04:45:12.109549   18179 main.go:141] libmachine: Creating Disk image...
	I0318 04:45:12.109560   18179 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0318 04:45:12.109733   18179 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/18429-15072/.minikube/machines/false-360000/disk.qcow2.raw /Users/jenkins/minikube-integration/18429-15072/.minikube/machines/false-360000/disk.qcow2
	I0318 04:45:12.122508   18179 main.go:141] libmachine: STDOUT: 
	I0318 04:45:12.122539   18179 main.go:141] libmachine: STDERR: 
	I0318 04:45:12.122597   18179 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/18429-15072/.minikube/machines/false-360000/disk.qcow2 +20000M
	I0318 04:45:12.133344   18179 main.go:141] libmachine: STDOUT: Image resized.
	
	I0318 04:45:12.133372   18179 main.go:141] libmachine: STDERR: 
	I0318 04:45:12.133385   18179 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/18429-15072/.minikube/machines/false-360000/disk.qcow2.raw and /Users/jenkins/minikube-integration/18429-15072/.minikube/machines/false-360000/disk.qcow2
	I0318 04:45:12.133390   18179 main.go:141] libmachine: Starting QEMU VM...
	I0318 04:45:12.133421   18179 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18429-15072/.minikube/machines/false-360000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18429-15072/.minikube/machines/false-360000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18429-15072/.minikube/machines/false-360000/qemu.pid -device virtio-net-pci,netdev=net0,mac=f2:05:a3:4f:d4:14 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18429-15072/.minikube/machines/false-360000/disk.qcow2
	I0318 04:45:12.135126   18179 main.go:141] libmachine: STDOUT: 
	I0318 04:45:12.135142   18179 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0318 04:45:12.135163   18179 client.go:171] duration metric: took 222.54375ms to LocalClient.Create
	I0318 04:45:14.137417   18179 start.go:128] duration metric: took 2.245085709s to createHost
	I0318 04:45:14.137522   18179 start.go:83] releasing machines lock for "false-360000", held for 2.245262333s
	W0318 04:45:14.137583   18179 start.go:713] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0318 04:45:14.147356   18179 out.go:177] * Deleting "false-360000" in qemu2 ...
	W0318 04:45:14.176156   18179 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0318 04:45:14.176193   18179 start.go:728] Will try again in 5 seconds ...
	I0318 04:45:19.178284   18179 start.go:360] acquireMachinesLock for false-360000: {Name:mkdef65b5c2b3344d8453e477bf2f170fbff3359 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0318 04:45:19.178795   18179 start.go:364] duration metric: took 386.5µs to acquireMachinesLock for "false-360000"
	I0318 04:45:19.178954   18179 start.go:93] Provisioning new machine with config: &{Name:false-360000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17991/minikube-v1.32.1-1710520390-17991-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kuber
netesVersion:v1.28.4 ClusterName:false-360000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:false} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] Mo
untPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0318 04:45:19.179188   18179 start.go:125] createHost starting for "" (driver="qemu2")
	I0318 04:45:19.184863   18179 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0318 04:45:19.233205   18179 start.go:159] libmachine.API.Create for "false-360000" (driver="qemu2")
	I0318 04:45:19.233255   18179 client.go:168] LocalClient.Create starting
	I0318 04:45:19.233373   18179 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18429-15072/.minikube/certs/ca.pem
	I0318 04:45:19.233440   18179 main.go:141] libmachine: Decoding PEM data...
	I0318 04:45:19.233453   18179 main.go:141] libmachine: Parsing certificate...
	I0318 04:45:19.233507   18179 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18429-15072/.minikube/certs/cert.pem
	I0318 04:45:19.233548   18179 main.go:141] libmachine: Decoding PEM data...
	I0318 04:45:19.233558   18179 main.go:141] libmachine: Parsing certificate...
	I0318 04:45:19.234141   18179 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/18429-15072/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/18429-15072/.minikube/cache/iso/arm64/minikube-v1.32.1-1710520390-17991-arm64.iso...
	I0318 04:45:19.381792   18179 main.go:141] libmachine: Creating SSH key...
	I0318 04:45:19.504349   18179 main.go:141] libmachine: Creating Disk image...
	I0318 04:45:19.504368   18179 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0318 04:45:19.504590   18179 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/18429-15072/.minikube/machines/false-360000/disk.qcow2.raw /Users/jenkins/minikube-integration/18429-15072/.minikube/machines/false-360000/disk.qcow2
	I0318 04:45:19.518808   18179 main.go:141] libmachine: STDOUT: 
	I0318 04:45:19.518827   18179 main.go:141] libmachine: STDERR: 
	I0318 04:45:19.518910   18179 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/18429-15072/.minikube/machines/false-360000/disk.qcow2 +20000M
	I0318 04:45:19.531687   18179 main.go:141] libmachine: STDOUT: Image resized.
	
	I0318 04:45:19.531719   18179 main.go:141] libmachine: STDERR: 
	I0318 04:45:19.531738   18179 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/18429-15072/.minikube/machines/false-360000/disk.qcow2.raw and /Users/jenkins/minikube-integration/18429-15072/.minikube/machines/false-360000/disk.qcow2
	I0318 04:45:19.531743   18179 main.go:141] libmachine: Starting QEMU VM...
	I0318 04:45:19.531773   18179 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18429-15072/.minikube/machines/false-360000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18429-15072/.minikube/machines/false-360000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18429-15072/.minikube/machines/false-360000/qemu.pid -device virtio-net-pci,netdev=net0,mac=4a:51:6c:f9:49:b3 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18429-15072/.minikube/machines/false-360000/disk.qcow2
	I0318 04:45:19.533928   18179 main.go:141] libmachine: STDOUT: 
	I0318 04:45:19.533949   18179 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0318 04:45:19.533963   18179 client.go:171] duration metric: took 300.71175ms to LocalClient.Create
	I0318 04:45:21.536082   18179 start.go:128] duration metric: took 2.356910916s to createHost
	I0318 04:45:21.536173   18179 start.go:83] releasing machines lock for "false-360000", held for 2.357434875s
	W0318 04:45:21.536439   18179 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p false-360000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p false-360000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0318 04:45:21.545054   18179 out.go:177] 
	W0318 04:45:21.551085   18179 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0318 04:45:21.551125   18179 out.go:239] * 
	* 
	W0318 04:45:21.553393   18179 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0318 04:45:21.563027   18179 out.go:177] 

** /stderr **
net_test.go:114: failed start: exit status 80
--- FAIL: TestNetworkPlugins/group/false/Start (9.81s)
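Note: in both attempts of this run the disk-image preparation succeeds — qemu-img convert and qemu-img resize return cleanly — and only the socket_vmnet-backed launch fails. A rough Go sketch of those two steps is below; the paths are placeholders, not libmachine's implementation (the real files live under .minikube/machines/<profile>/).

// disk_prep.go — a rough sketch of the two qemu-img calls the log shows succeeding.
package main

import (
	"log"
	"os/exec"
)

func main() {
	raw := "/tmp/profile/disk.qcow2.raw" // placeholder path
	img := "/tmp/profile/disk.qcow2"     // placeholder path

	// "qemu-img convert -f raw -O qcow2 <raw> <qcow2>"
	if out, err := exec.Command("qemu-img", "convert", "-f", "raw", "-O", "qcow2", raw, img).CombinedOutput(); err != nil {
		log.Fatalf("convert failed: %v\n%s", err, out)
	}
	// "qemu-img resize <qcow2> +20000M" — grow the image to the configured 20000 MB disk size
	if out, err := exec.Command("qemu-img", "resize", img, "+20000M").CombinedOutput(); err != nil {
		log.Fatalf("resize failed: %v\n%s", err, out)
	}
	log.Println("disk image prepared")
}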

TestNetworkPlugins/group/enable-default-cni/Start (9.83s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/Start
net_test.go:112: (dbg) Run:  out/minikube-darwin-arm64 start -p enable-default-cni-360000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=qemu2 
net_test.go:112: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p enable-default-cni-360000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=qemu2 : exit status 80 (9.824586667s)
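Note: this run exercises the deprecated --enable-default-cni flag; as the stderr log below records, minikube maps it to --cni=bridge before building the cluster config (CNI:bridge, NetworkPlugin:cni). A hypothetical Go illustration of that mapping follows — resolveCNI is invented for this sketch and is not minikube's start_flags code.

// cni_flag.go — a hypothetical illustration of folding the deprecated flag into a CNI choice.
package main

import "fmt"

func resolveCNI(enableDefaultCNI bool, cni string) string {
	if enableDefaultCNI && cni == "" {
		return "bridge" // deprecated --enable-default-cni becomes the bridge CNI
	}
	return cni
}

func main() {
	fmt.Println(resolveCNI(true, ""))        // "bridge", as in this test
	fmt.Println(resolveCNI(false, "flannel")) // an explicit --cni value wins
}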

-- stdout --
	* [enable-default-cni-360000] minikube v1.32.0 on Darwin 14.3.1 (arm64)
	  - MINIKUBE_LOCATION=18429
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18429-15072/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18429-15072/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "enable-default-cni-360000" primary control-plane node in "enable-default-cni-360000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "enable-default-cni-360000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0318 04:45:23.860579   18289 out.go:291] Setting OutFile to fd 1 ...
	I0318 04:45:23.860713   18289 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0318 04:45:23.860716   18289 out.go:304] Setting ErrFile to fd 2...
	I0318 04:45:23.860719   18289 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0318 04:45:23.860857   18289 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18429-15072/.minikube/bin
	I0318 04:45:23.861947   18289 out.go:298] Setting JSON to false
	I0318 04:45:23.878282   18289 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":9896,"bootTime":1710752427,"procs":482,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.3.1","kernelVersion":"23.3.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0318 04:45:23.878349   18289 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0318 04:45:23.884146   18289 out.go:177] * [enable-default-cni-360000] minikube v1.32.0 on Darwin 14.3.1 (arm64)
	I0318 04:45:23.891203   18289 out.go:177]   - MINIKUBE_LOCATION=18429
	I0318 04:45:23.895167   18289 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/18429-15072/kubeconfig
	I0318 04:45:23.891235   18289 notify.go:220] Checking for updates...
	I0318 04:45:23.901119   18289 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0318 04:45:23.905202   18289 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0318 04:45:23.908207   18289 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18429-15072/.minikube
	I0318 04:45:23.911209   18289 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0318 04:45:23.914583   18289 config.go:182] Loaded profile config "multinode-969000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I0318 04:45:23.914652   18289 config.go:182] Loaded profile config "stopped-upgrade-126000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0318 04:45:23.914696   18289 driver.go:392] Setting default libvirt URI to qemu:///system
	I0318 04:45:23.919183   18289 out.go:177] * Using the qemu2 driver based on user configuration
	I0318 04:45:23.926238   18289 start.go:297] selected driver: qemu2
	I0318 04:45:23.926244   18289 start.go:901] validating driver "qemu2" against <nil>
	I0318 04:45:23.926256   18289 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0318 04:45:23.928613   18289 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0318 04:45:23.932182   18289 out.go:177] * Automatically selected the socket_vmnet network
	E0318 04:45:23.935277   18289 start_flags.go:464] Found deprecated --enable-default-cni flag, setting --cni=bridge
	I0318 04:45:23.935292   18289 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0318 04:45:23.935341   18289 cni.go:84] Creating CNI manager for "bridge"
	I0318 04:45:23.935345   18289 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0318 04:45:23.935377   18289 start.go:340] cluster config:
	{Name:enable-default-cni-360000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:enable-default-cni-360000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster
.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/
socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0318 04:45:23.939713   18289 iso.go:125] acquiring lock: {Name:mkb8143674083e0c7a46a3ed751b3800392bcd24 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0318 04:45:23.947143   18289 out.go:177] * Starting "enable-default-cni-360000" primary control-plane node in "enable-default-cni-360000" cluster
	I0318 04:45:23.950173   18289 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime docker
	I0318 04:45:23.950190   18289 preload.go:147] Found local preload: /Users/jenkins/minikube-integration/18429-15072/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-docker-overlay2-arm64.tar.lz4
	I0318 04:45:23.950202   18289 cache.go:56] Caching tarball of preloaded images
	I0318 04:45:23.950281   18289 preload.go:173] Found /Users/jenkins/minikube-integration/18429-15072/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0318 04:45:23.950287   18289 cache.go:59] Finished verifying existence of preloaded tar for v1.28.4 on docker
	I0318 04:45:23.950346   18289 profile.go:142] Saving config to /Users/jenkins/minikube-integration/18429-15072/.minikube/profiles/enable-default-cni-360000/config.json ...
	I0318 04:45:23.950362   18289 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/18429-15072/.minikube/profiles/enable-default-cni-360000/config.json: {Name:mk1c195a7c22b0d49601fd9f9958655476e83961 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0318 04:45:23.950568   18289 start.go:360] acquireMachinesLock for enable-default-cni-360000: {Name:mkdef65b5c2b3344d8453e477bf2f170fbff3359 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0318 04:45:23.950603   18289 start.go:364] duration metric: took 25.333µs to acquireMachinesLock for "enable-default-cni-360000"
	I0318 04:45:23.950617   18289 start.go:93] Provisioning new machine with config: &{Name:enable-default-cni-360000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17991/minikube-v1.32.1-1710520390-17991-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 Kubernetes
Config:{KubernetesVersion:v1.28.4 ClusterName:enable-default-cni-360000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountM
Size:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0318 04:45:23.950647   18289 start.go:125] createHost starting for "" (driver="qemu2")
	I0318 04:45:23.959141   18289 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0318 04:45:23.975911   18289 start.go:159] libmachine.API.Create for "enable-default-cni-360000" (driver="qemu2")
	I0318 04:45:23.975941   18289 client.go:168] LocalClient.Create starting
	I0318 04:45:23.976003   18289 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18429-15072/.minikube/certs/ca.pem
	I0318 04:45:23.976032   18289 main.go:141] libmachine: Decoding PEM data...
	I0318 04:45:23.976041   18289 main.go:141] libmachine: Parsing certificate...
	I0318 04:45:23.976085   18289 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18429-15072/.minikube/certs/cert.pem
	I0318 04:45:23.976106   18289 main.go:141] libmachine: Decoding PEM data...
	I0318 04:45:23.976114   18289 main.go:141] libmachine: Parsing certificate...
	I0318 04:45:23.976536   18289 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/18429-15072/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/18429-15072/.minikube/cache/iso/arm64/minikube-v1.32.1-1710520390-17991-arm64.iso...
	I0318 04:45:24.116205   18289 main.go:141] libmachine: Creating SSH key...
	I0318 04:45:24.236950   18289 main.go:141] libmachine: Creating Disk image...
	I0318 04:45:24.236957   18289 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0318 04:45:24.237148   18289 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/18429-15072/.minikube/machines/enable-default-cni-360000/disk.qcow2.raw /Users/jenkins/minikube-integration/18429-15072/.minikube/machines/enable-default-cni-360000/disk.qcow2
	I0318 04:45:24.249685   18289 main.go:141] libmachine: STDOUT: 
	I0318 04:45:24.249704   18289 main.go:141] libmachine: STDERR: 
	I0318 04:45:24.249759   18289 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/18429-15072/.minikube/machines/enable-default-cni-360000/disk.qcow2 +20000M
	I0318 04:45:24.260781   18289 main.go:141] libmachine: STDOUT: Image resized.
	
	I0318 04:45:24.260800   18289 main.go:141] libmachine: STDERR: 
	I0318 04:45:24.260817   18289 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/18429-15072/.minikube/machines/enable-default-cni-360000/disk.qcow2.raw and /Users/jenkins/minikube-integration/18429-15072/.minikube/machines/enable-default-cni-360000/disk.qcow2
	I0318 04:45:24.260822   18289 main.go:141] libmachine: Starting QEMU VM...
	I0318 04:45:24.260856   18289 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18429-15072/.minikube/machines/enable-default-cni-360000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18429-15072/.minikube/machines/enable-default-cni-360000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18429-15072/.minikube/machines/enable-default-cni-360000/qemu.pid -device virtio-net-pci,netdev=net0,mac=6a:d7:5a:99:84:60 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18429-15072/.minikube/machines/enable-default-cni-360000/disk.qcow2
	I0318 04:45:24.262657   18289 main.go:141] libmachine: STDOUT: 
	I0318 04:45:24.262670   18289 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0318 04:45:24.262691   18289 client.go:171] duration metric: took 286.753791ms to LocalClient.Create
	I0318 04:45:26.264747   18289 start.go:128] duration metric: took 2.314160917s to createHost
	I0318 04:45:26.264822   18289 start.go:83] releasing machines lock for "enable-default-cni-360000", held for 2.314279333s
	W0318 04:45:26.264863   18289 start.go:713] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0318 04:45:26.274026   18289 out.go:177] * Deleting "enable-default-cni-360000" in qemu2 ...
	W0318 04:45:26.295639   18289 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0318 04:45:26.295654   18289 start.go:728] Will try again in 5 seconds ...
	I0318 04:45:31.297718   18289 start.go:360] acquireMachinesLock for enable-default-cni-360000: {Name:mkdef65b5c2b3344d8453e477bf2f170fbff3359 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0318 04:45:31.298188   18289 start.go:364] duration metric: took 339.417µs to acquireMachinesLock for "enable-default-cni-360000"
	I0318 04:45:31.298362   18289 start.go:93] Provisioning new machine with config: &{Name:enable-default-cni-360000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17991/minikube-v1.32.1-1710520390-17991-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 Kubernetes
Config:{KubernetesVersion:v1.28.4 ClusterName:enable-default-cni-360000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountM
Size:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0318 04:45:31.298664   18289 start.go:125] createHost starting for "" (driver="qemu2")
	I0318 04:45:31.304361   18289 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0318 04:45:31.351006   18289 start.go:159] libmachine.API.Create for "enable-default-cni-360000" (driver="qemu2")
	I0318 04:45:31.351057   18289 client.go:168] LocalClient.Create starting
	I0318 04:45:31.351175   18289 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18429-15072/.minikube/certs/ca.pem
	I0318 04:45:31.351248   18289 main.go:141] libmachine: Decoding PEM data...
	I0318 04:45:31.351271   18289 main.go:141] libmachine: Parsing certificate...
	I0318 04:45:31.351335   18289 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18429-15072/.minikube/certs/cert.pem
	I0318 04:45:31.351377   18289 main.go:141] libmachine: Decoding PEM data...
	I0318 04:45:31.351395   18289 main.go:141] libmachine: Parsing certificate...
	I0318 04:45:31.351917   18289 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/18429-15072/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/18429-15072/.minikube/cache/iso/arm64/minikube-v1.32.1-1710520390-17991-arm64.iso...
	I0318 04:45:31.501335   18289 main.go:141] libmachine: Creating SSH key...
	I0318 04:45:31.586607   18289 main.go:141] libmachine: Creating Disk image...
	I0318 04:45:31.586616   18289 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0318 04:45:31.586812   18289 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/18429-15072/.minikube/machines/enable-default-cni-360000/disk.qcow2.raw /Users/jenkins/minikube-integration/18429-15072/.minikube/machines/enable-default-cni-360000/disk.qcow2
	I0318 04:45:31.599387   18289 main.go:141] libmachine: STDOUT: 
	I0318 04:45:31.599409   18289 main.go:141] libmachine: STDERR: 
	I0318 04:45:31.599486   18289 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/18429-15072/.minikube/machines/enable-default-cni-360000/disk.qcow2 +20000M
	I0318 04:45:31.610459   18289 main.go:141] libmachine: STDOUT: Image resized.
	
	I0318 04:45:31.610487   18289 main.go:141] libmachine: STDERR: 
	I0318 04:45:31.610498   18289 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/18429-15072/.minikube/machines/enable-default-cni-360000/disk.qcow2.raw and /Users/jenkins/minikube-integration/18429-15072/.minikube/machines/enable-default-cni-360000/disk.qcow2
	I0318 04:45:31.610505   18289 main.go:141] libmachine: Starting QEMU VM...
	I0318 04:45:31.610536   18289 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18429-15072/.minikube/machines/enable-default-cni-360000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18429-15072/.minikube/machines/enable-default-cni-360000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18429-15072/.minikube/machines/enable-default-cni-360000/qemu.pid -device virtio-net-pci,netdev=net0,mac=26:3e:16:78:5f:3e -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18429-15072/.minikube/machines/enable-default-cni-360000/disk.qcow2
	I0318 04:45:31.612354   18289 main.go:141] libmachine: STDOUT: 
	I0318 04:45:31.612370   18289 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0318 04:45:31.612384   18289 client.go:171] duration metric: took 261.328458ms to LocalClient.Create
	I0318 04:45:33.614535   18289 start.go:128] duration metric: took 2.315878708s to createHost
	I0318 04:45:33.614655   18289 start.go:83] releasing machines lock for "enable-default-cni-360000", held for 2.316464541s
	W0318 04:45:33.615033   18289 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p enable-default-cni-360000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p enable-default-cni-360000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0318 04:45:33.623752   18289 out.go:177] 
	W0318 04:45:33.629696   18289 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0318 04:45:33.629727   18289 out.go:239] * 
	* 
	W0318 04:45:33.632418   18289 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0318 04:45:33.641621   18289 out.go:177] 

                                                
                                                
** /stderr **
net_test.go:114: failed start: exit status 80
--- FAIL: TestNetworkPlugins/group/enable-default-cni/Start (9.83s)
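Note: each start in this group fails at the same step: /opt/socket_vmnet/bin/socket_vmnet_client cannot reach /var/run/socket_vmnet ("Connection refused"), so QEMU is never launched and minikube exits with GUEST_PROVISION. A minimal check of the daemon on the test host might look like the sketch below; the launchd label and the Homebrew service name are assumptions inferred from the client path in the logs, not something the report confirms:

	ls -l /var/run/socket_vmnet              # present and of type socket ('s') only while the daemon is up
	sudo launchctl list | grep socket_vmnet  # assumed launchd label; adjust to the local install
	sudo brew services restart socket_vmnet  # if socket_vmnet was installed via Homebrew (assumption)

If the socket is missing or the service is not loaded, restarting it (or re-running the socket_vmnet setup from the minikube qemu2 driver docs) should allow these network-plugin starts to proceed.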

                                                
                                    
TestNetworkPlugins/group/flannel/Start (9.97s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-darwin-arm64 start -p flannel-360000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=qemu2 
net_test.go:112: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p flannel-360000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=qemu2 : exit status 80 (9.972278834s)

                                                
                                                
-- stdout --
	* [flannel-360000] minikube v1.32.0 on Darwin 14.3.1 (arm64)
	  - MINIKUBE_LOCATION=18429
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18429-15072/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18429-15072/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "flannel-360000" primary control-plane node in "flannel-360000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "flannel-360000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0318 04:45:35.956611   18399 out.go:291] Setting OutFile to fd 1 ...
	I0318 04:45:35.956732   18399 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0318 04:45:35.956736   18399 out.go:304] Setting ErrFile to fd 2...
	I0318 04:45:35.956738   18399 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0318 04:45:35.956866   18399 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18429-15072/.minikube/bin
	I0318 04:45:35.957974   18399 out.go:298] Setting JSON to false
	I0318 04:45:35.974080   18399 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":9908,"bootTime":1710752427,"procs":481,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.3.1","kernelVersion":"23.3.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0318 04:45:35.974154   18399 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0318 04:45:35.979295   18399 out.go:177] * [flannel-360000] minikube v1.32.0 on Darwin 14.3.1 (arm64)
	I0318 04:45:35.991178   18399 out.go:177]   - MINIKUBE_LOCATION=18429
	I0318 04:45:35.987223   18399 notify.go:220] Checking for updates...
	I0318 04:45:35.997195   18399 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/18429-15072/kubeconfig
	I0318 04:45:36.001286   18399 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0318 04:45:36.005213   18399 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0318 04:45:36.008230   18399 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18429-15072/.minikube
	I0318 04:45:36.015107   18399 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0318 04:45:36.019563   18399 config.go:182] Loaded profile config "multinode-969000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I0318 04:45:36.019637   18399 config.go:182] Loaded profile config "stopped-upgrade-126000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0318 04:45:36.019683   18399 driver.go:392] Setting default libvirt URI to qemu:///system
	I0318 04:45:36.023239   18399 out.go:177] * Using the qemu2 driver based on user configuration
	I0318 04:45:36.030242   18399 start.go:297] selected driver: qemu2
	I0318 04:45:36.030248   18399 start.go:901] validating driver "qemu2" against <nil>
	I0318 04:45:36.030254   18399 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0318 04:45:36.032654   18399 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0318 04:45:36.037059   18399 out.go:177] * Automatically selected the socket_vmnet network
	I0318 04:45:36.040276   18399 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0318 04:45:36.040318   18399 cni.go:84] Creating CNI manager for "flannel"
	I0318 04:45:36.040325   18399 start_flags.go:319] Found "Flannel" CNI - setting NetworkPlugin=cni
	I0318 04:45:36.040356   18399 start.go:340] cluster config:
	{Name:flannel-360000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:flannel-360000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntim
e:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:flannel} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/sock
et_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0318 04:45:36.044902   18399 iso.go:125] acquiring lock: {Name:mkb8143674083e0c7a46a3ed751b3800392bcd24 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0318 04:45:36.053157   18399 out.go:177] * Starting "flannel-360000" primary control-plane node in "flannel-360000" cluster
	I0318 04:45:36.057187   18399 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime docker
	I0318 04:45:36.057202   18399 preload.go:147] Found local preload: /Users/jenkins/minikube-integration/18429-15072/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-docker-overlay2-arm64.tar.lz4
	I0318 04:45:36.057213   18399 cache.go:56] Caching tarball of preloaded images
	I0318 04:45:36.057276   18399 preload.go:173] Found /Users/jenkins/minikube-integration/18429-15072/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0318 04:45:36.057283   18399 cache.go:59] Finished verifying existence of preloaded tar for v1.28.4 on docker
	I0318 04:45:36.057347   18399 profile.go:142] Saving config to /Users/jenkins/minikube-integration/18429-15072/.minikube/profiles/flannel-360000/config.json ...
	I0318 04:45:36.057359   18399 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/18429-15072/.minikube/profiles/flannel-360000/config.json: {Name:mk847ce0b6d6f8f3c5f63142b8f22d8fe2928ae2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0318 04:45:36.057602   18399 start.go:360] acquireMachinesLock for flannel-360000: {Name:mkdef65b5c2b3344d8453e477bf2f170fbff3359 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0318 04:45:36.057653   18399 start.go:364] duration metric: took 44.834µs to acquireMachinesLock for "flannel-360000"
	I0318 04:45:36.057668   18399 start.go:93] Provisioning new machine with config: &{Name:flannel-360000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17991/minikube-v1.32.1-1710520390-17991-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kub
ernetesVersion:v1.28.4 ClusterName:flannel-360000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:flannel} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOpti
ons:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0318 04:45:36.057719   18399 start.go:125] createHost starting for "" (driver="qemu2")
	I0318 04:45:36.065167   18399 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0318 04:45:36.082476   18399 start.go:159] libmachine.API.Create for "flannel-360000" (driver="qemu2")
	I0318 04:45:36.082514   18399 client.go:168] LocalClient.Create starting
	I0318 04:45:36.082595   18399 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18429-15072/.minikube/certs/ca.pem
	I0318 04:45:36.082629   18399 main.go:141] libmachine: Decoding PEM data...
	I0318 04:45:36.082639   18399 main.go:141] libmachine: Parsing certificate...
	I0318 04:45:36.082690   18399 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18429-15072/.minikube/certs/cert.pem
	I0318 04:45:36.082713   18399 main.go:141] libmachine: Decoding PEM data...
	I0318 04:45:36.082719   18399 main.go:141] libmachine: Parsing certificate...
	I0318 04:45:36.083081   18399 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/18429-15072/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/18429-15072/.minikube/cache/iso/arm64/minikube-v1.32.1-1710520390-17991-arm64.iso...
	I0318 04:45:36.225898   18399 main.go:141] libmachine: Creating SSH key...
	I0318 04:45:36.335048   18399 main.go:141] libmachine: Creating Disk image...
	I0318 04:45:36.335056   18399 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0318 04:45:36.335234   18399 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/18429-15072/.minikube/machines/flannel-360000/disk.qcow2.raw /Users/jenkins/minikube-integration/18429-15072/.minikube/machines/flannel-360000/disk.qcow2
	I0318 04:45:36.347347   18399 main.go:141] libmachine: STDOUT: 
	I0318 04:45:36.347369   18399 main.go:141] libmachine: STDERR: 
	I0318 04:45:36.347433   18399 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/18429-15072/.minikube/machines/flannel-360000/disk.qcow2 +20000M
	I0318 04:45:36.358851   18399 main.go:141] libmachine: STDOUT: Image resized.
	
	I0318 04:45:36.358876   18399 main.go:141] libmachine: STDERR: 
	I0318 04:45:36.358892   18399 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/18429-15072/.minikube/machines/flannel-360000/disk.qcow2.raw and /Users/jenkins/minikube-integration/18429-15072/.minikube/machines/flannel-360000/disk.qcow2
	I0318 04:45:36.358898   18399 main.go:141] libmachine: Starting QEMU VM...
	I0318 04:45:36.358934   18399 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18429-15072/.minikube/machines/flannel-360000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18429-15072/.minikube/machines/flannel-360000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18429-15072/.minikube/machines/flannel-360000/qemu.pid -device virtio-net-pci,netdev=net0,mac=16:ef:d6:c3:88:b6 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18429-15072/.minikube/machines/flannel-360000/disk.qcow2
	I0318 04:45:36.360731   18399 main.go:141] libmachine: STDOUT: 
	I0318 04:45:36.360749   18399 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0318 04:45:36.360768   18399 client.go:171] duration metric: took 278.255625ms to LocalClient.Create
	I0318 04:45:38.362920   18399 start.go:128] duration metric: took 2.305256667s to createHost
	I0318 04:45:38.362994   18399 start.go:83] releasing machines lock for "flannel-360000", held for 2.305410041s
	W0318 04:45:38.363049   18399 start.go:713] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0318 04:45:38.373035   18399 out.go:177] * Deleting "flannel-360000" in qemu2 ...
	W0318 04:45:38.397689   18399 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0318 04:45:38.397718   18399 start.go:728] Will try again in 5 seconds ...
	I0318 04:45:43.399635   18399 start.go:360] acquireMachinesLock for flannel-360000: {Name:mkdef65b5c2b3344d8453e477bf2f170fbff3359 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0318 04:45:43.399719   18399 start.go:364] duration metric: took 65.75µs to acquireMachinesLock for "flannel-360000"
	I0318 04:45:43.399728   18399 start.go:93] Provisioning new machine with config: &{Name:flannel-360000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17991/minikube-v1.32.1-1710520390-17991-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kub
ernetesVersion:v1.28.4 ClusterName:flannel-360000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:flannel} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOpti
ons:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0318 04:45:43.399774   18399 start.go:125] createHost starting for "" (driver="qemu2")
	I0318 04:45:43.406996   18399 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0318 04:45:43.422899   18399 start.go:159] libmachine.API.Create for "flannel-360000" (driver="qemu2")
	I0318 04:45:43.422952   18399 client.go:168] LocalClient.Create starting
	I0318 04:45:43.423036   18399 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18429-15072/.minikube/certs/ca.pem
	I0318 04:45:43.423074   18399 main.go:141] libmachine: Decoding PEM data...
	I0318 04:45:43.423086   18399 main.go:141] libmachine: Parsing certificate...
	I0318 04:45:43.423135   18399 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18429-15072/.minikube/certs/cert.pem
	I0318 04:45:43.423156   18399 main.go:141] libmachine: Decoding PEM data...
	I0318 04:45:43.423164   18399 main.go:141] libmachine: Parsing certificate...
	I0318 04:45:43.423484   18399 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/18429-15072/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/18429-15072/.minikube/cache/iso/arm64/minikube-v1.32.1-1710520390-17991-arm64.iso...
	I0318 04:45:43.671968   18399 main.go:141] libmachine: Creating SSH key...
	I0318 04:45:43.825732   18399 main.go:141] libmachine: Creating Disk image...
	I0318 04:45:43.825742   18399 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0318 04:45:43.825936   18399 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/18429-15072/.minikube/machines/flannel-360000/disk.qcow2.raw /Users/jenkins/minikube-integration/18429-15072/.minikube/machines/flannel-360000/disk.qcow2
	I0318 04:45:43.840812   18399 main.go:141] libmachine: STDOUT: 
	I0318 04:45:43.840843   18399 main.go:141] libmachine: STDERR: 
	I0318 04:45:43.840911   18399 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/18429-15072/.minikube/machines/flannel-360000/disk.qcow2 +20000M
	I0318 04:45:43.854550   18399 main.go:141] libmachine: STDOUT: Image resized.
	
	I0318 04:45:43.854571   18399 main.go:141] libmachine: STDERR: 
	I0318 04:45:43.854582   18399 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/18429-15072/.minikube/machines/flannel-360000/disk.qcow2.raw and /Users/jenkins/minikube-integration/18429-15072/.minikube/machines/flannel-360000/disk.qcow2
	I0318 04:45:43.854587   18399 main.go:141] libmachine: Starting QEMU VM...
	I0318 04:45:43.854641   18399 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18429-15072/.minikube/machines/flannel-360000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18429-15072/.minikube/machines/flannel-360000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18429-15072/.minikube/machines/flannel-360000/qemu.pid -device virtio-net-pci,netdev=net0,mac=96:11:75:47:05:7c -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18429-15072/.minikube/machines/flannel-360000/disk.qcow2
	I0318 04:45:43.856736   18399 main.go:141] libmachine: STDOUT: 
	I0318 04:45:43.856762   18399 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0318 04:45:43.856776   18399 client.go:171] duration metric: took 433.833ms to LocalClient.Create
	I0318 04:45:45.859061   18399 start.go:128] duration metric: took 2.459316042s to createHost
	I0318 04:45:45.859173   18399 start.go:83] releasing machines lock for "flannel-360000", held for 2.459526334s
	W0318 04:45:45.859539   18399 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p flannel-360000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p flannel-360000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0318 04:45:45.871367   18399 out.go:177] 
	W0318 04:45:45.874377   18399 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0318 04:45:45.874451   18399 out.go:239] * 
	* 
	W0318 04:45:45.877147   18399 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0318 04:45:45.888391   18399 out.go:177] 

                                                
                                                
** /stderr **
net_test.go:114: failed start: exit status 80
--- FAIL: TestNetworkPlugins/group/flannel/Start (9.97s)

                                                
                                    
TestNetworkPlugins/group/bridge/Start (9.72s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/Start
net_test.go:112: (dbg) Run:  out/minikube-darwin-arm64 start -p bridge-360000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=qemu2 
net_test.go:112: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p bridge-360000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=qemu2 : exit status 80 (9.716663084s)

                                                
                                                
-- stdout --
	* [bridge-360000] minikube v1.32.0 on Darwin 14.3.1 (arm64)
	  - MINIKUBE_LOCATION=18429
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18429-15072/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18429-15072/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "bridge-360000" primary control-plane node in "bridge-360000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "bridge-360000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0318 04:45:48.384927   18527 out.go:291] Setting OutFile to fd 1 ...
	I0318 04:45:48.385060   18527 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0318 04:45:48.385063   18527 out.go:304] Setting ErrFile to fd 2...
	I0318 04:45:48.385066   18527 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0318 04:45:48.385204   18527 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18429-15072/.minikube/bin
	I0318 04:45:48.386308   18527 out.go:298] Setting JSON to false
	I0318 04:45:48.402510   18527 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":9921,"bootTime":1710752427,"procs":482,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.3.1","kernelVersion":"23.3.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0318 04:45:48.402573   18527 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0318 04:45:48.408327   18527 out.go:177] * [bridge-360000] minikube v1.32.0 on Darwin 14.3.1 (arm64)
	I0318 04:45:48.416294   18527 out.go:177]   - MINIKUBE_LOCATION=18429
	I0318 04:45:48.416309   18527 notify.go:220] Checking for updates...
	I0318 04:45:48.423188   18527 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/18429-15072/kubeconfig
	I0318 04:45:48.426381   18527 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0318 04:45:48.429276   18527 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0318 04:45:48.432320   18527 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18429-15072/.minikube
	I0318 04:45:48.435275   18527 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0318 04:45:48.438620   18527 config.go:182] Loaded profile config "multinode-969000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I0318 04:45:48.438682   18527 config.go:182] Loaded profile config "stopped-upgrade-126000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0318 04:45:48.438740   18527 driver.go:392] Setting default libvirt URI to qemu:///system
	I0318 04:45:48.443138   18527 out.go:177] * Using the qemu2 driver based on user configuration
	I0318 04:45:48.450287   18527 start.go:297] selected driver: qemu2
	I0318 04:45:48.450292   18527 start.go:901] validating driver "qemu2" against <nil>
	I0318 04:45:48.450298   18527 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0318 04:45:48.452502   18527 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0318 04:45:48.455321   18527 out.go:177] * Automatically selected the socket_vmnet network
	I0318 04:45:48.458386   18527 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0318 04:45:48.458445   18527 cni.go:84] Creating CNI manager for "bridge"
	I0318 04:45:48.458450   18527 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0318 04:45:48.458486   18527 start.go:340] cluster config:
	{Name:bridge-360000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:bridge-360000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:
docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_
vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0318 04:45:48.462792   18527 iso.go:125] acquiring lock: {Name:mkb8143674083e0c7a46a3ed751b3800392bcd24 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0318 04:45:48.471270   18527 out.go:177] * Starting "bridge-360000" primary control-plane node in "bridge-360000" cluster
	I0318 04:45:48.475220   18527 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime docker
	I0318 04:45:48.475233   18527 preload.go:147] Found local preload: /Users/jenkins/minikube-integration/18429-15072/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-docker-overlay2-arm64.tar.lz4
	I0318 04:45:48.475243   18527 cache.go:56] Caching tarball of preloaded images
	I0318 04:45:48.475291   18527 preload.go:173] Found /Users/jenkins/minikube-integration/18429-15072/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0318 04:45:48.475296   18527 cache.go:59] Finished verifying existence of preloaded tar for v1.28.4 on docker
	I0318 04:45:48.475349   18527 profile.go:142] Saving config to /Users/jenkins/minikube-integration/18429-15072/.minikube/profiles/bridge-360000/config.json ...
	I0318 04:45:48.475359   18527 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/18429-15072/.minikube/profiles/bridge-360000/config.json: {Name:mkdcfedae68827923d20515d4a04bff134d75f39 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0318 04:45:48.475582   18527 start.go:360] acquireMachinesLock for bridge-360000: {Name:mkdef65b5c2b3344d8453e477bf2f170fbff3359 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0318 04:45:48.475613   18527 start.go:364] duration metric: took 25.417µs to acquireMachinesLock for "bridge-360000"
	I0318 04:45:48.475626   18527 start.go:93] Provisioning new machine with config: &{Name:bridge-360000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17991/minikube-v1.32.1-1710520390-17991-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kube
rnetesVersion:v1.28.4 ClusterName:bridge-360000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions
:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0318 04:45:48.475657   18527 start.go:125] createHost starting for "" (driver="qemu2")
	I0318 04:45:48.484276   18527 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0318 04:45:48.501343   18527 start.go:159] libmachine.API.Create for "bridge-360000" (driver="qemu2")
	I0318 04:45:48.501374   18527 client.go:168] LocalClient.Create starting
	I0318 04:45:48.501441   18527 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18429-15072/.minikube/certs/ca.pem
	I0318 04:45:48.501475   18527 main.go:141] libmachine: Decoding PEM data...
	I0318 04:45:48.501483   18527 main.go:141] libmachine: Parsing certificate...
	I0318 04:45:48.501533   18527 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18429-15072/.minikube/certs/cert.pem
	I0318 04:45:48.501557   18527 main.go:141] libmachine: Decoding PEM data...
	I0318 04:45:48.501565   18527 main.go:141] libmachine: Parsing certificate...
	I0318 04:45:48.501933   18527 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/18429-15072/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/18429-15072/.minikube/cache/iso/arm64/minikube-v1.32.1-1710520390-17991-arm64.iso...
	I0318 04:45:48.645812   18527 main.go:141] libmachine: Creating SSH key...
	I0318 04:45:48.688362   18527 main.go:141] libmachine: Creating Disk image...
	I0318 04:45:48.688367   18527 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0318 04:45:48.688545   18527 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/18429-15072/.minikube/machines/bridge-360000/disk.qcow2.raw /Users/jenkins/minikube-integration/18429-15072/.minikube/machines/bridge-360000/disk.qcow2
	I0318 04:45:48.700868   18527 main.go:141] libmachine: STDOUT: 
	I0318 04:45:48.700888   18527 main.go:141] libmachine: STDERR: 
	I0318 04:45:48.700941   18527 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/18429-15072/.minikube/machines/bridge-360000/disk.qcow2 +20000M
	I0318 04:45:48.711839   18527 main.go:141] libmachine: STDOUT: Image resized.
	
	I0318 04:45:48.711868   18527 main.go:141] libmachine: STDERR: 
	I0318 04:45:48.711885   18527 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/18429-15072/.minikube/machines/bridge-360000/disk.qcow2.raw and /Users/jenkins/minikube-integration/18429-15072/.minikube/machines/bridge-360000/disk.qcow2
	I0318 04:45:48.711889   18527 main.go:141] libmachine: Starting QEMU VM...
	I0318 04:45:48.711918   18527 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18429-15072/.minikube/machines/bridge-360000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18429-15072/.minikube/machines/bridge-360000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18429-15072/.minikube/machines/bridge-360000/qemu.pid -device virtio-net-pci,netdev=net0,mac=06:8f:9f:18:6f:4d -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18429-15072/.minikube/machines/bridge-360000/disk.qcow2
	I0318 04:45:48.713603   18527 main.go:141] libmachine: STDOUT: 
	I0318 04:45:48.713620   18527 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0318 04:45:48.713639   18527 client.go:171] duration metric: took 212.217542ms to LocalClient.Create
	I0318 04:45:50.715910   18527 start.go:128] duration metric: took 2.239823916s to createHost
	I0318 04:45:50.716024   18527 start.go:83] releasing machines lock for "bridge-360000", held for 2.240015084s
	W0318 04:45:50.716098   18527 start.go:713] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0318 04:45:50.723187   18527 out.go:177] * Deleting "bridge-360000" in qemu2 ...
	W0318 04:45:50.747458   18527 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0318 04:45:50.747500   18527 start.go:728] Will try again in 5 seconds ...
	I0318 04:45:55.750451   18527 start.go:360] acquireMachinesLock for bridge-360000: {Name:mkdef65b5c2b3344d8453e477bf2f170fbff3359 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0318 04:45:55.750965   18527 start.go:364] duration metric: took 395.083µs to acquireMachinesLock for "bridge-360000"
	I0318 04:45:55.751108   18527 start.go:93] Provisioning new machine with config: &{Name:bridge-360000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17991/minikube-v1.32.1-1710520390-17991-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kube
rnetesVersion:v1.28.4 ClusterName:bridge-360000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions
:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0318 04:45:55.751411   18527 start.go:125] createHost starting for "" (driver="qemu2")
	I0318 04:45:55.759074   18527 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0318 04:45:55.803229   18527 start.go:159] libmachine.API.Create for "bridge-360000" (driver="qemu2")
	I0318 04:45:55.803281   18527 client.go:168] LocalClient.Create starting
	I0318 04:45:55.803399   18527 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18429-15072/.minikube/certs/ca.pem
	I0318 04:45:55.803455   18527 main.go:141] libmachine: Decoding PEM data...
	I0318 04:45:55.803469   18527 main.go:141] libmachine: Parsing certificate...
	I0318 04:45:55.803545   18527 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18429-15072/.minikube/certs/cert.pem
	I0318 04:45:55.803595   18527 main.go:141] libmachine: Decoding PEM data...
	I0318 04:45:55.803606   18527 main.go:141] libmachine: Parsing certificate...
	I0318 04:45:55.804099   18527 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/18429-15072/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/18429-15072/.minikube/cache/iso/arm64/minikube-v1.32.1-1710520390-17991-arm64.iso...
	I0318 04:45:55.951498   18527 main.go:141] libmachine: Creating SSH key...
	I0318 04:45:56.009093   18527 main.go:141] libmachine: Creating Disk image...
	I0318 04:45:56.009100   18527 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0318 04:45:56.009303   18527 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/18429-15072/.minikube/machines/bridge-360000/disk.qcow2.raw /Users/jenkins/minikube-integration/18429-15072/.minikube/machines/bridge-360000/disk.qcow2
	I0318 04:45:56.022182   18527 main.go:141] libmachine: STDOUT: 
	I0318 04:45:56.022207   18527 main.go:141] libmachine: STDERR: 
	I0318 04:45:56.022296   18527 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/18429-15072/.minikube/machines/bridge-360000/disk.qcow2 +20000M
	I0318 04:45:56.034230   18527 main.go:141] libmachine: STDOUT: Image resized.
	
	I0318 04:45:56.034251   18527 main.go:141] libmachine: STDERR: 
	I0318 04:45:56.034263   18527 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/18429-15072/.minikube/machines/bridge-360000/disk.qcow2.raw and /Users/jenkins/minikube-integration/18429-15072/.minikube/machines/bridge-360000/disk.qcow2
	I0318 04:45:56.034281   18527 main.go:141] libmachine: Starting QEMU VM...
	I0318 04:45:56.034320   18527 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18429-15072/.minikube/machines/bridge-360000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18429-15072/.minikube/machines/bridge-360000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18429-15072/.minikube/machines/bridge-360000/qemu.pid -device virtio-net-pci,netdev=net0,mac=3e:d5:92:89:6b:21 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18429-15072/.minikube/machines/bridge-360000/disk.qcow2
	I0318 04:45:56.036232   18527 main.go:141] libmachine: STDOUT: 
	I0318 04:45:56.036249   18527 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0318 04:45:56.036266   18527 client.go:171] duration metric: took 232.954833ms to LocalClient.Create
	I0318 04:45:58.038562   18527 start.go:128] duration metric: took 2.286913417s to createHost
	I0318 04:45:58.038600   18527 start.go:83] releasing machines lock for "bridge-360000", held for 2.287392542s
	W0318 04:45:58.038796   18527 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p bridge-360000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p bridge-360000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0318 04:45:58.046230   18527 out.go:177] 
	W0318 04:45:58.053188   18527 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0318 04:45:58.053197   18527 out.go:239] * 
	* 
	W0318 04:45:58.054084   18527 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0318 04:45:58.063171   18527 out.go:177] 

                                                
                                                
** /stderr **
net_test.go:114: failed start: exit status 80
--- FAIL: TestNetworkPlugins/group/bridge/Start (9.72s)
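All of the failures in this group share the root cause visible in the stderr above: socket_vmnet_client cannot reach the unix socket at /var/run/socket_vmnet (Connection refused), so it never obtains the vmnet file descriptor that would be handed to qemu-system-aarch64 as -netdev socket,id=net0,fd=3, and host creation aborts with GUEST_PROVISION. As a minimal sketch (not part of the test suite; the file name, timeout, and messages are assumptions), the probe below dials the same socket on the build host and fails the same way whenever the socket_vmnet daemon is not accepting connections:

// probe_socket_vmnet.go -- minimal sketch, not part of this report's test
// suite; it dials the same unix socket the qemu2 driver uses so the failure
// condition can be reproduced by hand on the build host.
package main

import (
	"fmt"
	"net"
	"os"
	"time"
)

func main() {
	// Path taken from the qemu invocation logged above; adjust it if the
	// socket_vmnet installation uses a different socket path.
	const sock = "/var/run/socket_vmnet"

	conn, err := net.DialTimeout("unix", sock, 2*time.Second)
	if err != nil {
		// With no daemon accepting connections this prints the dial
		// error (the logs above show "Connection refused").
		fmt.Fprintf(os.Stderr, "socket_vmnet not reachable: %v\n", err)
		os.Exit(1)
	}
	conn.Close()
	fmt.Println("socket_vmnet is listening at", sock)
}

If the probe fails, the missing piece is the socket_vmnet daemon on the Jenkins agent rather than the minikube profile; the suggested "minikube delete -p bridge-360000" only cleans up the half-created machine and does not make the socket appear.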

                                                
                                    
TestNetworkPlugins/group/kubenet/Start (9.9s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kubenet/Start
net_test.go:112: (dbg) Run:  out/minikube-darwin-arm64 start -p kubenet-360000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --network-plugin=kubenet --driver=qemu2 
net_test.go:112: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p kubenet-360000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --network-plugin=kubenet --driver=qemu2 : exit status 80 (9.898122917s)

                                                
                                                
-- stdout --
	* [kubenet-360000] minikube v1.32.0 on Darwin 14.3.1 (arm64)
	  - MINIKUBE_LOCATION=18429
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18429-15072/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18429-15072/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "kubenet-360000" primary control-plane node in "kubenet-360000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "kubenet-360000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0318 04:46:00.455993   18640 out.go:291] Setting OutFile to fd 1 ...
	I0318 04:46:00.456106   18640 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0318 04:46:00.456110   18640 out.go:304] Setting ErrFile to fd 2...
	I0318 04:46:00.456115   18640 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0318 04:46:00.456239   18640 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18429-15072/.minikube/bin
	I0318 04:46:00.457457   18640 out.go:298] Setting JSON to false
	I0318 04:46:00.474521   18640 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":9933,"bootTime":1710752427,"procs":481,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.3.1","kernelVersion":"23.3.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0318 04:46:00.474585   18640 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0318 04:46:00.478851   18640 out.go:177] * [kubenet-360000] minikube v1.32.0 on Darwin 14.3.1 (arm64)
	I0318 04:46:00.486920   18640 out.go:177]   - MINIKUBE_LOCATION=18429
	I0318 04:46:00.490909   18640 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/18429-15072/kubeconfig
	I0318 04:46:00.487035   18640 notify.go:220] Checking for updates...
	I0318 04:46:00.498877   18640 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0318 04:46:00.501907   18640 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0318 04:46:00.504901   18640 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18429-15072/.minikube
	I0318 04:46:00.507879   18640 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0318 04:46:00.511363   18640 config.go:182] Loaded profile config "multinode-969000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I0318 04:46:00.511417   18640 config.go:182] Loaded profile config "stopped-upgrade-126000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0318 04:46:00.511481   18640 driver.go:392] Setting default libvirt URI to qemu:///system
	I0318 04:46:00.514982   18640 out.go:177] * Using the qemu2 driver based on user configuration
	I0318 04:46:00.521950   18640 start.go:297] selected driver: qemu2
	I0318 04:46:00.521956   18640 start.go:901] validating driver "qemu2" against <nil>
	I0318 04:46:00.521962   18640 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0318 04:46:00.524266   18640 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0318 04:46:00.528845   18640 out.go:177] * Automatically selected the socket_vmnet network
	I0318 04:46:00.531962   18640 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0318 04:46:00.531992   18640 cni.go:80] network plugin configured as "kubenet", returning disabled
	I0318 04:46:00.532019   18640 start.go:340] cluster config:
	{Name:kubenet-360000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:kubenet-360000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntim
e:docker CRISocket: NetworkPlugin:kubenet FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_
vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0318 04:46:00.536494   18640 iso.go:125] acquiring lock: {Name:mkb8143674083e0c7a46a3ed751b3800392bcd24 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0318 04:46:00.546856   18640 out.go:177] * Starting "kubenet-360000" primary control-plane node in "kubenet-360000" cluster
	I0318 04:46:00.550883   18640 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime docker
	I0318 04:46:00.550898   18640 preload.go:147] Found local preload: /Users/jenkins/minikube-integration/18429-15072/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-docker-overlay2-arm64.tar.lz4
	I0318 04:46:00.550909   18640 cache.go:56] Caching tarball of preloaded images
	I0318 04:46:00.550961   18640 preload.go:173] Found /Users/jenkins/minikube-integration/18429-15072/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0318 04:46:00.550967   18640 cache.go:59] Finished verifying existence of preloaded tar for v1.28.4 on docker
	I0318 04:46:00.551028   18640 profile.go:142] Saving config to /Users/jenkins/minikube-integration/18429-15072/.minikube/profiles/kubenet-360000/config.json ...
	I0318 04:46:00.551039   18640 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/18429-15072/.minikube/profiles/kubenet-360000/config.json: {Name:mkac314006c09764afdcdd259b2b2cb57269b9d7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0318 04:46:00.551285   18640 start.go:360] acquireMachinesLock for kubenet-360000: {Name:mkdef65b5c2b3344d8453e477bf2f170fbff3359 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0318 04:46:00.551325   18640 start.go:364] duration metric: took 34.042µs to acquireMachinesLock for "kubenet-360000"
	I0318 04:46:00.551338   18640 start.go:93] Provisioning new machine with config: &{Name:kubenet-360000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17991/minikube-v1.32.1-1710520390-17991-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kub
ernetesVersion:v1.28.4 ClusterName:kubenet-360000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:kubenet FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions
:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0318 04:46:00.551366   18640 start.go:125] createHost starting for "" (driver="qemu2")
	I0318 04:46:00.557879   18640 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0318 04:46:00.573330   18640 start.go:159] libmachine.API.Create for "kubenet-360000" (driver="qemu2")
	I0318 04:46:00.573365   18640 client.go:168] LocalClient.Create starting
	I0318 04:46:00.573426   18640 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18429-15072/.minikube/certs/ca.pem
	I0318 04:46:00.573454   18640 main.go:141] libmachine: Decoding PEM data...
	I0318 04:46:00.573465   18640 main.go:141] libmachine: Parsing certificate...
	I0318 04:46:00.573515   18640 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18429-15072/.minikube/certs/cert.pem
	I0318 04:46:00.573537   18640 main.go:141] libmachine: Decoding PEM data...
	I0318 04:46:00.573546   18640 main.go:141] libmachine: Parsing certificate...
	I0318 04:46:00.573902   18640 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/18429-15072/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/18429-15072/.minikube/cache/iso/arm64/minikube-v1.32.1-1710520390-17991-arm64.iso...
	I0318 04:46:00.725629   18640 main.go:141] libmachine: Creating SSH key...
	I0318 04:46:00.859348   18640 main.go:141] libmachine: Creating Disk image...
	I0318 04:46:00.859357   18640 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0318 04:46:00.859551   18640 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/18429-15072/.minikube/machines/kubenet-360000/disk.qcow2.raw /Users/jenkins/minikube-integration/18429-15072/.minikube/machines/kubenet-360000/disk.qcow2
	I0318 04:46:00.872283   18640 main.go:141] libmachine: STDOUT: 
	I0318 04:46:00.872303   18640 main.go:141] libmachine: STDERR: 
	I0318 04:46:00.872365   18640 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/18429-15072/.minikube/machines/kubenet-360000/disk.qcow2 +20000M
	I0318 04:46:00.883686   18640 main.go:141] libmachine: STDOUT: Image resized.
	
	I0318 04:46:00.883710   18640 main.go:141] libmachine: STDERR: 
	I0318 04:46:00.883724   18640 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/18429-15072/.minikube/machines/kubenet-360000/disk.qcow2.raw and /Users/jenkins/minikube-integration/18429-15072/.minikube/machines/kubenet-360000/disk.qcow2
	I0318 04:46:00.883732   18640 main.go:141] libmachine: Starting QEMU VM...
	I0318 04:46:00.883760   18640 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18429-15072/.minikube/machines/kubenet-360000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18429-15072/.minikube/machines/kubenet-360000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18429-15072/.minikube/machines/kubenet-360000/qemu.pid -device virtio-net-pci,netdev=net0,mac=ae:57:82:77:bb:e6 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18429-15072/.minikube/machines/kubenet-360000/disk.qcow2
	I0318 04:46:00.885641   18640 main.go:141] libmachine: STDOUT: 
	I0318 04:46:00.885655   18640 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0318 04:46:00.885673   18640 client.go:171] duration metric: took 312.282125ms to LocalClient.Create
	I0318 04:46:02.888110   18640 start.go:128] duration metric: took 2.33655975s to createHost
	I0318 04:46:02.888208   18640 start.go:83] releasing machines lock for "kubenet-360000", held for 2.336730958s
	W0318 04:46:02.888294   18640 start.go:713] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0318 04:46:02.899181   18640 out.go:177] * Deleting "kubenet-360000" in qemu2 ...
	W0318 04:46:02.923957   18640 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0318 04:46:02.924062   18640 start.go:728] Will try again in 5 seconds ...
	I0318 04:46:07.926474   18640 start.go:360] acquireMachinesLock for kubenet-360000: {Name:mkdef65b5c2b3344d8453e477bf2f170fbff3359 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0318 04:46:07.926839   18640 start.go:364] duration metric: took 272.708µs to acquireMachinesLock for "kubenet-360000"
	I0318 04:46:07.926960   18640 start.go:93] Provisioning new machine with config: &{Name:kubenet-360000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17991/minikube-v1.32.1-1710520390-17991-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kub
ernetesVersion:v1.28.4 ClusterName:kubenet-360000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:kubenet FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions
:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0318 04:46:07.927140   18640 start.go:125] createHost starting for "" (driver="qemu2")
	I0318 04:46:07.936786   18640 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0318 04:46:07.979561   18640 start.go:159] libmachine.API.Create for "kubenet-360000" (driver="qemu2")
	I0318 04:46:07.979618   18640 client.go:168] LocalClient.Create starting
	I0318 04:46:07.979732   18640 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18429-15072/.minikube/certs/ca.pem
	I0318 04:46:07.979786   18640 main.go:141] libmachine: Decoding PEM data...
	I0318 04:46:07.979823   18640 main.go:141] libmachine: Parsing certificate...
	I0318 04:46:07.979904   18640 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18429-15072/.minikube/certs/cert.pem
	I0318 04:46:07.979946   18640 main.go:141] libmachine: Decoding PEM data...
	I0318 04:46:07.979961   18640 main.go:141] libmachine: Parsing certificate...
	I0318 04:46:07.980488   18640 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/18429-15072/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/18429-15072/.minikube/cache/iso/arm64/minikube-v1.32.1-1710520390-17991-arm64.iso...
	I0318 04:46:08.128340   18640 main.go:141] libmachine: Creating SSH key...
	I0318 04:46:08.258585   18640 main.go:141] libmachine: Creating Disk image...
	I0318 04:46:08.258593   18640 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0318 04:46:08.258785   18640 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/18429-15072/.minikube/machines/kubenet-360000/disk.qcow2.raw /Users/jenkins/minikube-integration/18429-15072/.minikube/machines/kubenet-360000/disk.qcow2
	I0318 04:46:08.271497   18640 main.go:141] libmachine: STDOUT: 
	I0318 04:46:08.271519   18640 main.go:141] libmachine: STDERR: 
	I0318 04:46:08.271587   18640 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/18429-15072/.minikube/machines/kubenet-360000/disk.qcow2 +20000M
	I0318 04:46:08.282757   18640 main.go:141] libmachine: STDOUT: Image resized.
	
	I0318 04:46:08.282778   18640 main.go:141] libmachine: STDERR: 
	I0318 04:46:08.282796   18640 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/18429-15072/.minikube/machines/kubenet-360000/disk.qcow2.raw and /Users/jenkins/minikube-integration/18429-15072/.minikube/machines/kubenet-360000/disk.qcow2
	I0318 04:46:08.282802   18640 main.go:141] libmachine: Starting QEMU VM...
	I0318 04:46:08.282838   18640 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18429-15072/.minikube/machines/kubenet-360000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18429-15072/.minikube/machines/kubenet-360000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18429-15072/.minikube/machines/kubenet-360000/qemu.pid -device virtio-net-pci,netdev=net0,mac=ca:2b:eb:a9:a9:52 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18429-15072/.minikube/machines/kubenet-360000/disk.qcow2
	I0318 04:46:08.284638   18640 main.go:141] libmachine: STDOUT: 
	I0318 04:46:08.284660   18640 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0318 04:46:08.284673   18640 client.go:171] duration metric: took 305.0395ms to LocalClient.Create
	I0318 04:46:10.286820   18640 start.go:128] duration metric: took 2.35959975s to createHost
	I0318 04:46:10.286878   18640 start.go:83] releasing machines lock for "kubenet-360000", held for 2.35995525s
	W0318 04:46:10.286997   18640 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p kubenet-360000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p kubenet-360000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0318 04:46:10.296220   18640 out.go:177] 
	W0318 04:46:10.299318   18640 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0318 04:46:10.299331   18640 out.go:239] * 
	* 
	W0318 04:46:10.299908   18640 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0318 04:46:10.313231   18640 out.go:177] 

                                                
                                                
** /stderr **
net_test.go:114: failed start: exit status 80
--- FAIL: TestNetworkPlugins/group/kubenet/Start (9.90s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/FirstStart (10.04s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-darwin-arm64 start -p old-k8s-version-421000 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=qemu2  --kubernetes-version=v1.20.0
start_stop_delete_test.go:186: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p old-k8s-version-421000 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=qemu2  --kubernetes-version=v1.20.0: exit status 80 (9.966737459s)

                                                
                                                
-- stdout --
	* [old-k8s-version-421000] minikube v1.32.0 on Darwin 14.3.1 (arm64)
	  - MINIKUBE_LOCATION=18429
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18429-15072/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18429-15072/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "old-k8s-version-421000" primary control-plane node in "old-k8s-version-421000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "old-k8s-version-421000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0318 04:46:12.581756   18753 out.go:291] Setting OutFile to fd 1 ...
	I0318 04:46:12.581881   18753 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0318 04:46:12.581884   18753 out.go:304] Setting ErrFile to fd 2...
	I0318 04:46:12.581886   18753 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0318 04:46:12.582011   18753 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18429-15072/.minikube/bin
	I0318 04:46:12.583071   18753 out.go:298] Setting JSON to false
	I0318 04:46:12.599337   18753 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":9945,"bootTime":1710752427,"procs":483,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.3.1","kernelVersion":"23.3.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0318 04:46:12.599410   18753 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0318 04:46:12.606173   18753 out.go:177] * [old-k8s-version-421000] minikube v1.32.0 on Darwin 14.3.1 (arm64)
	I0318 04:46:12.613269   18753 out.go:177]   - MINIKUBE_LOCATION=18429
	I0318 04:46:12.618168   18753 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/18429-15072/kubeconfig
	I0318 04:46:12.613333   18753 notify.go:220] Checking for updates...
	I0318 04:46:12.625175   18753 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0318 04:46:12.628201   18753 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0318 04:46:12.631160   18753 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18429-15072/.minikube
	I0318 04:46:12.634180   18753 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0318 04:46:12.642560   18753 config.go:182] Loaded profile config "multinode-969000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I0318 04:46:12.642630   18753 config.go:182] Loaded profile config "stopped-upgrade-126000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0318 04:46:12.642673   18753 driver.go:392] Setting default libvirt URI to qemu:///system
	I0318 04:46:12.647078   18753 out.go:177] * Using the qemu2 driver based on user configuration
	I0318 04:46:12.654183   18753 start.go:297] selected driver: qemu2
	I0318 04:46:12.654188   18753 start.go:901] validating driver "qemu2" against <nil>
	I0318 04:46:12.654193   18753 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0318 04:46:12.656486   18753 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0318 04:46:12.660157   18753 out.go:177] * Automatically selected the socket_vmnet network
	I0318 04:46:12.664303   18753 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0318 04:46:12.664354   18753 cni.go:84] Creating CNI manager for ""
	I0318 04:46:12.664362   18753 cni.go:162] CNI unnecessary in this configuration, recommending no CNI
	I0318 04:46:12.664392   18753 start.go:340] cluster config:
	{Name:old-k8s-version-421000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-421000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local
ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/
socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0318 04:46:12.668949   18753 iso.go:125] acquiring lock: {Name:mkb8143674083e0c7a46a3ed751b3800392bcd24 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0318 04:46:12.677144   18753 out.go:177] * Starting "old-k8s-version-421000" primary control-plane node in "old-k8s-version-421000" cluster
	I0318 04:46:12.681203   18753 preload.go:132] Checking if preload exists for k8s version v1.20.0 and runtime docker
	I0318 04:46:12.681220   18753 preload.go:147] Found local preload: /Users/jenkins/minikube-integration/18429-15072/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4
	I0318 04:46:12.681232   18753 cache.go:56] Caching tarball of preloaded images
	I0318 04:46:12.681320   18753 preload.go:173] Found /Users/jenkins/minikube-integration/18429-15072/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0318 04:46:12.681326   18753 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on docker
	I0318 04:46:12.681400   18753 profile.go:142] Saving config to /Users/jenkins/minikube-integration/18429-15072/.minikube/profiles/old-k8s-version-421000/config.json ...
	I0318 04:46:12.681418   18753 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/18429-15072/.minikube/profiles/old-k8s-version-421000/config.json: {Name:mk72389f7e556b4d5e3aa28ba625a19aece0ea8f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0318 04:46:12.681718   18753 start.go:360] acquireMachinesLock for old-k8s-version-421000: {Name:mkdef65b5c2b3344d8453e477bf2f170fbff3359 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0318 04:46:12.681755   18753 start.go:364] duration metric: took 28.375µs to acquireMachinesLock for "old-k8s-version-421000"
	I0318 04:46:12.681769   18753 start.go:93] Provisioning new machine with config: &{Name:old-k8s-version-421000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17991/minikube-v1.32.1-1710520390-17991-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConf
ig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-421000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 Mount
Options:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0318 04:46:12.681799   18753 start.go:125] createHost starting for "" (driver="qemu2")
	I0318 04:46:12.690219   18753 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0318 04:46:12.707940   18753 start.go:159] libmachine.API.Create for "old-k8s-version-421000" (driver="qemu2")
	I0318 04:46:12.707975   18753 client.go:168] LocalClient.Create starting
	I0318 04:46:12.708036   18753 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18429-15072/.minikube/certs/ca.pem
	I0318 04:46:12.708071   18753 main.go:141] libmachine: Decoding PEM data...
	I0318 04:46:12.708088   18753 main.go:141] libmachine: Parsing certificate...
	I0318 04:46:12.708133   18753 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18429-15072/.minikube/certs/cert.pem
	I0318 04:46:12.708155   18753 main.go:141] libmachine: Decoding PEM data...
	I0318 04:46:12.708164   18753 main.go:141] libmachine: Parsing certificate...
	I0318 04:46:12.708523   18753 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/18429-15072/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/18429-15072/.minikube/cache/iso/arm64/minikube-v1.32.1-1710520390-17991-arm64.iso...
	I0318 04:46:12.846836   18753 main.go:141] libmachine: Creating SSH key...
	I0318 04:46:12.997715   18753 main.go:141] libmachine: Creating Disk image...
	I0318 04:46:12.997727   18753 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0318 04:46:12.997919   18753 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/18429-15072/.minikube/machines/old-k8s-version-421000/disk.qcow2.raw /Users/jenkins/minikube-integration/18429-15072/.minikube/machines/old-k8s-version-421000/disk.qcow2
	I0318 04:46:13.010782   18753 main.go:141] libmachine: STDOUT: 
	I0318 04:46:13.010806   18753 main.go:141] libmachine: STDERR: 
	I0318 04:46:13.010871   18753 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/18429-15072/.minikube/machines/old-k8s-version-421000/disk.qcow2 +20000M
	I0318 04:46:13.021724   18753 main.go:141] libmachine: STDOUT: Image resized.
	
	I0318 04:46:13.021743   18753 main.go:141] libmachine: STDERR: 
	I0318 04:46:13.021776   18753 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/18429-15072/.minikube/machines/old-k8s-version-421000/disk.qcow2.raw and /Users/jenkins/minikube-integration/18429-15072/.minikube/machines/old-k8s-version-421000/disk.qcow2
	I0318 04:46:13.021781   18753 main.go:141] libmachine: Starting QEMU VM...
	I0318 04:46:13.021812   18753 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18429-15072/.minikube/machines/old-k8s-version-421000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18429-15072/.minikube/machines/old-k8s-version-421000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18429-15072/.minikube/machines/old-k8s-version-421000/qemu.pid -device virtio-net-pci,netdev=net0,mac=a2:4a:de:c2:ac:e1 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18429-15072/.minikube/machines/old-k8s-version-421000/disk.qcow2
	I0318 04:46:13.023659   18753 main.go:141] libmachine: STDOUT: 
	I0318 04:46:13.023676   18753 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0318 04:46:13.023695   18753 client.go:171] duration metric: took 315.710958ms to LocalClient.Create
	I0318 04:46:15.025305   18753 start.go:128] duration metric: took 2.3434525s to createHost
	I0318 04:46:15.025440   18753 start.go:83] releasing machines lock for "old-k8s-version-421000", held for 2.343649667s
	W0318 04:46:15.025516   18753 start.go:713] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0318 04:46:15.041895   18753 out.go:177] * Deleting "old-k8s-version-421000" in qemu2 ...
	W0318 04:46:15.063911   18753 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0318 04:46:15.063941   18753 start.go:728] Will try again in 5 seconds ...
	I0318 04:46:20.064861   18753 start.go:360] acquireMachinesLock for old-k8s-version-421000: {Name:mkdef65b5c2b3344d8453e477bf2f170fbff3359 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0318 04:46:20.065289   18753 start.go:364] duration metric: took 289.542µs to acquireMachinesLock for "old-k8s-version-421000"
	I0318 04:46:20.065430   18753 start.go:93] Provisioning new machine with config: &{Name:old-k8s-version-421000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17991/minikube-v1.32.1-1710520390-17991-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConf
ig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-421000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 Mount
Options:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0318 04:46:20.065727   18753 start.go:125] createHost starting for "" (driver="qemu2")
	I0318 04:46:20.075310   18753 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0318 04:46:20.123924   18753 start.go:159] libmachine.API.Create for "old-k8s-version-421000" (driver="qemu2")
	I0318 04:46:20.123978   18753 client.go:168] LocalClient.Create starting
	I0318 04:46:20.124093   18753 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18429-15072/.minikube/certs/ca.pem
	I0318 04:46:20.124156   18753 main.go:141] libmachine: Decoding PEM data...
	I0318 04:46:20.124177   18753 main.go:141] libmachine: Parsing certificate...
	I0318 04:46:20.124235   18753 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18429-15072/.minikube/certs/cert.pem
	I0318 04:46:20.124276   18753 main.go:141] libmachine: Decoding PEM data...
	I0318 04:46:20.124288   18753 main.go:141] libmachine: Parsing certificate...
	I0318 04:46:20.124774   18753 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/18429-15072/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/18429-15072/.minikube/cache/iso/arm64/minikube-v1.32.1-1710520390-17991-arm64.iso...
	I0318 04:46:20.274217   18753 main.go:141] libmachine: Creating SSH key...
	I0318 04:46:20.445679   18753 main.go:141] libmachine: Creating Disk image...
	I0318 04:46:20.445686   18753 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0318 04:46:20.445872   18753 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/18429-15072/.minikube/machines/old-k8s-version-421000/disk.qcow2.raw /Users/jenkins/minikube-integration/18429-15072/.minikube/machines/old-k8s-version-421000/disk.qcow2
	I0318 04:46:20.458542   18753 main.go:141] libmachine: STDOUT: 
	I0318 04:46:20.458564   18753 main.go:141] libmachine: STDERR: 
	I0318 04:46:20.458622   18753 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/18429-15072/.minikube/machines/old-k8s-version-421000/disk.qcow2 +20000M
	I0318 04:46:20.469636   18753 main.go:141] libmachine: STDOUT: Image resized.
	
	I0318 04:46:20.469656   18753 main.go:141] libmachine: STDERR: 
	I0318 04:46:20.469672   18753 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/18429-15072/.minikube/machines/old-k8s-version-421000/disk.qcow2.raw and /Users/jenkins/minikube-integration/18429-15072/.minikube/machines/old-k8s-version-421000/disk.qcow2
	I0318 04:46:20.469676   18753 main.go:141] libmachine: Starting QEMU VM...
	I0318 04:46:20.469708   18753 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18429-15072/.minikube/machines/old-k8s-version-421000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18429-15072/.minikube/machines/old-k8s-version-421000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18429-15072/.minikube/machines/old-k8s-version-421000/qemu.pid -device virtio-net-pci,netdev=net0,mac=9a:95:16:9e:57:59 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18429-15072/.minikube/machines/old-k8s-version-421000/disk.qcow2
	I0318 04:46:20.471488   18753 main.go:141] libmachine: STDOUT: 
	I0318 04:46:20.471506   18753 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0318 04:46:20.471520   18753 client.go:171] duration metric: took 347.538584ms to LocalClient.Create
	I0318 04:46:22.473704   18753 start.go:128] duration metric: took 2.407948375s to createHost
	I0318 04:46:22.473832   18753 start.go:83] releasing machines lock for "old-k8s-version-421000", held for 2.408528041s
	W0318 04:46:22.474258   18753 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p old-k8s-version-421000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p old-k8s-version-421000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0318 04:46:22.484850   18753 out.go:177] 
	W0318 04:46:22.490090   18753 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0318 04:46:22.490131   18753 out.go:239] * 
	* 
	W0318 04:46:22.492798   18753 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0318 04:46:22.502690   18753 out.go:177] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:188: failed starting minikube -first start-. args "out/minikube-darwin-arm64 start -p old-k8s-version-421000 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=qemu2  --kubernetes-version=v1.20.0": exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-421000 -n old-k8s-version-421000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-421000 -n old-k8s-version-421000: exit status 7 (67.737958ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "old-k8s-version-421000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/FirstStart (10.04s)
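In the post-mortem above, the --format={{.Host}} argument is a Go template applied to minikube's status output, which is why stdout contains only the single word "Stopped". A rough sketch of that rendering follows; the Status struct here is a stand-in with illustrative field names, not minikube's actual type:

// status_format_sketch.go -- minimal sketch of how a --format template such
// as {{.Host}} is rendered against a status value.
package main

import (
	"os"
	"text/template"
)

// Status is a stand-in for the structure the template is executed against.
type Status struct {
	Host      string
	Kubelet   string
	APIServer string
}

func main() {
	st := Status{Host: "Stopped", Kubelet: "Stopped", APIServer: "Stopped"}
	tmpl := template.Must(template.New("status").Parse("{{.Host}}"))
	_ = tmpl.Execute(os.Stdout, st) // prints: Stopped
	os.Stdout.WriteString("\n")
}

Exit status 7 from the status command simply reports that the host is not running, which the helper treats as acceptable ("may be ok") before skipping log retrieval.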

                                                
                                    
TestStartStop/group/no-preload/serial/FirstStart (9.89s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-darwin-arm64 start -p no-preload-204000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=qemu2  --kubernetes-version=v1.29.0-rc.2
start_stop_delete_test.go:186: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p no-preload-204000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=qemu2  --kubernetes-version=v1.29.0-rc.2: exit status 80 (9.826212916s)

                                                
                                                
-- stdout --
	* [no-preload-204000] minikube v1.32.0 on Darwin 14.3.1 (arm64)
	  - MINIKUBE_LOCATION=18429
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18429-15072/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18429-15072/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "no-preload-204000" primary control-plane node in "no-preload-204000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "no-preload-204000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0318 04:46:16.284001   18767 out.go:291] Setting OutFile to fd 1 ...
	I0318 04:46:16.284117   18767 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0318 04:46:16.284121   18767 out.go:304] Setting ErrFile to fd 2...
	I0318 04:46:16.284123   18767 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0318 04:46:16.284239   18767 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18429-15072/.minikube/bin
	I0318 04:46:16.285293   18767 out.go:298] Setting JSON to false
	I0318 04:46:16.301612   18767 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":9949,"bootTime":1710752427,"procs":480,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.3.1","kernelVersion":"23.3.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0318 04:46:16.301676   18767 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0318 04:46:16.305846   18767 out.go:177] * [no-preload-204000] minikube v1.32.0 on Darwin 14.3.1 (arm64)
	I0318 04:46:16.313883   18767 out.go:177]   - MINIKUBE_LOCATION=18429
	I0318 04:46:16.317871   18767 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/18429-15072/kubeconfig
	I0318 04:46:16.313939   18767 notify.go:220] Checking for updates...
	I0318 04:46:16.323811   18767 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0318 04:46:16.326883   18767 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0318 04:46:16.333770   18767 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18429-15072/.minikube
	I0318 04:46:16.340846   18767 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0318 04:46:16.345162   18767 config.go:182] Loaded profile config "multinode-969000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I0318 04:46:16.345240   18767 config.go:182] Loaded profile config "old-k8s-version-421000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.20.0
	I0318 04:46:16.345287   18767 driver.go:392] Setting default libvirt URI to qemu:///system
	I0318 04:46:16.349790   18767 out.go:177] * Using the qemu2 driver based on user configuration
	I0318 04:46:16.356741   18767 start.go:297] selected driver: qemu2
	I0318 04:46:16.356747   18767 start.go:901] validating driver "qemu2" against <nil>
	I0318 04:46:16.356752   18767 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0318 04:46:16.358987   18767 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0318 04:46:16.361864   18767 out.go:177] * Automatically selected the socket_vmnet network
	I0318 04:46:16.364959   18767 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0318 04:46:16.365008   18767 cni.go:84] Creating CNI manager for ""
	I0318 04:46:16.365017   18767 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0318 04:46:16.365022   18767 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0318 04:46:16.365068   18767 start.go:340] cluster config:
	{Name:no-preload-204000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.0-rc.2 ClusterName:no-preload-204000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local Cont
ainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.29.0-rc.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/
bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0318 04:46:16.370035   18767 iso.go:125] acquiring lock: {Name:mkb8143674083e0c7a46a3ed751b3800392bcd24 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0318 04:46:16.378828   18767 out.go:177] * Starting "no-preload-204000" primary control-plane node in "no-preload-204000" cluster
	I0318 04:46:16.382709   18767 preload.go:132] Checking if preload exists for k8s version v1.29.0-rc.2 and runtime docker
	I0318 04:46:16.382802   18767 profile.go:142] Saving config to /Users/jenkins/minikube-integration/18429-15072/.minikube/profiles/no-preload-204000/config.json ...
	I0318 04:46:16.382820   18767 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/18429-15072/.minikube/profiles/no-preload-204000/config.json: {Name:mkd8090b4bbc02e430bd7b91374d1223aa8cb8a0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0318 04:46:16.382841   18767 cache.go:107] acquiring lock: {Name:mk368de4369b4269f4f86d0406c895e179ee8d50 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0318 04:46:16.382852   18767 cache.go:107] acquiring lock: {Name:mk378f2696937ce1e0284a473e5e9a28e6f278ac Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0318 04:46:16.382873   18767 cache.go:107] acquiring lock: {Name:mk33f00f12d012d527631c59bc48ecedc29de51f Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0318 04:46:16.382932   18767 cache.go:115] /Users/jenkins/minikube-integration/18429-15072/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 exists
	I0318 04:46:16.382944   18767 cache.go:96] cache image "gcr.io/k8s-minikube/storage-provisioner:v5" -> "/Users/jenkins/minikube-integration/18429-15072/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5" took 109.584µs
	I0318 04:46:16.382954   18767 cache.go:80] save to tar file gcr.io/k8s-minikube/storage-provisioner:v5 -> /Users/jenkins/minikube-integration/18429-15072/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 succeeded
	I0318 04:46:16.382964   18767 cache.go:107] acquiring lock: {Name:mkb9e7c49376117208d088ffd485c1cef375d580 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0318 04:46:16.382981   18767 cache.go:107] acquiring lock: {Name:mk5c6c4938ee3453e8aff80197caae9d3ccb88b8 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0318 04:46:16.383033   18767 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.29.0-rc.2
	I0318 04:46:16.383083   18767 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.29.0-rc.2
	I0318 04:46:16.383105   18767 cache.go:107] acquiring lock: {Name:mk64f113561474ac439f58609cdaa7ea452ce3a7 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0318 04:46:16.383142   18767 cache.go:107] acquiring lock: {Name:mkcda01548dc62d8a734f989cfb13404c3ab5d68 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0318 04:46:16.383218   18767 cache.go:107] acquiring lock: {Name:mkd85cca51ead323a9dae13b1686e65efd820b93 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0318 04:46:16.383266   18767 start.go:360] acquireMachinesLock for no-preload-204000: {Name:mkdef65b5c2b3344d8453e477bf2f170fbff3359 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0318 04:46:16.383296   18767 image.go:134] retrieving image: registry.k8s.io/etcd:3.5.10-0
	I0318 04:46:16.383313   18767 start.go:364] duration metric: took 37.625µs to acquireMachinesLock for "no-preload-204000"
	I0318 04:46:16.383325   18767 image.go:134] retrieving image: registry.k8s.io/pause:3.9
	I0318 04:46:16.383365   18767 image.go:134] retrieving image: registry.k8s.io/coredns/coredns:v1.11.1
	I0318 04:46:16.383330   18767 start.go:93] Provisioning new machine with config: &{Name:no-preload-204000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17991/minikube-v1.32.1-1710520390-17991-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{
KubernetesVersion:v1.29.0-rc.2 ClusterName:no-preload-204000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.29.0-rc.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 M
ountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.29.0-rc.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0318 04:46:16.383412   18767 start.go:125] createHost starting for "" (driver="qemu2")
	I0318 04:46:16.391773   18767 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0318 04:46:16.383508   18767 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.29.0-rc.2
	I0318 04:46:16.383576   18767 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.29.0-rc.2
	I0318 04:46:16.397252   18767 image.go:177] daemon lookup for registry.k8s.io/pause:3.9: Error response from daemon: No such image: registry.k8s.io/pause:3.9
	I0318 04:46:16.397511   18767 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.29.0-rc.2: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.29.0-rc.2
	I0318 04:46:16.402747   18767 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.29.0-rc.2: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.29.0-rc.2
	I0318 04:46:16.402881   18767 image.go:177] daemon lookup for registry.k8s.io/etcd:3.5.10-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.10-0
	I0318 04:46:16.402887   18767 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.29.0-rc.2: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.29.0-rc.2
	I0318 04:46:16.402976   18767 image.go:177] daemon lookup for registry.k8s.io/coredns/coredns:v1.11.1: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.11.1
	I0318 04:46:16.403113   18767 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.29.0-rc.2: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.29.0-rc.2
	I0318 04:46:16.410903   18767 start.go:159] libmachine.API.Create for "no-preload-204000" (driver="qemu2")
	I0318 04:46:16.410925   18767 client.go:168] LocalClient.Create starting
	I0318 04:46:16.410989   18767 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18429-15072/.minikube/certs/ca.pem
	I0318 04:46:16.411023   18767 main.go:141] libmachine: Decoding PEM data...
	I0318 04:46:16.411039   18767 main.go:141] libmachine: Parsing certificate...
	I0318 04:46:16.411087   18767 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18429-15072/.minikube/certs/cert.pem
	I0318 04:46:16.411112   18767 main.go:141] libmachine: Decoding PEM data...
	I0318 04:46:16.411119   18767 main.go:141] libmachine: Parsing certificate...
	I0318 04:46:16.411453   18767 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/18429-15072/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/18429-15072/.minikube/cache/iso/arm64/minikube-v1.32.1-1710520390-17991-arm64.iso...
	I0318 04:46:16.562398   18767 main.go:141] libmachine: Creating SSH key...
	I0318 04:46:16.619319   18767 main.go:141] libmachine: Creating Disk image...
	I0318 04:46:16.619343   18767 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0318 04:46:16.619525   18767 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/18429-15072/.minikube/machines/no-preload-204000/disk.qcow2.raw /Users/jenkins/minikube-integration/18429-15072/.minikube/machines/no-preload-204000/disk.qcow2
	I0318 04:46:16.632727   18767 main.go:141] libmachine: STDOUT: 
	I0318 04:46:16.632744   18767 main.go:141] libmachine: STDERR: 
	I0318 04:46:16.632787   18767 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/18429-15072/.minikube/machines/no-preload-204000/disk.qcow2 +20000M
	I0318 04:46:16.645032   18767 main.go:141] libmachine: STDOUT: Image resized.
	
	I0318 04:46:16.645051   18767 main.go:141] libmachine: STDERR: 
	I0318 04:46:16.645075   18767 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/18429-15072/.minikube/machines/no-preload-204000/disk.qcow2.raw and /Users/jenkins/minikube-integration/18429-15072/.minikube/machines/no-preload-204000/disk.qcow2
	I0318 04:46:16.645080   18767 main.go:141] libmachine: Starting QEMU VM...
	I0318 04:46:16.645111   18767 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18429-15072/.minikube/machines/no-preload-204000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18429-15072/.minikube/machines/no-preload-204000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18429-15072/.minikube/machines/no-preload-204000/qemu.pid -device virtio-net-pci,netdev=net0,mac=76:af:dd:de:4c:78 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18429-15072/.minikube/machines/no-preload-204000/disk.qcow2
	I0318 04:46:16.647101   18767 main.go:141] libmachine: STDOUT: 
	I0318 04:46:16.647130   18767 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0318 04:46:16.647159   18767 client.go:171] duration metric: took 236.223417ms to LocalClient.Create
	I0318 04:46:18.649422   18767 start.go:128] duration metric: took 2.265982875s to createHost
	I0318 04:46:18.649508   18767 start.go:83] releasing machines lock for "no-preload-204000", held for 2.266181083s
	W0318 04:46:18.649591   18767 start.go:713] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0318 04:46:18.659506   18767 out.go:177] * Deleting "no-preload-204000" in qemu2 ...
	W0318 04:46:18.684115   18767 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0318 04:46:18.684143   18767 start.go:728] Will try again in 5 seconds ...
	I0318 04:46:18.849351   18767 cache.go:162] opening:  /Users/jenkins/minikube-integration/18429-15072/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.29.0-rc.2
	I0318 04:46:18.903607   18767 cache.go:162] opening:  /Users/jenkins/minikube-integration/18429-15072/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.11.1
	I0318 04:46:18.948097   18767 cache.go:162] opening:  /Users/jenkins/minikube-integration/18429-15072/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.10-0
	I0318 04:46:18.985124   18767 cache.go:162] opening:  /Users/jenkins/minikube-integration/18429-15072/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.29.0-rc.2
	I0318 04:46:18.991068   18767 cache.go:162] opening:  /Users/jenkins/minikube-integration/18429-15072/.minikube/cache/images/arm64/registry.k8s.io/pause_3.9
	I0318 04:46:18.992403   18767 cache.go:162] opening:  /Users/jenkins/minikube-integration/18429-15072/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.29.0-rc.2
	I0318 04:46:18.996933   18767 cache.go:162] opening:  /Users/jenkins/minikube-integration/18429-15072/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.29.0-rc.2
	I0318 04:46:19.146725   18767 cache.go:157] /Users/jenkins/minikube-integration/18429-15072/.minikube/cache/images/arm64/registry.k8s.io/pause_3.9 exists
	I0318 04:46:19.146783   18767 cache.go:96] cache image "registry.k8s.io/pause:3.9" -> "/Users/jenkins/minikube-integration/18429-15072/.minikube/cache/images/arm64/registry.k8s.io/pause_3.9" took 2.763725417s
	I0318 04:46:19.146808   18767 cache.go:80] save to tar file registry.k8s.io/pause:3.9 -> /Users/jenkins/minikube-integration/18429-15072/.minikube/cache/images/arm64/registry.k8s.io/pause_3.9 succeeded
	I0318 04:46:21.817065   18767 cache.go:157] /Users/jenkins/minikube-integration/18429-15072/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.29.0-rc.2 exists
	I0318 04:46:21.817134   18767 cache.go:96] cache image "registry.k8s.io/kube-scheduler:v1.29.0-rc.2" -> "/Users/jenkins/minikube-integration/18429-15072/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.29.0-rc.2" took 5.434034375s
	I0318 04:46:21.817163   18767 cache.go:80] save to tar file registry.k8s.io/kube-scheduler:v1.29.0-rc.2 -> /Users/jenkins/minikube-integration/18429-15072/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.29.0-rc.2 succeeded
	I0318 04:46:21.933707   18767 cache.go:157] /Users/jenkins/minikube-integration/18429-15072/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.11.1 exists
	I0318 04:46:21.933756   18767 cache.go:96] cache image "registry.k8s.io/coredns/coredns:v1.11.1" -> "/Users/jenkins/minikube-integration/18429-15072/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.11.1" took 5.550777959s
	I0318 04:46:21.933780   18767 cache.go:80] save to tar file registry.k8s.io/coredns/coredns:v1.11.1 -> /Users/jenkins/minikube-integration/18429-15072/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.11.1 succeeded
	I0318 04:46:22.546851   18767 cache.go:157] /Users/jenkins/minikube-integration/18429-15072/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.29.0-rc.2 exists
	I0318 04:46:22.546880   18767 cache.go:96] cache image "registry.k8s.io/kube-controller-manager:v1.29.0-rc.2" -> "/Users/jenkins/minikube-integration/18429-15072/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.29.0-rc.2" took 6.164036084s
	I0318 04:46:22.546899   18767 cache.go:80] save to tar file registry.k8s.io/kube-controller-manager:v1.29.0-rc.2 -> /Users/jenkins/minikube-integration/18429-15072/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.29.0-rc.2 succeeded
	I0318 04:46:23.013244   18767 cache.go:157] /Users/jenkins/minikube-integration/18429-15072/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.29.0-rc.2 exists
	I0318 04:46:23.013263   18767 cache.go:96] cache image "registry.k8s.io/kube-proxy:v1.29.0-rc.2" -> "/Users/jenkins/minikube-integration/18429-15072/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.29.0-rc.2" took 6.630084333s
	I0318 04:46:23.013276   18767 cache.go:80] save to tar file registry.k8s.io/kube-proxy:v1.29.0-rc.2 -> /Users/jenkins/minikube-integration/18429-15072/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.29.0-rc.2 succeeded
	I0318 04:46:23.684322   18767 start.go:360] acquireMachinesLock for no-preload-204000: {Name:mkdef65b5c2b3344d8453e477bf2f170fbff3359 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0318 04:46:23.684622   18767 start.go:364] duration metric: took 236.542µs to acquireMachinesLock for "no-preload-204000"
	I0318 04:46:23.684786   18767 start.go:93] Provisioning new machine with config: &{Name:no-preload-204000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17991/minikube-v1.32.1-1710520390-17991-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{
KubernetesVersion:v1.29.0-rc.2 ClusterName:no-preload-204000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.29.0-rc.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 M
ountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.29.0-rc.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0318 04:46:23.685067   18767 start.go:125] createHost starting for "" (driver="qemu2")
	I0318 04:46:23.695779   18767 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0318 04:46:23.745949   18767 start.go:159] libmachine.API.Create for "no-preload-204000" (driver="qemu2")
	I0318 04:46:23.746005   18767 client.go:168] LocalClient.Create starting
	I0318 04:46:23.746114   18767 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18429-15072/.minikube/certs/ca.pem
	I0318 04:46:23.746187   18767 main.go:141] libmachine: Decoding PEM data...
	I0318 04:46:23.746206   18767 main.go:141] libmachine: Parsing certificate...
	I0318 04:46:23.746284   18767 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18429-15072/.minikube/certs/cert.pem
	I0318 04:46:23.746312   18767 main.go:141] libmachine: Decoding PEM data...
	I0318 04:46:23.746323   18767 main.go:141] libmachine: Parsing certificate...
	I0318 04:46:23.746845   18767 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/18429-15072/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/18429-15072/.minikube/cache/iso/arm64/minikube-v1.32.1-1710520390-17991-arm64.iso...
	I0318 04:46:23.899032   18767 main.go:141] libmachine: Creating SSH key...
	I0318 04:46:24.004811   18767 main.go:141] libmachine: Creating Disk image...
	I0318 04:46:24.004817   18767 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0318 04:46:24.005001   18767 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/18429-15072/.minikube/machines/no-preload-204000/disk.qcow2.raw /Users/jenkins/minikube-integration/18429-15072/.minikube/machines/no-preload-204000/disk.qcow2
	I0318 04:46:24.017702   18767 main.go:141] libmachine: STDOUT: 
	I0318 04:46:24.017726   18767 main.go:141] libmachine: STDERR: 
	I0318 04:46:24.017777   18767 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/18429-15072/.minikube/machines/no-preload-204000/disk.qcow2 +20000M
	I0318 04:46:24.028764   18767 main.go:141] libmachine: STDOUT: Image resized.
	
	I0318 04:46:24.028793   18767 main.go:141] libmachine: STDERR: 
	I0318 04:46:24.028812   18767 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/18429-15072/.minikube/machines/no-preload-204000/disk.qcow2.raw and /Users/jenkins/minikube-integration/18429-15072/.minikube/machines/no-preload-204000/disk.qcow2
	I0318 04:46:24.028815   18767 main.go:141] libmachine: Starting QEMU VM...
	I0318 04:46:24.028853   18767 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18429-15072/.minikube/machines/no-preload-204000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18429-15072/.minikube/machines/no-preload-204000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18429-15072/.minikube/machines/no-preload-204000/qemu.pid -device virtio-net-pci,netdev=net0,mac=16:c3:71:08:b9:58 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18429-15072/.minikube/machines/no-preload-204000/disk.qcow2
	I0318 04:46:24.030887   18767 main.go:141] libmachine: STDOUT: 
	I0318 04:46:24.030907   18767 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0318 04:46:24.030922   18767 client.go:171] duration metric: took 284.914125ms to LocalClient.Create
	I0318 04:46:24.099013   18767 cache.go:157] /Users/jenkins/minikube-integration/18429-15072/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.29.0-rc.2 exists
	I0318 04:46:24.099030   18767 cache.go:96] cache image "registry.k8s.io/kube-apiserver:v1.29.0-rc.2" -> "/Users/jenkins/minikube-integration/18429-15072/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.29.0-rc.2" took 7.716223125s
	I0318 04:46:24.099039   18767 cache.go:80] save to tar file registry.k8s.io/kube-apiserver:v1.29.0-rc.2 -> /Users/jenkins/minikube-integration/18429-15072/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.29.0-rc.2 succeeded
	I0318 04:46:26.031290   18767 start.go:128] duration metric: took 2.346159375s to createHost
	I0318 04:46:26.031389   18767 start.go:83] releasing machines lock for "no-preload-204000", held for 2.346767917s
	W0318 04:46:26.031667   18767 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p no-preload-204000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p no-preload-204000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0318 04:46:26.048448   18767 out.go:177] 
	W0318 04:46:26.053447   18767 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0318 04:46:26.053472   18767 out.go:239] * 
	* 
	W0318 04:46:26.055062   18767 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0318 04:46:26.062220   18767 out.go:177] 

** /stderr **
start_stop_delete_test.go:188: failed starting minikube -first start-. args "out/minikube-darwin-arm64 start -p no-preload-204000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=qemu2  --kubernetes-version=v1.29.0-rc.2": exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-204000 -n no-preload-204000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-204000 -n no-preload-204000: exit status 7 (57.196166ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "no-preload-204000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/no-preload/serial/FirstStart (9.89s)

TestStartStop/group/old-k8s-version/serial/DeployApp (0.09s)

=== RUN   TestStartStop/group/old-k8s-version/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context old-k8s-version-421000 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) Non-zero exit: kubectl --context old-k8s-version-421000 create -f testdata/busybox.yaml: exit status 1 (29.410916ms)

** stderr ** 
	error: context "old-k8s-version-421000" does not exist

** /stderr **
start_stop_delete_test.go:196: kubectl --context old-k8s-version-421000 create -f testdata/busybox.yaml failed: exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-421000 -n old-k8s-version-421000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-421000 -n old-k8s-version-421000: exit status 7 (30.689041ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "old-k8s-version-421000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-421000 -n old-k8s-version-421000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-421000 -n old-k8s-version-421000: exit status 7 (30.847125ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "old-k8s-version-421000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/DeployApp (0.09s)

TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (0.11s)

=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-darwin-arm64 addons enable metrics-server -p old-k8s-version-421000 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context old-k8s-version-421000 describe deploy/metrics-server -n kube-system
start_stop_delete_test.go:215: (dbg) Non-zero exit: kubectl --context old-k8s-version-421000 describe deploy/metrics-server -n kube-system: exit status 1 (26.316333ms)

** stderr ** 
	error: context "old-k8s-version-421000" does not exist

** /stderr **
start_stop_delete_test.go:217: failed to get info on auto-pause deployments. args "kubectl --context old-k8s-version-421000 describe deploy/metrics-server -n kube-system": exit status 1
start_stop_delete_test.go:221: addon did not load correct image. Expected to contain " fake.domain/registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-421000 -n old-k8s-version-421000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-421000 -n old-k8s-version-421000: exit status 7 (31.186666ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "old-k8s-version-421000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (0.11s)

TestStartStop/group/no-preload/serial/DeployApp (0.1s)

=== RUN   TestStartStop/group/no-preload/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context no-preload-204000 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) Non-zero exit: kubectl --context no-preload-204000 create -f testdata/busybox.yaml: exit status 1 (28.327625ms)

** stderr ** 
	error: context "no-preload-204000" does not exist

** /stderr **
start_stop_delete_test.go:196: kubectl --context no-preload-204000 create -f testdata/busybox.yaml failed: exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-204000 -n no-preload-204000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-204000 -n no-preload-204000: exit status 7 (30.009625ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "no-preload-204000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-204000 -n no-preload-204000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-204000 -n no-preload-204000: exit status 7 (38.00325ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "no-preload-204000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/no-preload/serial/DeployApp (0.10s)

TestStartStop/group/no-preload/serial/EnableAddonWhileActive (0.12s)

=== RUN   TestStartStop/group/no-preload/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-darwin-arm64 addons enable metrics-server -p no-preload-204000 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context no-preload-204000 describe deploy/metrics-server -n kube-system
start_stop_delete_test.go:215: (dbg) Non-zero exit: kubectl --context no-preload-204000 describe deploy/metrics-server -n kube-system: exit status 1 (29.117458ms)

** stderr ** 
	error: context "no-preload-204000" does not exist

** /stderr **
start_stop_delete_test.go:217: failed to get info on auto-pause deployments. args "kubectl --context no-preload-204000 describe deploy/metrics-server -n kube-system": exit status 1
start_stop_delete_test.go:221: addon did not load correct image. Expected to contain " fake.domain/registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-204000 -n no-preload-204000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-204000 -n no-preload-204000: exit status 7 (31.533084ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "no-preload-204000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/no-preload/serial/EnableAddonWhileActive (0.12s)

TestStartStop/group/old-k8s-version/serial/SecondStart (5.27s)

=== RUN   TestStartStop/group/old-k8s-version/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-darwin-arm64 start -p old-k8s-version-421000 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=qemu2  --kubernetes-version=v1.20.0
start_stop_delete_test.go:256: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p old-k8s-version-421000 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=qemu2  --kubernetes-version=v1.20.0: exit status 80 (5.20111575s)

-- stdout --
	* [old-k8s-version-421000] minikube v1.32.0 on Darwin 14.3.1 (arm64)
	  - MINIKUBE_LOCATION=18429
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18429-15072/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18429-15072/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Kubernetes 1.28.4 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.28.4
	* Using the qemu2 driver based on existing profile
	* Starting "old-k8s-version-421000" primary control-plane node in "old-k8s-version-421000" cluster
	* Restarting existing qemu2 VM for "old-k8s-version-421000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "old-k8s-version-421000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0318 04:46:26.384255   18859 out.go:291] Setting OutFile to fd 1 ...
	I0318 04:46:26.384380   18859 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0318 04:46:26.384383   18859 out.go:304] Setting ErrFile to fd 2...
	I0318 04:46:26.384385   18859 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0318 04:46:26.384520   18859 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18429-15072/.minikube/bin
	I0318 04:46:26.385570   18859 out.go:298] Setting JSON to false
	I0318 04:46:26.402282   18859 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":9959,"bootTime":1710752427,"procs":479,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.3.1","kernelVersion":"23.3.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0318 04:46:26.402343   18859 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0318 04:46:26.405755   18859 out.go:177] * [old-k8s-version-421000] minikube v1.32.0 on Darwin 14.3.1 (arm64)
	I0318 04:46:26.416843   18859 out.go:177]   - MINIKUBE_LOCATION=18429
	I0318 04:46:26.412959   18859 notify.go:220] Checking for updates...
	I0318 04:46:26.424886   18859 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/18429-15072/kubeconfig
	I0318 04:46:26.427821   18859 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0318 04:46:26.432043   18859 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0318 04:46:26.435641   18859 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18429-15072/.minikube
	I0318 04:46:26.438911   18859 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0318 04:46:26.442153   18859 config.go:182] Loaded profile config "old-k8s-version-421000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.20.0
	I0318 04:46:26.444913   18859 out.go:177] * Kubernetes 1.28.4 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.28.4
	I0318 04:46:26.447858   18859 driver.go:392] Setting default libvirt URI to qemu:///system
	I0318 04:46:26.451930   18859 out.go:177] * Using the qemu2 driver based on existing profile
	I0318 04:46:26.458850   18859 start.go:297] selected driver: qemu2
	I0318 04:46:26.458857   18859 start.go:901] validating driver "qemu2" against &{Name:old-k8s-version-421000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17991/minikube-v1.32.1-1710520390-17991-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:
{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-421000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:
0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0318 04:46:26.458932   18859 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0318 04:46:26.461451   18859 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0318 04:46:26.461504   18859 cni.go:84] Creating CNI manager for ""
	I0318 04:46:26.461511   18859 cni.go:162] CNI unnecessary in this configuration, recommending no CNI
	I0318 04:46:26.461537   18859 start.go:340] cluster config:
	{Name:old-k8s-version-421000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17991/minikube-v1.32.1-1710520390-17991-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-421000 Namespace:default
APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount
9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0318 04:46:26.466092   18859 iso.go:125] acquiring lock: {Name:mkb8143674083e0c7a46a3ed751b3800392bcd24 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0318 04:46:26.473871   18859 out.go:177] * Starting "old-k8s-version-421000" primary control-plane node in "old-k8s-version-421000" cluster
	I0318 04:46:26.478988   18859 preload.go:132] Checking if preload exists for k8s version v1.20.0 and runtime docker
	I0318 04:46:26.479005   18859 preload.go:147] Found local preload: /Users/jenkins/minikube-integration/18429-15072/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4
	I0318 04:46:26.479019   18859 cache.go:56] Caching tarball of preloaded images
	I0318 04:46:26.479085   18859 preload.go:173] Found /Users/jenkins/minikube-integration/18429-15072/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0318 04:46:26.479091   18859 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on docker
	I0318 04:46:26.479153   18859 profile.go:142] Saving config to /Users/jenkins/minikube-integration/18429-15072/.minikube/profiles/old-k8s-version-421000/config.json ...
	I0318 04:46:26.479709   18859 start.go:360] acquireMachinesLock for old-k8s-version-421000: {Name:mkdef65b5c2b3344d8453e477bf2f170fbff3359 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0318 04:46:26.479737   18859 start.go:364] duration metric: took 21.334µs to acquireMachinesLock for "old-k8s-version-421000"
	I0318 04:46:26.479746   18859 start.go:96] Skipping create...Using existing machine configuration
	I0318 04:46:26.479752   18859 fix.go:54] fixHost starting: 
	I0318 04:46:26.479878   18859 fix.go:112] recreateIfNeeded on old-k8s-version-421000: state=Stopped err=<nil>
	W0318 04:46:26.479887   18859 fix.go:138] unexpected machine state, will restart: <nil>
	I0318 04:46:26.482860   18859 out.go:177] * Restarting existing qemu2 VM for "old-k8s-version-421000" ...
	I0318 04:46:26.489887   18859 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18429-15072/.minikube/machines/old-k8s-version-421000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18429-15072/.minikube/machines/old-k8s-version-421000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18429-15072/.minikube/machines/old-k8s-version-421000/qemu.pid -device virtio-net-pci,netdev=net0,mac=9a:95:16:9e:57:59 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18429-15072/.minikube/machines/old-k8s-version-421000/disk.qcow2
	I0318 04:46:26.492036   18859 main.go:141] libmachine: STDOUT: 
	I0318 04:46:26.492060   18859 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0318 04:46:26.492092   18859 fix.go:56] duration metric: took 12.340541ms for fixHost
	I0318 04:46:26.492097   18859 start.go:83] releasing machines lock for "old-k8s-version-421000", held for 12.356042ms
	W0318 04:46:26.492106   18859 start.go:713] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0318 04:46:26.492145   18859 out.go:239] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0318 04:46:26.492150   18859 start.go:728] Will try again in 5 seconds ...
	I0318 04:46:31.494161   18859 start.go:360] acquireMachinesLock for old-k8s-version-421000: {Name:mkdef65b5c2b3344d8453e477bf2f170fbff3359 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0318 04:46:31.494557   18859 start.go:364] duration metric: took 305.458µs to acquireMachinesLock for "old-k8s-version-421000"
	I0318 04:46:31.494679   18859 start.go:96] Skipping create...Using existing machine configuration
	I0318 04:46:31.494702   18859 fix.go:54] fixHost starting: 
	I0318 04:46:31.495453   18859 fix.go:112] recreateIfNeeded on old-k8s-version-421000: state=Stopped err=<nil>
	W0318 04:46:31.495481   18859 fix.go:138] unexpected machine state, will restart: <nil>
	I0318 04:46:31.500865   18859 out.go:177] * Restarting existing qemu2 VM for "old-k8s-version-421000" ...
	I0318 04:46:31.506019   18859 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18429-15072/.minikube/machines/old-k8s-version-421000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18429-15072/.minikube/machines/old-k8s-version-421000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18429-15072/.minikube/machines/old-k8s-version-421000/qemu.pid -device virtio-net-pci,netdev=net0,mac=9a:95:16:9e:57:59 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18429-15072/.minikube/machines/old-k8s-version-421000/disk.qcow2
	I0318 04:46:31.516231   18859 main.go:141] libmachine: STDOUT: 
	I0318 04:46:31.516291   18859 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0318 04:46:31.516381   18859 fix.go:56] duration metric: took 21.683666ms for fixHost
	I0318 04:46:31.516397   18859 start.go:83] releasing machines lock for "old-k8s-version-421000", held for 21.8175ms
	W0318 04:46:31.516653   18859 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p old-k8s-version-421000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p old-k8s-version-421000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0318 04:46:31.524814   18859 out.go:177] 
	W0318 04:46:31.527910   18859 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0318 04:46:31.527955   18859 out.go:239] * 
	* 
	W0318 04:46:31.530548   18859 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0318 04:46:31.538672   18859 out.go:177] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:259: failed to start minikube post-stop. args "out/minikube-darwin-arm64 start -p old-k8s-version-421000 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=qemu2  --kubernetes-version=v1.20.0": exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-421000 -n old-k8s-version-421000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-421000 -n old-k8s-version-421000: exit status 7 (68.286542ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "old-k8s-version-421000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/SecondStart (5.27s)

                                                
                                    
TestStartStop/group/no-preload/serial/SecondStart (5.28s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-darwin-arm64 start -p no-preload-204000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=qemu2  --kubernetes-version=v1.29.0-rc.2
start_stop_delete_test.go:256: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p no-preload-204000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=qemu2  --kubernetes-version=v1.29.0-rc.2: exit status 80 (5.211086375s)

                                                
                                                
-- stdout --
	* [no-preload-204000] minikube v1.32.0 on Darwin 14.3.1 (arm64)
	  - MINIKUBE_LOCATION=18429
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18429-15072/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18429-15072/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting "no-preload-204000" primary control-plane node in "no-preload-204000" cluster
	* Restarting existing qemu2 VM for "no-preload-204000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "no-preload-204000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0318 04:46:29.930632   18888 out.go:291] Setting OutFile to fd 1 ...
	I0318 04:46:29.930782   18888 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0318 04:46:29.930785   18888 out.go:304] Setting ErrFile to fd 2...
	I0318 04:46:29.930788   18888 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0318 04:46:29.930919   18888 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18429-15072/.minikube/bin
	I0318 04:46:29.931882   18888 out.go:298] Setting JSON to false
	I0318 04:46:29.948038   18888 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":9962,"bootTime":1710752427,"procs":479,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.3.1","kernelVersion":"23.3.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0318 04:46:29.948093   18888 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0318 04:46:29.953370   18888 out.go:177] * [no-preload-204000] minikube v1.32.0 on Darwin 14.3.1 (arm64)
	I0318 04:46:29.960359   18888 out.go:177]   - MINIKUBE_LOCATION=18429
	I0318 04:46:29.964342   18888 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/18429-15072/kubeconfig
	I0318 04:46:29.960407   18888 notify.go:220] Checking for updates...
	I0318 04:46:29.968373   18888 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0318 04:46:29.971403   18888 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0318 04:46:29.975333   18888 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18429-15072/.minikube
	I0318 04:46:29.986141   18888 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0318 04:46:29.989688   18888 config.go:182] Loaded profile config "no-preload-204000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.29.0-rc.2
	I0318 04:46:29.989968   18888 driver.go:392] Setting default libvirt URI to qemu:///system
	I0318 04:46:29.994328   18888 out.go:177] * Using the qemu2 driver based on existing profile
	I0318 04:46:30.001339   18888 start.go:297] selected driver: qemu2
	I0318 04:46:30.001344   18888 start.go:901] validating driver "qemu2" against &{Name:no-preload-204000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17991/minikube-v1.32.1-1710520390-17991-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.0-rc.2 ClusterName:no-preload-204000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.29.0-rc.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0318 04:46:30.001403   18888 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0318 04:46:30.003849   18888 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0318 04:46:30.003901   18888 cni.go:84] Creating CNI manager for ""
	I0318 04:46:30.003909   18888 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0318 04:46:30.003940   18888 start.go:340] cluster config:
	{Name:no-preload-204000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17991/minikube-v1.32.1-1710520390-17991-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.0-rc.2 ClusterName:no-preload-204000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.29.0-rc.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0318 04:46:30.008575   18888 iso.go:125] acquiring lock: {Name:mkb8143674083e0c7a46a3ed751b3800392bcd24 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0318 04:46:30.017381   18888 out.go:177] * Starting "no-preload-204000" primary control-plane node in "no-preload-204000" cluster
	I0318 04:46:30.021301   18888 preload.go:132] Checking if preload exists for k8s version v1.29.0-rc.2 and runtime docker
	I0318 04:46:30.021372   18888 profile.go:142] Saving config to /Users/jenkins/minikube-integration/18429-15072/.minikube/profiles/no-preload-204000/config.json ...
	I0318 04:46:30.021397   18888 cache.go:107] acquiring lock: {Name:mk368de4369b4269f4f86d0406c895e179ee8d50 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0318 04:46:30.021432   18888 cache.go:107] acquiring lock: {Name:mk378f2696937ce1e0284a473e5e9a28e6f278ac Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0318 04:46:30.021429   18888 cache.go:107] acquiring lock: {Name:mkd85cca51ead323a9dae13b1686e65efd820b93 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0318 04:46:30.021455   18888 cache.go:115] /Users/jenkins/minikube-integration/18429-15072/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 exists
	I0318 04:46:30.021464   18888 cache.go:96] cache image "gcr.io/k8s-minikube/storage-provisioner:v5" -> "/Users/jenkins/minikube-integration/18429-15072/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5" took 69.625µs
	I0318 04:46:30.021471   18888 cache.go:80] save to tar file gcr.io/k8s-minikube/storage-provisioner:v5 -> /Users/jenkins/minikube-integration/18429-15072/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 succeeded
	I0318 04:46:30.021467   18888 cache.go:107] acquiring lock: {Name:mk33f00f12d012d527631c59bc48ecedc29de51f Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0318 04:46:30.021478   18888 cache.go:107] acquiring lock: {Name:mk5c6c4938ee3453e8aff80197caae9d3ccb88b8 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0318 04:46:30.021513   18888 cache.go:115] /Users/jenkins/minikube-integration/18429-15072/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.29.0-rc.2 exists
	I0318 04:46:30.021521   18888 cache.go:96] cache image "registry.k8s.io/kube-apiserver:v1.29.0-rc.2" -> "/Users/jenkins/minikube-integration/18429-15072/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.29.0-rc.2" took 120.125µs
	I0318 04:46:30.021526   18888 cache.go:80] save to tar file registry.k8s.io/kube-apiserver:v1.29.0-rc.2 -> /Users/jenkins/minikube-integration/18429-15072/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.29.0-rc.2 succeeded
	I0318 04:46:30.021537   18888 cache.go:115] /Users/jenkins/minikube-integration/18429-15072/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.11.1 exists
	I0318 04:46:30.021529   18888 cache.go:115] /Users/jenkins/minikube-integration/18429-15072/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.29.0-rc.2 exists
	I0318 04:46:30.021541   18888 cache.go:96] cache image "registry.k8s.io/coredns/coredns:v1.11.1" -> "/Users/jenkins/minikube-integration/18429-15072/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.11.1" took 63.542µs
	I0318 04:46:30.021546   18888 cache.go:80] save to tar file registry.k8s.io/coredns/coredns:v1.11.1 -> /Users/jenkins/minikube-integration/18429-15072/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.11.1 succeeded
	I0318 04:46:30.021546   18888 cache.go:96] cache image "registry.k8s.io/kube-controller-manager:v1.29.0-rc.2" -> "/Users/jenkins/minikube-integration/18429-15072/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.29.0-rc.2" took 79.25µs
	I0318 04:46:30.021538   18888 cache.go:107] acquiring lock: {Name:mk64f113561474ac439f58609cdaa7ea452ce3a7 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0318 04:46:30.021551   18888 cache.go:80] save to tar file registry.k8s.io/kube-controller-manager:v1.29.0-rc.2 -> /Users/jenkins/minikube-integration/18429-15072/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.29.0-rc.2 succeeded
	I0318 04:46:30.021591   18888 cache.go:115] /Users/jenkins/minikube-integration/18429-15072/.minikube/cache/images/arm64/registry.k8s.io/pause_3.9 exists
	I0318 04:46:30.021597   18888 cache.go:96] cache image "registry.k8s.io/pause:3.9" -> "/Users/jenkins/minikube-integration/18429-15072/.minikube/cache/images/arm64/registry.k8s.io/pause_3.9" took 59.5µs
	I0318 04:46:30.021600   18888 cache.go:80] save to tar file registry.k8s.io/pause:3.9 -> /Users/jenkins/minikube-integration/18429-15072/.minikube/cache/images/arm64/registry.k8s.io/pause_3.9 succeeded
	I0318 04:46:30.021616   18888 cache.go:107] acquiring lock: {Name:mkb9e7c49376117208d088ffd485c1cef375d580 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0318 04:46:30.021618   18888 cache.go:115] /Users/jenkins/minikube-integration/18429-15072/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.29.0-rc.2 exists
	I0318 04:46:30.021631   18888 cache.go:96] cache image "registry.k8s.io/kube-proxy:v1.29.0-rc.2" -> "/Users/jenkins/minikube-integration/18429-15072/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.29.0-rc.2" took 215.833µs
	I0318 04:46:30.021644   18888 cache.go:80] save to tar file registry.k8s.io/kube-proxy:v1.29.0-rc.2 -> /Users/jenkins/minikube-integration/18429-15072/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.29.0-rc.2 succeeded
	I0318 04:46:30.021657   18888 cache.go:107] acquiring lock: {Name:mkcda01548dc62d8a734f989cfb13404c3ab5d68 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0318 04:46:30.021699   18888 image.go:134] retrieving image: registry.k8s.io/etcd:3.5.10-0
	I0318 04:46:30.021703   18888 cache.go:115] /Users/jenkins/minikube-integration/18429-15072/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.29.0-rc.2 exists
	I0318 04:46:30.021726   18888 cache.go:96] cache image "registry.k8s.io/kube-scheduler:v1.29.0-rc.2" -> "/Users/jenkins/minikube-integration/18429-15072/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.29.0-rc.2" took 156.792µs
	I0318 04:46:30.021733   18888 cache.go:80] save to tar file registry.k8s.io/kube-scheduler:v1.29.0-rc.2 -> /Users/jenkins/minikube-integration/18429-15072/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.29.0-rc.2 succeeded
	I0318 04:46:30.021768   18888 start.go:360] acquireMachinesLock for no-preload-204000: {Name:mkdef65b5c2b3344d8453e477bf2f170fbff3359 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0318 04:46:30.021802   18888 start.go:364] duration metric: took 26.458µs to acquireMachinesLock for "no-preload-204000"
	I0318 04:46:30.021815   18888 start.go:96] Skipping create...Using existing machine configuration
	I0318 04:46:30.021821   18888 fix.go:54] fixHost starting: 
	I0318 04:46:30.021947   18888 fix.go:112] recreateIfNeeded on no-preload-204000: state=Stopped err=<nil>
	W0318 04:46:30.021957   18888 fix.go:138] unexpected machine state, will restart: <nil>
	I0318 04:46:30.033314   18888 out.go:177] * Restarting existing qemu2 VM for "no-preload-204000" ...
	I0318 04:46:30.037420   18888 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18429-15072/.minikube/machines/no-preload-204000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18429-15072/.minikube/machines/no-preload-204000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18429-15072/.minikube/machines/no-preload-204000/qemu.pid -device virtio-net-pci,netdev=net0,mac=16:c3:71:08:b9:58 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18429-15072/.minikube/machines/no-preload-204000/disk.qcow2
	I0318 04:46:30.038065   18888 image.go:177] daemon lookup for registry.k8s.io/etcd:3.5.10-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.10-0
	I0318 04:46:30.040003   18888 main.go:141] libmachine: STDOUT: 
	I0318 04:46:30.040031   18888 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0318 04:46:30.040075   18888 fix.go:56] duration metric: took 18.25475ms for fixHost
	I0318 04:46:30.040080   18888 start.go:83] releasing machines lock for "no-preload-204000", held for 18.271333ms
	W0318 04:46:30.040089   18888 start.go:713] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0318 04:46:30.040125   18888 out.go:239] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0318 04:46:30.040131   18888 start.go:728] Will try again in 5 seconds ...
	I0318 04:46:32.006267   18888 cache.go:162] opening:  /Users/jenkins/minikube-integration/18429-15072/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.10-0
	I0318 04:46:35.040329   18888 start.go:360] acquireMachinesLock for no-preload-204000: {Name:mkdef65b5c2b3344d8453e477bf2f170fbff3359 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0318 04:46:35.040708   18888 start.go:364] duration metric: took 292.292µs to acquireMachinesLock for "no-preload-204000"
	I0318 04:46:35.040835   18888 start.go:96] Skipping create...Using existing machine configuration
	I0318 04:46:35.040855   18888 fix.go:54] fixHost starting: 
	I0318 04:46:35.041570   18888 fix.go:112] recreateIfNeeded on no-preload-204000: state=Stopped err=<nil>
	W0318 04:46:35.041598   18888 fix.go:138] unexpected machine state, will restart: <nil>
	I0318 04:46:35.047078   18888 out.go:177] * Restarting existing qemu2 VM for "no-preload-204000" ...
	I0318 04:46:35.062280   18888 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18429-15072/.minikube/machines/no-preload-204000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18429-15072/.minikube/machines/no-preload-204000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18429-15072/.minikube/machines/no-preload-204000/qemu.pid -device virtio-net-pci,netdev=net0,mac=16:c3:71:08:b9:58 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18429-15072/.minikube/machines/no-preload-204000/disk.qcow2
	I0318 04:46:35.072421   18888 main.go:141] libmachine: STDOUT: 
	I0318 04:46:35.072485   18888 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0318 04:46:35.072565   18888 fix.go:56] duration metric: took 31.713166ms for fixHost
	I0318 04:46:35.072582   18888 start.go:83] releasing machines lock for "no-preload-204000", held for 31.851917ms
	W0318 04:46:35.072799   18888 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p no-preload-204000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p no-preload-204000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0318 04:46:35.081016   18888 out.go:177] 
	W0318 04:46:35.085095   18888 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0318 04:46:35.085128   18888 out.go:239] * 
	* 
	W0318 04:46:35.087782   18888 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0318 04:46:35.097041   18888 out.go:177] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:259: failed to start minikube post-stop. args "out/minikube-darwin-arm64 start -p no-preload-204000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=qemu2  --kubernetes-version=v1.29.0-rc.2": exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-204000 -n no-preload-204000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-204000 -n no-preload-204000: exit status 7 (69.833417ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "no-preload-204000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/no-preload/serial/SecondStart (5.28s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (0.03s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop
start_stop_delete_test.go:275: failed waiting for 'addon dashboard' pod post-stop-start: client config: context "old-k8s-version-421000" does not exist
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-421000 -n old-k8s-version-421000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-421000 -n old-k8s-version-421000: exit status 7 (33.206958ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "old-k8s-version-421000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (0.03s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (0.06s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop
start_stop_delete_test.go:288: failed waiting for 'addon dashboard' pod post-stop-start: client config: context "old-k8s-version-421000" does not exist
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context old-k8s-version-421000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
start_stop_delete_test.go:291: (dbg) Non-zero exit: kubectl --context old-k8s-version-421000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard: exit status 1 (27.013416ms)

                                                
                                                
** stderr ** 
	error: context "old-k8s-version-421000" does not exist

                                                
                                                
** /stderr **
start_stop_delete_test.go:293: failed to get info on kubernetes-dashboard deployments. args "kubectl --context old-k8s-version-421000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard": exit status 1
start_stop_delete_test.go:297: addon did not load correct image. Expected to contain " registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-421000 -n old-k8s-version-421000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-421000 -n old-k8s-version-421000: exit status 7 (30.874125ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "old-k8s-version-421000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (0.06s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.07s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-darwin-arm64 -p old-k8s-version-421000 image list --format=json
start_stop_delete_test.go:304: v1.20.0 images missing (-want +got):
[]string{
- 	"k8s.gcr.io/coredns:1.7.0",
- 	"k8s.gcr.io/etcd:3.4.13-0",
- 	"k8s.gcr.io/k8s-minikube/storage-provisioner:v5",
- 	"k8s.gcr.io/kube-apiserver:v1.20.0",
- 	"k8s.gcr.io/kube-controller-manager:v1.20.0",
- 	"k8s.gcr.io/kube-proxy:v1.20.0",
- 	"k8s.gcr.io/kube-scheduler:v1.20.0",
- 	"k8s.gcr.io/pause:3.2",
}
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-421000 -n old-k8s-version-421000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-421000 -n old-k8s-version-421000: exit status 7 (30.708042ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "old-k8s-version-421000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.07s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/Pause (0.11s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-arm64 pause -p old-k8s-version-421000 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-darwin-arm64 pause -p old-k8s-version-421000 --alsologtostderr -v=1: exit status 83 (43.683834ms)

                                                
                                                
-- stdout --
	* The control-plane node old-k8s-version-421000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p old-k8s-version-421000"

                                                
                                                
-- /stdout --
** stderr ** 
	I0318 04:46:31.818446   18911 out.go:291] Setting OutFile to fd 1 ...
	I0318 04:46:31.818808   18911 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0318 04:46:31.818816   18911 out.go:304] Setting ErrFile to fd 2...
	I0318 04:46:31.818819   18911 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0318 04:46:31.818972   18911 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18429-15072/.minikube/bin
	I0318 04:46:31.819172   18911 out.go:298] Setting JSON to false
	I0318 04:46:31.819180   18911 mustload.go:65] Loading cluster: old-k8s-version-421000
	I0318 04:46:31.819361   18911 config.go:182] Loaded profile config "old-k8s-version-421000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.20.0
	I0318 04:46:31.823365   18911 out.go:177] * The control-plane node old-k8s-version-421000 host is not running: state=Stopped
	I0318 04:46:31.827412   18911 out.go:177]   To start a cluster, run: "minikube start -p old-k8s-version-421000"

                                                
                                                
** /stderr **
start_stop_delete_test.go:311: out/minikube-darwin-arm64 pause -p old-k8s-version-421000 --alsologtostderr -v=1 failed: exit status 83
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-421000 -n old-k8s-version-421000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-421000 -n old-k8s-version-421000: exit status 7 (30.355084ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "old-k8s-version-421000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-421000 -n old-k8s-version-421000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-421000 -n old-k8s-version-421000: exit status 7 (30.701ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "old-k8s-version-421000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/Pause (0.11s)

                                                
                                    
TestStartStop/group/embed-certs/serial/FirstStart (10.19s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-darwin-arm64 start -p embed-certs-177000 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=qemu2  --kubernetes-version=v1.28.4
start_stop_delete_test.go:186: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p embed-certs-177000 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=qemu2  --kubernetes-version=v1.28.4: exit status 80 (10.120773458s)

                                                
                                                
-- stdout --
	* [embed-certs-177000] minikube v1.32.0 on Darwin 14.3.1 (arm64)
	  - MINIKUBE_LOCATION=18429
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18429-15072/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18429-15072/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "embed-certs-177000" primary control-plane node in "embed-certs-177000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "embed-certs-177000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0318 04:46:32.291837   18934 out.go:291] Setting OutFile to fd 1 ...
	I0318 04:46:32.291974   18934 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0318 04:46:32.291980   18934 out.go:304] Setting ErrFile to fd 2...
	I0318 04:46:32.291982   18934 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0318 04:46:32.292136   18934 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18429-15072/.minikube/bin
	I0318 04:46:32.293486   18934 out.go:298] Setting JSON to false
	I0318 04:46:32.309684   18934 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":9965,"bootTime":1710752427,"procs":479,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.3.1","kernelVersion":"23.3.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0318 04:46:32.309744   18934 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0318 04:46:32.314142   18934 out.go:177] * [embed-certs-177000] minikube v1.32.0 on Darwin 14.3.1 (arm64)
	I0318 04:46:32.325064   18934 out.go:177]   - MINIKUBE_LOCATION=18429
	I0318 04:46:32.322127   18934 notify.go:220] Checking for updates...
	I0318 04:46:32.333027   18934 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/18429-15072/kubeconfig
	I0318 04:46:32.336176   18934 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0318 04:46:32.339005   18934 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0318 04:46:32.342018   18934 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18429-15072/.minikube
	I0318 04:46:32.345071   18934 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0318 04:46:32.348363   18934 config.go:182] Loaded profile config "multinode-969000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I0318 04:46:32.348435   18934 config.go:182] Loaded profile config "no-preload-204000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.29.0-rc.2
	I0318 04:46:32.348476   18934 driver.go:392] Setting default libvirt URI to qemu:///system
	I0318 04:46:32.353064   18934 out.go:177] * Using the qemu2 driver based on user configuration
	I0318 04:46:32.359020   18934 start.go:297] selected driver: qemu2
	I0318 04:46:32.359025   18934 start.go:901] validating driver "qemu2" against <nil>
	I0318 04:46:32.359030   18934 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0318 04:46:32.361236   18934 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0318 04:46:32.364036   18934 out.go:177] * Automatically selected the socket_vmnet network
	I0318 04:46:32.367151   18934 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0318 04:46:32.367187   18934 cni.go:84] Creating CNI manager for ""
	I0318 04:46:32.367195   18934 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0318 04:46:32.367200   18934 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0318 04:46:32.367233   18934 start.go:340] cluster config:
	{Name:embed-certs-177000 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:embed-certs-177000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0318 04:46:32.372007   18934 iso.go:125] acquiring lock: {Name:mkb8143674083e0c7a46a3ed751b3800392bcd24 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0318 04:46:32.380055   18934 out.go:177] * Starting "embed-certs-177000" primary control-plane node in "embed-certs-177000" cluster
	I0318 04:46:32.384073   18934 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime docker
	I0318 04:46:32.384089   18934 preload.go:147] Found local preload: /Users/jenkins/minikube-integration/18429-15072/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-docker-overlay2-arm64.tar.lz4
	I0318 04:46:32.384095   18934 cache.go:56] Caching tarball of preloaded images
	I0318 04:46:32.384151   18934 preload.go:173] Found /Users/jenkins/minikube-integration/18429-15072/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0318 04:46:32.384156   18934 cache.go:59] Finished verifying existence of preloaded tar for v1.28.4 on docker
	I0318 04:46:32.384222   18934 profile.go:142] Saving config to /Users/jenkins/minikube-integration/18429-15072/.minikube/profiles/embed-certs-177000/config.json ...
	I0318 04:46:32.384240   18934 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/18429-15072/.minikube/profiles/embed-certs-177000/config.json: {Name:mk7e9bca459902f54a06830fb7076e2431899162 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0318 04:46:32.384467   18934 start.go:360] acquireMachinesLock for embed-certs-177000: {Name:mkdef65b5c2b3344d8453e477bf2f170fbff3359 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0318 04:46:32.384503   18934 start.go:364] duration metric: took 26.833µs to acquireMachinesLock for "embed-certs-177000"
	I0318 04:46:32.384517   18934 start.go:93] Provisioning new machine with config: &{Name:embed-certs-177000 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17991/minikube-v1.32.1-1710520390-17991-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:embed-certs-177000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0318 04:46:32.384545   18934 start.go:125] createHost starting for "" (driver="qemu2")
	I0318 04:46:32.393066   18934 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0318 04:46:32.410372   18934 start.go:159] libmachine.API.Create for "embed-certs-177000" (driver="qemu2")
	I0318 04:46:32.410392   18934 client.go:168] LocalClient.Create starting
	I0318 04:46:32.410479   18934 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18429-15072/.minikube/certs/ca.pem
	I0318 04:46:32.410507   18934 main.go:141] libmachine: Decoding PEM data...
	I0318 04:46:32.410519   18934 main.go:141] libmachine: Parsing certificate...
	I0318 04:46:32.410559   18934 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18429-15072/.minikube/certs/cert.pem
	I0318 04:46:32.410581   18934 main.go:141] libmachine: Decoding PEM data...
	I0318 04:46:32.410587   18934 main.go:141] libmachine: Parsing certificate...
	I0318 04:46:32.410971   18934 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/18429-15072/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/18429-15072/.minikube/cache/iso/arm64/minikube-v1.32.1-1710520390-17991-arm64.iso...
	I0318 04:46:32.548751   18934 main.go:141] libmachine: Creating SSH key...
	I0318 04:46:32.678991   18934 main.go:141] libmachine: Creating Disk image...
	I0318 04:46:32.678998   18934 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0318 04:46:32.679156   18934 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/18429-15072/.minikube/machines/embed-certs-177000/disk.qcow2.raw /Users/jenkins/minikube-integration/18429-15072/.minikube/machines/embed-certs-177000/disk.qcow2
	I0318 04:46:32.691521   18934 main.go:141] libmachine: STDOUT: 
	I0318 04:46:32.691540   18934 main.go:141] libmachine: STDERR: 
	I0318 04:46:32.691598   18934 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/18429-15072/.minikube/machines/embed-certs-177000/disk.qcow2 +20000M
	I0318 04:46:32.702521   18934 main.go:141] libmachine: STDOUT: Image resized.
	
	I0318 04:46:32.702537   18934 main.go:141] libmachine: STDERR: 
	I0318 04:46:32.702548   18934 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/18429-15072/.minikube/machines/embed-certs-177000/disk.qcow2.raw and /Users/jenkins/minikube-integration/18429-15072/.minikube/machines/embed-certs-177000/disk.qcow2
	I0318 04:46:32.702553   18934 main.go:141] libmachine: Starting QEMU VM...
	I0318 04:46:32.702591   18934 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18429-15072/.minikube/machines/embed-certs-177000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18429-15072/.minikube/machines/embed-certs-177000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18429-15072/.minikube/machines/embed-certs-177000/qemu.pid -device virtio-net-pci,netdev=net0,mac=d6:f5:f0:6d:ea:13 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18429-15072/.minikube/machines/embed-certs-177000/disk.qcow2
	I0318 04:46:32.704393   18934 main.go:141] libmachine: STDOUT: 
	I0318 04:46:32.704406   18934 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0318 04:46:32.704425   18934 client.go:171] duration metric: took 294.033292ms to LocalClient.Create
	I0318 04:46:34.705117   18934 start.go:128] duration metric: took 2.320594791s to createHost
	I0318 04:46:34.705251   18934 start.go:83] releasing machines lock for "embed-certs-177000", held for 2.320758083s
	W0318 04:46:34.705324   18934 start.go:713] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0318 04:46:34.717546   18934 out.go:177] * Deleting "embed-certs-177000" in qemu2 ...
	W0318 04:46:34.744153   18934 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0318 04:46:34.744182   18934 start.go:728] Will try again in 5 seconds ...
	I0318 04:46:39.744751   18934 start.go:360] acquireMachinesLock for embed-certs-177000: {Name:mkdef65b5c2b3344d8453e477bf2f170fbff3359 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0318 04:46:39.745207   18934 start.go:364] duration metric: took 353.291µs to acquireMachinesLock for "embed-certs-177000"
	I0318 04:46:39.745351   18934 start.go:93] Provisioning new machine with config: &{Name:embed-certs-177000 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17991/minikube-v1.32.1-1710520390-17991-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:embed-certs-177000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0318 04:46:39.745648   18934 start.go:125] createHost starting for "" (driver="qemu2")
	I0318 04:46:39.754363   18934 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0318 04:46:39.803545   18934 start.go:159] libmachine.API.Create for "embed-certs-177000" (driver="qemu2")
	I0318 04:46:39.803587   18934 client.go:168] LocalClient.Create starting
	I0318 04:46:39.803704   18934 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18429-15072/.minikube/certs/ca.pem
	I0318 04:46:39.803775   18934 main.go:141] libmachine: Decoding PEM data...
	I0318 04:46:39.803790   18934 main.go:141] libmachine: Parsing certificate...
	I0318 04:46:39.803849   18934 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18429-15072/.minikube/certs/cert.pem
	I0318 04:46:39.803891   18934 main.go:141] libmachine: Decoding PEM data...
	I0318 04:46:39.803910   18934 main.go:141] libmachine: Parsing certificate...
	I0318 04:46:39.804774   18934 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/18429-15072/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/18429-15072/.minikube/cache/iso/arm64/minikube-v1.32.1-1710520390-17991-arm64.iso...
	I0318 04:46:39.958809   18934 main.go:141] libmachine: Creating SSH key...
	I0318 04:46:40.304341   18934 main.go:141] libmachine: Creating Disk image...
	I0318 04:46:40.304350   18934 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0318 04:46:40.304610   18934 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/18429-15072/.minikube/machines/embed-certs-177000/disk.qcow2.raw /Users/jenkins/minikube-integration/18429-15072/.minikube/machines/embed-certs-177000/disk.qcow2
	I0318 04:46:40.317729   18934 main.go:141] libmachine: STDOUT: 
	I0318 04:46:40.317747   18934 main.go:141] libmachine: STDERR: 
	I0318 04:46:40.317802   18934 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/18429-15072/.minikube/machines/embed-certs-177000/disk.qcow2 +20000M
	I0318 04:46:40.328512   18934 main.go:141] libmachine: STDOUT: Image resized.
	
	I0318 04:46:40.328526   18934 main.go:141] libmachine: STDERR: 
	I0318 04:46:40.328539   18934 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/18429-15072/.minikube/machines/embed-certs-177000/disk.qcow2.raw and /Users/jenkins/minikube-integration/18429-15072/.minikube/machines/embed-certs-177000/disk.qcow2
	I0318 04:46:40.328546   18934 main.go:141] libmachine: Starting QEMU VM...
	I0318 04:46:40.328589   18934 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18429-15072/.minikube/machines/embed-certs-177000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18429-15072/.minikube/machines/embed-certs-177000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18429-15072/.minikube/machines/embed-certs-177000/qemu.pid -device virtio-net-pci,netdev=net0,mac=6a:a8:9d:32:25:62 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18429-15072/.minikube/machines/embed-certs-177000/disk.qcow2
	I0318 04:46:40.330352   18934 main.go:141] libmachine: STDOUT: 
	I0318 04:46:40.330364   18934 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0318 04:46:40.330377   18934 client.go:171] duration metric: took 526.797167ms to LocalClient.Create
	I0318 04:46:42.332531   18934 start.go:128] duration metric: took 2.58688375s to createHost
	I0318 04:46:42.332581   18934 start.go:83] releasing machines lock for "embed-certs-177000", held for 2.587411s
	W0318 04:46:42.332859   18934 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p embed-certs-177000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p embed-certs-177000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0318 04:46:42.343941   18934 out.go:177] 
	W0318 04:46:42.350097   18934 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0318 04:46:42.350415   18934 out.go:239] * 
	* 
	W0318 04:46:42.353018   18934 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0318 04:46:42.366032   18934 out.go:177] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:188: failed starting minikube -first start-. args "out/minikube-darwin-arm64 start -p embed-certs-177000 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=qemu2  --kubernetes-version=v1.28.4": exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-177000 -n embed-certs-177000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-177000 -n embed-certs-177000: exit status 7 (62.873667ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "embed-certs-177000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/embed-certs/serial/FirstStart (10.19s)
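Note: every failure in this group traces back to the same root cause visible in the stderr above: the QEMU launch is wrapped in /opt/socket_vmnet/bin/socket_vmnet_client, and that client cannot reach the unix socket at /var/run/socket_vmnet ("Connection refused"), so the VM never comes up. A minimal sketch of how such a precondition could be probed from Go before attempting the launch (illustrative only; this helper is not part of minikube, and the socket path is simply the one reported in the log):

	// checkSocketVMnet is a hypothetical pre-flight check: it only tries to
	// open the unix socket that socket_vmnet_client would connect to.
	package main

	import (
		"fmt"
		"net"
		"time"
	)

	func checkSocketVMnet(path string) error {
		conn, err := net.DialTimeout("unix", path, 2*time.Second)
		if err != nil {
			return fmt.Errorf("socket_vmnet not reachable at %s: %w", path, err)
		}
		return conn.Close()
	}

	func main() {
		if err := checkSocketVMnet("/var/run/socket_vmnet"); err != nil {
			// Prints the same "connection refused" condition seen in the test log
			// when the socket_vmnet daemon is not running on the host.
			fmt.Println(err)
		}
	}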

                                                
                                    
TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (0.03s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/UserAppExistsAfterStop
start_stop_delete_test.go:275: failed waiting for 'addon dashboard' pod post-stop-start: client config: context "no-preload-204000" does not exist
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-204000 -n no-preload-204000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-204000 -n no-preload-204000: exit status 7 (33.239458ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "no-preload-204000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (0.03s)

                                                
                                    
TestStartStop/group/no-preload/serial/AddonExistsAfterStop (0.06s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/AddonExistsAfterStop
start_stop_delete_test.go:288: failed waiting for 'addon dashboard' pod post-stop-start: client config: context "no-preload-204000" does not exist
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context no-preload-204000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
start_stop_delete_test.go:291: (dbg) Non-zero exit: kubectl --context no-preload-204000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard: exit status 1 (26.335ms)

                                                
                                                
** stderr ** 
	error: context "no-preload-204000" does not exist

                                                
                                                
** /stderr **
start_stop_delete_test.go:293: failed to get info on kubernetes-dashboard deployments. args "kubectl --context no-preload-204000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard": exit status 1
start_stop_delete_test.go:297: addon did not load correct image. Expected to contain " registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-204000 -n no-preload-204000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-204000 -n no-preload-204000: exit status 7 (31.095958ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "no-preload-204000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/no-preload/serial/AddonExistsAfterStop (0.06s)

                                                
                                    
TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.07s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-darwin-arm64 -p no-preload-204000 image list --format=json
start_stop_delete_test.go:304: v1.29.0-rc.2 images missing (-want +got):
[]string{
- 	"gcr.io/k8s-minikube/storage-provisioner:v5",
- 	"registry.k8s.io/coredns/coredns:v1.11.1",
- 	"registry.k8s.io/etcd:3.5.10-0",
- 	"registry.k8s.io/kube-apiserver:v1.29.0-rc.2",
- 	"registry.k8s.io/kube-controller-manager:v1.29.0-rc.2",
- 	"registry.k8s.io/kube-proxy:v1.29.0-rc.2",
- 	"registry.k8s.io/kube-scheduler:v1.29.0-rc.2",
- 	"registry.k8s.io/pause:3.9",
}
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-204000 -n no-preload-204000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-204000 -n no-preload-204000: exit status 7 (30.894ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "no-preload-204000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.07s)
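Note: the "(-want +got)" block above is go-cmp style diff output; every expected image carries a leading "-" and nothing appears on the "+" side because "image list" ran against a host that never started. A hedged sketch of how that kind of comparison is typically expressed with github.com/google/go-cmp (the function and variable names here are illustrative, not the test's actual code):

	package verify

	import (
		"testing"

		"github.com/google/go-cmp/cmp"
	)

	// TestImageListDiff reproduces the "(-want +got)" diff shape seen above;
	// it is a sketch, not the real minikube integration test.
	func TestImageListDiff(t *testing.T) {
		want := []string{
			"gcr.io/k8s-minikube/storage-provisioner:v5",
			"registry.k8s.io/kube-apiserver:v1.29.0-rc.2",
			// ...remaining expected images elided
		}
		got := []string{} // empty here: the host is Stopped, so `image list` returned nothing
		if diff := cmp.Diff(want, got); diff != "" {
			t.Errorf("v1.29.0-rc.2 images missing (-want +got):\n%s", diff)
		}
	}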

                                                
                                    
TestStartStop/group/no-preload/serial/Pause (0.1s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-arm64 pause -p no-preload-204000 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-darwin-arm64 pause -p no-preload-204000 --alsologtostderr -v=1: exit status 83 (42.826458ms)

                                                
                                                
-- stdout --
	* The control-plane node no-preload-204000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p no-preload-204000"

                                                
                                                
-- /stdout --
** stderr ** 
	I0318 04:46:35.375777   18956 out.go:291] Setting OutFile to fd 1 ...
	I0318 04:46:35.375944   18956 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0318 04:46:35.375953   18956 out.go:304] Setting ErrFile to fd 2...
	I0318 04:46:35.375955   18956 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0318 04:46:35.376075   18956 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18429-15072/.minikube/bin
	I0318 04:46:35.376302   18956 out.go:298] Setting JSON to false
	I0318 04:46:35.376310   18956 mustload.go:65] Loading cluster: no-preload-204000
	I0318 04:46:35.376505   18956 config.go:182] Loaded profile config "no-preload-204000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.29.0-rc.2
	I0318 04:46:35.380713   18956 out.go:177] * The control-plane node no-preload-204000 host is not running: state=Stopped
	I0318 04:46:35.384624   18956 out.go:177]   To start a cluster, run: "minikube start -p no-preload-204000"

                                                
                                                
** /stderr **
start_stop_delete_test.go:311: out/minikube-darwin-arm64 pause -p no-preload-204000 --alsologtostderr -v=1 failed: exit status 83
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-204000 -n no-preload-204000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-204000 -n no-preload-204000: exit status 7 (30.425375ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "no-preload-204000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-204000 -n no-preload-204000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-204000 -n no-preload-204000: exit status 7 (31.327792ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "no-preload-204000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/no-preload/serial/Pause (0.10s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/FirstStart (9.88s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-darwin-arm64 start -p default-k8s-diff-port-103000 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=qemu2  --kubernetes-version=v1.28.4
start_stop_delete_test.go:186: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p default-k8s-diff-port-103000 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=qemu2  --kubernetes-version=v1.28.4: exit status 80 (9.809259833s)

                                                
                                                
-- stdout --
	* [default-k8s-diff-port-103000] minikube v1.32.0 on Darwin 14.3.1 (arm64)
	  - MINIKUBE_LOCATION=18429
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18429-15072/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18429-15072/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "default-k8s-diff-port-103000" primary control-plane node in "default-k8s-diff-port-103000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "default-k8s-diff-port-103000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0318 04:46:36.079631   18991 out.go:291] Setting OutFile to fd 1 ...
	I0318 04:46:36.079772   18991 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0318 04:46:36.079775   18991 out.go:304] Setting ErrFile to fd 2...
	I0318 04:46:36.079778   18991 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0318 04:46:36.079901   18991 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18429-15072/.minikube/bin
	I0318 04:46:36.081086   18991 out.go:298] Setting JSON to false
	I0318 04:46:36.097301   18991 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":9969,"bootTime":1710752427,"procs":477,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.3.1","kernelVersion":"23.3.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0318 04:46:36.097358   18991 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0318 04:46:36.100819   18991 out.go:177] * [default-k8s-diff-port-103000] minikube v1.32.0 on Darwin 14.3.1 (arm64)
	I0318 04:46:36.107835   18991 out.go:177]   - MINIKUBE_LOCATION=18429
	I0318 04:46:36.107906   18991 notify.go:220] Checking for updates...
	I0318 04:46:36.114711   18991 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/18429-15072/kubeconfig
	I0318 04:46:36.118841   18991 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0318 04:46:36.121846   18991 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0318 04:46:36.128827   18991 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18429-15072/.minikube
	I0318 04:46:36.135706   18991 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0318 04:46:36.139161   18991 config.go:182] Loaded profile config "embed-certs-177000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I0318 04:46:36.139229   18991 config.go:182] Loaded profile config "multinode-969000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I0318 04:46:36.139283   18991 driver.go:392] Setting default libvirt URI to qemu:///system
	I0318 04:46:36.143783   18991 out.go:177] * Using the qemu2 driver based on user configuration
	I0318 04:46:36.150712   18991 start.go:297] selected driver: qemu2
	I0318 04:46:36.150719   18991 start.go:901] validating driver "qemu2" against <nil>
	I0318 04:46:36.150725   18991 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0318 04:46:36.153086   18991 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0318 04:46:36.157831   18991 out.go:177] * Automatically selected the socket_vmnet network
	I0318 04:46:36.160807   18991 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0318 04:46:36.160852   18991 cni.go:84] Creating CNI manager for ""
	I0318 04:46:36.160859   18991 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0318 04:46:36.160864   18991 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0318 04:46:36.160890   18991 start.go:340] cluster config:
	{Name:default-k8s-diff-port-103000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:default-k8s-diff-port-103000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:c
luster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8444 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/s
ocket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0318 04:46:36.165884   18991 iso.go:125] acquiring lock: {Name:mkb8143674083e0c7a46a3ed751b3800392bcd24 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0318 04:46:36.172706   18991 out.go:177] * Starting "default-k8s-diff-port-103000" primary control-plane node in "default-k8s-diff-port-103000" cluster
	I0318 04:46:36.176780   18991 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime docker
	I0318 04:46:36.176809   18991 preload.go:147] Found local preload: /Users/jenkins/minikube-integration/18429-15072/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-docker-overlay2-arm64.tar.lz4
	I0318 04:46:36.176817   18991 cache.go:56] Caching tarball of preloaded images
	I0318 04:46:36.176881   18991 preload.go:173] Found /Users/jenkins/minikube-integration/18429-15072/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0318 04:46:36.176887   18991 cache.go:59] Finished verifying existence of preloaded tar for v1.28.4 on docker
	I0318 04:46:36.176951   18991 profile.go:142] Saving config to /Users/jenkins/minikube-integration/18429-15072/.minikube/profiles/default-k8s-diff-port-103000/config.json ...
	I0318 04:46:36.176963   18991 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/18429-15072/.minikube/profiles/default-k8s-diff-port-103000/config.json: {Name:mk9cd6b872d08534efef56e1ce90f72bf9c0649b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0318 04:46:36.177202   18991 start.go:360] acquireMachinesLock for default-k8s-diff-port-103000: {Name:mkdef65b5c2b3344d8453e477bf2f170fbff3359 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0318 04:46:36.177237   18991 start.go:364] duration metric: took 27.125µs to acquireMachinesLock for "default-k8s-diff-port-103000"
	I0318 04:46:36.177251   18991 start.go:93] Provisioning new machine with config: &{Name:default-k8s-diff-port-103000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17991/minikube-v1.32.1-1710520390-17991-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 Kuberne
tesConfig:{KubernetesVersion:v1.28.4 ClusterName:default-k8s-diff-port-103000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8444 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMS
ize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8444 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0318 04:46:36.177285   18991 start.go:125] createHost starting for "" (driver="qemu2")
	I0318 04:46:36.185786   18991 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0318 04:46:36.203898   18991 start.go:159] libmachine.API.Create for "default-k8s-diff-port-103000" (driver="qemu2")
	I0318 04:46:36.203929   18991 client.go:168] LocalClient.Create starting
	I0318 04:46:36.203982   18991 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18429-15072/.minikube/certs/ca.pem
	I0318 04:46:36.204019   18991 main.go:141] libmachine: Decoding PEM data...
	I0318 04:46:36.204029   18991 main.go:141] libmachine: Parsing certificate...
	I0318 04:46:36.204073   18991 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18429-15072/.minikube/certs/cert.pem
	I0318 04:46:36.204095   18991 main.go:141] libmachine: Decoding PEM data...
	I0318 04:46:36.204103   18991 main.go:141] libmachine: Parsing certificate...
	I0318 04:46:36.204505   18991 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/18429-15072/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/18429-15072/.minikube/cache/iso/arm64/minikube-v1.32.1-1710520390-17991-arm64.iso...
	I0318 04:46:36.343118   18991 main.go:141] libmachine: Creating SSH key...
	I0318 04:46:36.449313   18991 main.go:141] libmachine: Creating Disk image...
	I0318 04:46:36.449319   18991 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0318 04:46:36.449492   18991 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/18429-15072/.minikube/machines/default-k8s-diff-port-103000/disk.qcow2.raw /Users/jenkins/minikube-integration/18429-15072/.minikube/machines/default-k8s-diff-port-103000/disk.qcow2
	I0318 04:46:36.461981   18991 main.go:141] libmachine: STDOUT: 
	I0318 04:46:36.462003   18991 main.go:141] libmachine: STDERR: 
	I0318 04:46:36.462048   18991 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/18429-15072/.minikube/machines/default-k8s-diff-port-103000/disk.qcow2 +20000M
	I0318 04:46:36.472638   18991 main.go:141] libmachine: STDOUT: Image resized.
	
	I0318 04:46:36.472655   18991 main.go:141] libmachine: STDERR: 
	I0318 04:46:36.472672   18991 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/18429-15072/.minikube/machines/default-k8s-diff-port-103000/disk.qcow2.raw and /Users/jenkins/minikube-integration/18429-15072/.minikube/machines/default-k8s-diff-port-103000/disk.qcow2
	I0318 04:46:36.472676   18991 main.go:141] libmachine: Starting QEMU VM...
	I0318 04:46:36.472709   18991 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18429-15072/.minikube/machines/default-k8s-diff-port-103000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18429-15072/.minikube/machines/default-k8s-diff-port-103000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18429-15072/.minikube/machines/default-k8s-diff-port-103000/qemu.pid -device virtio-net-pci,netdev=net0,mac=f6:be:1f:3c:60:66 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18429-15072/.minikube/machines/default-k8s-diff-port-103000/disk.qcow2
	I0318 04:46:36.474468   18991 main.go:141] libmachine: STDOUT: 
	I0318 04:46:36.474486   18991 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0318 04:46:36.474505   18991 client.go:171] duration metric: took 270.577125ms to LocalClient.Create
	I0318 04:46:38.476692   18991 start.go:128] duration metric: took 2.299435s to createHost
	I0318 04:46:38.476756   18991 start.go:83] releasing machines lock for "default-k8s-diff-port-103000", held for 2.299561583s
	W0318 04:46:38.476821   18991 start.go:713] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0318 04:46:38.492898   18991 out.go:177] * Deleting "default-k8s-diff-port-103000" in qemu2 ...
	W0318 04:46:38.519984   18991 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0318 04:46:38.520008   18991 start.go:728] Will try again in 5 seconds ...
	I0318 04:46:43.522064   18991 start.go:360] acquireMachinesLock for default-k8s-diff-port-103000: {Name:mkdef65b5c2b3344d8453e477bf2f170fbff3359 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0318 04:46:43.522434   18991 start.go:364] duration metric: took 294.542µs to acquireMachinesLock for "default-k8s-diff-port-103000"
	I0318 04:46:43.522627   18991 start.go:93] Provisioning new machine with config: &{Name:default-k8s-diff-port-103000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17991/minikube-v1.32.1-1710520390-17991-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 Kuberne
tesConfig:{KubernetesVersion:v1.28.4 ClusterName:default-k8s-diff-port-103000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8444 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMS
ize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8444 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0318 04:46:43.522940   18991 start.go:125] createHost starting for "" (driver="qemu2")
	I0318 04:46:43.532554   18991 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0318 04:46:43.584222   18991 start.go:159] libmachine.API.Create for "default-k8s-diff-port-103000" (driver="qemu2")
	I0318 04:46:43.584272   18991 client.go:168] LocalClient.Create starting
	I0318 04:46:43.584382   18991 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18429-15072/.minikube/certs/ca.pem
	I0318 04:46:43.584439   18991 main.go:141] libmachine: Decoding PEM data...
	I0318 04:46:43.584457   18991 main.go:141] libmachine: Parsing certificate...
	I0318 04:46:43.584527   18991 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18429-15072/.minikube/certs/cert.pem
	I0318 04:46:43.584563   18991 main.go:141] libmachine: Decoding PEM data...
	I0318 04:46:43.584576   18991 main.go:141] libmachine: Parsing certificate...
	I0318 04:46:43.585203   18991 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/18429-15072/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/18429-15072/.minikube/cache/iso/arm64/minikube-v1.32.1-1710520390-17991-arm64.iso...
	I0318 04:46:43.735161   18991 main.go:141] libmachine: Creating SSH key...
	I0318 04:46:43.785986   18991 main.go:141] libmachine: Creating Disk image...
	I0318 04:46:43.785991   18991 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0318 04:46:43.786170   18991 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/18429-15072/.minikube/machines/default-k8s-diff-port-103000/disk.qcow2.raw /Users/jenkins/minikube-integration/18429-15072/.minikube/machines/default-k8s-diff-port-103000/disk.qcow2
	I0318 04:46:43.798396   18991 main.go:141] libmachine: STDOUT: 
	I0318 04:46:43.798415   18991 main.go:141] libmachine: STDERR: 
	I0318 04:46:43.798474   18991 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/18429-15072/.minikube/machines/default-k8s-diff-port-103000/disk.qcow2 +20000M
	I0318 04:46:43.809110   18991 main.go:141] libmachine: STDOUT: Image resized.
	
	I0318 04:46:43.809137   18991 main.go:141] libmachine: STDERR: 
	I0318 04:46:43.809151   18991 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/18429-15072/.minikube/machines/default-k8s-diff-port-103000/disk.qcow2.raw and /Users/jenkins/minikube-integration/18429-15072/.minikube/machines/default-k8s-diff-port-103000/disk.qcow2
	I0318 04:46:43.809156   18991 main.go:141] libmachine: Starting QEMU VM...
	I0318 04:46:43.809189   18991 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18429-15072/.minikube/machines/default-k8s-diff-port-103000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18429-15072/.minikube/machines/default-k8s-diff-port-103000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18429-15072/.minikube/machines/default-k8s-diff-port-103000/qemu.pid -device virtio-net-pci,netdev=net0,mac=c2:5a:8d:6f:31:7f -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18429-15072/.minikube/machines/default-k8s-diff-port-103000/disk.qcow2
	I0318 04:46:43.811003   18991 main.go:141] libmachine: STDOUT: 
	I0318 04:46:43.811023   18991 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0318 04:46:43.811035   18991 client.go:171] duration metric: took 226.764375ms to LocalClient.Create
	I0318 04:46:45.813182   18991 start.go:128] duration metric: took 2.290266917s to createHost
	I0318 04:46:45.813243   18991 start.go:83] releasing machines lock for "default-k8s-diff-port-103000", held for 2.290839667s
	W0318 04:46:45.813597   18991 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p default-k8s-diff-port-103000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p default-k8s-diff-port-103000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0318 04:46:45.826387   18991 out.go:177] 
	W0318 04:46:45.830513   18991 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0318 04:46:45.830537   18991 out.go:239] * 
	* 
	W0318 04:46:45.833380   18991 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0318 04:46:45.843077   18991 out.go:177] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:188: failed starting minikube -first start-. args "out/minikube-darwin-arm64 start -p default-k8s-diff-port-103000 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=qemu2  --kubernetes-version=v1.28.4": exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-103000 -n default-k8s-diff-port-103000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-103000 -n default-k8s-diff-port-103000: exit status 7 (70.127834ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-103000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/FirstStart (9.88s)
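Note: as in the embed-certs run above, the start makes two identical attempts: the first createHost fails on the socket_vmnet connection, the profile is deleted, and after the 5-second pause logged by start.go the second attempt fails the same way, so the command exits with status 80 (GUEST_PROVISION). A rough sketch of that retry shape, with hypothetical helper names (the real logic lives in minikube's start.go, referenced in the log):

	package main

	import (
		"errors"
		"fmt"
		"time"
	)

	// createHost stands in for the real provisioning call; here it always fails
	// the way the log does when socket_vmnet is unreachable.
	func createHost(name string) error {
		return errors.New(`Failed to connect to "/var/run/socket_vmnet": Connection refused`)
	}

	func startWithRetry(name string) error {
		if err := createHost(name); err != nil {
			fmt.Printf("! StartHost failed, but will try again: %v\n", err)
			// the profile would be deleted here before retrying
			time.Sleep(5 * time.Second)
			return createHost(name) // second and final attempt
		}
		return nil
	}

	func main() {
		if err := startWithRetry("default-k8s-diff-port-103000"); err != nil {
			fmt.Printf("X Exiting due to GUEST_PROVISION: %v\n", err)
		}
	}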

                                                
                                    
TestStartStop/group/embed-certs/serial/DeployApp (0.09s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context embed-certs-177000 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) Non-zero exit: kubectl --context embed-certs-177000 create -f testdata/busybox.yaml: exit status 1 (31.350167ms)

                                                
                                                
** stderr ** 
	error: context "embed-certs-177000" does not exist

                                                
                                                
** /stderr **
start_stop_delete_test.go:196: kubectl --context embed-certs-177000 create -f testdata/busybox.yaml failed: exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-177000 -n embed-certs-177000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-177000 -n embed-certs-177000: exit status 7 (31.001792ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "embed-certs-177000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-177000 -n embed-certs-177000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-177000 -n embed-certs-177000: exit status 7 (30.90775ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "embed-certs-177000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/embed-certs/serial/DeployApp (0.09s)

                                                
                                    
TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (0.12s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-darwin-arm64 addons enable metrics-server -p embed-certs-177000 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context embed-certs-177000 describe deploy/metrics-server -n kube-system
start_stop_delete_test.go:215: (dbg) Non-zero exit: kubectl --context embed-certs-177000 describe deploy/metrics-server -n kube-system: exit status 1 (27.289667ms)

                                                
                                                
** stderr ** 
	error: context "embed-certs-177000" does not exist

                                                
                                                
** /stderr **
start_stop_delete_test.go:217: failed to get info on auto-pause deployments. args "kubectl --context embed-certs-177000 describe deploy/metrics-server -n kube-system": exit status 1
start_stop_delete_test.go:221: addon did not load correct image. Expected to contain " fake.domain/registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-177000 -n embed-certs-177000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-177000 -n embed-certs-177000: exit status 7 (31.214709ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "embed-certs-177000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (0.12s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/DeployApp (0.1s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context default-k8s-diff-port-103000 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) Non-zero exit: kubectl --context default-k8s-diff-port-103000 create -f testdata/busybox.yaml: exit status 1 (31.928541ms)

                                                
                                                
** stderr ** 
	error: context "default-k8s-diff-port-103000" does not exist

                                                
                                                
** /stderr **
start_stop_delete_test.go:196: kubectl --context default-k8s-diff-port-103000 create -f testdata/busybox.yaml failed: exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-103000 -n default-k8s-diff-port-103000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-103000 -n default-k8s-diff-port-103000: exit status 7 (34.018292ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-103000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-103000 -n default-k8s-diff-port-103000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-103000 -n default-k8s-diff-port-103000: exit status 7 (37.55525ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-103000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/DeployApp (0.10s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (0.12s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-darwin-arm64 addons enable metrics-server -p default-k8s-diff-port-103000 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context default-k8s-diff-port-103000 describe deploy/metrics-server -n kube-system
start_stop_delete_test.go:215: (dbg) Non-zero exit: kubectl --context default-k8s-diff-port-103000 describe deploy/metrics-server -n kube-system: exit status 1 (26.595542ms)

                                                
                                                
** stderr ** 
	error: context "default-k8s-diff-port-103000" does not exist

                                                
                                                
** /stderr **
start_stop_delete_test.go:217: failed to get info on auto-pause deployments. args "kubectl --context default-k8s-diff-port-103000 describe deploy/metrics-server -n kube-system": exit status 1
start_stop_delete_test.go:221: addon did not load correct image. Expected to contain " fake.domain/registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-103000 -n default-k8s-diff-port-103000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-103000 -n default-k8s-diff-port-103000: exit status 7 (36.220708ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-103000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (0.12s)

                                                
                                    
TestStartStop/group/embed-certs/serial/SecondStart (5.29s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-darwin-arm64 start -p embed-certs-177000 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=qemu2  --kubernetes-version=v1.28.4
start_stop_delete_test.go:256: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p embed-certs-177000 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=qemu2  --kubernetes-version=v1.28.4: exit status 80 (5.22489975s)

                                                
                                                
-- stdout --
	* [embed-certs-177000] minikube v1.32.0 on Darwin 14.3.1 (arm64)
	  - MINIKUBE_LOCATION=18429
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18429-15072/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18429-15072/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting "embed-certs-177000" primary control-plane node in "embed-certs-177000" cluster
	* Restarting existing qemu2 VM for "embed-certs-177000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "embed-certs-177000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0318 04:46:46.080961   19055 out.go:291] Setting OutFile to fd 1 ...
	I0318 04:46:46.081057   19055 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0318 04:46:46.081060   19055 out.go:304] Setting ErrFile to fd 2...
	I0318 04:46:46.081063   19055 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0318 04:46:46.081186   19055 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18429-15072/.minikube/bin
	I0318 04:46:46.083465   19055 out.go:298] Setting JSON to false
	I0318 04:46:46.100709   19055 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":9979,"bootTime":1710752427,"procs":480,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.3.1","kernelVersion":"23.3.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0318 04:46:46.100774   19055 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0318 04:46:46.105375   19055 out.go:177] * [embed-certs-177000] minikube v1.32.0 on Darwin 14.3.1 (arm64)
	I0318 04:46:46.111443   19055 out.go:177]   - MINIKUBE_LOCATION=18429
	I0318 04:46:46.115401   19055 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/18429-15072/kubeconfig
	I0318 04:46:46.111490   19055 notify.go:220] Checking for updates...
	I0318 04:46:46.122351   19055 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0318 04:46:46.140349   19055 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0318 04:46:46.148368   19055 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18429-15072/.minikube
	I0318 04:46:46.157359   19055 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0318 04:46:46.161730   19055 config.go:182] Loaded profile config "embed-certs-177000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I0318 04:46:46.161973   19055 driver.go:392] Setting default libvirt URI to qemu:///system
	I0318 04:46:46.166384   19055 out.go:177] * Using the qemu2 driver based on existing profile
	I0318 04:46:46.173366   19055 start.go:297] selected driver: qemu2
	I0318 04:46:46.173376   19055 start.go:901] validating driver "qemu2" against &{Name:embed-certs-177000 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17991/minikube-v1.32.1-1710520390-17991-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kub
ernetesVersion:v1.28.4 ClusterName:embed-certs-177000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 Cer
tExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0318 04:46:46.173435   19055 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0318 04:46:46.176244   19055 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0318 04:46:46.176307   19055 cni.go:84] Creating CNI manager for ""
	I0318 04:46:46.176319   19055 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0318 04:46:46.176343   19055 start.go:340] cluster config:
	{Name:embed-certs-177000 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17991/minikube-v1.32.1-1710520390-17991-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:embed-certs-177000 Namespace:default APIServ
erHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVer
sion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0318 04:46:46.181689   19055 iso.go:125] acquiring lock: {Name:mkb8143674083e0c7a46a3ed751b3800392bcd24 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0318 04:46:46.190403   19055 out.go:177] * Starting "embed-certs-177000" primary control-plane node in "embed-certs-177000" cluster
	I0318 04:46:46.194387   19055 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime docker
	I0318 04:46:46.194406   19055 preload.go:147] Found local preload: /Users/jenkins/minikube-integration/18429-15072/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-docker-overlay2-arm64.tar.lz4
	I0318 04:46:46.194418   19055 cache.go:56] Caching tarball of preloaded images
	I0318 04:46:46.194479   19055 preload.go:173] Found /Users/jenkins/minikube-integration/18429-15072/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0318 04:46:46.194485   19055 cache.go:59] Finished verifying existence of preloaded tar for v1.28.4 on docker
	I0318 04:46:46.194552   19055 profile.go:142] Saving config to /Users/jenkins/minikube-integration/18429-15072/.minikube/profiles/embed-certs-177000/config.json ...
	I0318 04:46:46.194789   19055 start.go:360] acquireMachinesLock for embed-certs-177000: {Name:mkdef65b5c2b3344d8453e477bf2f170fbff3359 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0318 04:46:46.194818   19055 start.go:364] duration metric: took 21.625µs to acquireMachinesLock for "embed-certs-177000"
	I0318 04:46:46.194827   19055 start.go:96] Skipping create...Using existing machine configuration
	I0318 04:46:46.194832   19055 fix.go:54] fixHost starting: 
	I0318 04:46:46.194942   19055 fix.go:112] recreateIfNeeded on embed-certs-177000: state=Stopped err=<nil>
	W0318 04:46:46.194951   19055 fix.go:138] unexpected machine state, will restart: <nil>
	I0318 04:46:46.199337   19055 out.go:177] * Restarting existing qemu2 VM for "embed-certs-177000" ...
	I0318 04:46:46.210395   19055 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18429-15072/.minikube/machines/embed-certs-177000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18429-15072/.minikube/machines/embed-certs-177000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18429-15072/.minikube/machines/embed-certs-177000/qemu.pid -device virtio-net-pci,netdev=net0,mac=6a:a8:9d:32:25:62 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18429-15072/.minikube/machines/embed-certs-177000/disk.qcow2
	I0318 04:46:46.212238   19055 main.go:141] libmachine: STDOUT: 
	I0318 04:46:46.212257   19055 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0318 04:46:46.212283   19055 fix.go:56] duration metric: took 17.452ms for fixHost
	I0318 04:46:46.212286   19055 start.go:83] releasing machines lock for "embed-certs-177000", held for 17.464958ms
	W0318 04:46:46.212293   19055 start.go:713] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0318 04:46:46.212333   19055 out.go:239] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0318 04:46:46.212338   19055 start.go:728] Will try again in 5 seconds ...
	I0318 04:46:51.214493   19055 start.go:360] acquireMachinesLock for embed-certs-177000: {Name:mkdef65b5c2b3344d8453e477bf2f170fbff3359 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0318 04:46:51.214807   19055 start.go:364] duration metric: took 230.5µs to acquireMachinesLock for "embed-certs-177000"
	I0318 04:46:51.214920   19055 start.go:96] Skipping create...Using existing machine configuration
	I0318 04:46:51.214940   19055 fix.go:54] fixHost starting: 
	I0318 04:46:51.215697   19055 fix.go:112] recreateIfNeeded on embed-certs-177000: state=Stopped err=<nil>
	W0318 04:46:51.215723   19055 fix.go:138] unexpected machine state, will restart: <nil>
	I0318 04:46:51.221195   19055 out.go:177] * Restarting existing qemu2 VM for "embed-certs-177000" ...
	I0318 04:46:51.228395   19055 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18429-15072/.minikube/machines/embed-certs-177000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18429-15072/.minikube/machines/embed-certs-177000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18429-15072/.minikube/machines/embed-certs-177000/qemu.pid -device virtio-net-pci,netdev=net0,mac=6a:a8:9d:32:25:62 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18429-15072/.minikube/machines/embed-certs-177000/disk.qcow2
	I0318 04:46:51.237930   19055 main.go:141] libmachine: STDOUT: 
	I0318 04:46:51.238004   19055 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0318 04:46:51.238117   19055 fix.go:56] duration metric: took 23.1425ms for fixHost
	I0318 04:46:51.238131   19055 start.go:83] releasing machines lock for "embed-certs-177000", held for 23.302917ms
	W0318 04:46:51.238346   19055 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p embed-certs-177000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p embed-certs-177000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0318 04:46:51.246095   19055 out.go:177] 
	W0318 04:46:51.249166   19055 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0318 04:46:51.249213   19055 out.go:239] * 
	* 
	W0318 04:46:51.251852   19055 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0318 04:46:51.259113   19055 out.go:177] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:259: failed to start minikube post-stop. args "out/minikube-darwin-arm64 start -p embed-certs-177000 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=qemu2  --kubernetes-version=v1.28.4": exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-177000 -n embed-certs-177000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-177000 -n embed-certs-177000: exit status 7 (67.252084ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "embed-certs-177000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/embed-certs/serial/SecondStart (5.29s)
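Note: this SecondStart failure (and the default-k8s-diff-port one that follows) dies at the same step every time: socket_vmnet_client cannot reach the socket_vmnet daemon on /var/run/socket_vmnet, so qemu is never launched and the profile stays Stopped. A minimal reproduction of just that check, using only the socket path already shown in the driver command line above (everything else is illustrative), would be:

package main

import (
	"fmt"
	"net"
	"os"
	"time"
)

func main() {
	// Path taken from the socket_vmnet_client invocation in the log above.
	const sock = "/var/run/socket_vmnet"

	if _, err := os.Stat(sock); err != nil {
		// The socket file is missing: socket_vmnet was never started on the
		// agent, or it is configured with a different path.
		fmt.Println("socket missing:", err)
		return
	}
	conn, err := net.DialTimeout("unix", sock, 2*time.Second)
	if err != nil {
		// A "connection refused" here matches the driver error in the log:
		// the socket file exists, but no socket_vmnet process is listening on it.
		fmt.Println("dial failed:", err)
		return
	}
	conn.Close()
	fmt.Println("socket_vmnet is accepting connections")
}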

TestStartStop/group/default-k8s-diff-port/serial/SecondStart (5.5s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-darwin-arm64 start -p default-k8s-diff-port-103000 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=qemu2  --kubernetes-version=v1.28.4
start_stop_delete_test.go:256: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p default-k8s-diff-port-103000 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=qemu2  --kubernetes-version=v1.28.4: exit status 80 (5.432006167s)

-- stdout --
	* [default-k8s-diff-port-103000] minikube v1.32.0 on Darwin 14.3.1 (arm64)
	  - MINIKUBE_LOCATION=18429
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18429-15072/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18429-15072/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting "default-k8s-diff-port-103000" primary control-plane node in "default-k8s-diff-port-103000" cluster
	* Restarting existing qemu2 VM for "default-k8s-diff-port-103000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "default-k8s-diff-port-103000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0318 04:46:49.272585   19088 out.go:291] Setting OutFile to fd 1 ...
	I0318 04:46:49.272697   19088 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0318 04:46:49.272705   19088 out.go:304] Setting ErrFile to fd 2...
	I0318 04:46:49.272708   19088 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0318 04:46:49.272831   19088 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18429-15072/.minikube/bin
	I0318 04:46:49.273798   19088 out.go:298] Setting JSON to false
	I0318 04:46:49.290053   19088 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":9982,"bootTime":1710752427,"procs":480,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.3.1","kernelVersion":"23.3.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0318 04:46:49.290124   19088 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0318 04:46:49.295485   19088 out.go:177] * [default-k8s-diff-port-103000] minikube v1.32.0 on Darwin 14.3.1 (arm64)
	I0318 04:46:49.301382   19088 out.go:177]   - MINIKUBE_LOCATION=18429
	I0318 04:46:49.301426   19088 notify.go:220] Checking for updates...
	I0318 04:46:49.309379   19088 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/18429-15072/kubeconfig
	I0318 04:46:49.312392   19088 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0318 04:46:49.315410   19088 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0318 04:46:49.318369   19088 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18429-15072/.minikube
	I0318 04:46:49.321390   19088 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0318 04:46:49.324683   19088 config.go:182] Loaded profile config "default-k8s-diff-port-103000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I0318 04:46:49.324955   19088 driver.go:392] Setting default libvirt URI to qemu:///system
	I0318 04:46:49.328344   19088 out.go:177] * Using the qemu2 driver based on existing profile
	I0318 04:46:49.335436   19088 start.go:297] selected driver: qemu2
	I0318 04:46:49.335449   19088 start.go:901] validating driver "qemu2" against &{Name:default-k8s-diff-port-103000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17991/minikube-v1.32.1-1710520390-17991-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 Kubernetes
Config:{KubernetesVersion:v1.28.4 ClusterName:default-k8s-diff-port-103000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8444 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:f
alse ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0318 04:46:49.335510   19088 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0318 04:46:49.337830   19088 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0318 04:46:49.337876   19088 cni.go:84] Creating CNI manager for ""
	I0318 04:46:49.337884   19088 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0318 04:46:49.337913   19088 start.go:340] cluster config:
	{Name:default-k8s-diff-port-103000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17991/minikube-v1.32.1-1710520390-17991-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:default-k8s-diff-port-103000 Name
space:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8444 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/min
ikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0318 04:46:49.342298   19088 iso.go:125] acquiring lock: {Name:mkb8143674083e0c7a46a3ed751b3800392bcd24 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0318 04:46:49.350461   19088 out.go:177] * Starting "default-k8s-diff-port-103000" primary control-plane node in "default-k8s-diff-port-103000" cluster
	I0318 04:46:49.355383   19088 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime docker
	I0318 04:46:49.355399   19088 preload.go:147] Found local preload: /Users/jenkins/minikube-integration/18429-15072/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-docker-overlay2-arm64.tar.lz4
	I0318 04:46:49.355414   19088 cache.go:56] Caching tarball of preloaded images
	I0318 04:46:49.355467   19088 preload.go:173] Found /Users/jenkins/minikube-integration/18429-15072/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0318 04:46:49.355473   19088 cache.go:59] Finished verifying existence of preloaded tar for v1.28.4 on docker
	I0318 04:46:49.355548   19088 profile.go:142] Saving config to /Users/jenkins/minikube-integration/18429-15072/.minikube/profiles/default-k8s-diff-port-103000/config.json ...
	I0318 04:46:49.356055   19088 start.go:360] acquireMachinesLock for default-k8s-diff-port-103000: {Name:mkdef65b5c2b3344d8453e477bf2f170fbff3359 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0318 04:46:49.356081   19088 start.go:364] duration metric: took 19.834µs to acquireMachinesLock for "default-k8s-diff-port-103000"
	I0318 04:46:49.356090   19088 start.go:96] Skipping create...Using existing machine configuration
	I0318 04:46:49.356094   19088 fix.go:54] fixHost starting: 
	I0318 04:46:49.356211   19088 fix.go:112] recreateIfNeeded on default-k8s-diff-port-103000: state=Stopped err=<nil>
	W0318 04:46:49.356221   19088 fix.go:138] unexpected machine state, will restart: <nil>
	I0318 04:46:49.359424   19088 out.go:177] * Restarting existing qemu2 VM for "default-k8s-diff-port-103000" ...
	I0318 04:46:49.366418   19088 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18429-15072/.minikube/machines/default-k8s-diff-port-103000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18429-15072/.minikube/machines/default-k8s-diff-port-103000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18429-15072/.minikube/machines/default-k8s-diff-port-103000/qemu.pid -device virtio-net-pci,netdev=net0,mac=c2:5a:8d:6f:31:7f -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18429-15072/.minikube/machines/default-k8s-diff-port-103000/disk.qcow2
	I0318 04:46:49.368440   19088 main.go:141] libmachine: STDOUT: 
	I0318 04:46:49.368460   19088 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0318 04:46:49.368488   19088 fix.go:56] duration metric: took 12.393666ms for fixHost
	I0318 04:46:49.368493   19088 start.go:83] releasing machines lock for "default-k8s-diff-port-103000", held for 12.408333ms
	W0318 04:46:49.368501   19088 start.go:713] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0318 04:46:49.368531   19088 out.go:239] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0318 04:46:49.368536   19088 start.go:728] Will try again in 5 seconds ...
	I0318 04:46:54.370595   19088 start.go:360] acquireMachinesLock for default-k8s-diff-port-103000: {Name:mkdef65b5c2b3344d8453e477bf2f170fbff3359 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0318 04:46:54.602945   19088 start.go:364] duration metric: took 232.284458ms to acquireMachinesLock for "default-k8s-diff-port-103000"
	I0318 04:46:54.603072   19088 start.go:96] Skipping create...Using existing machine configuration
	I0318 04:46:54.603179   19088 fix.go:54] fixHost starting: 
	I0318 04:46:54.603941   19088 fix.go:112] recreateIfNeeded on default-k8s-diff-port-103000: state=Stopped err=<nil>
	W0318 04:46:54.603970   19088 fix.go:138] unexpected machine state, will restart: <nil>
	I0318 04:46:54.613482   19088 out.go:177] * Restarting existing qemu2 VM for "default-k8s-diff-port-103000" ...
	I0318 04:46:54.621642   19088 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18429-15072/.minikube/machines/default-k8s-diff-port-103000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18429-15072/.minikube/machines/default-k8s-diff-port-103000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18429-15072/.minikube/machines/default-k8s-diff-port-103000/qemu.pid -device virtio-net-pci,netdev=net0,mac=c2:5a:8d:6f:31:7f -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18429-15072/.minikube/machines/default-k8s-diff-port-103000/disk.qcow2
	I0318 04:46:54.631874   19088 main.go:141] libmachine: STDOUT: 
	I0318 04:46:54.631972   19088 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0318 04:46:54.632046   19088 fix.go:56] duration metric: took 28.869833ms for fixHost
	I0318 04:46:54.632070   19088 start.go:83] releasing machines lock for "default-k8s-diff-port-103000", held for 29.101834ms
	W0318 04:46:54.632283   19088 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p default-k8s-diff-port-103000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p default-k8s-diff-port-103000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0318 04:46:54.640426   19088 out.go:177] 
	W0318 04:46:54.644368   19088 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0318 04:46:54.644398   19088 out.go:239] * 
	* 
	W0318 04:46:54.646540   19088 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0318 04:46:54.657427   19088 out.go:177] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:259: failed to start minikube post-stop. args "out/minikube-darwin-arm64 start -p default-k8s-diff-port-103000 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=qemu2  --kubernetes-version=v1.28.4": exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-103000 -n default-k8s-diff-port-103000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-103000 -n default-k8s-diff-port-103000: exit status 7 (66.673958ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-103000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/SecondStart (5.50s)

TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (0.03s)

=== RUN   TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop
start_stop_delete_test.go:275: failed waiting for 'addon dashboard' pod post-stop-start: client config: context "embed-certs-177000" does not exist
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-177000 -n embed-certs-177000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-177000 -n embed-certs-177000: exit status 7 (33.535375ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "embed-certs-177000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (0.03s)

TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (0.06s)

=== RUN   TestStartStop/group/embed-certs/serial/AddonExistsAfterStop
start_stop_delete_test.go:288: failed waiting for 'addon dashboard' pod post-stop-start: client config: context "embed-certs-177000" does not exist
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context embed-certs-177000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
start_stop_delete_test.go:291: (dbg) Non-zero exit: kubectl --context embed-certs-177000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard: exit status 1 (26.645667ms)

** stderr ** 
	error: context "embed-certs-177000" does not exist

** /stderr **
start_stop_delete_test.go:293: failed to get info on kubernetes-dashboard deployments. args "kubectl --context embed-certs-177000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard": exit status 1
start_stop_delete_test.go:297: addon did not load correct image. Expected to contain " registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-177000 -n embed-certs-177000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-177000 -n embed-certs-177000: exit status 7 (30.781209ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "embed-certs-177000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (0.06s)

TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.07s)

=== RUN   TestStartStop/group/embed-certs/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-darwin-arm64 -p embed-certs-177000 image list --format=json
start_stop_delete_test.go:304: v1.28.4 images missing (-want +got):
[]string{
- 	"gcr.io/k8s-minikube/storage-provisioner:v5",
- 	"registry.k8s.io/coredns/coredns:v1.10.1",
- 	"registry.k8s.io/etcd:3.5.9-0",
- 	"registry.k8s.io/kube-apiserver:v1.28.4",
- 	"registry.k8s.io/kube-controller-manager:v1.28.4",
- 	"registry.k8s.io/kube-proxy:v1.28.4",
- 	"registry.k8s.io/kube-scheduler:v1.28.4",
- 	"registry.k8s.io/pause:3.9",
}
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-177000 -n embed-certs-177000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-177000 -n embed-certs-177000: exit status 7 (30.991333ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "embed-certs-177000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.07s)
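Note: the "(-want +got)" block above is a go-cmp style diff. Every expected image is prefixed with "-" and nothing appears with "+", meaning the image list for this profile came back empty, which is consistent with the VM never having started. The sketch below is not the actual start_stop_delete_test.go code, only an illustration of how such a comparison produces that shape of output (it assumes the github.com/google/go-cmp module is available):

package main

import (
	"fmt"

	"github.com/google/go-cmp/cmp"
)

func main() {
	// Expected images for v1.28.4, as listed in the diff above.
	want := []string{
		"gcr.io/k8s-minikube/storage-provisioner:v5",
		"registry.k8s.io/coredns/coredns:v1.10.1",
		"registry.k8s.io/etcd:3.5.9-0",
		"registry.k8s.io/kube-apiserver:v1.28.4",
		"registry.k8s.io/kube-controller-manager:v1.28.4",
		"registry.k8s.io/kube-proxy:v1.28.4",
		"registry.k8s.io/kube-scheduler:v1.28.4",
		"registry.k8s.io/pause:3.9",
	}
	// The run above returned nothing, so every wanted entry shows up with "-".
	got := []string{}

	fmt.Println(cmp.Diff(want, got))
}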

TestStartStop/group/embed-certs/serial/Pause (0.11s)

=== RUN   TestStartStop/group/embed-certs/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-arm64 pause -p embed-certs-177000 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-darwin-arm64 pause -p embed-certs-177000 --alsologtostderr -v=1: exit status 83 (43.607208ms)

-- stdout --
	* The control-plane node embed-certs-177000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p embed-certs-177000"

-- /stdout --
** stderr ** 
	I0318 04:46:51.535313   19107 out.go:291] Setting OutFile to fd 1 ...
	I0318 04:46:51.535481   19107 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0318 04:46:51.535484   19107 out.go:304] Setting ErrFile to fd 2...
	I0318 04:46:51.535487   19107 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0318 04:46:51.535639   19107 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18429-15072/.minikube/bin
	I0318 04:46:51.535865   19107 out.go:298] Setting JSON to false
	I0318 04:46:51.535873   19107 mustload.go:65] Loading cluster: embed-certs-177000
	I0318 04:46:51.536079   19107 config.go:182] Loaded profile config "embed-certs-177000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I0318 04:46:51.540893   19107 out.go:177] * The control-plane node embed-certs-177000 host is not running: state=Stopped
	I0318 04:46:51.544926   19107 out.go:177]   To start a cluster, run: "minikube start -p embed-certs-177000"

                                                
                                                
** /stderr **
start_stop_delete_test.go:311: out/minikube-darwin-arm64 pause -p embed-certs-177000 --alsologtostderr -v=1 failed: exit status 83
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-177000 -n embed-certs-177000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-177000 -n embed-certs-177000: exit status 7 (30.801125ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "embed-certs-177000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-177000 -n embed-certs-177000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-177000 -n embed-certs-177000: exit status 7 (30.63525ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "embed-certs-177000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/embed-certs/serial/Pause (0.11s)

TestStartStop/group/newest-cni/serial/FirstStart (10.1s)

=== RUN   TestStartStop/group/newest-cni/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-darwin-arm64 start -p newest-cni-256000 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=qemu2  --kubernetes-version=v1.29.0-rc.2
start_stop_delete_test.go:186: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p newest-cni-256000 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=qemu2  --kubernetes-version=v1.29.0-rc.2: exit status 80 (10.02623975s)

-- stdout --
	* [newest-cni-256000] minikube v1.32.0 on Darwin 14.3.1 (arm64)
	  - MINIKUBE_LOCATION=18429
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18429-15072/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18429-15072/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "newest-cni-256000" primary control-plane node in "newest-cni-256000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "newest-cni-256000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0318 04:46:52.007879   19130 out.go:291] Setting OutFile to fd 1 ...
	I0318 04:46:52.008003   19130 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0318 04:46:52.008006   19130 out.go:304] Setting ErrFile to fd 2...
	I0318 04:46:52.008008   19130 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0318 04:46:52.008125   19130 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18429-15072/.minikube/bin
	I0318 04:46:52.009212   19130 out.go:298] Setting JSON to false
	I0318 04:46:52.025435   19130 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":9985,"bootTime":1710752427,"procs":478,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.3.1","kernelVersion":"23.3.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0318 04:46:52.025496   19130 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0318 04:46:52.029940   19130 out.go:177] * [newest-cni-256000] minikube v1.32.0 on Darwin 14.3.1 (arm64)
	I0318 04:46:52.042918   19130 out.go:177]   - MINIKUBE_LOCATION=18429
	I0318 04:46:52.038002   19130 notify.go:220] Checking for updates...
	I0318 04:46:52.049907   19130 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/18429-15072/kubeconfig
	I0318 04:46:52.053931   19130 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0318 04:46:52.056927   19130 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0318 04:46:52.059875   19130 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18429-15072/.minikube
	I0318 04:46:52.062910   19130 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0318 04:46:52.066359   19130 config.go:182] Loaded profile config "default-k8s-diff-port-103000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I0318 04:46:52.066421   19130 config.go:182] Loaded profile config "multinode-969000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I0318 04:46:52.066482   19130 driver.go:392] Setting default libvirt URI to qemu:///system
	I0318 04:46:52.070874   19130 out.go:177] * Using the qemu2 driver based on user configuration
	I0318 04:46:52.077949   19130 start.go:297] selected driver: qemu2
	I0318 04:46:52.077956   19130 start.go:901] validating driver "qemu2" against <nil>
	I0318 04:46:52.077963   19130 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0318 04:46:52.080210   19130 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	W0318 04:46:52.080242   19130 out.go:239] ! With --network-plugin=cni, you will need to provide your own CNI. See --cni flag as a user-friendly alternative
	! With --network-plugin=cni, you will need to provide your own CNI. See --cni flag as a user-friendly alternative
	I0318 04:46:52.088924   19130 out.go:177] * Automatically selected the socket_vmnet network
	I0318 04:46:52.092108   19130 start_flags.go:966] Waiting for components: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I0318 04:46:52.092144   19130 cni.go:84] Creating CNI manager for ""
	I0318 04:46:52.092154   19130 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0318 04:46:52.092160   19130 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0318 04:46:52.092205   19130 start.go:340] cluster config:
	{Name:newest-cni-256000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.0-rc.2 ClusterName:newest-cni-256000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local Cont
ainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.29.0-rc.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false Di
sableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0318 04:46:52.097193   19130 iso.go:125] acquiring lock: {Name:mkb8143674083e0c7a46a3ed751b3800392bcd24 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0318 04:46:52.105910   19130 out.go:177] * Starting "newest-cni-256000" primary control-plane node in "newest-cni-256000" cluster
	I0318 04:46:52.108909   19130 preload.go:132] Checking if preload exists for k8s version v1.29.0-rc.2 and runtime docker
	I0318 04:46:52.108927   19130 preload.go:147] Found local preload: /Users/jenkins/minikube-integration/18429-15072/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.29.0-rc.2-docker-overlay2-arm64.tar.lz4
	I0318 04:46:52.108940   19130 cache.go:56] Caching tarball of preloaded images
	I0318 04:46:52.109004   19130 preload.go:173] Found /Users/jenkins/minikube-integration/18429-15072/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.29.0-rc.2-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0318 04:46:52.109011   19130 cache.go:59] Finished verifying existence of preloaded tar for v1.29.0-rc.2 on docker
	I0318 04:46:52.109074   19130 profile.go:142] Saving config to /Users/jenkins/minikube-integration/18429-15072/.minikube/profiles/newest-cni-256000/config.json ...
	I0318 04:46:52.109087   19130 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/18429-15072/.minikube/profiles/newest-cni-256000/config.json: {Name:mk167f449eaae803f8777408e5d7fba164252c6e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0318 04:46:52.109342   19130 start.go:360] acquireMachinesLock for newest-cni-256000: {Name:mkdef65b5c2b3344d8453e477bf2f170fbff3359 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0318 04:46:52.109380   19130 start.go:364] duration metric: took 31.375µs to acquireMachinesLock for "newest-cni-256000"
	I0318 04:46:52.109398   19130 start.go:93] Provisioning new machine with config: &{Name:newest-cni-256000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17991/minikube-v1.32.1-1710520390-17991-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{
KubernetesVersion:v1.29.0-rc.2 ClusterName:newest-cni-256000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.29.0-rc.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:
/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.29.0-rc.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0318 04:46:52.109431   19130 start.go:125] createHost starting for "" (driver="qemu2")
	I0318 04:46:52.116898   19130 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0318 04:46:52.135589   19130 start.go:159] libmachine.API.Create for "newest-cni-256000" (driver="qemu2")
	I0318 04:46:52.135613   19130 client.go:168] LocalClient.Create starting
	I0318 04:46:52.135673   19130 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18429-15072/.minikube/certs/ca.pem
	I0318 04:46:52.135703   19130 main.go:141] libmachine: Decoding PEM data...
	I0318 04:46:52.135712   19130 main.go:141] libmachine: Parsing certificate...
	I0318 04:46:52.135760   19130 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18429-15072/.minikube/certs/cert.pem
	I0318 04:46:52.135783   19130 main.go:141] libmachine: Decoding PEM data...
	I0318 04:46:52.135791   19130 main.go:141] libmachine: Parsing certificate...
	I0318 04:46:52.136157   19130 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/18429-15072/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/18429-15072/.minikube/cache/iso/arm64/minikube-v1.32.1-1710520390-17991-arm64.iso...
	I0318 04:46:52.288298   19130 main.go:141] libmachine: Creating SSH key...
	I0318 04:46:52.574673   19130 main.go:141] libmachine: Creating Disk image...
	I0318 04:46:52.574686   19130 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0318 04:46:52.574888   19130 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/18429-15072/.minikube/machines/newest-cni-256000/disk.qcow2.raw /Users/jenkins/minikube-integration/18429-15072/.minikube/machines/newest-cni-256000/disk.qcow2
	I0318 04:46:52.587901   19130 main.go:141] libmachine: STDOUT: 
	I0318 04:46:52.587924   19130 main.go:141] libmachine: STDERR: 
	I0318 04:46:52.587983   19130 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/18429-15072/.minikube/machines/newest-cni-256000/disk.qcow2 +20000M
	I0318 04:46:52.598691   19130 main.go:141] libmachine: STDOUT: Image resized.
	
	I0318 04:46:52.598707   19130 main.go:141] libmachine: STDERR: 
	I0318 04:46:52.598726   19130 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/18429-15072/.minikube/machines/newest-cni-256000/disk.qcow2.raw and /Users/jenkins/minikube-integration/18429-15072/.minikube/machines/newest-cni-256000/disk.qcow2
	I0318 04:46:52.598730   19130 main.go:141] libmachine: Starting QEMU VM...
	I0318 04:46:52.598766   19130 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18429-15072/.minikube/machines/newest-cni-256000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18429-15072/.minikube/machines/newest-cni-256000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18429-15072/.minikube/machines/newest-cni-256000/qemu.pid -device virtio-net-pci,netdev=net0,mac=4e:5e:60:b5:35:69 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18429-15072/.minikube/machines/newest-cni-256000/disk.qcow2
	I0318 04:46:52.600537   19130 main.go:141] libmachine: STDOUT: 
	I0318 04:46:52.600556   19130 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0318 04:46:52.600574   19130 client.go:171] duration metric: took 464.969625ms to LocalClient.Create
	I0318 04:46:54.602754   19130 start.go:128] duration metric: took 2.493377292s to createHost
	I0318 04:46:54.602811   19130 start.go:83] releasing machines lock for "newest-cni-256000", held for 2.4934925s
	W0318 04:46:54.602872   19130 start.go:713] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0318 04:46:54.618560   19130 out.go:177] * Deleting "newest-cni-256000" in qemu2 ...
	W0318 04:46:54.670571   19130 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0318 04:46:54.670652   19130 start.go:728] Will try again in 5 seconds ...
	I0318 04:46:59.671941   19130 start.go:360] acquireMachinesLock for newest-cni-256000: {Name:mkdef65b5c2b3344d8453e477bf2f170fbff3359 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0318 04:46:59.672418   19130 start.go:364] duration metric: took 358.542µs to acquireMachinesLock for "newest-cni-256000"
	I0318 04:46:59.672998   19130 start.go:93] Provisioning new machine with config: &{Name:newest-cni-256000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17991/minikube-v1.32.1-1710520390-17991-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{
KubernetesVersion:v1.29.0-rc.2 ClusterName:newest-cni-256000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.29.0-rc.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:
/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.29.0-rc.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0318 04:46:59.673307   19130 start.go:125] createHost starting for "" (driver="qemu2")
	I0318 04:46:59.677960   19130 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0318 04:46:59.728364   19130 start.go:159] libmachine.API.Create for "newest-cni-256000" (driver="qemu2")
	I0318 04:46:59.728414   19130 client.go:168] LocalClient.Create starting
	I0318 04:46:59.728528   19130 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18429-15072/.minikube/certs/ca.pem
	I0318 04:46:59.728589   19130 main.go:141] libmachine: Decoding PEM data...
	I0318 04:46:59.728607   19130 main.go:141] libmachine: Parsing certificate...
	I0318 04:46:59.728678   19130 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18429-15072/.minikube/certs/cert.pem
	I0318 04:46:59.728724   19130 main.go:141] libmachine: Decoding PEM data...
	I0318 04:46:59.728739   19130 main.go:141] libmachine: Parsing certificate...
	I0318 04:46:59.729281   19130 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/18429-15072/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/18429-15072/.minikube/cache/iso/arm64/minikube-v1.32.1-1710520390-17991-arm64.iso...
	I0318 04:46:59.879144   19130 main.go:141] libmachine: Creating SSH key...
	I0318 04:46:59.935491   19130 main.go:141] libmachine: Creating Disk image...
	I0318 04:46:59.935496   19130 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0318 04:46:59.935667   19130 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/18429-15072/.minikube/machines/newest-cni-256000/disk.qcow2.raw /Users/jenkins/minikube-integration/18429-15072/.minikube/machines/newest-cni-256000/disk.qcow2
	I0318 04:46:59.948181   19130 main.go:141] libmachine: STDOUT: 
	I0318 04:46:59.948200   19130 main.go:141] libmachine: STDERR: 
	I0318 04:46:59.948259   19130 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/18429-15072/.minikube/machines/newest-cni-256000/disk.qcow2 +20000M
	I0318 04:46:59.958956   19130 main.go:141] libmachine: STDOUT: Image resized.
	
	I0318 04:46:59.958971   19130 main.go:141] libmachine: STDERR: 
	I0318 04:46:59.958982   19130 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/18429-15072/.minikube/machines/newest-cni-256000/disk.qcow2.raw and /Users/jenkins/minikube-integration/18429-15072/.minikube/machines/newest-cni-256000/disk.qcow2
	I0318 04:46:59.958989   19130 main.go:141] libmachine: Starting QEMU VM...
	I0318 04:46:59.959028   19130 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18429-15072/.minikube/machines/newest-cni-256000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18429-15072/.minikube/machines/newest-cni-256000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18429-15072/.minikube/machines/newest-cni-256000/qemu.pid -device virtio-net-pci,netdev=net0,mac=0e:d6:bf:6d:92:12 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18429-15072/.minikube/machines/newest-cni-256000/disk.qcow2
	I0318 04:46:59.960717   19130 main.go:141] libmachine: STDOUT: 
	I0318 04:46:59.960732   19130 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0318 04:46:59.960747   19130 client.go:171] duration metric: took 232.333459ms to LocalClient.Create
	I0318 04:47:01.962868   19130 start.go:128] duration metric: took 2.289598292s to createHost
	I0318 04:47:01.962954   19130 start.go:83] releasing machines lock for "newest-cni-256000", held for 2.290579125s
	W0318 04:47:01.963495   19130 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p newest-cni-256000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p newest-cni-256000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0318 04:47:01.972155   19130 out.go:177] 
	W0318 04:47:01.980266   19130 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0318 04:47:01.980321   19130 out.go:239] * 
	* 
	W0318 04:47:01.982971   19130 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0318 04:47:01.994189   19130 out.go:177] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:188: failed starting minikube -first start-. args "out/minikube-darwin-arm64 start -p newest-cni-256000 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=qemu2  --kubernetes-version=v1.29.0-rc.2": exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-256000 -n newest-cni-256000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-256000 -n newest-cni-256000: exit status 7 (68.608ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "newest-cni-256000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/newest-cni/serial/FirstStart (10.10s)
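All of the start failures in this group share one root cause visible in the log above: the qemu2 driver launches the VM through socket_vmnet_client, and the connection to the unix socket at /var/run/socket_vmnet is refused, so the host never starts and the remaining sub-tests run against a stopped profile. Below is a minimal Go probe that reproduces the failing connection check outside the test suite; the socket path is taken from the log, and the probe is illustrative only, not part of minikube.

package main

import (
	"fmt"
	"net"
	"time"
)

func main() {
	// Dial the same unix socket that socket_vmnet_client reports as unreachable.
	conn, err := net.DialTimeout("unix", "/var/run/socket_vmnet", 2*time.Second)
	if err != nil {
		// On a host in the state shown above this prints a "connection refused"
		// error, matching the STDERR captured in the log.
		fmt.Println("socket_vmnet not reachable:", err)
		return
	}
	defer conn.Close()
	fmt.Println("socket_vmnet is accepting connections")
}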

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (0.03s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop
start_stop_delete_test.go:275: failed waiting for 'addon dashboard' pod post-stop-start: client config: context "default-k8s-diff-port-103000" does not exist
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-103000 -n default-k8s-diff-port-103000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-103000 -n default-k8s-diff-port-103000: exit status 7 (33.176459ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-103000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (0.03s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (0.06s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop
start_stop_delete_test.go:288: failed waiting for 'addon dashboard' pod post-stop-start: client config: context "default-k8s-diff-port-103000" does not exist
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context default-k8s-diff-port-103000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
start_stop_delete_test.go:291: (dbg) Non-zero exit: kubectl --context default-k8s-diff-port-103000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard: exit status 1 (26.975084ms)

                                                
                                                
** stderr ** 
	error: context "default-k8s-diff-port-103000" does not exist

                                                
                                                
** /stderr **
start_stop_delete_test.go:293: failed to get info on kubernetes-dashboard deployments. args "kubectl --context default-k8s-diff-port-103000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard": exit status 1
start_stop_delete_test.go:297: addon did not load correct image. Expected to contain " registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-103000 -n default-k8s-diff-port-103000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-103000 -n default-k8s-diff-port-103000: exit status 7 (31.300417ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-103000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (0.06s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.07s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-darwin-arm64 -p default-k8s-diff-port-103000 image list --format=json
start_stop_delete_test.go:304: v1.28.4 images missing (-want +got):
[]string{
- 	"gcr.io/k8s-minikube/storage-provisioner:v5",
- 	"registry.k8s.io/coredns/coredns:v1.10.1",
- 	"registry.k8s.io/etcd:3.5.9-0",
- 	"registry.k8s.io/kube-apiserver:v1.28.4",
- 	"registry.k8s.io/kube-controller-manager:v1.28.4",
- 	"registry.k8s.io/kube-proxy:v1.28.4",
- 	"registry.k8s.io/kube-scheduler:v1.28.4",
- 	"registry.k8s.io/pause:3.9",
}
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-103000 -n default-k8s-diff-port-103000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-103000 -n default-k8s-diff-port-103000: exit status 7 (31.409333ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-103000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.07s)
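The "-want +got" block above is a go-cmp diff: because the host never started, "minikube image list" returns an empty set, so every expected v1.28.4 image appears only on the want side. A short sketch of how such a diff is produced with github.com/google/go-cmp (the variable names are illustrative, not the test's own):

package main

import (
	"fmt"

	"github.com/google/go-cmp/cmp"
)

func main() {
	want := []string{
		"registry.k8s.io/kube-apiserver:v1.28.4",
		"registry.k8s.io/pause:3.9",
	}
	got := []string{} // empty: the profile is stopped, so no images are listed

	// Entries present only in want are printed with a leading "-",
	// entries present only in got with a leading "+".
	if diff := cmp.Diff(want, got); diff != "" {
		fmt.Printf("images missing (-want +got):\n%s", diff)
	}
}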

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/Pause (0.11s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-arm64 pause -p default-k8s-diff-port-103000 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-darwin-arm64 pause -p default-k8s-diff-port-103000 --alsologtostderr -v=1: exit status 83 (46.092917ms)

                                                
                                                
-- stdout --
	* The control-plane node default-k8s-diff-port-103000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p default-k8s-diff-port-103000"

                                                
                                                
-- /stdout --
** stderr ** 
	I0318 04:46:54.935767   19152 out.go:291] Setting OutFile to fd 1 ...
	I0318 04:46:54.935936   19152 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0318 04:46:54.935939   19152 out.go:304] Setting ErrFile to fd 2...
	I0318 04:46:54.935941   19152 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0318 04:46:54.936074   19152 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18429-15072/.minikube/bin
	I0318 04:46:54.936290   19152 out.go:298] Setting JSON to false
	I0318 04:46:54.936298   19152 mustload.go:65] Loading cluster: default-k8s-diff-port-103000
	I0318 04:46:54.936492   19152 config.go:182] Loaded profile config "default-k8s-diff-port-103000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I0318 04:46:54.940704   19152 out.go:177] * The control-plane node default-k8s-diff-port-103000 host is not running: state=Stopped
	I0318 04:46:54.947803   19152 out.go:177]   To start a cluster, run: "minikube start -p default-k8s-diff-port-103000"

                                                
                                                
** /stderr **
start_stop_delete_test.go:311: out/minikube-darwin-arm64 pause -p default-k8s-diff-port-103000 --alsologtostderr -v=1 failed: exit status 83
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-103000 -n default-k8s-diff-port-103000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-103000 -n default-k8s-diff-port-103000: exit status 7 (31.039958ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-103000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-103000 -n default-k8s-diff-port-103000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-103000 -n default-k8s-diff-port-103000: exit status 7 (30.382542ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-103000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/Pause (0.11s)

                                                
                                    
TestStartStop/group/newest-cni/serial/SecondStart (5.26s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-darwin-arm64 start -p newest-cni-256000 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=qemu2  --kubernetes-version=v1.29.0-rc.2
start_stop_delete_test.go:256: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p newest-cni-256000 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=qemu2  --kubernetes-version=v1.29.0-rc.2: exit status 80 (5.184768875s)

                                                
                                                
-- stdout --
	* [newest-cni-256000] minikube v1.32.0 on Darwin 14.3.1 (arm64)
	  - MINIKUBE_LOCATION=18429
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18429-15072/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18429-15072/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting "newest-cni-256000" primary control-plane node in "newest-cni-256000" cluster
	* Restarting existing qemu2 VM for "newest-cni-256000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "newest-cni-256000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0318 04:47:05.465371   19210 out.go:291] Setting OutFile to fd 1 ...
	I0318 04:47:05.465487   19210 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0318 04:47:05.465490   19210 out.go:304] Setting ErrFile to fd 2...
	I0318 04:47:05.465493   19210 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0318 04:47:05.465630   19210 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18429-15072/.minikube/bin
	I0318 04:47:05.466640   19210 out.go:298] Setting JSON to false
	I0318 04:47:05.482737   19210 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":9998,"bootTime":1710752427,"procs":481,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.3.1","kernelVersion":"23.3.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0318 04:47:05.482816   19210 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0318 04:47:05.486982   19210 out.go:177] * [newest-cni-256000] minikube v1.32.0 on Darwin 14.3.1 (arm64)
	I0318 04:47:05.494024   19210 out.go:177]   - MINIKUBE_LOCATION=18429
	I0318 04:47:05.494053   19210 notify.go:220] Checking for updates...
	I0318 04:47:05.497875   19210 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/18429-15072/kubeconfig
	I0318 04:47:05.501887   19210 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0318 04:47:05.504954   19210 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0318 04:47:05.507893   19210 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18429-15072/.minikube
	I0318 04:47:05.510947   19210 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0318 04:47:05.514249   19210 config.go:182] Loaded profile config "newest-cni-256000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.29.0-rc.2
	I0318 04:47:05.514493   19210 driver.go:392] Setting default libvirt URI to qemu:///system
	I0318 04:47:05.518858   19210 out.go:177] * Using the qemu2 driver based on existing profile
	I0318 04:47:05.525937   19210 start.go:297] selected driver: qemu2
	I0318 04:47:05.525943   19210 start.go:901] validating driver "qemu2" against &{Name:newest-cni-256000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17991/minikube-v1.32.1-1710520390-17991-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kub
ernetesVersion:v1.29.0-rc.2 ClusterName:newest-cni-256000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.29.0-rc.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPo
rts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0318 04:47:05.526027   19210 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0318 04:47:05.528190   19210 start_flags.go:966] Waiting for components: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I0318 04:47:05.528242   19210 cni.go:84] Creating CNI manager for ""
	I0318 04:47:05.528248   19210 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0318 04:47:05.528279   19210 start.go:340] cluster config:
	{Name:newest-cni-256000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17991/minikube-v1.32.1-1710520390-17991-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.0-rc.2 ClusterName:newest-cni-256000 Namespace:default API
ServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.29.0-rc.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false Ext
raDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0318 04:47:05.532483   19210 iso.go:125] acquiring lock: {Name:mkb8143674083e0c7a46a3ed751b3800392bcd24 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0318 04:47:05.539942   19210 out.go:177] * Starting "newest-cni-256000" primary control-plane node in "newest-cni-256000" cluster
	I0318 04:47:05.543875   19210 preload.go:132] Checking if preload exists for k8s version v1.29.0-rc.2 and runtime docker
	I0318 04:47:05.543894   19210 preload.go:147] Found local preload: /Users/jenkins/minikube-integration/18429-15072/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.29.0-rc.2-docker-overlay2-arm64.tar.lz4
	I0318 04:47:05.543900   19210 cache.go:56] Caching tarball of preloaded images
	I0318 04:47:05.543957   19210 preload.go:173] Found /Users/jenkins/minikube-integration/18429-15072/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.29.0-rc.2-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0318 04:47:05.543964   19210 cache.go:59] Finished verifying existence of preloaded tar for v1.29.0-rc.2 on docker
	I0318 04:47:05.544022   19210 profile.go:142] Saving config to /Users/jenkins/minikube-integration/18429-15072/.minikube/profiles/newest-cni-256000/config.json ...
	I0318 04:47:05.544492   19210 start.go:360] acquireMachinesLock for newest-cni-256000: {Name:mkdef65b5c2b3344d8453e477bf2f170fbff3359 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0318 04:47:05.544525   19210 start.go:364] duration metric: took 27.667µs to acquireMachinesLock for "newest-cni-256000"
	I0318 04:47:05.544534   19210 start.go:96] Skipping create...Using existing machine configuration
	I0318 04:47:05.544538   19210 fix.go:54] fixHost starting: 
	I0318 04:47:05.544657   19210 fix.go:112] recreateIfNeeded on newest-cni-256000: state=Stopped err=<nil>
	W0318 04:47:05.544665   19210 fix.go:138] unexpected machine state, will restart: <nil>
	I0318 04:47:05.548965   19210 out.go:177] * Restarting existing qemu2 VM for "newest-cni-256000" ...
	I0318 04:47:05.556920   19210 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18429-15072/.minikube/machines/newest-cni-256000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18429-15072/.minikube/machines/newest-cni-256000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18429-15072/.minikube/machines/newest-cni-256000/qemu.pid -device virtio-net-pci,netdev=net0,mac=0e:d6:bf:6d:92:12 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18429-15072/.minikube/machines/newest-cni-256000/disk.qcow2
	I0318 04:47:05.558772   19210 main.go:141] libmachine: STDOUT: 
	I0318 04:47:05.558791   19210 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0318 04:47:05.558819   19210 fix.go:56] duration metric: took 14.280417ms for fixHost
	I0318 04:47:05.558823   19210 start.go:83] releasing machines lock for "newest-cni-256000", held for 14.29475ms
	W0318 04:47:05.558831   19210 start.go:713] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0318 04:47:05.558865   19210 out.go:239] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0318 04:47:05.558870   19210 start.go:728] Will try again in 5 seconds ...
	I0318 04:47:10.560951   19210 start.go:360] acquireMachinesLock for newest-cni-256000: {Name:mkdef65b5c2b3344d8453e477bf2f170fbff3359 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0318 04:47:10.561356   19210 start.go:364] duration metric: took 301.333µs to acquireMachinesLock for "newest-cni-256000"
	I0318 04:47:10.561519   19210 start.go:96] Skipping create...Using existing machine configuration
	I0318 04:47:10.561543   19210 fix.go:54] fixHost starting: 
	I0318 04:47:10.562253   19210 fix.go:112] recreateIfNeeded on newest-cni-256000: state=Stopped err=<nil>
	W0318 04:47:10.562281   19210 fix.go:138] unexpected machine state, will restart: <nil>
	I0318 04:47:10.569707   19210 out.go:177] * Restarting existing qemu2 VM for "newest-cni-256000" ...
	I0318 04:47:10.573735   19210 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/18429-15072/.minikube/machines/newest-cni-256000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/18429-15072/.minikube/machines/newest-cni-256000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/18429-15072/.minikube/machines/newest-cni-256000/qemu.pid -device virtio-net-pci,netdev=net0,mac=0e:d6:bf:6d:92:12 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/18429-15072/.minikube/machines/newest-cni-256000/disk.qcow2
	I0318 04:47:10.583742   19210 main.go:141] libmachine: STDOUT: 
	I0318 04:47:10.583807   19210 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0318 04:47:10.583896   19210 fix.go:56] duration metric: took 22.358125ms for fixHost
	I0318 04:47:10.583909   19210 start.go:83] releasing machines lock for "newest-cni-256000", held for 22.532625ms
	W0318 04:47:10.584098   19210 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p newest-cni-256000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p newest-cni-256000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0318 04:47:10.591650   19210 out.go:177] 
	W0318 04:47:10.595699   19210 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0318 04:47:10.595719   19210 out.go:239] * 
	* 
	W0318 04:47:10.598562   19210 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0318 04:47:10.605693   19210 out.go:177] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:259: failed to start minikube post-stop. args "out/minikube-darwin-arm64 start -p newest-cni-256000 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=qemu2  --kubernetes-version=v1.29.0-rc.2": exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-256000 -n newest-cni-256000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-256000 -n newest-cni-256000: exit status 7 (70.041083ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "newest-cni-256000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/newest-cni/serial/SecondStart (5.26s)

                                                
                                    
TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.08s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-darwin-arm64 -p newest-cni-256000 image list --format=json
start_stop_delete_test.go:304: v1.29.0-rc.2 images missing (-want +got):
[]string{
- 	"gcr.io/k8s-minikube/storage-provisioner:v5",
- 	"registry.k8s.io/coredns/coredns:v1.11.1",
- 	"registry.k8s.io/etcd:3.5.10-0",
- 	"registry.k8s.io/kube-apiserver:v1.29.0-rc.2",
- 	"registry.k8s.io/kube-controller-manager:v1.29.0-rc.2",
- 	"registry.k8s.io/kube-proxy:v1.29.0-rc.2",
- 	"registry.k8s.io/kube-scheduler:v1.29.0-rc.2",
- 	"registry.k8s.io/pause:3.9",
}
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-256000 -n newest-cni-256000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-256000 -n newest-cni-256000: exit status 7 (32.311916ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "newest-cni-256000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.08s)

                                                
                                    
TestStartStop/group/newest-cni/serial/Pause (0.11s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-arm64 pause -p newest-cni-256000 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-darwin-arm64 pause -p newest-cni-256000 --alsologtostderr -v=1: exit status 83 (44.147209ms)

                                                
                                                
-- stdout --
	* The control-plane node newest-cni-256000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p newest-cni-256000"

                                                
                                                
-- /stdout --
** stderr ** 
	I0318 04:47:10.798487   19224 out.go:291] Setting OutFile to fd 1 ...
	I0318 04:47:10.798655   19224 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0318 04:47:10.798658   19224 out.go:304] Setting ErrFile to fd 2...
	I0318 04:47:10.798660   19224 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0318 04:47:10.798773   19224 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18429-15072/.minikube/bin
	I0318 04:47:10.798974   19224 out.go:298] Setting JSON to false
	I0318 04:47:10.798983   19224 mustload.go:65] Loading cluster: newest-cni-256000
	I0318 04:47:10.799174   19224 config.go:182] Loaded profile config "newest-cni-256000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.29.0-rc.2
	I0318 04:47:10.803978   19224 out.go:177] * The control-plane node newest-cni-256000 host is not running: state=Stopped
	I0318 04:47:10.807240   19224 out.go:177]   To start a cluster, run: "minikube start -p newest-cni-256000"

                                                
                                                
** /stderr **
start_stop_delete_test.go:311: out/minikube-darwin-arm64 pause -p newest-cni-256000 --alsologtostderr -v=1 failed: exit status 83
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-256000 -n newest-cni-256000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-256000 -n newest-cni-256000: exit status 7 (32.021333ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "newest-cni-256000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-256000 -n newest-cni-256000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-256000 -n newest-cni-256000: exit status 7 (32.128958ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "newest-cni-256000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/newest-cni/serial/Pause (0.11s)

                                                
                                    

Test pass (86/266)

Order passed test Duration
4 TestDownloadOnly/v1.20.0/preload-exists 0
8 TestDownloadOnly/v1.20.0/LogsDuration 0.1
9 TestDownloadOnly/v1.20.0/DeleteAll 0.24
10 TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds 0.23
12 TestDownloadOnly/v1.28.4/json-events 22.34
13 TestDownloadOnly/v1.28.4/preload-exists 0
16 TestDownloadOnly/v1.28.4/kubectl 0
17 TestDownloadOnly/v1.28.4/LogsDuration 0.08
18 TestDownloadOnly/v1.28.4/DeleteAll 0.23
19 TestDownloadOnly/v1.28.4/DeleteAlwaysSucceeds 0.23
21 TestDownloadOnly/v1.29.0-rc.2/json-events 20.57
22 TestDownloadOnly/v1.29.0-rc.2/preload-exists 0
25 TestDownloadOnly/v1.29.0-rc.2/kubectl 0
26 TestDownloadOnly/v1.29.0-rc.2/LogsDuration 0.08
27 TestDownloadOnly/v1.29.0-rc.2/DeleteAll 0.23
28 TestDownloadOnly/v1.29.0-rc.2/DeleteAlwaysSucceeds 0.23
30 TestBinaryMirror 0.34
34 TestAddons/PreSetup/EnablingAddonOnNonExistingCluster 0.06
35 TestAddons/PreSetup/DisablingAddonOnNonExistingCluster 0.06
44 TestHyperKitDriverInstallOrUpdate 9
48 TestErrorSpam/start 0.4
49 TestErrorSpam/status 0.1
50 TestErrorSpam/pause 0.13
51 TestErrorSpam/unpause 0.12
52 TestErrorSpam/stop 8.66
55 TestFunctional/serial/CopySyncFile 0
57 TestFunctional/serial/AuditLog 0
63 TestFunctional/serial/CacheCmd/cache/add_remote 6.05
64 TestFunctional/serial/CacheCmd/cache/add_local 1.16
65 TestFunctional/serial/CacheCmd/cache/CacheDelete 0.04
66 TestFunctional/serial/CacheCmd/cache/list 0.04
69 TestFunctional/serial/CacheCmd/cache/delete 0.07
78 TestFunctional/parallel/ConfigCmd 0.23
80 TestFunctional/parallel/DryRun 0.27
81 TestFunctional/parallel/InternationalLanguage 0.12
87 TestFunctional/parallel/AddonsCmd 0.12
102 TestFunctional/parallel/License 1.44
103 TestFunctional/parallel/Version/short 0.04
110 TestFunctional/parallel/ImageCommands/Setup 5.59
123 TestFunctional/parallel/TunnelCmd/serial/StartTunnel 0.01
131 TestFunctional/parallel/ImageCommands/ImageRemove 0.07
133 TestFunctional/parallel/ImageCommands/ImageSaveDaemon 0.13
134 TestFunctional/parallel/ProfileCmd/profile_not_create 0.14
135 TestFunctional/parallel/ProfileCmd/profile_list 0.11
136 TestFunctional/parallel/ProfileCmd/profile_json_output 0.11
141 TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil 10.04
143 TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel 0.15
144 TestFunctional/delete_addon-resizer_images 0.18
145 TestFunctional/delete_my-image_image 0.04
146 TestFunctional/delete_minikube_cached_images 0.04
175 TestJSONOutput/start/Audit 0
177 TestJSONOutput/start/parallel/DistinctCurrentSteps 0
178 TestJSONOutput/start/parallel/IncreasingCurrentSteps 0
181 TestJSONOutput/pause/Audit 0
183 TestJSONOutput/pause/parallel/DistinctCurrentSteps 0
184 TestJSONOutput/pause/parallel/IncreasingCurrentSteps 0
187 TestJSONOutput/unpause/Audit 0
189 TestJSONOutput/unpause/parallel/DistinctCurrentSteps 0
190 TestJSONOutput/unpause/parallel/IncreasingCurrentSteps 0
192 TestJSONOutput/stop/Command 3.53
193 TestJSONOutput/stop/Audit 0
195 TestJSONOutput/stop/parallel/DistinctCurrentSteps 0
196 TestJSONOutput/stop/parallel/IncreasingCurrentSteps 0
197 TestErrorJSONOutput 0.32
202 TestMainNoArgs 0.04
249 TestStoppedBinaryUpgrade/Setup 6.02
261 TestNoKubernetes/serial/StartNoK8sWithVersion 0.1
265 TestNoKubernetes/serial/VerifyK8sNotRunning 0.04
266 TestNoKubernetes/serial/ProfileList 31.39
267 TestNoKubernetes/serial/Stop 3.14
269 TestNoKubernetes/serial/VerifyK8sNotRunningSecond 0.04
277 TestStoppedBinaryUpgrade/MinikubeLogs 0.81
286 TestStartStop/group/old-k8s-version/serial/Stop 3.45
289 TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop 0.11
291 TestStartStop/group/no-preload/serial/Stop 3.42
292 TestStartStop/group/no-preload/serial/EnableAddonAfterStop 0.12
308 TestStartStop/group/embed-certs/serial/Stop 3.29
310 TestStartStop/group/embed-certs/serial/EnableAddonAfterStop 0.1
313 TestStartStop/group/default-k8s-diff-port/serial/Stop 2.96
314 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop 0.12
326 TestStartStop/group/newest-cni/serial/DeployApp 0
327 TestStartStop/group/newest-cni/serial/EnableAddonWhileActive 0.06
328 TestStartStop/group/newest-cni/serial/Stop 3.17
329 TestStartStop/group/newest-cni/serial/EnableAddonAfterStop 0.13
331 TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop 0
332 TestStartStop/group/newest-cni/serial/AddonExistsAfterStop 0
TestDownloadOnly/v1.20.0/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/preload-exists
--- PASS: TestDownloadOnly/v1.20.0/preload-exists (0.00s)

                                                
                                    
TestDownloadOnly/v1.20.0/LogsDuration (0.1s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-darwin-arm64 logs -p download-only-382000
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-darwin-arm64 logs -p download-only-382000: exit status 85 (98.781791ms)

                                                
                                                
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      | End Time |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| start   | -o=json --download-only        | download-only-382000 | jenkins | v1.32.0 | 18 Mar 24 04:18 PDT |          |
	|         | -p download-only-382000        |                      |         |         |                     |          |
	|         | --force --alsologtostderr      |                      |         |         |                     |          |
	|         | --kubernetes-version=v1.20.0   |                      |         |         |                     |          |
	|         | --container-runtime=docker     |                      |         |         |                     |          |
	|         | --driver=qemu2                 |                      |         |         |                     |          |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	
	
	==> Last Start <==
	Log file created at: 2024/03/18 04:18:39
	Running on machine: MacOS-M1-Agent-2
	Binary: Built with gc go1.22.1 for darwin/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0318 04:18:39.230977   15483 out.go:291] Setting OutFile to fd 1 ...
	I0318 04:18:39.231145   15483 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0318 04:18:39.231148   15483 out.go:304] Setting ErrFile to fd 2...
	I0318 04:18:39.231150   15483 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0318 04:18:39.231278   15483 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18429-15072/.minikube/bin
	W0318 04:18:39.231361   15483 root.go:314] Error reading config file at /Users/jenkins/minikube-integration/18429-15072/.minikube/config/config.json: open /Users/jenkins/minikube-integration/18429-15072/.minikube/config/config.json: no such file or directory
	I0318 04:18:39.232580   15483 out.go:298] Setting JSON to true
	I0318 04:18:39.250261   15483 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":8292,"bootTime":1710752427,"procs":482,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.3.1","kernelVersion":"23.3.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0318 04:18:39.250322   15483 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0318 04:18:39.254531   15483 out.go:97] [download-only-382000] minikube v1.32.0 on Darwin 14.3.1 (arm64)
	I0318 04:18:39.257549   15483 out.go:169] MINIKUBE_LOCATION=18429
	I0318 04:18:39.254641   15483 notify.go:220] Checking for updates...
	W0318 04:18:39.254671   15483 preload.go:294] Failed to list preload files: open /Users/jenkins/minikube-integration/18429-15072/.minikube/cache/preloaded-tarball: no such file or directory
	I0318 04:18:39.262579   15483 out.go:169] KUBECONFIG=/Users/jenkins/minikube-integration/18429-15072/kubeconfig
	I0318 04:18:39.265579   15483 out.go:169] MINIKUBE_BIN=out/minikube-darwin-arm64
	I0318 04:18:39.266816   15483 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0318 04:18:39.269573   15483 out.go:169] MINIKUBE_HOME=/Users/jenkins/minikube-integration/18429-15072/.minikube
	W0318 04:18:39.275496   15483 out.go:267] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0318 04:18:39.275697   15483 driver.go:392] Setting default libvirt URI to qemu:///system
	I0318 04:18:39.278453   15483 out.go:97] Using the qemu2 driver based on user configuration
	I0318 04:18:39.278469   15483 start.go:297] selected driver: qemu2
	I0318 04:18:39.278482   15483 start.go:901] validating driver "qemu2" against <nil>
	I0318 04:18:39.278552   15483 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0318 04:18:39.281474   15483 out.go:169] Automatically selected the socket_vmnet network
	I0318 04:18:39.286861   15483 start_flags.go:393] Using suggested 4000MB memory alloc based on sys=16384MB, container=0MB
	I0318 04:18:39.286955   15483 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0318 04:18:39.287053   15483 cni.go:84] Creating CNI manager for ""
	I0318 04:18:39.287071   15483 cni.go:162] CNI unnecessary in this configuration, recommending no CNI
	I0318 04:18:39.287125   15483 start.go:340] cluster config:
	{Name:download-only-382000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:4000 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:download-only-382000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local Con
tainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSo
ck: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0318 04:18:39.291876   15483 iso.go:125] acquiring lock: {Name:mkb8143674083e0c7a46a3ed751b3800392bcd24 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0318 04:18:39.295485   15483 out.go:97] Downloading VM boot image ...
	I0318 04:18:39.295502   15483 download.go:107] Downloading: https://storage.googleapis.com/minikube-builds/iso/17991/minikube-v1.32.1-1710520390-17991-arm64.iso?checksum=file:https://storage.googleapis.com/minikube-builds/iso/17991/minikube-v1.32.1-1710520390-17991-arm64.iso.sha256 -> /Users/jenkins/minikube-integration/18429-15072/.minikube/cache/iso/arm64/minikube-v1.32.1-1710520390-17991-arm64.iso
	I0318 04:18:57.584226   15483 out.go:97] Starting "download-only-382000" primary control-plane node in "download-only-382000" cluster
	I0318 04:18:57.584254   15483 preload.go:132] Checking if preload exists for k8s version v1.20.0 and runtime docker
	I0318 04:18:57.870373   15483 preload.go:119] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.0/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4
	I0318 04:18:57.870489   15483 cache.go:56] Caching tarball of preloaded images
	I0318 04:18:57.871230   15483 preload.go:132] Checking if preload exists for k8s version v1.20.0 and runtime docker
	I0318 04:18:57.876768   15483 out.go:97] Downloading Kubernetes v1.20.0 preload ...
	I0318 04:18:57.876794   15483 preload.go:237] getting checksum for preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4 ...
	I0318 04:18:58.504598   15483 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.0/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4?checksum=md5:1a3e8f9b29e6affec63d76d0d3000942 -> /Users/jenkins/minikube-integration/18429-15072/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4
	I0318 04:19:20.811718   15483 preload.go:248] saving checksum for preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4 ...
	I0318 04:19:20.811902   15483 preload.go:255] verifying checksum of /Users/jenkins/minikube-integration/18429-15072/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4 ...
	I0318 04:19:21.509798   15483 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on docker
	I0318 04:19:21.509998   15483 profile.go:142] Saving config to /Users/jenkins/minikube-integration/18429-15072/.minikube/profiles/download-only-382000/config.json ...
	I0318 04:19:21.510015   15483 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/18429-15072/.minikube/profiles/download-only-382000/config.json: {Name:mk22c27bdb892f0dc2ab4a43abb8a08bd0f554e4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0318 04:19:21.511110   15483 preload.go:132] Checking if preload exists for k8s version v1.20.0 and runtime docker
	I0318 04:19:21.511299   15483 download.go:107] Downloading: https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl.sha256 -> /Users/jenkins/minikube-integration/18429-15072/.minikube/cache/darwin/arm64/v1.20.0/kubectl
	I0318 04:19:22.182258   15483 out.go:169] 
	W0318 04:19:22.187192   15483 out_reason.go:110] Failed to cache kubectl: download failed: https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl.sha256: getter: &{Ctx:context.Background Src:https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl.sha256 Dst:/Users/jenkins/minikube-integration/18429-15072/.minikube/cache/darwin/arm64/v1.20.0/kubectl.download Pwd: Mode:2 Umask:---------- Detectors:[0x1087b7520 0x1087b7520 0x1087b7520 0x1087b7520 0x1087b7520 0x1087b7520 0x1087b7520] Decompressors:map[bz2:0x1400049e600 gz:0x1400049e608 tar:0x1400049e590 tar.bz2:0x1400049e5c0 tar.gz:0x1400049e5d0 tar.xz:0x1400049e5e0 tar.zst:0x1400049e5f0 tbz2:0x1400049e5c0 tgz:0x1400049e5d0 txz:0x1400049e5e0 tzst:0x1400049e5f0 xz:0x1400049e610 zip:0x1400049e620 zst:0x1400049e618] Getters:map[file:0x14000994820 http:0x140000fe2d0 https:0x140000fe320] Dir:false ProgressLis
tener:<nil> Insecure:false DisableSymlinks:false Options:[]}: invalid checksum: Error downloading checksum file: bad response code: 404
	W0318 04:19:22.187225   15483 out_reason.go:110] 
	W0318 04:19:22.195197   15483 out.go:229] ╭───────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                           │
	│    If the above advice does not help, please let us know:                                 │
	│    https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                           │
	│    Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                           │
	╰───────────────────────────────────────────────────────────────────────────────────────────╯
	I0318 04:19:22.199226   15483 out.go:169] 
	
	
	* The control-plane node download-only-382000 host does not exist
	  To start a cluster, run: "minikube start -p download-only-382000"

                                                
                                                
-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.20.0/LogsDuration (0.10s)
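Note: the log above shows the v1.20.0 run failing to cache kubectl because the checksum download for darwin/arm64 returned "bad response code: 404". A rough manual check of the same URL (illustrative only, not part of the test harness) would look like this:

	# Sketch: HEAD-request the checksum file the downloader tried to fetch (URL taken from the log above).
	curl -sIL https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl.sha256
	# A final 404 status here matches the download error recorded by out_reason.go above.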

                                                
                                    
TestDownloadOnly/v1.20.0/DeleteAll (0.24s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-darwin-arm64 delete --all
--- PASS: TestDownloadOnly/v1.20.0/DeleteAll (0.24s)

                                                
                                    
TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds (0.23s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-darwin-arm64 delete -p download-only-382000
--- PASS: TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds (0.23s)

                                                
                                    
TestDownloadOnly/v1.28.4/json-events (22.34s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.4/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-darwin-arm64 start -o=json --download-only -p download-only-509000 --force --alsologtostderr --kubernetes-version=v1.28.4 --container-runtime=docker --driver=qemu2 
aaa_download_only_test.go:81: (dbg) Done: out/minikube-darwin-arm64 start -o=json --download-only -p download-only-509000 --force --alsologtostderr --kubernetes-version=v1.28.4 --container-runtime=docker --driver=qemu2 : (22.338343125s)
--- PASS: TestDownloadOnly/v1.28.4/json-events (22.34s)

                                                
                                    
TestDownloadOnly/v1.28.4/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.4/preload-exists
--- PASS: TestDownloadOnly/v1.28.4/preload-exists (0.00s)

                                                
                                    
TestDownloadOnly/v1.28.4/kubectl (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.4/kubectl
--- PASS: TestDownloadOnly/v1.28.4/kubectl (0.00s)

                                                
                                    
TestDownloadOnly/v1.28.4/LogsDuration (0.08s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.4/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-darwin-arm64 logs -p download-only-509000
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-darwin-arm64 logs -p download-only-509000: exit status 85 (81.110958ms)

                                                
                                                
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| start   | -o=json --download-only        | download-only-382000 | jenkins | v1.32.0 | 18 Mar 24 04:18 PDT |                     |
	|         | -p download-only-382000        |                      |         |         |                     |                     |
	|         | --force --alsologtostderr      |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0   |                      |         |         |                     |                     |
	|         | --container-runtime=docker     |                      |         |         |                     |                     |
	|         | --driver=qemu2                 |                      |         |         |                     |                     |
	| delete  | --all                          | minikube             | jenkins | v1.32.0 | 18 Mar 24 04:19 PDT | 18 Mar 24 04:19 PDT |
	| delete  | -p download-only-382000        | download-only-382000 | jenkins | v1.32.0 | 18 Mar 24 04:19 PDT | 18 Mar 24 04:19 PDT |
	| start   | -o=json --download-only        | download-only-509000 | jenkins | v1.32.0 | 18 Mar 24 04:19 PDT |                     |
	|         | -p download-only-509000        |                      |         |         |                     |                     |
	|         | --force --alsologtostderr      |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.4   |                      |         |         |                     |                     |
	|         | --container-runtime=docker     |                      |         |         |                     |                     |
	|         | --driver=qemu2                 |                      |         |         |                     |                     |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/03/18 04:19:22
	Running on machine: MacOS-M1-Agent-2
	Binary: Built with gc go1.22.1 for darwin/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0318 04:19:22.872883   15519 out.go:291] Setting OutFile to fd 1 ...
	I0318 04:19:22.873024   15519 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0318 04:19:22.873027   15519 out.go:304] Setting ErrFile to fd 2...
	I0318 04:19:22.873029   15519 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0318 04:19:22.873152   15519 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18429-15072/.minikube/bin
	I0318 04:19:22.874262   15519 out.go:298] Setting JSON to true
	I0318 04:19:22.890396   15519 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":8335,"bootTime":1710752427,"procs":471,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.3.1","kernelVersion":"23.3.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0318 04:19:22.890490   15519 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0318 04:19:22.894572   15519 out.go:97] [download-only-509000] minikube v1.32.0 on Darwin 14.3.1 (arm64)
	I0318 04:19:22.898454   15519 out.go:169] MINIKUBE_LOCATION=18429
	I0318 04:19:22.894640   15519 notify.go:220] Checking for updates...
	I0318 04:19:22.905496   15519 out.go:169] KUBECONFIG=/Users/jenkins/minikube-integration/18429-15072/kubeconfig
	I0318 04:19:22.908469   15519 out.go:169] MINIKUBE_BIN=out/minikube-darwin-arm64
	I0318 04:19:22.911501   15519 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0318 04:19:22.914546   15519 out.go:169] MINIKUBE_HOME=/Users/jenkins/minikube-integration/18429-15072/.minikube
	W0318 04:19:22.920500   15519 out.go:267] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0318 04:19:22.920716   15519 driver.go:392] Setting default libvirt URI to qemu:///system
	I0318 04:19:22.923479   15519 out.go:97] Using the qemu2 driver based on user configuration
	I0318 04:19:22.923488   15519 start.go:297] selected driver: qemu2
	I0318 04:19:22.923492   15519 start.go:901] validating driver "qemu2" against <nil>
	I0318 04:19:22.923553   15519 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0318 04:19:22.926497   15519 out.go:169] Automatically selected the socket_vmnet network
	I0318 04:19:22.931549   15519 start_flags.go:393] Using suggested 4000MB memory alloc based on sys=16384MB, container=0MB
	I0318 04:19:22.931643   15519 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0318 04:19:22.931683   15519 cni.go:84] Creating CNI manager for ""
	I0318 04:19:22.931691   15519 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0318 04:19:22.931696   15519 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0318 04:19:22.931739   15519 start.go:340] cluster config:
	{Name:download-only-509000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:4000 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:download-only-509000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local Con
tainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAut
hSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0318 04:19:22.935886   15519 iso.go:125] acquiring lock: {Name:mkb8143674083e0c7a46a3ed751b3800392bcd24 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0318 04:19:22.938479   15519 out.go:97] Starting "download-only-509000" primary control-plane node in "download-only-509000" cluster
	I0318 04:19:22.938487   15519 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime docker
	I0318 04:19:23.596595   15519 preload.go:119] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.28.4/preloaded-images-k8s-v18-v1.28.4-docker-overlay2-arm64.tar.lz4
	I0318 04:19:23.596665   15519 cache.go:56] Caching tarball of preloaded images
	I0318 04:19:23.597366   15519 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime docker
	I0318 04:19:23.602902   15519 out.go:97] Downloading Kubernetes v1.28.4 preload ...
	I0318 04:19:23.602934   15519 preload.go:237] getting checksum for preloaded-images-k8s-v18-v1.28.4-docker-overlay2-arm64.tar.lz4 ...
	I0318 04:19:24.192080   15519 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.28.4/preloaded-images-k8s-v18-v1.28.4-docker-overlay2-arm64.tar.lz4?checksum=md5:6fb922d1d9dc01a9d3c0b965ed219613 -> /Users/jenkins/minikube-integration/18429-15072/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-docker-overlay2-arm64.tar.lz4
	I0318 04:19:42.544714   15519 preload.go:248] saving checksum for preloaded-images-k8s-v18-v1.28.4-docker-overlay2-arm64.tar.lz4 ...
	I0318 04:19:42.544878   15519 preload.go:255] verifying checksum of /Users/jenkins/minikube-integration/18429-15072/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-docker-overlay2-arm64.tar.lz4 ...
	
	
	* The control-plane node download-only-509000 host does not exist
	  To start a cluster, run: "minikube start -p download-only-509000"

                                                
                                                
-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.28.4/LogsDuration (0.08s)

                                                
                                    
TestDownloadOnly/v1.28.4/DeleteAll (0.23s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.4/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-darwin-arm64 delete --all
--- PASS: TestDownloadOnly/v1.28.4/DeleteAll (0.23s)

                                                
                                    
TestDownloadOnly/v1.28.4/DeleteAlwaysSucceeds (0.23s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.4/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-darwin-arm64 delete -p download-only-509000
--- PASS: TestDownloadOnly/v1.28.4/DeleteAlwaysSucceeds (0.23s)

                                                
                                    
TestDownloadOnly/v1.29.0-rc.2/json-events (20.57s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.29.0-rc.2/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-darwin-arm64 start -o=json --download-only -p download-only-180000 --force --alsologtostderr --kubernetes-version=v1.29.0-rc.2 --container-runtime=docker --driver=qemu2 
aaa_download_only_test.go:81: (dbg) Done: out/minikube-darwin-arm64 start -o=json --download-only -p download-only-180000 --force --alsologtostderr --kubernetes-version=v1.29.0-rc.2 --container-runtime=docker --driver=qemu2 : (20.568126792s)
--- PASS: TestDownloadOnly/v1.29.0-rc.2/json-events (20.57s)

                                                
                                    
TestDownloadOnly/v1.29.0-rc.2/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.29.0-rc.2/preload-exists
--- PASS: TestDownloadOnly/v1.29.0-rc.2/preload-exists (0.00s)

                                                
                                    
TestDownloadOnly/v1.29.0-rc.2/kubectl (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.29.0-rc.2/kubectl
--- PASS: TestDownloadOnly/v1.29.0-rc.2/kubectl (0.00s)

                                                
                                    
TestDownloadOnly/v1.29.0-rc.2/LogsDuration (0.08s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.29.0-rc.2/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-darwin-arm64 logs -p download-only-180000
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-darwin-arm64 logs -p download-only-180000: exit status 85 (79.631875ms)

                                                
                                                
-- stdout --
	
	==> Audit <==
	|---------|-----------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| Command |               Args                |       Profile        |  User   | Version |     Start Time      |      End Time       |
	|---------|-----------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| start   | -o=json --download-only           | download-only-382000 | jenkins | v1.32.0 | 18 Mar 24 04:18 PDT |                     |
	|         | -p download-only-382000           |                      |         |         |                     |                     |
	|         | --force --alsologtostderr         |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0      |                      |         |         |                     |                     |
	|         | --container-runtime=docker        |                      |         |         |                     |                     |
	|         | --driver=qemu2                    |                      |         |         |                     |                     |
	| delete  | --all                             | minikube             | jenkins | v1.32.0 | 18 Mar 24 04:19 PDT | 18 Mar 24 04:19 PDT |
	| delete  | -p download-only-382000           | download-only-382000 | jenkins | v1.32.0 | 18 Mar 24 04:19 PDT | 18 Mar 24 04:19 PDT |
	| start   | -o=json --download-only           | download-only-509000 | jenkins | v1.32.0 | 18 Mar 24 04:19 PDT |                     |
	|         | -p download-only-509000           |                      |         |         |                     |                     |
	|         | --force --alsologtostderr         |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.4      |                      |         |         |                     |                     |
	|         | --container-runtime=docker        |                      |         |         |                     |                     |
	|         | --driver=qemu2                    |                      |         |         |                     |                     |
	| delete  | --all                             | minikube             | jenkins | v1.32.0 | 18 Mar 24 04:19 PDT | 18 Mar 24 04:19 PDT |
	| delete  | -p download-only-509000           | download-only-509000 | jenkins | v1.32.0 | 18 Mar 24 04:19 PDT | 18 Mar 24 04:19 PDT |
	| start   | -o=json --download-only           | download-only-180000 | jenkins | v1.32.0 | 18 Mar 24 04:19 PDT |                     |
	|         | -p download-only-180000           |                      |         |         |                     |                     |
	|         | --force --alsologtostderr         |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.29.0-rc.2 |                      |         |         |                     |                     |
	|         | --container-runtime=docker        |                      |         |         |                     |                     |
	|         | --driver=qemu2                    |                      |         |         |                     |                     |
	|---------|-----------------------------------|----------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/03/18 04:19:45
	Running on machine: MacOS-M1-Agent-2
	Binary: Built with gc go1.22.1 for darwin/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0318 04:19:45.747883   15554 out.go:291] Setting OutFile to fd 1 ...
	I0318 04:19:45.748003   15554 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0318 04:19:45.748007   15554 out.go:304] Setting ErrFile to fd 2...
	I0318 04:19:45.748009   15554 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0318 04:19:45.748148   15554 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18429-15072/.minikube/bin
	I0318 04:19:45.749221   15554 out.go:298] Setting JSON to true
	I0318 04:19:45.765487   15554 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":8358,"bootTime":1710752427,"procs":470,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.3.1","kernelVersion":"23.3.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0318 04:19:45.765548   15554 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0318 04:19:45.770291   15554 out.go:97] [download-only-180000] minikube v1.32.0 on Darwin 14.3.1 (arm64)
	I0318 04:19:45.773124   15554 out.go:169] MINIKUBE_LOCATION=18429
	I0318 04:19:45.770380   15554 notify.go:220] Checking for updates...
	I0318 04:19:45.780115   15554 out.go:169] KUBECONFIG=/Users/jenkins/minikube-integration/18429-15072/kubeconfig
	I0318 04:19:45.783186   15554 out.go:169] MINIKUBE_BIN=out/minikube-darwin-arm64
	I0318 04:19:45.786132   15554 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0318 04:19:45.789185   15554 out.go:169] MINIKUBE_HOME=/Users/jenkins/minikube-integration/18429-15072/.minikube
	W0318 04:19:45.795183   15554 out.go:267] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0318 04:19:45.795393   15554 driver.go:392] Setting default libvirt URI to qemu:///system
	I0318 04:19:45.799152   15554 out.go:97] Using the qemu2 driver based on user configuration
	I0318 04:19:45.799160   15554 start.go:297] selected driver: qemu2
	I0318 04:19:45.799164   15554 start.go:901] validating driver "qemu2" against <nil>
	I0318 04:19:45.799221   15554 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0318 04:19:45.800680   15554 out.go:169] Automatically selected the socket_vmnet network
	I0318 04:19:45.806229   15554 start_flags.go:393] Using suggested 4000MB memory alloc based on sys=16384MB, container=0MB
	I0318 04:19:45.806320   15554 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0318 04:19:45.806358   15554 cni.go:84] Creating CNI manager for ""
	I0318 04:19:45.806366   15554 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0318 04:19:45.806376   15554 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0318 04:19:45.806414   15554 start.go:340] cluster config:
	{Name:download-only-180000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:4000 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.0-rc.2 ClusterName:download-only-180000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.loca
l ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.29.0-rc.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet Static
IP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0318 04:19:45.810582   15554 iso.go:125] acquiring lock: {Name:mkb8143674083e0c7a46a3ed751b3800392bcd24 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0318 04:19:45.813205   15554 out.go:97] Starting "download-only-180000" primary control-plane node in "download-only-180000" cluster
	I0318 04:19:45.813211   15554 preload.go:132] Checking if preload exists for k8s version v1.29.0-rc.2 and runtime docker
	I0318 04:19:46.938091   15554 preload.go:119] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.29.0-rc.2/preloaded-images-k8s-v18-v1.29.0-rc.2-docker-overlay2-arm64.tar.lz4
	I0318 04:19:46.938163   15554 cache.go:56] Caching tarball of preloaded images
	I0318 04:19:46.938908   15554 preload.go:132] Checking if preload exists for k8s version v1.29.0-rc.2 and runtime docker
	I0318 04:19:46.943601   15554 out.go:97] Downloading Kubernetes v1.29.0-rc.2 preload ...
	I0318 04:19:46.943635   15554 preload.go:237] getting checksum for preloaded-images-k8s-v18-v1.29.0-rc.2-docker-overlay2-arm64.tar.lz4 ...
	I0318 04:19:47.538748   15554 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.29.0-rc.2/preloaded-images-k8s-v18-v1.29.0-rc.2-docker-overlay2-arm64.tar.lz4?checksum=md5:ec278d0a65e5e64ee0e67f51e14b1867 -> /Users/jenkins/minikube-integration/18429-15072/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.29.0-rc.2-docker-overlay2-arm64.tar.lz4
	I0318 04:20:04.540138   15554 preload.go:248] saving checksum for preloaded-images-k8s-v18-v1.29.0-rc.2-docker-overlay2-arm64.tar.lz4 ...
	I0318 04:20:04.540293   15554 preload.go:255] verifying checksum of /Users/jenkins/minikube-integration/18429-15072/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.29.0-rc.2-docker-overlay2-arm64.tar.lz4 ...
	I0318 04:20:05.095078   15554 cache.go:59] Finished verifying existence of preloaded tar for v1.29.0-rc.2 on docker
	I0318 04:20:05.095265   15554 profile.go:142] Saving config to /Users/jenkins/minikube-integration/18429-15072/.minikube/profiles/download-only-180000/config.json ...
	I0318 04:20:05.095283   15554 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/18429-15072/.minikube/profiles/download-only-180000/config.json: {Name:mkd6dc468539eb8857b6e5d959466c237becb104 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0318 04:20:05.095532   15554 preload.go:132] Checking if preload exists for k8s version v1.29.0-rc.2 and runtime docker
	I0318 04:20:05.095657   15554 download.go:107] Downloading: https://dl.k8s.io/release/v1.29.0-rc.2/bin/darwin/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.29.0-rc.2/bin/darwin/arm64/kubectl.sha256 -> /Users/jenkins/minikube-integration/18429-15072/.minikube/cache/darwin/arm64/v1.29.0-rc.2/kubectl
	
	
	* The control-plane node download-only-180000 host does not exist
	  To start a cluster, run: "minikube start -p download-only-180000"

                                                
                                                
-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.29.0-rc.2/LogsDuration (0.08s)

                                                
                                    
TestDownloadOnly/v1.29.0-rc.2/DeleteAll (0.23s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.29.0-rc.2/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-darwin-arm64 delete --all
--- PASS: TestDownloadOnly/v1.29.0-rc.2/DeleteAll (0.23s)

                                                
                                    
TestDownloadOnly/v1.29.0-rc.2/DeleteAlwaysSucceeds (0.23s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.29.0-rc.2/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-darwin-arm64 delete -p download-only-180000
--- PASS: TestDownloadOnly/v1.29.0-rc.2/DeleteAlwaysSucceeds (0.23s)

                                                
                                    
TestBinaryMirror (0.34s)

                                                
                                                
=== RUN   TestBinaryMirror
aaa_download_only_test.go:314: (dbg) Run:  out/minikube-darwin-arm64 start --download-only -p binary-mirror-986000 --alsologtostderr --binary-mirror http://127.0.0.1:53083 --driver=qemu2 
helpers_test.go:175: Cleaning up "binary-mirror-986000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p binary-mirror-986000
--- PASS: TestBinaryMirror (0.34s)

                                                
                                    
TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.06s)

                                                
                                                
=== RUN   TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/EnablingAddonOnNonExistingCluster

                                                
                                                

                                                
                                                
=== CONT  TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
addons_test.go:928: (dbg) Run:  out/minikube-darwin-arm64 addons enable dashboard -p addons-118000
addons_test.go:928: (dbg) Non-zero exit: out/minikube-darwin-arm64 addons enable dashboard -p addons-118000: exit status 85 (60.705125ms)

                                                
                                                
-- stdout --
	* Profile "addons-118000" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-118000"

                                                
                                                
-- /stdout --
--- PASS: TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.06s)
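The exit status 85 above simply reflects that no "addons-118000" profile exists at this point; the follow-up suggested by the message can be run directly (illustrative command only):

	# Sketch: confirm the profile is absent, as the error message above suggests.
	out/minikube-darwin-arm64 profile list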

                                                
                                    
TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.06s)

                                                
                                                
=== RUN   TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/DisablingAddonOnNonExistingCluster

                                                
                                                

                                                
                                                
=== CONT  TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
addons_test.go:939: (dbg) Run:  out/minikube-darwin-arm64 addons disable dashboard -p addons-118000
addons_test.go:939: (dbg) Non-zero exit: out/minikube-darwin-arm64 addons disable dashboard -p addons-118000: exit status 85 (64.409083ms)

                                                
                                                
-- stdout --
	* Profile "addons-118000" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-118000"

                                                
                                                
-- /stdout --
--- PASS: TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.06s)

                                                
                                    
TestHyperKitDriverInstallOrUpdate (9s)

                                                
                                                
=== RUN   TestHyperKitDriverInstallOrUpdate
=== PAUSE TestHyperKitDriverInstallOrUpdate

                                                
                                                

                                                
                                                
=== CONT  TestHyperKitDriverInstallOrUpdate
--- PASS: TestHyperKitDriverInstallOrUpdate (9.00s)

                                                
                                    
TestErrorSpam/start (0.4s)

                                                
                                                
=== RUN   TestErrorSpam/start
error_spam_test.go:216: Cleaning up 1 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-742000 --log_dir /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-742000 start --dry-run
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-742000 --log_dir /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-742000 start --dry-run
error_spam_test.go:182: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-742000 --log_dir /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-742000 start --dry-run
--- PASS: TestErrorSpam/start (0.40s)

                                                
                                    
TestErrorSpam/status (0.1s)

                                                
                                                
=== RUN   TestErrorSpam/status
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-742000 --log_dir /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-742000 status
error_spam_test.go:159: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p nospam-742000 --log_dir /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-742000 status: exit status 7 (34.834458ms)

                                                
                                                
-- stdout --
	nospam-742000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
error_spam_test.go:161: "out/minikube-darwin-arm64 -p nospam-742000 --log_dir /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-742000 status" failed: exit status 7
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-742000 --log_dir /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-742000 status
error_spam_test.go:159: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p nospam-742000 --log_dir /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-742000 status: exit status 7 (31.772ms)

                                                
                                                
-- stdout --
	nospam-742000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
error_spam_test.go:161: "out/minikube-darwin-arm64 -p nospam-742000 --log_dir /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-742000 status" failed: exit status 7
error_spam_test.go:182: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-742000 --log_dir /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-742000 status
error_spam_test.go:182: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p nospam-742000 --log_dir /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-742000 status: exit status 7 (31.351708ms)

                                                
                                                
-- stdout --
	nospam-742000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
error_spam_test.go:184: "out/minikube-darwin-arm64 -p nospam-742000 --log_dir /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-742000 status" failed: exit status 7
--- PASS: TestErrorSpam/status (0.10s)
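For context, each status call above exits with status 7 while the nospam-742000 host is in the Stopped state shown in the output. A minimal manual re-run, assuming the same profile and a placeholder log directory, would be:

	# Sketch: repeat the status check and surface the non-zero exit code (NOSPAM_LOG_DIR is a placeholder).
	out/minikube-darwin-arm64 -p nospam-742000 --log_dir "$NOSPAM_LOG_DIR" status
	echo "status exit code: $?"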

                                                
                                    
TestErrorSpam/pause (0.13s)

                                                
                                                
=== RUN   TestErrorSpam/pause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-742000 --log_dir /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-742000 pause
error_spam_test.go:159: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p nospam-742000 --log_dir /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-742000 pause: exit status 83 (40.949792ms)

                                                
                                                
-- stdout --
	* The control-plane node nospam-742000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p nospam-742000"

                                                
                                                
-- /stdout --
error_spam_test.go:161: "out/minikube-darwin-arm64 -p nospam-742000 --log_dir /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-742000 pause" failed: exit status 83
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-742000 --log_dir /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-742000 pause
error_spam_test.go:159: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p nospam-742000 --log_dir /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-742000 pause: exit status 83 (42.04825ms)

                                                
                                                
-- stdout --
	* The control-plane node nospam-742000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p nospam-742000"

                                                
                                                
-- /stdout --
error_spam_test.go:161: "out/minikube-darwin-arm64 -p nospam-742000 --log_dir /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-742000 pause" failed: exit status 83
error_spam_test.go:182: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-742000 --log_dir /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-742000 pause
error_spam_test.go:182: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p nospam-742000 --log_dir /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-742000 pause: exit status 83 (41.582958ms)

                                                
                                                
-- stdout --
	* The control-plane node nospam-742000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p nospam-742000"

                                                
                                                
-- /stdout --
error_spam_test.go:184: "out/minikube-darwin-arm64 -p nospam-742000 --log_dir /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-742000 pause" failed: exit status 83
--- PASS: TestErrorSpam/pause (0.13s)

                                                
                                    
TestErrorSpam/unpause (0.12s)

                                                
                                                
=== RUN   TestErrorSpam/unpause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-742000 --log_dir /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-742000 unpause
error_spam_test.go:159: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p nospam-742000 --log_dir /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-742000 unpause: exit status 83 (41.931542ms)

                                                
                                                
-- stdout --
	* The control-plane node nospam-742000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p nospam-742000"

                                                
                                                
-- /stdout --
error_spam_test.go:161: "out/minikube-darwin-arm64 -p nospam-742000 --log_dir /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-742000 unpause" failed: exit status 83
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-742000 --log_dir /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-742000 unpause
error_spam_test.go:159: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p nospam-742000 --log_dir /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-742000 unpause: exit status 83 (40.86425ms)

                                                
                                                
-- stdout --
	* The control-plane node nospam-742000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p nospam-742000"

                                                
                                                
-- /stdout --
error_spam_test.go:161: "out/minikube-darwin-arm64 -p nospam-742000 --log_dir /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-742000 unpause" failed: exit status 83
error_spam_test.go:182: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-742000 --log_dir /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-742000 unpause
error_spam_test.go:182: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p nospam-742000 --log_dir /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-742000 unpause: exit status 83 (40.710458ms)

                                                
                                                
-- stdout --
	* The control-plane node nospam-742000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p nospam-742000"

                                                
                                                
-- /stdout --
error_spam_test.go:184: "out/minikube-darwin-arm64 -p nospam-742000 --log_dir /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-742000 unpause" failed: exit status 83
--- PASS: TestErrorSpam/unpause (0.12s)

                                                
                                    
TestErrorSpam/stop (8.66s)

                                                
                                                
=== RUN   TestErrorSpam/stop
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-742000 --log_dir /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-742000 stop
error_spam_test.go:159: (dbg) Done: out/minikube-darwin-arm64 -p nospam-742000 --log_dir /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-742000 stop: (2.110002541s)
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-742000 --log_dir /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-742000 stop
error_spam_test.go:159: (dbg) Done: out/minikube-darwin-arm64 -p nospam-742000 --log_dir /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-742000 stop: (3.169560417s)
error_spam_test.go:182: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-742000 --log_dir /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-742000 stop
error_spam_test.go:182: (dbg) Done: out/minikube-darwin-arm64 -p nospam-742000 --log_dir /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-742000 stop: (3.380633041s)
--- PASS: TestErrorSpam/stop (8.66s)

                                                
                                    
TestFunctional/serial/CopySyncFile (0s)

                                                
                                                
=== RUN   TestFunctional/serial/CopySyncFile
functional_test.go:1851: local sync path: /Users/jenkins/minikube-integration/18429-15072/.minikube/files/etc/test/nested/copy/15481/hosts
--- PASS: TestFunctional/serial/CopySyncFile (0.00s)

TestFunctional/serial/AuditLog (0s)

=== RUN   TestFunctional/serial/AuditLog
--- PASS: TestFunctional/serial/AuditLog (0.00s)

TestFunctional/serial/CacheCmd/cache/add_remote (6.05s)

=== RUN   TestFunctional/serial/CacheCmd/cache/add_remote
functional_test.go:1045: (dbg) Run:  out/minikube-darwin-arm64 -p functional-900000 cache add registry.k8s.io/pause:3.1
functional_test.go:1045: (dbg) Done: out/minikube-darwin-arm64 -p functional-900000 cache add registry.k8s.io/pause:3.1: (2.11710825s)
functional_test.go:1045: (dbg) Run:  out/minikube-darwin-arm64 -p functional-900000 cache add registry.k8s.io/pause:3.3
functional_test.go:1045: (dbg) Done: out/minikube-darwin-arm64 -p functional-900000 cache add registry.k8s.io/pause:3.3: (2.143008584s)
functional_test.go:1045: (dbg) Run:  out/minikube-darwin-arm64 -p functional-900000 cache add registry.k8s.io/pause:latest
functional_test.go:1045: (dbg) Done: out/minikube-darwin-arm64 -p functional-900000 cache add registry.k8s.io/pause:latest: (1.787283125s)
--- PASS: TestFunctional/serial/CacheCmd/cache/add_remote (6.05s)

TestFunctional/serial/CacheCmd/cache/add_local (1.16s)

=== RUN   TestFunctional/serial/CacheCmd/cache/add_local
functional_test.go:1073: (dbg) Run:  docker build -t minikube-local-cache-test:functional-900000 /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestFunctionalserialCacheCmdcacheadd_local2140412226/001
functional_test.go:1085: (dbg) Run:  out/minikube-darwin-arm64 -p functional-900000 cache add minikube-local-cache-test:functional-900000
functional_test.go:1090: (dbg) Run:  out/minikube-darwin-arm64 -p functional-900000 cache delete minikube-local-cache-test:functional-900000
functional_test.go:1079: (dbg) Run:  docker rmi minikube-local-cache-test:functional-900000
--- PASS: TestFunctional/serial/CacheCmd/cache/add_local (1.16s)

TestFunctional/serial/CacheCmd/cache/CacheDelete (0.04s)

=== RUN   TestFunctional/serial/CacheCmd/cache/CacheDelete
functional_test.go:1098: (dbg) Run:  out/minikube-darwin-arm64 cache delete registry.k8s.io/pause:3.3
--- PASS: TestFunctional/serial/CacheCmd/cache/CacheDelete (0.04s)

TestFunctional/serial/CacheCmd/cache/list (0.04s)

=== RUN   TestFunctional/serial/CacheCmd/cache/list
functional_test.go:1106: (dbg) Run:  out/minikube-darwin-arm64 cache list
--- PASS: TestFunctional/serial/CacheCmd/cache/list (0.04s)

TestFunctional/serial/CacheCmd/cache/delete (0.07s)

=== RUN   TestFunctional/serial/CacheCmd/cache/delete
functional_test.go:1168: (dbg) Run:  out/minikube-darwin-arm64 cache delete registry.k8s.io/pause:3.1
functional_test.go:1168: (dbg) Run:  out/minikube-darwin-arm64 cache delete registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/delete (0.07s)

TestFunctional/parallel/ConfigCmd (0.23s)

=== RUN   TestFunctional/parallel/ConfigCmd
=== PAUSE TestFunctional/parallel/ConfigCmd

=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:1195: (dbg) Run:  out/minikube-darwin-arm64 -p functional-900000 config unset cpus
functional_test.go:1195: (dbg) Run:  out/minikube-darwin-arm64 -p functional-900000 config get cpus
functional_test.go:1195: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-900000 config get cpus: exit status 14 (34.294625ms)

** stderr ** 
	Error: specified key could not be found in config

** /stderr **
functional_test.go:1195: (dbg) Run:  out/minikube-darwin-arm64 -p functional-900000 config set cpus 2
functional_test.go:1195: (dbg) Run:  out/minikube-darwin-arm64 -p functional-900000 config get cpus
functional_test.go:1195: (dbg) Run:  out/minikube-darwin-arm64 -p functional-900000 config unset cpus
functional_test.go:1195: (dbg) Run:  out/minikube-darwin-arm64 -p functional-900000 config get cpus
functional_test.go:1195: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-900000 config get cpus: exit status 14 (32.625375ms)

** stderr ** 
	Error: specified key could not be found in config

** /stderr **
--- PASS: TestFunctional/parallel/ConfigCmd (0.23s)

TestFunctional/parallel/DryRun (0.27s)

=== RUN   TestFunctional/parallel/DryRun
=== PAUSE TestFunctional/parallel/DryRun

=== CONT  TestFunctional/parallel/DryRun
functional_test.go:970: (dbg) Run:  out/minikube-darwin-arm64 start -p functional-900000 --dry-run --memory 250MB --alsologtostderr --driver=qemu2 
functional_test.go:970: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p functional-900000 --dry-run --memory 250MB --alsologtostderr --driver=qemu2 : exit status 23 (156.694417ms)

-- stdout --
	* [functional-900000] minikube v1.32.0 on Darwin 14.3.1 (arm64)
	  - MINIKUBE_LOCATION=18429
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18429-15072/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18429-15072/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	
	

-- /stdout --
** stderr ** 
	I0318 04:22:00.241098   16176 out.go:291] Setting OutFile to fd 1 ...
	I0318 04:22:00.241241   16176 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0318 04:22:00.241245   16176 out.go:304] Setting ErrFile to fd 2...
	I0318 04:22:00.241248   16176 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0318 04:22:00.241417   16176 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18429-15072/.minikube/bin
	I0318 04:22:00.242603   16176 out.go:298] Setting JSON to false
	I0318 04:22:00.261466   16176 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":8493,"bootTime":1710752427,"procs":476,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.3.1","kernelVersion":"23.3.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0318 04:22:00.261529   16176 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0318 04:22:00.265583   16176 out.go:177] * [functional-900000] minikube v1.32.0 on Darwin 14.3.1 (arm64)
	I0318 04:22:00.271060   16176 out.go:177]   - MINIKUBE_LOCATION=18429
	I0318 04:22:00.275539   16176 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/18429-15072/kubeconfig
	I0318 04:22:00.271102   16176 notify.go:220] Checking for updates...
	I0318 04:22:00.278547   16176 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0318 04:22:00.281458   16176 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0318 04:22:00.284522   16176 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18429-15072/.minikube
	I0318 04:22:00.287567   16176 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0318 04:22:00.289312   16176 config.go:182] Loaded profile config "functional-900000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I0318 04:22:00.289586   16176 driver.go:392] Setting default libvirt URI to qemu:///system
	I0318 04:22:00.293563   16176 out.go:177] * Using the qemu2 driver based on existing profile
	I0318 04:22:00.300412   16176 start.go:297] selected driver: qemu2
	I0318 04:22:00.300418   16176 start.go:901] validating driver "qemu2" against &{Name:functional-900000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17991/minikube-v1.32.1-1710520390-17991-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:4000 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kub
ernetesVersion:v1.28.4 ClusterName:functional-900000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8441 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mo
unt9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0318 04:22:00.300500   16176 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0318 04:22:00.307538   16176 out.go:177] 
	W0318 04:22:00.311600   16176 out.go:239] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I0318 04:22:00.315519   16176 out.go:177] 

** /stderr **
functional_test.go:987: (dbg) Run:  out/minikube-darwin-arm64 start -p functional-900000 --dry-run --alsologtostderr -v=1 --driver=qemu2 
--- PASS: TestFunctional/parallel/DryRun (0.27s)

TestFunctional/parallel/InternationalLanguage (0.12s)

=== RUN   TestFunctional/parallel/InternationalLanguage
=== PAUSE TestFunctional/parallel/InternationalLanguage

=== CONT  TestFunctional/parallel/InternationalLanguage
functional_test.go:1016: (dbg) Run:  out/minikube-darwin-arm64 start -p functional-900000 --dry-run --memory 250MB --alsologtostderr --driver=qemu2 
functional_test.go:1016: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p functional-900000 --dry-run --memory 250MB --alsologtostderr --driver=qemu2 : exit status 23 (117.973917ms)

-- stdout --
	* [functional-900000] minikube v1.32.0 sur Darwin 14.3.1 (arm64)
	  - MINIKUBE_LOCATION=18429
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18429-15072/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18429-15072/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Utilisation du pilote qemu2 basé sur le profil existant
	
	

-- /stdout --
** stderr ** 
	I0318 04:22:00.466766   16187 out.go:291] Setting OutFile to fd 1 ...
	I0318 04:22:00.466872   16187 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0318 04:22:00.466875   16187 out.go:304] Setting ErrFile to fd 2...
	I0318 04:22:00.466877   16187 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0318 04:22:00.467007   16187 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18429-15072/.minikube/bin
	I0318 04:22:00.468411   16187 out.go:298] Setting JSON to false
	I0318 04:22:00.485148   16187 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":8493,"bootTime":1710752427,"procs":476,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.3.1","kernelVersion":"23.3.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0318 04:22:00.485222   16187 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0318 04:22:00.489437   16187 out.go:177] * [functional-900000] minikube v1.32.0 sur Darwin 14.3.1 (arm64)
	I0318 04:22:00.497562   16187 out.go:177]   - MINIKUBE_LOCATION=18429
	I0318 04:22:00.497671   16187 notify.go:220] Checking for updates...
	I0318 04:22:00.501561   16187 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/18429-15072/kubeconfig
	I0318 04:22:00.505519   16187 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0318 04:22:00.512484   16187 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0318 04:22:00.516532   16187 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18429-15072/.minikube
	I0318 04:22:00.519586   16187 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0318 04:22:00.522740   16187 config.go:182] Loaded profile config "functional-900000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.28.4
	I0318 04:22:00.523000   16187 driver.go:392] Setting default libvirt URI to qemu:///system
	I0318 04:22:00.527557   16187 out.go:177] * Utilisation du pilote qemu2 basé sur le profil existant
	I0318 04:22:00.534527   16187 start.go:297] selected driver: qemu2
	I0318 04:22:00.534534   16187 start.go:901] validating driver "qemu2" against &{Name:functional-900000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17991/minikube-v1.32.1-1710520390-17991-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:4000 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kub
ernetesVersion:v1.28.4 ClusterName:functional-900000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8441 KubernetesVersion:v1.28.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mo
unt9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0318 04:22:00.534590   16187 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0318 04:22:00.541551   16187 out.go:177] 
	W0318 04:22:00.545550   16187 out.go:239] X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	I0318 04:22:00.548549   16187 out.go:177] 

** /stderr **
--- PASS: TestFunctional/parallel/InternationalLanguage (0.12s)

TestFunctional/parallel/AddonsCmd (0.12s)

=== RUN   TestFunctional/parallel/AddonsCmd
=== PAUSE TestFunctional/parallel/AddonsCmd

=== CONT  TestFunctional/parallel/AddonsCmd
functional_test.go:1686: (dbg) Run:  out/minikube-darwin-arm64 -p functional-900000 addons list
functional_test.go:1698: (dbg) Run:  out/minikube-darwin-arm64 -p functional-900000 addons list -o json
--- PASS: TestFunctional/parallel/AddonsCmd (0.12s)

TestFunctional/parallel/License (1.44s)

=== RUN   TestFunctional/parallel/License
=== PAUSE TestFunctional/parallel/License

=== CONT  TestFunctional/parallel/License
functional_test.go:2284: (dbg) Run:  out/minikube-darwin-arm64 license
functional_test.go:2284: (dbg) Done: out/minikube-darwin-arm64 license: (1.439629834s)
--- PASS: TestFunctional/parallel/License (1.44s)

TestFunctional/parallel/Version/short (0.04s)

=== RUN   TestFunctional/parallel/Version/short
=== PAUSE TestFunctional/parallel/Version/short

=== CONT  TestFunctional/parallel/Version/short
functional_test.go:2252: (dbg) Run:  out/minikube-darwin-arm64 -p functional-900000 version --short
--- PASS: TestFunctional/parallel/Version/short (0.04s)

TestFunctional/parallel/ImageCommands/Setup (5.59s)

=== RUN   TestFunctional/parallel/ImageCommands/Setup
functional_test.go:341: (dbg) Run:  docker pull gcr.io/google-containers/addon-resizer:1.8.8
functional_test.go:341: (dbg) Done: docker pull gcr.io/google-containers/addon-resizer:1.8.8: (5.547334375s)
functional_test.go:346: (dbg) Run:  docker tag gcr.io/google-containers/addon-resizer:1.8.8 gcr.io/google-containers/addon-resizer:functional-900000
--- PASS: TestFunctional/parallel/ImageCommands/Setup (5.59s)

TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.01s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/StartTunnel
functional_test_tunnel_test.go:129: (dbg) daemon: [out/minikube-darwin-arm64 -p functional-900000 tunnel --alsologtostderr]
--- PASS: TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.01s)

TestFunctional/parallel/ImageCommands/ImageRemove (0.07s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageRemove
functional_test.go:391: (dbg) Run:  out/minikube-darwin-arm64 -p functional-900000 image rm gcr.io/google-containers/addon-resizer:functional-900000 --alsologtostderr
functional_test.go:447: (dbg) Run:  out/minikube-darwin-arm64 -p functional-900000 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageRemove (0.07s)

TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.13s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveDaemon
functional_test.go:418: (dbg) Run:  docker rmi gcr.io/google-containers/addon-resizer:functional-900000
functional_test.go:423: (dbg) Run:  out/minikube-darwin-arm64 -p functional-900000 image save --daemon gcr.io/google-containers/addon-resizer:functional-900000 --alsologtostderr
functional_test.go:428: (dbg) Run:  docker image inspect gcr.io/google-containers/addon-resizer:functional-900000
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.13s)

TestFunctional/parallel/ProfileCmd/profile_not_create (0.14s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_not_create
functional_test.go:1266: (dbg) Run:  out/minikube-darwin-arm64 profile lis
functional_test.go:1271: (dbg) Run:  out/minikube-darwin-arm64 profile list --output json
--- PASS: TestFunctional/parallel/ProfileCmd/profile_not_create (0.14s)

TestFunctional/parallel/ProfileCmd/profile_list (0.11s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_list
functional_test.go:1306: (dbg) Run:  out/minikube-darwin-arm64 profile list
functional_test.go:1311: Took "73.32525ms" to run "out/minikube-darwin-arm64 profile list"
functional_test.go:1320: (dbg) Run:  out/minikube-darwin-arm64 profile list -l
functional_test.go:1325: Took "36.675209ms" to run "out/minikube-darwin-arm64 profile list -l"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_list (0.11s)

TestFunctional/parallel/ProfileCmd/profile_json_output (0.11s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_json_output
functional_test.go:1357: (dbg) Run:  out/minikube-darwin-arm64 profile list -o json
functional_test.go:1362: Took "71.248292ms" to run "out/minikube-darwin-arm64 profile list -o json"
functional_test.go:1370: (dbg) Run:  out/minikube-darwin-arm64 profile list -o json --light
functional_test.go:1375: Took "35.719ms" to run "out/minikube-darwin-arm64 profile list -o json --light"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_json_output (0.11s)

TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (10.04s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil
functional_test_tunnel_test.go:351: (dbg) Run:  dscacheutil -q host -a name nginx-svc.default.svc.cluster.local.
functional_test_tunnel_test.go:351: (dbg) Done: dscacheutil -q host -a name nginx-svc.default.svc.cluster.local.: (10.012671167s)
functional_test_tunnel_test.go:359: DNS resolution by dscacheutil for nginx-svc.default.svc.cluster.local. is working!
--- PASS: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (10.04s)

TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.15s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel
functional_test_tunnel_test.go:434: (dbg) stopping [out/minikube-darwin-arm64 -p functional-900000 tunnel --alsologtostderr] ...
--- PASS: TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.15s)

TestFunctional/delete_addon-resizer_images (0.18s)

=== RUN   TestFunctional/delete_addon-resizer_images
functional_test.go:189: (dbg) Run:  docker rmi -f gcr.io/google-containers/addon-resizer:1.8.8
functional_test.go:189: (dbg) Run:  docker rmi -f gcr.io/google-containers/addon-resizer:functional-900000
--- PASS: TestFunctional/delete_addon-resizer_images (0.18s)

TestFunctional/delete_my-image_image (0.04s)

=== RUN   TestFunctional/delete_my-image_image
functional_test.go:197: (dbg) Run:  docker rmi -f localhost/my-image:functional-900000
--- PASS: TestFunctional/delete_my-image_image (0.04s)

TestFunctional/delete_minikube_cached_images (0.04s)

=== RUN   TestFunctional/delete_minikube_cached_images
functional_test.go:205: (dbg) Run:  docker rmi -f minikube-local-cache-test:functional-900000
--- PASS: TestFunctional/delete_minikube_cached_images (0.04s)

TestJSONOutput/start/Audit (0s)

=== RUN   TestJSONOutput/start/Audit
--- PASS: TestJSONOutput/start/Audit (0.00s)

TestJSONOutput/start/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/start/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/start/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/start/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/start/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/start/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/start/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/pause/Audit (0s)

=== RUN   TestJSONOutput/pause/Audit
--- PASS: TestJSONOutput/pause/Audit (0.00s)

TestJSONOutput/pause/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/pause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/pause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/unpause/Audit (0s)

=== RUN   TestJSONOutput/unpause/Audit
--- PASS: TestJSONOutput/unpause/Audit (0.00s)

TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/unpause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/stop/Command (3.53s)

=== RUN   TestJSONOutput/stop/Command
json_output_test.go:63: (dbg) Run:  out/minikube-darwin-arm64 stop -p json-output-510000 --output=json --user=testUser
json_output_test.go:63: (dbg) Done: out/minikube-darwin-arm64 stop -p json-output-510000 --output=json --user=testUser: (3.52696875s)
--- PASS: TestJSONOutput/stop/Command (3.53s)

TestJSONOutput/stop/Audit (0s)

=== RUN   TestJSONOutput/stop/Audit
--- PASS: TestJSONOutput/stop/Audit (0.00s)

TestJSONOutput/stop/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/stop/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/stop/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0.00s)

TestErrorJSONOutput (0.32s)

=== RUN   TestErrorJSONOutput
json_output_test.go:160: (dbg) Run:  out/minikube-darwin-arm64 start -p json-output-error-074000 --memory=2200 --output=json --wait=true --driver=fail
json_output_test.go:160: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p json-output-error-074000 --memory=2200 --output=json --wait=true --driver=fail: exit status 56 (97.809583ms)

-- stdout --
	{"specversion":"1.0","id":"e797db5f-eaba-4afb-8fd6-171ef75668cc","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[json-output-error-074000] minikube v1.32.0 on Darwin 14.3.1 (arm64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"9a2bdcdd-497a-4ede-a724-bef7f89787b1","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=18429"}}
	{"specversion":"1.0","id":"8da1a733-4fba-43f8-a8c3-7405d8715a4d","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/Users/jenkins/minikube-integration/18429-15072/kubeconfig"}}
	{"specversion":"1.0","id":"e7f36a9c-1894-4aa0-9d1b-93121d8178d1","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-darwin-arm64"}}
	{"specversion":"1.0","id":"487b830e-2cc7-4bc3-9273-7e8cf6690196","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"6e17bbf4-e88b-4b80-8c26-10a2c85a74fc","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/Users/jenkins/minikube-integration/18429-15072/.minikube"}}
	{"specversion":"1.0","id":"18d4ebdf-87b1-4be6-aa52-cc0479160fb0","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"de87a451-18cd-496a-a56a-8d8eeac4a08e","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"56","issues":"","message":"The driver 'fail' is not supported on darwin/arm64","name":"DRV_UNSUPPORTED_OS","url":""}}

-- /stdout --
helpers_test.go:175: Cleaning up "json-output-error-074000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p json-output-error-074000
--- PASS: TestErrorJSONOutput (0.32s)

TestMainNoArgs (0.04s)

=== RUN   TestMainNoArgs
main_test.go:68: (dbg) Run:  out/minikube-darwin-arm64
--- PASS: TestMainNoArgs (0.04s)

TestStoppedBinaryUpgrade/Setup (6.02s)

=== RUN   TestStoppedBinaryUpgrade/Setup
--- PASS: TestStoppedBinaryUpgrade/Setup (6.02s)

TestNoKubernetes/serial/StartNoK8sWithVersion (0.1s)

=== RUN   TestNoKubernetes/serial/StartNoK8sWithVersion
no_kubernetes_test.go:83: (dbg) Run:  out/minikube-darwin-arm64 start -p NoKubernetes-654000 --no-kubernetes --kubernetes-version=1.20 --driver=qemu2 
no_kubernetes_test.go:83: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p NoKubernetes-654000 --no-kubernetes --kubernetes-version=1.20 --driver=qemu2 : exit status 14 (100.620125ms)

-- stdout --
	* [NoKubernetes-654000] minikube v1.32.0 on Darwin 14.3.1 (arm64)
	  - MINIKUBE_LOCATION=18429
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18429-15072/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18429-15072/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

-- /stdout --
** stderr ** 
	X Exiting due to MK_USAGE: cannot specify --kubernetes-version with --no-kubernetes,
	to unset a global config run:
	
	$ minikube config unset kubernetes-version

** /stderr **
--- PASS: TestNoKubernetes/serial/StartNoK8sWithVersion (0.10s)

TestNoKubernetes/serial/VerifyK8sNotRunning (0.04s)

=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunning
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-darwin-arm64 ssh -p NoKubernetes-654000 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-darwin-arm64 ssh -p NoKubernetes-654000 "sudo systemctl is-active --quiet service kubelet": exit status 83 (44.23425ms)

-- stdout --
	* The control-plane node NoKubernetes-654000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p NoKubernetes-654000"

-- /stdout --
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunning (0.04s)

TestNoKubernetes/serial/ProfileList (31.39s)

=== RUN   TestNoKubernetes/serial/ProfileList
no_kubernetes_test.go:169: (dbg) Run:  out/minikube-darwin-arm64 profile list
no_kubernetes_test.go:169: (dbg) Done: out/minikube-darwin-arm64 profile list: (15.73768775s)
no_kubernetes_test.go:179: (dbg) Run:  out/minikube-darwin-arm64 profile list --output=json
no_kubernetes_test.go:179: (dbg) Done: out/minikube-darwin-arm64 profile list --output=json: (15.655420417s)
--- PASS: TestNoKubernetes/serial/ProfileList (31.39s)

TestNoKubernetes/serial/Stop (3.14s)

=== RUN   TestNoKubernetes/serial/Stop
no_kubernetes_test.go:158: (dbg) Run:  out/minikube-darwin-arm64 stop -p NoKubernetes-654000
no_kubernetes_test.go:158: (dbg) Done: out/minikube-darwin-arm64 stop -p NoKubernetes-654000: (3.142916459s)
--- PASS: TestNoKubernetes/serial/Stop (3.14s)

TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.04s)

=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunningSecond
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-darwin-arm64 ssh -p NoKubernetes-654000 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-darwin-arm64 ssh -p NoKubernetes-654000 "sudo systemctl is-active --quiet service kubelet": exit status 83 (43.773041ms)

-- stdout --
	* The control-plane node NoKubernetes-654000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p NoKubernetes-654000"

-- /stdout --
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.04s)

TestStoppedBinaryUpgrade/MinikubeLogs (0.81s)

=== RUN   TestStoppedBinaryUpgrade/MinikubeLogs
version_upgrade_test.go:206: (dbg) Run:  out/minikube-darwin-arm64 logs -p stopped-upgrade-126000
--- PASS: TestStoppedBinaryUpgrade/MinikubeLogs (0.81s)

TestStartStop/group/old-k8s-version/serial/Stop (3.45s)

=== RUN   TestStartStop/group/old-k8s-version/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-darwin-arm64 stop -p old-k8s-version-421000 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-darwin-arm64 stop -p old-k8s-version-421000 --alsologtostderr -v=3: (3.453211166s)
--- PASS: TestStartStop/group/old-k8s-version/serial/Stop (3.45s)

TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.11s)

=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-421000 -n old-k8s-version-421000
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-421000 -n old-k8s-version-421000: exit status 7 (40.845417ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-darwin-arm64 addons enable dashboard -p old-k8s-version-421000 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.11s)

TestStartStop/group/no-preload/serial/Stop (3.42s)

=== RUN   TestStartStop/group/no-preload/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-darwin-arm64 stop -p no-preload-204000 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-darwin-arm64 stop -p no-preload-204000 --alsologtostderr -v=3: (3.423006167s)
--- PASS: TestStartStop/group/no-preload/serial/Stop (3.42s)

TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.12s)

=== RUN   TestStartStop/group/no-preload/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-204000 -n no-preload-204000
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-204000 -n no-preload-204000: exit status 7 (56.135916ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-darwin-arm64 addons enable dashboard -p no-preload-204000 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.12s)

TestStartStop/group/embed-certs/serial/Stop (3.29s)

=== RUN   TestStartStop/group/embed-certs/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-darwin-arm64 stop -p embed-certs-177000 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-darwin-arm64 stop -p embed-certs-177000 --alsologtostderr -v=3: (3.28554925s)
--- PASS: TestStartStop/group/embed-certs/serial/Stop (3.29s)

TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.1s)

=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-177000 -n embed-certs-177000
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-177000 -n embed-certs-177000: exit status 7 (36.128666ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-darwin-arm64 addons enable dashboard -p embed-certs-177000 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.10s)

TestStartStop/group/default-k8s-diff-port/serial/Stop (2.96s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-darwin-arm64 stop -p default-k8s-diff-port-103000 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-darwin-arm64 stop -p default-k8s-diff-port-103000 --alsologtostderr -v=3: (2.960745958s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Stop (2.96s)

TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.12s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-103000 -n default-k8s-diff-port-103000
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-103000 -n default-k8s-diff-port-103000: exit status 7 (57.666708ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-darwin-arm64 addons enable dashboard -p default-k8s-diff-port-103000 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.12s)

TestStartStop/group/newest-cni/serial/DeployApp (0s)

=== RUN   TestStartStop/group/newest-cni/serial/DeployApp
--- PASS: TestStartStop/group/newest-cni/serial/DeployApp (0.00s)

TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (0.06s)

=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-darwin-arm64 addons enable metrics-server -p newest-cni-256000 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:211: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (0.06s)

TestStartStop/group/newest-cni/serial/Stop (3.17s)

=== RUN   TestStartStop/group/newest-cni/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-darwin-arm64 stop -p newest-cni-256000 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-darwin-arm64 stop -p newest-cni-256000 --alsologtostderr -v=3: (3.16736575s)
--- PASS: TestStartStop/group/newest-cni/serial/Stop (3.17s)

TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.13s)

=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-256000 -n newest-cni-256000
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-256000 -n newest-cni-256000: exit status 7 (62.207958ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-darwin-arm64 addons enable dashboard -p newest-cni-256000 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.13s)

TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0s)

=== RUN   TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop
start_stop_delete_test.go:273: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0.00s)

                                                
                                    
TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0s)

=== RUN   TestStartStop/group/newest-cni/serial/AddonExistsAfterStop
start_stop_delete_test.go:284: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0.00s)

                                                
                                    

Test skip (24/266)

TestDownloadOnly/v1.20.0/cached-images (0s)

=== RUN   TestDownloadOnly/v1.20.0/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.20.0/cached-images (0.00s)

                                                
                                    
TestDownloadOnly/v1.20.0/binaries (0s)

=== RUN   TestDownloadOnly/v1.20.0/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.20.0/binaries (0.00s)

                                                
                                    
TestDownloadOnly/v1.28.4/cached-images (0s)

=== RUN   TestDownloadOnly/v1.28.4/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.28.4/cached-images (0.00s)

                                                
                                    
TestDownloadOnly/v1.28.4/binaries (0s)

=== RUN   TestDownloadOnly/v1.28.4/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.28.4/binaries (0.00s)

                                                
                                    
TestDownloadOnly/v1.29.0-rc.2/cached-images (0s)

=== RUN   TestDownloadOnly/v1.29.0-rc.2/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.29.0-rc.2/cached-images (0.00s)

                                                
                                    
TestDownloadOnly/v1.29.0-rc.2/binaries (0s)

=== RUN   TestDownloadOnly/v1.29.0-rc.2/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.29.0-rc.2/binaries (0.00s)

                                                
                                    
TestDownloadOnlyKic (0s)

=== RUN   TestDownloadOnlyKic
aaa_download_only_test.go:220: skipping, only for docker or podman driver
--- SKIP: TestDownloadOnlyKic (0.00s)

                                                
                                    
TestDockerEnvContainerd (0s)

=== RUN   TestDockerEnvContainerd
docker_test.go:170: running with docker false darwin arm64
docker_test.go:172: skipping: TestDockerEnvContainerd can only be run with the containerd runtime on Docker driver
--- SKIP: TestDockerEnvContainerd (0.00s)

                                                
                                    
TestKVMDriverInstallOrUpdate (0s)

=== RUN   TestKVMDriverInstallOrUpdate
driver_install_or_update_test.go:41: Skip if not linux.
--- SKIP: TestKVMDriverInstallOrUpdate (0.00s)

                                                
                                    
TestFunctional/parallel/MySQL (0s)

=== RUN   TestFunctional/parallel/MySQL
=== PAUSE TestFunctional/parallel/MySQL

=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1783: arm64 is not supported by mysql. Skip the test. See https://github.com/kubernetes/minikube/issues/10144
--- SKIP: TestFunctional/parallel/MySQL (0.00s)

                                                
                                    
TestFunctional/parallel/PodmanEnv (0s)

=== RUN   TestFunctional/parallel/PodmanEnv
=== PAUSE TestFunctional/parallel/PodmanEnv

=== CONT  TestFunctional/parallel/PodmanEnv
functional_test.go:546: only validate podman env with docker container runtime, currently testing docker
--- SKIP: TestFunctional/parallel/PodmanEnv (0.00s)

                                                
                                    
TestFunctional/parallel/MountCmd/any-port (12.97s)

=== RUN   TestFunctional/parallel/MountCmd/any-port
functional_test_mount_test.go:73: (dbg) daemon: [out/minikube-darwin-arm64 mount -p functional-900000 /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestFunctionalparallelMountCmdany-port3031823387/001:/mount-9p --alsologtostderr -v=1]
functional_test_mount_test.go:107: wrote "test-1710760884226202000" to /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestFunctionalparallelMountCmdany-port3031823387/001/created-by-test
functional_test_mount_test.go:107: wrote "test-1710760884226202000" to /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestFunctionalparallelMountCmdany-port3031823387/001/created-by-test-removed-by-pod
functional_test_mount_test.go:107: wrote "test-1710760884226202000" to /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestFunctionalparallelMountCmdany-port3031823387/001/test-1710760884226202000
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-darwin-arm64 -p functional-900000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-900000 ssh "findmnt -T /mount-9p | grep 9p": exit status 83 (58.500083ms)

                                                
                                                
-- stdout --
	* The control-plane node functional-900000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-900000"

                                                
                                                
-- /stdout --
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-darwin-arm64 -p functional-900000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-900000 ssh "findmnt -T /mount-9p | grep 9p": exit status 83 (91.160125ms)

                                                
                                                
-- stdout --
	* The control-plane node functional-900000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-900000"

                                                
                                                
-- /stdout --
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-darwin-arm64 -p functional-900000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-900000 ssh "findmnt -T /mount-9p | grep 9p": exit status 83 (88.688ms)

                                                
                                                
-- stdout --
	* The control-plane node functional-900000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-900000"

                                                
                                                
-- /stdout --
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-darwin-arm64 -p functional-900000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-900000 ssh "findmnt -T /mount-9p | grep 9p": exit status 83 (88.126208ms)

                                                
                                                
-- stdout --
	* The control-plane node functional-900000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-900000"

                                                
                                                
-- /stdout --
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-darwin-arm64 -p functional-900000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-900000 ssh "findmnt -T /mount-9p | grep 9p": exit status 83 (89.189542ms)

                                                
                                                
-- stdout --
	* The control-plane node functional-900000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-900000"

                                                
                                                
-- /stdout --
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-darwin-arm64 -p functional-900000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-900000 ssh "findmnt -T /mount-9p | grep 9p": exit status 83 (89.894541ms)

                                                
                                                
-- stdout --
	* The control-plane node functional-900000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-900000"

                                                
                                                
-- /stdout --
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-darwin-arm64 -p functional-900000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-900000 ssh "findmnt -T /mount-9p | grep 9p": exit status 83 (89.7095ms)

                                                
                                                
-- stdout --
	* The control-plane node functional-900000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-900000"

                                                
                                                
-- /stdout --
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-darwin-arm64 -p functional-900000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-900000 ssh "findmnt -T /mount-9p | grep 9p": exit status 83 (83.90525ms)

                                                
                                                
-- stdout --
	* The control-plane node functional-900000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-900000"

                                                
                                                
-- /stdout --
functional_test_mount_test.go:123: skipping: mount did not appear, likely because macOS requires prompt to allow non-code signed binaries to listen on non-localhost port
functional_test_mount_test.go:90: (dbg) Run:  out/minikube-darwin-arm64 -p functional-900000 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:90: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-900000 ssh "sudo umount -f /mount-9p": exit status 83 (50.498084ms)

                                                
                                                
-- stdout --
	* The control-plane node functional-900000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-900000"

                                                
                                                
-- /stdout --
functional_test_mount_test.go:92: "out/minikube-darwin-arm64 -p functional-900000 ssh \"sudo umount -f /mount-9p\"": exit status 83
functional_test_mount_test.go:94: (dbg) stopping [out/minikube-darwin-arm64 mount -p functional-900000 /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestFunctionalparallelMountCmdany-port3031823387/001:/mount-9p --alsologtostderr -v=1] ...
--- SKIP: TestFunctional/parallel/MountCmd/any-port (12.97s)

                                                
                                    
TestFunctional/parallel/MountCmd/specific-port (11.04s)

=== RUN   TestFunctional/parallel/MountCmd/specific-port
functional_test_mount_test.go:213: (dbg) daemon: [out/minikube-darwin-arm64 mount -p functional-900000 /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestFunctionalparallelMountCmdspecific-port2759017973/001:/mount-9p --alsologtostderr -v=1 --port 46464]
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-darwin-arm64 -p functional-900000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-900000 ssh "findmnt -T /mount-9p | grep 9p": exit status 83 (63.465416ms)

                                                
                                                
-- stdout --
	* The control-plane node functional-900000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-900000"

                                                
                                                
-- /stdout --
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-darwin-arm64 -p functional-900000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-900000 ssh "findmnt -T /mount-9p | grep 9p": exit status 83 (90.445ms)

                                                
                                                
-- stdout --
	* The control-plane node functional-900000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-900000"

                                                
                                                
-- /stdout --
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-darwin-arm64 -p functional-900000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-900000 ssh "findmnt -T /mount-9p | grep 9p": exit status 83 (86.821916ms)

                                                
                                                
-- stdout --
	* The control-plane node functional-900000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-900000"

                                                
                                                
-- /stdout --
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-darwin-arm64 -p functional-900000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-900000 ssh "findmnt -T /mount-9p | grep 9p": exit status 83 (90.108792ms)

                                                
                                                
-- stdout --
	* The control-plane node functional-900000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-900000"

                                                
                                                
-- /stdout --
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-darwin-arm64 -p functional-900000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-900000 ssh "findmnt -T /mount-9p | grep 9p": exit status 83 (92.848542ms)

                                                
                                                
-- stdout --
	* The control-plane node functional-900000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-900000"

                                                
                                                
-- /stdout --
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-darwin-arm64 -p functional-900000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-900000 ssh "findmnt -T /mount-9p | grep 9p": exit status 83 (87.568375ms)

                                                
                                                
-- stdout --
	* The control-plane node functional-900000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-900000"

                                                
                                                
-- /stdout --
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-darwin-arm64 -p functional-900000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-900000 ssh "findmnt -T /mount-9p | grep 9p": exit status 83 (86.658917ms)

                                                
                                                
-- stdout --
	* The control-plane node functional-900000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-900000"

                                                
                                                
-- /stdout --
functional_test_mount_test.go:251: skipping: mount did not appear, likely because macOS requires prompt to allow non-code signed binaries to listen on non-localhost port
functional_test_mount_test.go:230: (dbg) Run:  out/minikube-darwin-arm64 -p functional-900000 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:230: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-900000 ssh "sudo umount -f /mount-9p": exit status 83 (48.711709ms)

                                                
                                                
-- stdout --
	* The control-plane node functional-900000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-900000"

                                                
                                                
-- /stdout --
functional_test_mount_test.go:232: "out/minikube-darwin-arm64 -p functional-900000 ssh \"sudo umount -f /mount-9p\"": exit status 83
functional_test_mount_test.go:234: (dbg) stopping [out/minikube-darwin-arm64 mount -p functional-900000 /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestFunctionalparallelMountCmdspecific-port2759017973/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
--- SKIP: TestFunctional/parallel/MountCmd/specific-port (11.04s)

                                                
                                    
TestFunctional/parallel/MountCmd/VerifyCleanup (11.93s)

=== RUN   TestFunctional/parallel/MountCmd/VerifyCleanup
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-darwin-arm64 mount -p functional-900000 /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestFunctionalparallelMountCmdVerifyCleanup4040824252/001:/mount1 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-darwin-arm64 mount -p functional-900000 /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestFunctionalparallelMountCmdVerifyCleanup4040824252/001:/mount2 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-darwin-arm64 mount -p functional-900000 /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestFunctionalparallelMountCmdVerifyCleanup4040824252/001:/mount3 --alsologtostderr -v=1]
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-darwin-arm64 -p functional-900000 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-900000 ssh "findmnt -T" /mount1: exit status 83 (78.935167ms)

                                                
                                                
-- stdout --
	* The control-plane node functional-900000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-900000"

                                                
                                                
-- /stdout --
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-darwin-arm64 -p functional-900000 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-900000 ssh "findmnt -T" /mount1: exit status 83 (83.755583ms)

                                                
                                                
-- stdout --
	* The control-plane node functional-900000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-900000"

                                                
                                                
-- /stdout --
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-darwin-arm64 -p functional-900000 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-900000 ssh "findmnt -T" /mount1: exit status 83 (85.457125ms)

                                                
                                                
-- stdout --
	* The control-plane node functional-900000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-900000"

                                                
                                                
-- /stdout --
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-darwin-arm64 -p functional-900000 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-900000 ssh "findmnt -T" /mount1: exit status 83 (90.095042ms)

                                                
                                                
-- stdout --
	* The control-plane node functional-900000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-900000"

                                                
                                                
-- /stdout --
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-darwin-arm64 -p functional-900000 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-900000 ssh "findmnt -T" /mount1: exit status 83 (86.615542ms)

                                                
                                                
-- stdout --
	* The control-plane node functional-900000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-900000"

                                                
                                                
-- /stdout --
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-darwin-arm64 -p functional-900000 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-900000 ssh "findmnt -T" /mount1: exit status 83 (86.605042ms)

                                                
                                                
-- stdout --
	* The control-plane node functional-900000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-900000"

                                                
                                                
-- /stdout --
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-darwin-arm64 -p functional-900000 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-900000 ssh "findmnt -T" /mount1: exit status 83 (91.059833ms)

                                                
                                                
-- stdout --
	* The control-plane node functional-900000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-900000"

                                                
                                                
-- /stdout --
functional_test_mount_test.go:340: skipping: mount did not appear, likely because macOS requires prompt to allow non-code signed binaries to listen on non-localhost port
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-darwin-arm64 mount -p functional-900000 /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestFunctionalparallelMountCmdVerifyCleanup4040824252/001:/mount1 --alsologtostderr -v=1] ...
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-darwin-arm64 mount -p functional-900000 /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestFunctionalparallelMountCmdVerifyCleanup4040824252/001:/mount2 --alsologtostderr -v=1] ...
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-darwin-arm64 mount -p functional-900000 /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestFunctionalparallelMountCmdVerifyCleanup4040824252/001:/mount3 --alsologtostderr -v=1] ...
--- SKIP: TestFunctional/parallel/MountCmd/VerifyCleanup (11.93s)

                                                
                                    
TestGvisorAddon (0s)

=== RUN   TestGvisorAddon
gvisor_addon_test.go:34: skipping test because --gvisor=false
--- SKIP: TestGvisorAddon (0.00s)

                                                
                                    
TestKicCustomNetwork (0s)

=== RUN   TestKicCustomNetwork
kic_custom_network_test.go:34: only runs with docker driver
--- SKIP: TestKicCustomNetwork (0.00s)

                                                
                                    
TestKicExistingNetwork (0s)

=== RUN   TestKicExistingNetwork
kic_custom_network_test.go:73: only runs with docker driver
--- SKIP: TestKicExistingNetwork (0.00s)

                                                
                                    
TestKicCustomSubnet (0s)

=== RUN   TestKicCustomSubnet
kic_custom_network_test.go:102: only runs with docker/podman driver
--- SKIP: TestKicCustomSubnet (0.00s)

                                                
                                    
TestKicStaticIP (0s)

=== RUN   TestKicStaticIP
kic_custom_network_test.go:123: only run with docker/podman driver
--- SKIP: TestKicStaticIP (0.00s)

                                                
                                    
TestScheduledStopWindows (0s)

=== RUN   TestScheduledStopWindows
scheduled_stop_test.go:42: test only runs on windows
--- SKIP: TestScheduledStopWindows (0.00s)

                                                
                                    
TestInsufficientStorage (0s)

=== RUN   TestInsufficientStorage
status_test.go:38: only runs with docker driver
--- SKIP: TestInsufficientStorage (0.00s)

                                                
                                    
TestMissingContainerUpgrade (0s)

=== RUN   TestMissingContainerUpgrade
version_upgrade_test.go:284: This test is only for Docker
--- SKIP: TestMissingContainerUpgrade (0.00s)

                                                
                                    
TestNetworkPlugins/group/cilium (2.49s)

=== RUN   TestNetworkPlugins/group/cilium
net_test.go:102: Skipping the test as it's interfering with other tests and is outdated
panic.go:626: 
----------------------- debugLogs start: cilium-360000 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: cilium-360000

                                                
                                                

                                                
                                                
>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: cilium-360000

                                                
                                                

                                                
                                                
>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: cilium-360000

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: cilium-360000

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: cilium-360000

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: cilium-360000

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: cilium-360000

                                                
                                                

                                                
                                                
>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: cilium-360000

                                                
                                                

                                                
                                                
>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: cilium-360000

                                                
                                                

                                                
                                                
>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: cilium-360000

                                                
                                                

                                                
                                                
>>> host: /etc/nsswitch.conf:
* Profile "cilium-360000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-360000"

                                                
                                                

                                                
                                                
>>> host: /etc/hosts:
* Profile "cilium-360000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-360000"

                                                
                                                

                                                
                                                
>>> host: /etc/resolv.conf:
* Profile "cilium-360000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-360000"

                                                
                                                

                                                
                                                
>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: cilium-360000

                                                
                                                

                                                
                                                
>>> host: crictl pods:
* Profile "cilium-360000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-360000"

                                                
                                                

                                                
                                                
>>> host: crictl containers:
* Profile "cilium-360000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-360000"

                                                
                                                

                                                
                                                
>>> k8s: describe netcat deployment:
error: context "cilium-360000" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe netcat pod(s):
error: context "cilium-360000" does not exist

                                                
                                                

                                                
                                                
>>> k8s: netcat logs:
error: context "cilium-360000" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns deployment:
error: context "cilium-360000" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns pods:
error: context "cilium-360000" does not exist

                                                
                                                

                                                
                                                
>>> k8s: coredns logs:
error: context "cilium-360000" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe api server pod(s):
error: context "cilium-360000" does not exist

                                                
                                                

                                                
                                                
>>> k8s: api server logs:
error: context "cilium-360000" does not exist

                                                
                                                

                                                
                                                
>>> host: /etc/cni:
* Profile "cilium-360000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-360000"

                                                
                                                

                                                
                                                
>>> host: ip a s:
* Profile "cilium-360000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-360000"

                                                
                                                

                                                
                                                
>>> host: ip r s:
* Profile "cilium-360000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-360000"

                                                
                                                

                                                
                                                
>>> host: iptables-save:
* Profile "cilium-360000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-360000"

                                                
                                                

                                                
                                                
>>> host: iptables table nat:
* Profile "cilium-360000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-360000"

                                                
                                                

                                                
                                                
>>> k8s: describe cilium daemon set:
Error in configuration: context was not found for specified context: cilium-360000

                                                
                                                

                                                
                                                
>>> k8s: describe cilium daemon set pod(s):
Error in configuration: context was not found for specified context: cilium-360000

                                                
                                                

                                                
                                                
>>> k8s: cilium daemon set container(s) logs (current):
error: context "cilium-360000" does not exist

                                                
                                                

                                                
                                                
>>> k8s: cilium daemon set container(s) logs (previous):
error: context "cilium-360000" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe cilium deployment:
Error in configuration: context was not found for specified context: cilium-360000

                                                
                                                

                                                
                                                
>>> k8s: describe cilium deployment pod(s):
Error in configuration: context was not found for specified context: cilium-360000

                                                
                                                

                                                
                                                
>>> k8s: cilium deployment container(s) logs (current):
error: context "cilium-360000" does not exist

                                                
                                                

                                                
                                                
>>> k8s: cilium deployment container(s) logs (previous):
error: context "cilium-360000" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy daemon set:
error: context "cilium-360000" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy pod(s):
error: context "cilium-360000" does not exist

                                                
                                                

                                                
                                                
>>> k8s: kube-proxy logs:
error: context "cilium-360000" does not exist

                                                
                                                

                                                
                                                
>>> host: kubelet daemon status:
* Profile "cilium-360000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-360000"

                                                
                                                

                                                
                                                
>>> host: kubelet daemon config:
* Profile "cilium-360000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-360000"

                                                
                                                

                                                
                                                
>>> k8s: kubelet logs:
* Profile "cilium-360000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-360000"

                                                
                                                

                                                
                                                
>>> host: /etc/kubernetes/kubelet.conf:
* Profile "cilium-360000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-360000"

                                                
                                                

                                                
                                                
>>> host: /var/lib/kubelet/config.yaml:
* Profile "cilium-360000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-360000"

                                                
                                                

                                                
                                                
>>> k8s: kubectl config:
apiVersion: v1
clusters: null
contexts: null
current-context: ""
kind: Config
preferences: {}
users: null

                                                
                                                

                                                
                                                
>>> k8s: cms:
Error in configuration: context was not found for specified context: cilium-360000

                                                
                                                

                                                
                                                
>>> host: docker daemon status:
* Profile "cilium-360000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-360000"

                                                
                                                

                                                
                                                
>>> host: docker daemon config:
* Profile "cilium-360000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-360000"

                                                
                                                

                                                
                                                
>>> host: /etc/docker/daemon.json:
* Profile "cilium-360000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-360000"

                                                
                                                

                                                
                                                
>>> host: docker system info:
* Profile "cilium-360000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-360000"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon status:
* Profile "cilium-360000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-360000"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon config:
* Profile "cilium-360000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-360000"

                                                
                                                

                                                
                                                
>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "cilium-360000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-360000"

                                                
                                                

                                                
                                                
>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "cilium-360000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-360000"

                                                
                                                

                                                
                                                
>>> host: cri-dockerd version:
* Profile "cilium-360000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-360000"

                                                
                                                

                                                
                                                
>>> host: containerd daemon status:
* Profile "cilium-360000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-360000"

                                                
                                                

                                                
                                                
>>> host: containerd daemon config:
* Profile "cilium-360000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-360000"

                                                
                                                

                                                
                                                
>>> host: /lib/systemd/system/containerd.service:
* Profile "cilium-360000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-360000"

                                                
                                                

                                                
                                                
>>> host: /etc/containerd/config.toml:
* Profile "cilium-360000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-360000"

                                                
                                                

                                                
                                                
>>> host: containerd config dump:
* Profile "cilium-360000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-360000"

                                                
                                                

                                                
                                                
>>> host: crio daemon status:
* Profile "cilium-360000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-360000"

                                                
                                                

                                                
                                                
>>> host: crio daemon config:
* Profile "cilium-360000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-360000"

                                                
                                                

                                                
                                                
>>> host: /etc/crio:
* Profile "cilium-360000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-360000"

                                                
                                                

                                                
                                                
>>> host: crio config:
* Profile "cilium-360000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-360000"

                                                
                                                
----------------------- debugLogs end: cilium-360000 [took: 2.252728917s] --------------------------------
helpers_test.go:175: Cleaning up "cilium-360000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p cilium-360000
--- SKIP: TestNetworkPlugins/group/cilium (2.49s)

                                                
                                    
TestStartStop/group/disable-driver-mounts (0.23s)

=== RUN   TestStartStop/group/disable-driver-mounts
=== PAUSE TestStartStop/group/disable-driver-mounts

=== CONT  TestStartStop/group/disable-driver-mounts
start_stop_delete_test.go:103: skipping TestStartStop/group/disable-driver-mounts - only runs on virtualbox
helpers_test.go:175: Cleaning up "disable-driver-mounts-223000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p disable-driver-mounts-223000
--- SKIP: TestStartStop/group/disable-driver-mounts (0.23s)